2019-09-04T06:27:18.227+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-09-04T06:27:18.234+0000 D1 NETWORK [main] fd limit hard:64000 soft:64000 max conn: 51200
2019-09-04T06:27:18.235+0000 I CONTROL [initandlisten] MongoDB starting : pid=11676 port=27019 dbpath=/data/db 64-bit host=cmodb803.togewa.com
2019-09-04T06:27:18.235+0000 I CONTROL [initandlisten] db version v4.2.0
2019-09-04T06:27:18.235+0000 I CONTROL [initandlisten] git version: a4b751dcf51dd249c5865812b390cfd1c0129c30
2019-09-04T06:27:18.235+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2019-09-04T06:27:18.235+0000 I CONTROL [initandlisten] allocator: tcmalloc
2019-09-04T06:27:18.235+0000 I CONTROL [initandlisten] modules: none
2019-09-04T06:27:18.235+0000 I CONTROL [initandlisten] build environment:
2019-09-04T06:27:18.235+0000 I CONTROL [initandlisten]     distmod: rhel70
2019-09-04T06:27:18.235+0000 I CONTROL [initandlisten]     distarch: x86_64
2019-09-04T06:27:18.235+0000 I CONTROL [initandlisten]     target_arch: x86_64
2019-09-04T06:27:18.235+0000 I CONTROL [initandlisten] options: { config: "/etc/mongod.conf", net: { bindIp: "0.0.0.0", port: 27019 }, processManagement: { fork: true, pidFilePath: "/var/run/mongodb/mongod.pid", timeZoneInfo: "/usr/share/zoneinfo" }, replication: { oplogSizeMB: 1024, replSetName: "configrs" }, security: { authorization: "disabled" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/data/db", directoryPerDB: true, journal: { enabled: true }, wiredTiger: { engineConfig: { directoryForIndexes: true } } }, systemLog: { component: { network: { verbosity: 2 } }, destination: "file", logAppend: false, path: "/var/log/mongodb/mongod.log", traceAllExceptions: true, verbosity: 5 } }
2019-09-04T06:27:18.235+0000 D1 NETWORK [initandlisten] fd limit hard:64000 soft:64000 max conn: 51200
2019-09-04T06:27:18.235+0000 D2 - [initandlisten] Starting periodic job FlowControlRefresher
2019-09-04T06:27:18.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:27:18.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:27:18.235+0000 I STORAGE [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
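
(For readability: the parsed "options" document above maps back to a mongod.conf roughly like the following. This is a reconstruction from the log output, so comments, key order, and formatting are guesses, but the values are exactly as reported by the server.)

    net:
      bindIp: 0.0.0.0
      port: 27019
    processManagement:
      fork: true
      pidFilePath: /var/run/mongodb/mongod.pid
      timeZoneInfo: /usr/share/zoneinfo
    replication:
      oplogSizeMB: 1024
      replSetName: configrs
    security:
      authorization: disabled
    sharding:
      clusterRole: configsvr
    storage:
      dbPath: /data/db
      directoryPerDB: true
      journal:
        enabled: true
      wiredTiger:
        engineConfig:
          directoryForIndexes: true
    systemLog:
      destination: file
      path: /var/log/mongodb/mongod.log
      logAppend: false
      traceAllExceptions: true
      verbosity: 5
      component:
        network:
          verbosity: 2
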
2019-09-04T06:27:18.235+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1382M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,recovery],
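
(The cache_size=1382M in the wiredtiger_open string is the computed default: MongoDB sizes the WiredTiger cache at 50% of (RAM - 1 GB), with a 256 MB floor, which would put this host at roughly 3.7 GB of RAM. If you would rather pin the cache than rely on the computed default, the corresponding mongod.conf knob is storage.wiredTiger.engineConfig.cacheSizeGB; a sketch, with the value chosen here only to match what the log reports:)

    storage:
      wiredTiger:
        engineConfig:
          cacheSizeGB: 1.35   # ~1382 MB; the default is max(0.5 * (RAM - 1 GB), 256 MB)
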
2019-09-04T06:27:18.808+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:808453][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:WiredTiger.wt with id 0 @ (1, 1039104)
2019-09-04T06:27:18.808+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:808539][11676:0x7f0ed9e9fc00], txn-recover: Recovering log 1 through 2
2019-09-04T06:27:18.808+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:808584][11676:0x7f0ed9e9fc00], txn-recover: Applying op 4 to file 0 at LSN 1/1039232
2019-09-04T06:27:18.808+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:808694][11676:0x7f0ed9e9fc00], txn-recover: Applying op 4 to file 0 at LSN 1/1039232
2019-09-04T06:27:18.808+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:808738][11676:0x7f0ed9e9fc00], txn-recover: Applying op 4 to file 0 at LSN 1/1039232
2019-09-04T06:27:18.808+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:808753][11676:0x7f0ed9e9fc00], txn-recover: Applying op 4 to file 0 at LSN 1/1039232
2019-09-04T06:27:18.808+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:808795][11676:0x7f0ed9e9fc00], txn-recover: Applying op 4 to file 0 at LSN 1/1039232
2019-09-04T06:27:18.808+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:808806][11676:0x7f0ed9e9fc00], txn-recover: Applying op 4 to file 0 at LSN 1/1039232
2019-09-04T06:27:18.808+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:808825][11676:0x7f0ed9e9fc00], txn-recover: Applying op 4 to file 0 at LSN 1/1039232
2019-09-04T06:27:18.808+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:808834][11676:0x7f0ed9e9fc00], txn-recover: Applying op 4 to file 0 at LSN 1/1039232
2019-09-04T06:27:18.909+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:909216][11676:0x7f0ed9e9fc00], txn-recover: Recovering log 2 through 2
2019-09-04T06:27:18.995+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:995915][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:WiredTigerLAS.wt with id 1 @ (1, 0)
2019-09-04T06:27:18.995+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:995993][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:_mdb_catalog.wt with id 3 @ (1, 200704)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996010][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:admin/collection/17--6194257481163143499.wt with id 21 @ (1, 200704)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996026][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:admin/collection/20--6194257481163143499.wt with id 24 @ (1, 200704)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996039][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:admin/collection/22--6194257481163143499.wt with id 26 @ (1, 200704)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996053][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:admin/index/18--6194257481163143499.wt with id 22 @ (1, 38784)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996066][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:admin/index/19--6194257481163143499.wt with id 23 @ (1, 39936)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996079][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:admin/index/21--6194257481163143499.wt with id 25 @ (1, 43904)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996093][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:admin/index/23--6194257481163143499.wt with id 27 @ (1, 50688)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996106][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/collection/26--6194257481163143499.wt with id 30 @ (1, 200704)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996122][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/collection/28--6194257481163143499.wt with id 32 @ (1, 1039104)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996136][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/collection/34--6194257481163143499.wt with id 38 @ (1, 0)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996150][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/collection/38--6194257481163143499.wt with id 42 @ (1, 1039104)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996164][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/collection/42--6194257481163143499.wt with id 46 @ (1, 1016448)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996177][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/collection/50--6194257481163143499.wt with id 54 @ (1, 200704)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996190][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/collection/54--6194257481163143499.wt with id 58 @ (1, 200704)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996204][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/collection/58--6194257481163143499.wt with id 62 @ (1, 200704)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996231][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/collection/71--6194257481163143499.wt with id 75 @ (1, 0)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996245][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/collection/75--6194257481163143499.wt with id 79 @ (1, 0)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996259][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/collection/82--6194257481163143499.wt with id 86 @ (1, 255616)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996274][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/collection/89--6194257481163143499.wt with id 93 @ (1, 0)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996287][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/27--6194257481163143499.wt with id 31 @ (1, 200704)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996300][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/29--6194257481163143499.wt with id 33 @ (1, 1039104)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996314][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/31--6194257481163143499.wt with id 35 @ (1, 307840)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996328][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/35--6194257481163143499.wt with id 39 @ (1, 73600)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996341][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/39--6194257481163143499.wt with id 43 @ (1, 81408)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996354][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/43--6194257481163143499.wt with id 47 @ (1, 1016448)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996367][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/45--6194257481163143499.wt with id 49 @ (1, 1016448)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996380][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/47--6194257481163143499.wt with id 51 @ (1, 97152)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996393][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/51--6194257481163143499.wt with id 55 @ (1, 105984)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996406][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/55--6194257481163143499.wt with id 59 @ (1, 113792)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996419][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/59--6194257481163143499.wt with id 63 @ (1, 134656)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996432][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/62--6194257481163143499.wt with id 66 @ (1, 135936)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996445][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/65--6194257481163143499.wt with id 69 @ (1, 137216)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996458][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/68--6194257481163143499.wt with id 72 @ (1, 138496)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996471][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/72--6194257481163143499.wt with id 76 @ (1, 149376)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996486][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/76--6194257481163143499.wt with id 80 @ (1, 161536)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996499][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/79--6194257481163143499.wt with id 83 @ (1, 162688)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996512][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/83--6194257481163143499.wt with id 87 @ (1, 255616)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996525][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/86--6194257481163143499.wt with id 90 @ (1, 255616)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996538][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/90--6194257481163143499.wt with id 94 @ (1, 193152)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996550][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/93--6194257481163143499.wt with id 97 @ (1, 194304)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996564][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:config/index/95--6194257481163143499.wt with id 99 @ (1, 195456)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996577][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:local/collection/0--6194257481163143499.wt with id 4 @ (1, 31616)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996590][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:local/collection/10--6194257481163143499.wt with id 14 @ (1, 31616)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996604][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:local/collection/16--6194257481163143499.wt with id 20 @ (1, 1039104)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996618][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:local/collection/2--6194257481163143499.wt with id 6 @ (1, 1039104)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996631][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:local/collection/4--6194257481163143499.wt with id 8 @ (1, 1039104)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996645][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:local/collection/6--6194257481163143499.wt with id 10 @ (1, 31616)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996658][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:local/collection/8--6194257481163143499.wt with id 12 @ (1, 31616)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996671][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:local/index/1--6194257481163143499.wt with id 5 @ (1, 31616)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996684][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:local/index/11--6194257481163143499.wt with id 15 @ (1, 31616)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996697][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:local/index/3--6194257481163143499.wt with id 7 @ (1, 200704)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996709][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:local/index/5--6194257481163143499.wt with id 9 @ (1, 31616)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996722][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:local/index/7--6194257481163143499.wt with id 11 @ (1, 31616)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996737][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:local/index/9--6194257481163143499.wt with id 13 @ (1, 31616)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996751][11676:0x7f0ed9e9fc00], txn-recover: Recovering file:sizeStorer.wt with id 2 @ (1, 1039104)
2019-09-04T06:27:18.996+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:996761][11676:0x7f0ed9e9fc00], txn-recover: Main recovery loop: starting at 1/1039104 to 2/256
2019-09-04T06:27:18.997+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:997076][11676:0x7f0ed9e9fc00], txn-recover: Recovering log 1 through 2
2019-09-04T06:27:18.997+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:997111][11676:0x7f0ed9e9fc00], txn-recover: Skipping op 4 to file 0 at LSN 1/1039232
2019-09-04T06:27:18.997+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:997118][11676:0x7f0ed9e9fc00], txn-recover: Skipping op 4 to file 0 at LSN 1/1039232
2019-09-04T06:27:18.997+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:997123][11676:0x7f0ed9e9fc00], txn-recover: Skipping op 4 to file 0 at LSN 1/1039232
2019-09-04T06:27:18.997+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:997128][11676:0x7f0ed9e9fc00], txn-recover: Skipping op 4 to file 0 at LSN 1/1039232
2019-09-04T06:27:18.997+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:997133][11676:0x7f0ed9e9fc00], txn-recover: Skipping op 4 to file 0 at LSN 1/1039232
2019-09-04T06:27:18.997+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:997138][11676:0x7f0ed9e9fc00], txn-recover: Skipping op 4 to file 0 at LSN 1/1039232
2019-09-04T06:27:18.997+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:997143][11676:0x7f0ed9e9fc00], txn-recover: Skipping op 4 to file 0 at LSN 1/1039232
2019-09-04T06:27:18.997+0000 I STORAGE [initandlisten] WiredTiger message [1567578438:997148][11676:0x7f0ed9e9fc00], txn-recover: Skipping op 4 to file 0 at LSN 1/1039232
2019-09-04T06:27:19.093+0000 I STORAGE [initandlisten] WiredTiger message [1567578439:93320][11676:0x7f0ed9e9fc00], txn-recover: Recovering log 2 through 2
2019-09-04T06:27:19.156+0000 I STORAGE [initandlisten] WiredTiger message [1567578439:156045][11676:0x7f0ed9e9fc00], txn-recover: Recovery timestamp 5d6f593c00000002
2019-09-04T06:27:19.156+0000 I STORAGE [initandlisten] WiredTiger message [1567578439:156100][11676:0x7f0ed9e9fc00], txn-recover: Set global recovery timestamp: (1567578428,2)
2019-09-04T06:27:19.227+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(1567578428, 2)
2019-09-04T06:27:19.227+0000 D2 STORAGE [initandlisten] Setting initial data timestamp. Value: Timestamp(1567578428, 2)
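
(The two recovery-timestamp lines are the same value in two encodings: the high 32 bits of 5d6f593c00000002 are 0x5d6f593c = 1567578428 seconds since the epoch, i.e. 2019-09-04T06:27:08Z, the last stable checkpoint about ten seconds before this restart, and the low 32 bits are the increment, 2, giving Timestamp(1567578428, 2). The earlier "Applying op 4" and later "Skipping op 4" runs are two passes of WiredTiger log recovery over the same journal records, visible here only because recovery-progress verbosity is enabled, not a repeated failure.)
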
2019-09-04T06:27:19.227+0000 D1 COMMAND [WTIdleSessionSweeper] BackgroundJob starting: WTIdleSessionSweeper
2019-09-04T06:27:19.227+0000 D1 COMMAND [WTJournalFlusher] BackgroundJob starting: WTJournalFlusher
2019-09-04T06:27:19.227+0000 D1 STORAGE [WTIdleSessionSweeper] starting WTIdleSessionSweeper thread
2019-09-04T06:27:19.227+0000 D1 STORAGE [WTJournalFlusher] starting WTJournalFlusher thread
2019-09-04T06:27:19.228+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:19.227+0000 D2 STORAGE [initandlisten] oldest_timestamp set to Timestamp(1567578428, 2)
2019-09-04T06:27:19.228+0000 D1 COMMAND [WTCheckpointThread] BackgroundJob starting: WTCheckpointThread
2019-09-04T06:27:19.228+0000 D1 STORAGE [WTCheckpointThread] starting WTCheckpointThread thread
2019-09-04T06:27:19.228+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 1
2019-09-04T06:27:19.229+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:_mdb_catalog ok range 1 -> 1 current: 1
2019-09-04T06:27:19.230+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:_mdb_catalog -> { numRecords: 23, dataSize: 13932 }
2019-09-04T06:27:19.230+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:_mdb_catalog dirty, numRecords: 23, dataSize: 13932, use_count: 3
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] loadCatalog:
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(1) Value: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(2) Value: { isFeatureDoc: true, ns: null, nonRepairable: 0, repairable: 1 }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(3) Value: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(4) Value: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(5) Value: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(6) Value: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(7) Value: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(10) Value: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(11) Value: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(12) Value: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(13) Value: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(14) Value: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(15) Value: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(16) Value: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(17) Value: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(18) Value: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(19) Value: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(20) Value: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(21) Value: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(22) Value: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(23) Value: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(24) Value: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:27:19.230+0000 D2 RECOVERY [initandlisten] Id: RecordId(25) Value: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
"config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:27:19.230+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 1 2019-09-04T06:27:19.230+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 2 2019-09-04T06:27:19.230+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:27:19.230+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:27:19.230+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.230+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:admin/collection/22--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.231+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:admin/collection/22--6194257481163143499 -> { numRecords: 2, dataSize: 170 } 2019-09-04T06:27:19.231+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:admin/collection/22--6194257481163143499 dirty, numRecords: 2, dataSize: 170, use_count: 3 2019-09-04T06:27:19.231+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:27:19.231+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:27:19.231+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.231+0000 D1 STORAGE [initandlisten] Registering collection admin.system.keys with UUID 6fa72c52-1098-49d5-8075-97e44ea0d586 2019-09-04T06:27:19.231+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:27:19.231+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { 
spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:27:19.231+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.231+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:27:19.231+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:27:19.231+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.232+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:admin/collection/17--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.232+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:admin/collection/17--6194257481163143499 -> { numRecords: 1, dataSize: 677 } 2019-09-04T06:27:19.232+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:admin/collection/17--6194257481163143499 dirty, numRecords: 1, dataSize: 677, use_count: 3 2019-09-04T06:27:19.232+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:27:19.232+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, 
multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:27:19.232+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.232+0000 D1 STORAGE [initandlisten] Registering collection admin.system.users with UUID 1c65b785-f989-45d0-a6f4-6a4233f87231 2019-09-04T06:27:19.232+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:27:19.232+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:27:19.232+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.232+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:27:19.232+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, 
indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:27:19.232+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.233+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:admin/collection/20--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.233+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:admin/collection/20--6194257481163143499 -> { numRecords: 2, dataSize: 104 } 2019-09-04T06:27:19.233+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:admin/collection/20--6194257481163143499 dirty, numRecords: 2, dataSize: 104, use_count: 3 2019-09-04T06:27:19.233+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:27:19.233+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:27:19.233+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.233+0000 D1 STORAGE [initandlisten] Registering collection admin.system.version with UUID 4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab 2019-09-04T06:27:19.233+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:27:19.233+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:27:19.233+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.version", options: { uuid: 
UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.233+0000 D3 STORAGE [initandlisten] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:27:19.233+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:27:19.233+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.234+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/collection/26--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.234+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:config/collection/26--6194257481163143499 -> { numRecords: 8, dataSize: 2699 } 2019-09-04T06:27:19.234+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:config/collection/26--6194257481163143499 dirty, numRecords: 8, dataSize: 2699, use_count: 3 2019-09-04T06:27:19.234+0000 D3 STORAGE [initandlisten] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:27:19.234+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:27:19.234+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.234+0000 D1 STORAGE [initandlisten] Registering collection config.changelog with UUID 0196ba23-ca72-4f67-b3ac-b305f18a38e3 2019-09-04T06:27:19.234+0000 D3 STORAGE [initandlisten] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:27:19.234+0000 D3 STORAGE [initandlisten] 
fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:27:19.234+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.234+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.234+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.234+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 
2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.234+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/collection/58--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:19.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:27:19.235+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:config/collection/58--6194257481163143499 -> { numRecords: 1, dataSize: 236 } 2019-09-04T06:27:19.235+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:config/collection/58--6194257481163143499 dirty, numRecords: 1, dataSize: 236, use_count: 3 2019-09-04T06:27:19.235+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.235+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.235+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: 
"ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.235+0000 D1 STORAGE [initandlisten] Registering collection config.chunks with UUID 925b3d05-7eb4-4b6d-b339-82784de07cbe 2019-09-04T06:27:19.235+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.235+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.235+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: 
"config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.235+0000 D3 STORAGE [initandlisten] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:27:19.235+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:27:19.235+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.235+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/collection/54--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.236+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:config/collection/54--6194257481163143499 -> { numRecords: 1, dataSize: 145 } 2019-09-04T06:27:19.236+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:config/collection/54--6194257481163143499 dirty, numRecords: 1, dataSize: 145, use_count: 3 2019-09-04T06:27:19.236+0000 D3 STORAGE [initandlisten] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:27:19.236+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:27:19.236+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.236+0000 D1 STORAGE [initandlisten] Registering collection config.collections with UUID 5c6c3426-ae2d-4c69-bf22-b1d2601211ff 2019-09-04T06:27:19.236+0000 D3 STORAGE [initandlisten] looking up metadata for: config.collections @ RecordId(20) 
2019-09-04T06:27:19.236+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:27:19.236+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.236+0000 D3 STORAGE [initandlisten] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:27:19.236+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:27:19.236+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.236+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/collection/28--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.237+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:config/collection/28--6194257481163143499 -> { numRecords: 22, dataSize: 1828 } 2019-09-04T06:27:19.237+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:config/collection/28--6194257481163143499 dirty, numRecords: 22, dataSize: 1828, use_count: 3 2019-09-04T06:27:19.237+0000 D3 STORAGE [initandlisten] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:27:19.237+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 
}, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:27:19.237+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.237+0000 D1 STORAGE [initandlisten] Registering collection config.lockpings with UUID 0e9c403c-5a7d-421c-a744-6abbab57bdce 2019-09-04T06:27:19.237+0000 D3 STORAGE [initandlisten] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:27:19.237+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:27:19.237+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.237+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:19.237+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:27:19.237+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.237+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/collection/42--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.238+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:config/collection/42--6194257481163143499 -> { numRecords: 2, dataSize: 308 } 2019-09-04T06:27:19.238+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:config/collection/42--6194257481163143499 dirty, numRecords: 2, dataSize: 308, use_count: 3 2019-09-04T06:27:19.238+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:19.238+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: 
"config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:27:19.238+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.238+0000 D1 STORAGE [initandlisten] Registering collection config.locks with UUID 1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287 2019-09-04T06:27:19.238+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:19.238+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:27:19.238+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.238+0000 D3 STORAGE [initandlisten] looking up metadata for: config.migrations @ RecordId(23) 
2019-09-04T06:27:19.238+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:27:19.238+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.238+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/collection/75--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.239+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:config/collection/75--6194257481163143499 -> { numRecords: 0, dataSize: 0 } 2019-09-04T06:27:19.239+0000 D2 RECOVERY [initandlisten] Record store was empty; setting count metadata to zero but marking record store as needing size adjustment during recovery. 
ns: config.migrations, ident: config/collection/75--6194257481163143499 2019-09-04T06:27:19.239+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:config/collection/75--6194257481163143499 dirty, numRecords: 0, dataSize: 0, use_count: 3 2019-09-04T06:27:19.239+0000 D3 STORAGE [initandlisten] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:27:19.239+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:27:19.239+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.239+0000 D1 STORAGE [initandlisten] Registering collection config.migrations with UUID b8de9e4c-de38-4698-9ceb-7e686f580e61 2019-09-04T06:27:19.239+0000 D3 STORAGE [initandlisten] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:27:19.239+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:27:19.239+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 
00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.239+0000 D3 STORAGE [initandlisten] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:27:19.239+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:27:19.239+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.239+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/collection/38--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.239+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:config/collection/38--6194257481163143499 -> { numRecords: 1, dataSize: 124 } 2019-09-04T06:27:19.239+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:config/collection/38--6194257481163143499 dirty, numRecords: 1, dataSize: 124, use_count: 3 2019-09-04T06:27:19.239+0000 D3 STORAGE [initandlisten] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:27:19.239+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:27:19.239+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.239+0000 D1 STORAGE [initandlisten] Registering collection config.mongos with UUID 1734bd4e-af6d-441a-8751-93e269784617 2019-09-04T06:27:19.239+0000 D3 STORAGE [initandlisten] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:27:19.239+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { 
v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:27:19.239+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.239+0000 D3 STORAGE [initandlisten] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:27:19.240+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:27:19.240+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.240+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/collection/82--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.240+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:config/collection/82--6194257481163143499 -> { numRecords: 3, dataSize: 321 } 2019-09-04T06:27:19.240+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:config/collection/82--6194257481163143499 dirty, numRecords: 3, dataSize: 321, use_count: 3 2019-09-04T06:27:19.240+0000 D3 STORAGE [initandlisten] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:27:19.240+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { 
spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:27:19.240+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.240+0000 D1 STORAGE [initandlisten] Registering collection config.shards with UUID cc5f25a3-25cf-4a45-b674-6595d24d7e9a 2019-09-04T06:27:19.240+0000 D3 STORAGE [initandlisten] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:27:19.240+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:27:19.240+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.240+0000 D3 STORAGE [initandlisten] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:27:19.240+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: 
"config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:27:19.240+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.241+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/collection/71--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.241+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:config/collection/71--6194257481163143499 -> { numRecords: 0, dataSize: 0 } 2019-09-04T06:27:19.241+0000 D2 RECOVERY [initandlisten] Record store was empty; setting count metadata to zero but marking record store as needing size adjustment during recovery. ns: config.system.sessions, ident: config/collection/71--6194257481163143499 2019-09-04T06:27:19.241+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:config/collection/71--6194257481163143499 dirty, numRecords: 0, dataSize: 0, use_count: 3 2019-09-04T06:27:19.241+0000 D3 STORAGE [initandlisten] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:27:19.241+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:27:19.241+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.241+0000 D1 STORAGE [initandlisten] Registering collection config.system.sessions with UUID a6938268-0b91-476c-a0f3-aaac5e5117ed 2019-09-04T06:27:19.241+0000 D3 STORAGE [initandlisten] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:27:19.241+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:27:19.241+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, 
key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.241+0000 D3 STORAGE [initandlisten] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:27:19.241+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:27:19.241+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.241+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/collection/89--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.242+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:config/collection/89--6194257481163143499 -> { numRecords: 0, dataSize: 0 } 2019-09-04T06:27:19.242+0000 D2 RECOVERY [initandlisten] Record store was empty; setting count metadata to zero but marking record store as needing size adjustment during recovery. 
ns: config.tags, ident: config/collection/89--6194257481163143499 2019-09-04T06:27:19.242+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:config/collection/89--6194257481163143499 dirty, numRecords: 0, dataSize: 0, use_count: 3 2019-09-04T06:27:19.242+0000 D3 STORAGE [initandlisten] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:27:19.242+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:27:19.242+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.242+0000 D1 STORAGE [initandlisten] Registering collection config.tags with UUID f71519e3-c8e3-42c8-9579-254e000a6c18 2019-09-04T06:27:19.242+0000 D3 STORAGE [initandlisten] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:27:19.242+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: 
"_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:27:19.242+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.242+0000 D3 STORAGE [initandlisten] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:27:19.242+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:27:19.242+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.242+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/collection/34--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.243+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:config/collection/34--6194257481163143499 -> { numRecords: 0, dataSize: 0 } 2019-09-04T06:27:19.243+0000 D2 RECOVERY [initandlisten] Record store was empty; setting count metadata to zero but marking record store as needing size adjustment during recovery. 
ns: config.transactions, ident: config/collection/34--6194257481163143499 2019-09-04T06:27:19.243+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:config/collection/34--6194257481163143499 dirty, numRecords: 0, dataSize: 0, use_count: 3 2019-09-04T06:27:19.243+0000 D3 STORAGE [initandlisten] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:27:19.243+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:27:19.243+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.243+0000 D1 STORAGE [initandlisten] Registering collection config.transactions with UUID 1614ccf0-7860-48e4-ab95-3aaa4633e218 2019-09-04T06:27:19.243+0000 D3 STORAGE [initandlisten] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:27:19.243+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:27:19.243+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.243+0000 D3 STORAGE [initandlisten] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:27:19.243+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:27:19.243+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { 
spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.243+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/collection/50--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.244+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:config/collection/50--6194257481163143499 -> { numRecords: 1, dataSize: 83 } 2019-09-04T06:27:19.244+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:config/collection/50--6194257481163143499 dirty, numRecords: 1, dataSize: 83, use_count: 3 2019-09-04T06:27:19.244+0000 D3 STORAGE [initandlisten] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:27:19.244+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:27:19.244+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.244+0000 D1 STORAGE [initandlisten] Registering collection config.version with UUID 20d8341b-073f-4dea-b0c5-c2626b006feb 2019-09-04T06:27:19.244+0000 D3 STORAGE [initandlisten] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:27:19.244+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:27:19.244+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.244+0000 D3 STORAGE [initandlisten] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:19.244+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, 
autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:19.244+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:19.244+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:local/collection/16--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.245+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:local/collection/16--6194257481163143499 -> { numRecords: 1344, dataSize: 302971 } 2019-09-04T06:27:19.245+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:local/collection/16--6194257481163143499 dirty, numRecords: 1344, dataSize: 302971, use_count: 3 2019-09-04T06:27:19.245+0000 I STORAGE [initandlisten] Starting OplogTruncaterThread local.oplog.rs 2019-09-04T06:27:19.245+0000 I STORAGE [initandlisten] The size storer reports that the oplog contains 1344 records totaling to 302971 bytes 2019-09-04T06:27:19.245+0000 I STORAGE [initandlisten] Scanning the oplog to determine where to place markers for truncation 2019-09-04T06:27:19.245+0000 D1 COMMAND [WT-OplogTruncaterThread-local.oplog.rs] BackgroundJob starting: WT-OplogTruncaterThread-local.oplog.rs 2019-09-04T06:27:19.245+0000 D2 STORAGE [WT-OplogTruncaterThread-local.oplog.rs] no global storage engine yet 2019-09-04T06:27:19.245+0000 D2 STORAGE [initandlisten] Setting new oplogReadTimestamp: Timestamp(1567578428, 2) 2019-09-04T06:27:19.245+0000 D1 STORAGE [initandlisten] Setting oplog visibility at startup. Val: Timestamp(1567578428, 2) 2019-09-04T06:27:19.245+0000 D3 STORAGE [initandlisten] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:19.245+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:19.245+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:19.245+0000 D1 STORAGE [initandlisten] Registering collection local.oplog.rs with UUID b891bec6-9e37-4763-8f6c-c2ecc2c361d5 2019-09-04T06:27:19.245+0000 D3 STORAGE [initandlisten] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:19.245+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:19.245+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:19.245+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:27:19.245+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, 
key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:27:19.245+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.246+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:local/collection/6--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.246+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:local/collection/6--6194257481163143499 -> { numRecords: 1, dataSize: 60 } 2019-09-04T06:27:19.246+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:local/collection/6--6194257481163143499 dirty, numRecords: 1, dataSize: 60, use_count: 3 2019-09-04T06:27:19.246+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:27:19.246+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:27:19.246+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.246+0000 D1 STORAGE [initandlisten] Registering collection local.replset.election with UUID 0512231e-bb78-4048-95aa-63ea7eb6b5a5 2019-09-04T06:27:19.246+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:27:19.246+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:27:19.246+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: { uuid: 
UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.246+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:27:19.246+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:27:19.246+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.247+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:local/collection/4--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.247+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:local/collection/4--6194257481163143499 -> { numRecords: 1, dataSize: 45 } 2019-09-04T06:27:19.247+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:local/collection/4--6194257481163143499 dirty, numRecords: 1, dataSize: 45, use_count: 3 2019-09-04T06:27:19.247+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:27:19.247+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:27:19.247+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.247+0000 D1 STORAGE [initandlisten] Registering collection local.replset.minvalid with UUID e1f04497-1bed-46e1-b7a9-714cf5b1cd7b 2019-09-04T06:27:19.247+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:27:19.247+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", 
options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:27:19.247+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.247+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:27:19.247+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:27:19.247+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.247+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:local/collection/2--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.248+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:local/collection/2--6194257481163143499 -> { numRecords: 1, dataSize: 71 } 2019-09-04T06:27:19.248+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:local/collection/2--6194257481163143499 dirty, numRecords: 1, dataSize: 71, use_count: 3 2019-09-04T06:27:19.248+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:27:19.248+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:27:19.248+0000 D3 STORAGE [initandlisten] returning metadata: 
md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.248+0000 D1 STORAGE [initandlisten] Registering collection local.replset.oplogTruncateAfterPoint with UUID f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b 2019-09-04T06:27:19.248+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:27:19.248+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:27:19.248+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.248+0000 D3 STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:27:19.248+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:27:19.248+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.248+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:local/collection/0--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.249+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:local/collection/0--6194257481163143499 -> { numRecords: 1, dataSize: 2041 } 2019-09-04T06:27:19.249+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:local/collection/0--6194257481163143499 dirty, numRecords: 1, dataSize: 2041, use_count: 3 
2019-09-04T06:27:19.249+0000 D3 STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:27:19.249+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:27:19.249+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.249+0000 D1 STORAGE [initandlisten] Registering collection local.startup_log with UUID 4860912c-c555-4fe1-b1bb-e6281b586983 2019-09-04T06:27:19.249+0000 D3 STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:27:19.249+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:27:19.249+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.249+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:27:19.249+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:27:19.249+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, 
head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.249+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:local/collection/10--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.249+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:local/collection/10--6194257481163143499 -> { numRecords: 1, dataSize: 848 } 2019-09-04T06:27:19.250+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:local/collection/10--6194257481163143499 dirty, numRecords: 1, dataSize: 848, use_count: 3 2019-09-04T06:27:19.250+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:27:19.250+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:27:19.250+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.250+0000 D1 STORAGE [initandlisten] Registering collection local.system.replset with UUID 6518740c-6e6d-47d6-acc8-8f7aaf4d591e 2019-09-04T06:27:19.250+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:27:19.250+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:27:19.250+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.250+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:27:19.250+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) 
}, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:27:19.250+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.250+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:local/collection/8--6194257481163143499 ok range 1 -> 1 current: 1 2019-09-04T06:27:19.250+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::load table:local/collection/8--6194257481163143499 -> { numRecords: 1, dataSize: 41 } 2019-09-04T06:27:19.250+0000 D2 STORAGE [initandlisten] WiredTigerSizeStorer::store Marking table:local/collection/8--6194257481163143499 dirty, numRecords: 1, dataSize: 41, use_count: 3 2019-09-04T06:27:19.250+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:27:19.250+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:27:19.250+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.250+0000 D1 STORAGE [initandlisten] Registering collection local.system.rollback.id with UUID 1f10291c-f664-4c4a-a48a-a3c7297b837c 2019-09-04T06:27:19.250+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:27:19.250+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:27:19.250+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, 
ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.250+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 2 2019-09-04T06:27:19.250+0000 I STORAGE [initandlisten] Timestamp monitor starting 2019-09-04T06:27:19.250+0000 D2 - [initandlisten] Starting periodic job TimestampMonitor 2019-09-04T06:27:19.259+0000 D1 STORAGE [initandlisten] flushing directory /data/db 2019-09-04T06:27:19.260+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 4 2019-09-04T06:27:19.292+0000 D2 RECOVERY [initandlisten] Reconciling collection and index idents. 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, 
ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { 
spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: 
"config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], 
prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:19.292+0000 D3 STORAGE [initandlisten] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:27:19.293+0000 D3 STORAGE 
[initandlisten] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: 
true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
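Editor's note: the D3 STORAGE entries above dump, per collection, the catalog metadata kept in _mdb_catalog.wt: the namespace (ns), its WiredTiger table name (ident), the index specs, and the index table names (idxIdent). A comparable client-side view of the same collection options and index specs (though not the storage-internal idents) is available through listCollections and listIndexes. A minimal sketch with pymongo; the connection string is a placeholder assumption, not taken from this log:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder address, adjust to your deployment
    for info in client["local"].list_collections():
        # 'options' mirrors what the log prints as md.options (uuid, capped, size, ...)
        print(info["name"], info.get("options", {}))
        for idx in client["local"][info["name"]].list_indexes():
            # each index document mirrors the 'spec' blocks above (v, key, name)
            print("    index:", idx["name"], dict(idx["key"]))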
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.293+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:19.293+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:admin/index/18--6194257481163143499 ok range 6 -> 12 current: 8
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:19.294+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:admin/index/19--6194257481163143499 ok range 6 -> 12 current: 8
2019-09-04T06:27:19.294+0000 D1 STORAGE [initandlisten] admin.system.users: clearing plan cache - collection info cache reset
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.294+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:27:19.295+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:admin/index/21--6194257481163143499 ok range 6 -> 12 current: 8
2019-09-04T06:27:19.295+0000 D1 STORAGE [initandlisten] admin.system.version: clearing plan cache - collection info cache reset
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.295+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:27:19.295+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:admin/index/23--6194257481163143499 ok range 6 -> 12 current: 8
2019-09-04T06:27:19.296+0000 D1 STORAGE [initandlisten] admin.system.keys: clearing plan cache - collection info cache reset
2019-09-04T06:27:19.296+0000 D1 - [initandlisten] reloading view catalog for database admin
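Editor's note: the mix of D1, D2, and D3 severities above is governed by per-component log verbosity; these storage-layer traces appear only once the STORAGE component's debug level is 3 or higher. Verbosity can be changed at runtime through the logComponentVerbosity server parameter instead of restarting with a different config. A sketch, again with a placeholder connection string:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder address
    # Raise the storage component to debug level 3 (the level of the D3 lines above)
    client.admin.command({"setParameter": 1,
                          "logComponentVerbosity": {"storage": {"verbosity": 3}}})
    # Read the active settings back; -1 means "inherit from the parent component"
    out = client.admin.command({"getParameter": 1, "logComponentVerbosity": 1})
    print(out["logComponentVerbosity"]["storage"]["verbosity"])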
RecordId(11) 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.users @ RecordId(11) 
2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:27:19.296+0000 D3 STORAGE 
[initandlisten] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { 
spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: 
"_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:27:19.296+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/27--6194257481163143499 ok range 6 -> 12 current: 8 2019-09-04T06:27:19.296+0000 D1 STORAGE [initandlisten] config.changelog: clearing plan cache - collection info cache reset 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:27:19.296+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { 
2019-09-04T06:27:19.297+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/29--6194257481163143499 ok range 6 -> 12 current: 8
2019-09-04T06:27:19.297+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/31--6194257481163143499 ok range 6 -> 12 current: 8
2019-09-04T06:27:19.298+0000 D1 STORAGE [initandlisten] config.lockpings: clearing plan cache - collection info cache reset
2019-09-04T06:27:19.298+0000 D3 STORAGE [initandlisten] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:27:19.298+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:27:19.298+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.298+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/35--6194257481163143499 ok range 6 -> 12 current: 8
2019-09-04T06:27:19.298+0000 D1 STORAGE [initandlisten] config.transactions: clearing plan cache - collection info cache reset
2019-09-04T06:27:19.298+0000 D3 STORAGE [initandlisten] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:27:19.298+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:27:19.298+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.299+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/39--6194257481163143499 ok range 6 -> 12 current: 8
2019-09-04T06:27:19.299+0000 D1 STORAGE [initandlisten] config.mongos: clearing plan cache - collection info cache reset
2019-09-04T06:27:19.299+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:27:19.299+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:27:19.299+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.299+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:19.299+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:27:19.299+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.299+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:19.299+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 
1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:27:19.299+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.299+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:19.299+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:27:19.299+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), 
2019-09-04T06:27:19.299+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/43--6194257481163143499 ok range 6 -> 12 current: 8
2019-09-04T06:27:19.300+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:27:19.300+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false,
versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:27:19.300+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.300+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:19.300+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:27:19.300+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.300+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ 
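Each fetched CCE (durable catalog) record above pairs the collection metadata (md) with idxIdent, the map from index name to the WiredTiger table ident that backs it on disk. A minimal sketch, assuming the conventional layout where an ident such as "config/index/43--6194257481163143499" corresponds to "<ident>.wt" under the dbPath (an assumption about standard WiredTiger file naming implied by the directory-style idents, not something the log states; the helper and the "/data/db" path are illustrative, not mongod's own code):

from pathlib import Path

# Hypothetical helper: map a durable-catalog ident to the .wt file that
# should back it on disk. The "<ident>.wt under dbPath" layout is an
# assumption; the idents themselves are copied from the records above.
def ident_to_path(db_path: str, ident: str) -> Path:
    return Path(db_path) / (ident + ".wt")

idx_idents = {
    "ts_1": "config/index/43--6194257481163143499",
    "state_1_process_1": "config/index/45--6194257481163143499",
    "_id_": "config/index/47--6194257481163143499",
}

for name, ident in idx_idents.items():
    p = ident_to_path("/data/db", ident)  # "/data/db" is an assumed dbPath
    print(f"{name}: {p} exists={p.exists()}")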
2019-09-04T06:27:19.300+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:27:19.300+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/45--6194257481163143499 ok range 6 -> 12 current: 8
2019-09-04T06:27:19.300+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:27:19.300+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/47--6194257481163143499 ok range 6 -> 12 current: 8
2019-09-04T06:27:19.301+0000 D1 STORAGE [initandlisten] config.locks: clearing plan cache - collection info cache reset
2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, 
multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:27:19.301+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/51--6194257481163143499 ok range 6 -> 12 current: 8 2019-09-04T06:27:19.301+0000 D1 STORAGE [initandlisten] config.version: clearing plan cache - collection info cache reset 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] 
fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, 
key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.301+0000 D3 STORAGE [initandlisten] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:27:19.302+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/55--6194257481163143499 ok range 6 -> 12 current: 8 2019-09-04T06:27:19.302+0000 D1 STORAGE [initandlisten] config.collections: clearing plan cache - collection info cache reset 2019-09-04T06:27:19.302+0000 D3 STORAGE 
2019-09-04T06:27:19.302+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:19.302+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:27:19.302+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.302+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:19.302+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/59--6194257481163143499 ok range 6 -> 12 current: 12
up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, 
versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, 
multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", 
ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.303+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/62--6194257481163143499 ok range 6 -> 12 current: 12 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: 
"config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, 
versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, 
multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.303+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] looking up metadata for: 
config.chunks @ RecordId(21) 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.304+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/65--6194257481163143499 ok range 6 -> 12 current: 12 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: 
UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, 
multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, 
key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: 
"config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, 
2019-09-04T06:27:19.304+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:19.304+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/68--6194257481163143499 ok range 6 -> 12 current: 8
2019-09-04T06:27:19.305+0000 D1 STORAGE [initandlisten] config.chunks: clearing plan cache - collection info cache reset
2019-09-04T06:27:19.305+0000 D3 STORAGE [initandlisten] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:27:19.305+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:27:19.305+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.305+0000 D3 STORAGE [initandlisten] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:27:19.305+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/72--6194257481163143499 ok range 6 -> 12 current: 8
2019-09-04T06:27:19.305+0000 D1 STORAGE [initandlisten] config.system.sessions: clearing plan cache - collection info cache reset
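All of this D1-D3 tracing comes from elevated log verbosity; once the catalog checks look healthy it can be lowered at runtime, without touching the config file or restarting. A sketch using the standard logComponentVerbosity server parameter, again with a placeholder connection string:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://cfg-host:27019")  # placeholder host

# logComponentVerbosity is the runtime counterpart of the systemLog
# verbosity settings; 0 restores default info-level logging, silencing
# the D1-D5 storage traces above.
client.admin.command(
    "setParameter", 1,
    logComponentVerbosity={"verbosity": 0, "storage": {"verbosity": 0}},
)
print(client.admin.command("getParameter", 1, logComponentVerbosity=1))
```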
2019-09-04T06:27:19.305+0000 D3 STORAGE [initandlisten] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:27:19.305+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:27:19.305+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.306+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/76--6194257481163143499 ok range 6 -> 12 current: 12
2019-09-04T06:27:19.306+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/79--6194257481163143499 ok range 6 -> 12 current: 8
2019-09-04T06:27:19.306+0000 D1 STORAGE [initandlisten] config.migrations: clearing plan cache - collection info cache reset
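These D1-D3 STORAGE messages are only emitted because this node runs with systemLog verbosity 5 (and network component verbosity 2); at the default verbosity 0 none of them appear. The level can also be lowered at runtime, without a restart, through the standard logComponentVerbosity server parameter. A hedged sketch with PyMongo (connection details are assumptions, as before):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019", directConnection=True)

    # Reset the global verbosity and the STORAGE component to 0 so the
    # repeated metadata lookup messages stop being written to the log.
    client.admin.command({
        "setParameter": 1,
        "logComponentVerbosity": {"verbosity": 0, "storage": {"verbosity": 0}},
    })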
2019-09-04T06:27:19.306+0000 D3 STORAGE [initandlisten] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:27:19.306+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:27:19.306+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.307+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/83--6194257481163143499 ok range 6 -> 12 current: 12
2019-09-04T06:27:19.307+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/86--6194257481163143499 ok range 6 -> 12 current: 8
2019-09-04T06:27:19.308+0000 D1 STORAGE [initandlisten] config.shards: clearing plan cache - collection info cache reset
2019-09-04T06:27:19.308+0000 D3 STORAGE [initandlisten] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:27:19.308+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
2019-09-04T06:27:19.308+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.308+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/90--6194257481163143499 ok range 6 -> 12 current: 12
2019-09-04T06:27:19.309+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/93--6194257481163143499 ok range 6 -> 12 current: 8
metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: 
"config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { 
ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:27:19.309+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:config/index/95--6194257481163143499 ok range 6 -> 12 current: 8 2019-09-04T06:27:19.309+0000 D1 STORAGE [initandlisten] config.tags: clearing plan cache - collection info cache reset 2019-09-04T06:27:19.309+0000 D1 - [initandlisten] reloading view catalog for database config 2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] looking up metadata for: config.changelog @ RecordId(14) 
2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" }
2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" }
2019-09-04T06:27:19.309+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, 
multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, 
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
"config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, 
indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: 
BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: 
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false,
multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: 
false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: 
BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: { 
uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: 
{ _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.310+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:27:19.311+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:local/index/7--6194257481163143499 ok range 6 -> 12 current: 8 2019-09-04T06:27:19.311+0000 D1 STORAGE [initandlisten] local.replset.election: clearing plan cache - collection info cache reset 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 
}, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: 
"local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: 
"local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:27:19.311+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:local/index/9--6194257481163143499 ok range 6 -> 12 current: 8 2019-09-04T06:27:19.311+0000 D1 STORAGE [initandlisten] local.system.rollback.id: clearing plan cache - collection info cache reset 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: 
"local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:27:19.311+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { 
ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:27:19.312+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:local/index/1--6194257481163143499 ok range 6 -> 12 current: 8 2019-09-04T06:27:19.312+0000 D1 STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
2019-09-04T06:27:19.312+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:27:19.312+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:local/index/11--6194257481163143499 ok range 6 -> 12 current: 8 2019-09-04T06:27:19.313+0000 D1 STORAGE [initandlisten] local.system.replset: clearing plan cache - collection info cache reset 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:19.313+0000 D1 STORAGE [initandlisten] local.oplog.rs: clearing plan cache - collection info cache reset 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, 
prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } 
], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:27:19.313+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:local/index/5--6194257481163143499 ok range 6 -> 12 current: 8 2019-09-04T06:27:19.313+0000 D1 STORAGE [initandlisten] local.replset.minvalid: clearing plan cache - collection info cache reset 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] fetched CCE 
metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" 
}, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.313+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:27:19.314+0000 D2 STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:local/index/3--6194257481163143499 ok range 6 -> 12 current: 8 2019-09-04T06:27:19.314+0000 D1 STORAGE [initandlisten] local.replset.oplogTruncateAfterPoint: clearing plan cache - collection info cache reset 2019-09-04T06:27:19.314+0000 D1 - [initandlisten] reloading view catalog for database local 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, 
multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 
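The D3 STORAGE entries above are the startup catalog pass over the local database: for each collection, the _mdb_catalog row yields the namespace, its UUID, any capped options (e.g. local.startup_log, capped at 10485760 bytes), and the WiredTiger idents for the collection and its _id index. A client can see the same name/options/UUID data through listCollections; a minimal sketch, assuming this node is reachable with PyMongo 3.12+ and no auth (authorization is disabled in this config):

```python
from pymongo import MongoClient

# Host/port taken from this log; directConnection avoids replset discovery.
client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)

# listCollections reports the same ns/options/UUID the catalog entries print.
for coll in client["local"].list_collections():
    print(coll["name"], coll.get("options", {}), coll.get("info", {}))
```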
2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:27:19.314+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.314+0000 D1 STORAGE [initandlisten] not reading at last-applied because the PBWM lock is held 2019-09-04T06:27:19.314+0000 D2 CONNPOOL [initandlisten] Controller for NetworkInterfaceTL-CollectionRangeDeleter-TaskExecutor is LimitController 2019-09-04T06:27:19.314+0000 I SHARDING [initandlisten] Marking collection local.system.replset as collection version: 2019-09-04T06:27:19.314+0000 D2 ASIO [CollectionRangeDeleter-TaskExecutor] The NetworkInterfaceTL reactor thread is spinning up 2019-09-04T06:27:19.314+0000 D1 STORAGE [initandlisten] Recovering database: admin 2019-09-04T06:27:19.314+0000 D2 QUERY [initandlisten] Using idhack: query: { _id: "featureCompatibilityVersion" } sort: {} projection: {} 2019-09-04T06:27:19.315+0000 D1 STORAGE [initandlisten] Recovering database: config 2019-09-04T06:27:19.315+0000 D1 STORAGE [initandlisten] Recovering database: local 
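"Recovering database: admin" is where startup reads the featureCompatibilityVersion document, using the idhack plan shown above (an _id point lookup needs no query planning). A sketch of the equivalent client-side reads, under the same no-auth assumption; getParameter is the supported way to inspect the effective FCV:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)

# Same _id point lookup the "Using idhack" entry logs.
print(client.admin["system.version"].find_one({"_id": "featureCompatibilityVersion"}))
# Supported command form for the same information.
print(client.admin.command({"getParameter": 1, "featureCompatibilityVersion": 1}))
```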
2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.315+0000 D3 STORAGE 
[initandlisten] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], 
prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:19.315+0000 D3 REPL [initandlisten] No initial sync flag set, returning initial sync flag value of false. 2019-09-04T06:27:19.315+0000 D1 STORAGE [initandlisten] done repairDatabases 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 4 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 5 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 5 2019-09-04T06:27:19.315+0000 D3 REPL [initandlisten] No initial sync flag set, returning initial sync flag value of false. 2019-09-04T06:27:19.315+0000 I STORAGE [initandlisten] Flow Control is enabled on this deployment. 2019-09-04T06:27:19.315+0000 D2 COMMAND [initandlisten] run command admin.$cmd { find: "system.roles", $db: "admin" } 2019-09-04T06:27:19.315+0000 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version: 2019-09-04T06:27:19.315+0000 D2 QUERY [initandlisten] Collection admin.system.roles does not exist. Using EOF plan: query: {} sort: {} projection: {} 2019-09-04T06:27:19.315+0000 I COMMAND [initandlisten] command admin.system.roles command: find { find: "system.roles", $db: "admin" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:321 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 142, timeReadingMicros: 2 } } protocol:op_msg 0ms 2019-09-04T06:27:19.315+0000 D1 ACCESS [initandlisten] There were no users to pin, not starting tracker thread 2019-09-04T06:27:19.315+0000 D2 ACCESS [initandlisten] Invalidating user cache 2019-09-04T06:27:19.315+0000 D4 - [initandlisten] Taking ticket. 
Available: 1000000000 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:19.315+0000 I SHARDING [initandlisten] Marking collection admin.system.version as collection version: 2019-09-04T06:27:19.315+0000 D2 QUERY [initandlisten] Using idhack: query: { _id: "authSchema" } sort: {} projection: {} 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 6 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 6 2019-09-04T06:27:19.315+0000 D2 QUERY [initandlisten] Using idhack: query: { _id: "shardIdentity" } sort: {} projection: {} 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 7 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 7 2019-09-04T06:27:19.315+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 8 2019-09-04T06:27:19.316+0000 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: 2019-09-04T06:27:19.316+0000 D3 STORAGE [initandlisten] WT commit_transaction for snapshot id 8 2019-09-04T06:27:19.316+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data' 2019-09-04T06:27:19.316+0000 D2 CONNPOOL [initandlisten] Controller for NetworkInterfaceTL-FreeMonNet is LimitController 2019-09-04T06:27:19.316+0000 D2 ASIO [FreeMonNet] The NetworkInterfaceTL reactor thread is spinning up 2019-09-04T06:27:19.316+0000 D1 EXECUTOR [FreeMonHTTP-0] starting thread in pool FreeMonHTTP 2019-09-04T06:27:19.316+0000 D3 EXECUTOR [FreeMonHTTP-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1 2019-09-04T06:27:19.317+0000 D2 CONNPOOL [initandlisten] Controller for NetworkInterfaceTL-ShardRegistry is LimitController 2019-09-04T06:27:19.317+0000 D3 STORAGE [FreeMonProcessor] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:19.317+0000 D2 ASIO [ShardRegistry] The NetworkInterfaceTL reactor thread is spinning up 2019-09-04T06:27:19.317+0000 D2 CONNPOOL [initandlisten] Controller for NetworkInterfaceTL-TaskExecutorPool-0 is ShardingTaskExecutorPoolController 2019-09-04T06:27:19.317+0000 D3 STORAGE [FreeMonProcessor] WT begin_transaction for snapshot id 11 2019-09-04T06:27:19.317+0000 D3 SHARDING [initandlisten] Adding shard config, with CS 2019-09-04T06:27:19.317+0000 D1 SHARDING [initandlisten] Starting up task executor for periodic reloading of ShardRegistry 2019-09-04T06:27:19.317+0000 D1 EXECUTOR [Sharding-Fixed-0] starting thread in pool Sharding-Fixed 2019-09-04T06:27:19.317+0000 D3 EXECUTOR [Sharding-Fixed-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1 2019-09-04T06:27:19.317+0000 D2 CONNPOOL [initandlisten] Controller for NetworkInterfaceTL-ShardRegistryUpdater is LimitController 2019-09-04T06:27:19.317+0000 D3 STORAGE [FreeMonProcessor] WT rollback_transaction for snapshot id 11 2019-09-04T06:27:19.317+0000 D3 STORAGE [FreeMonProcessor] WT begin_transaction for snapshot id 12 2019-09-04T06:27:19.317+0000 D3 STORAGE [FreeMonProcessor] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:19.317+0000 D3 STORAGE [FreeMonProcessor] WT rollback_transaction for snapshot id 12 2019-09-04T06:27:19.317+0000 D3 STORAGE [FreeMonProcessor] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:19.317+0000 D2 CONNPOOL [initandlisten] 
Controller for NetworkInterfaceTL-AddShard-TaskExecutor is LimitController 2019-09-04T06:27:19.317+0000 D2 ASIO [TaskExecutorPool-0] The NetworkInterfaceTL reactor thread is spinning up 2019-09-04T06:27:19.317+0000 D4 - [initandlisten] Taking ticket. Available: 999999999 2019-09-04T06:27:19.317+0000 D1 - [FreeMonProcessor] User Assertion: NotYetInitialized: no replset config has been received src/mongo/db/repl/repl_set_commands.cpp 187 2019-09-04T06:27:19.317+0000 D2 CONNPOOL [initandlisten] Controller for NetworkInterfaceTL-Replication is LimitController 2019-09-04T06:27:19.317+0000 D2 ASIO [ShardRegistryUpdater] The NetworkInterfaceTL reactor thread is spinning up 2019-09-04T06:27:19.317+0000 D1 SHARDING [ShardRegistryUpdater] Reloading shardRegistry 2019-09-04T06:27:19.317+0000 I SHARDING [thread1] creating distributed lock ping thread for process ConfigServer (sleeping for 30000ms) 2019-09-04T06:27:19.317+0000 D1 EXECUTOR [replexec-0] starting thread in pool replexec 2019-09-04T06:27:19.317+0000 D3 EXECUTOR [replexec-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1 2019-09-04T06:27:19.317+0000 D4 - [initandlisten] Taking ticket. Available: 999999998 2019-09-04T06:27:19.317+0000 D4 - [initandlisten] Taking ticket. Available: 999999997 2019-09-04T06:27:19.317+0000 D3 REPL [initandlisten] Initializing minValid document 2019-09-04T06:27:19.317+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578439317) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } 2019-09-04T06:27:19.317+0000 D4 - [initandlisten] Taking ticket. Available: 999999996 2019-09-04T06:27:19.317+0000 D4 - [replSetDistLockPinger] Taking ticket. Available: 999999995 2019-09-04T06:27:19.317+0000 D5 QUERY [initandlisten] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:27:19.317+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178 2019-09-04T06:27:19.317+0000 D2 ASIO [Replication] The NetworkInterfaceTL reactor thread is spinning up 2019-09-04T06:27:19.317+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:27:19.317+0000 W - [FreeMonProcessor] DBException thrown :: caused by :: NotYetInitialized: no replset config has been received 2019-09-04T06:27:19.317+0000 D3 STORAGE [monitoring-keys-for-HMAC] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:27:19.317+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:27:19.317+0000 D1 - [shard-registry-reload] User Assertion: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible. src/mongo/s/client/shard_registry.cpp 400 2019-09-04T06:27:19.317+0000 W - [shard-registry-reload] DBException thrown :: caused by :: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible. 
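The NotMaster, NotYetInitialized, and ReadConcernMajorityNotAvailableYet assertions above are all expected this early in startup: the dist-lock pinger, free-monitoring collector, and shard-registry reload fire before the node has loaded a replica-set config or can serve majority reads, and each simply retries later. On the wire these surface as ordinary command errors; a sketch of what a driver would see if it asked too early (the error name matches the FreeMonProcessor assertion logged above):

```python
from pymongo import MongoClient
from pymongo.errors import OperationFailure

client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
try:
    client.admin.command("replSetGetConfig")
except OperationFailure as exc:
    # On a node with no replset config yet this is NotYetInitialized.
    print(exc.code, (exc.details or {}).get("codeName"))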
2019-09-04T06:27:19.318+0000 D2 ASIO [AddShard-TaskExecutor] The NetworkInterfaceTL reactor thread is spinning up 2019-09-04T06:27:19.317+0000 D5 QUERY [initandlisten] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:19.319+0000 D5 QUERY [initandlisten] Rated tree: $and 2019-09-04T06:27:19.319+0000 D5 QUERY [initandlisten] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:19.319+0000 D5 QUERY [initandlisten] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:19.319+0000 D2 QUERY [initandlisten] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:27:19.319+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 10 2019-09-04T06:27:19.319+0000 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version: 2019-09-04T06:27:19.319+0000 D3 STORAGE [initandlisten] WT commit_transaction for snapshot id 10 2019-09-04T06:27:19.319+0000 D4 - [initandlisten] Taking ticket. Available: 999999994 2019-09-04T06:27:19.319+0000 D4 - [initandlisten] Taking ticket. Available: 999999993 2019-09-04T06:27:19.319+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 19 2019-09-04T06:27:19.319+0000 D1 STORAGE [initandlisten] not reading at last-applied because the PBWM lock is held 2019-09-04T06:27:19.319+0000 I SHARDING [initandlisten] Marking collection local.replset.election as collection version: 2019-09-04T06:27:19.319+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 19 2019-09-04T06:27:19.319+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 20 2019-09-04T06:27:19.319+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 20 2019-09-04T06:27:19.319+0000 I REPL [initandlisten] Did not find local initialized voted for document at startup. 2019-09-04T06:27:19.319+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 21 2019-09-04T06:27:19.319+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 21 2019-09-04T06:27:19.319+0000 I REPL [initandlisten] Rollback ID is 1 2019-09-04T06:27:19.319+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 22 2019-09-04T06:27:19.319+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 22 2019-09-04T06:27:19.319+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 23 2019-09-04T06:27:19.319+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 23 2019-09-04T06:27:19.319+0000 D3 REPL [initandlisten] No initial sync flag set, returning initial sync flag value of false. 
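The D5 QUERY trace shows the planner on the bootstrap read of local.replset.minvalid: the only index is _id_, no predicate maps onto it, so zero indexed solutions are produced and a COLLSCAN is output, run, and not cached ("Only one plan is available"). The same planSummary can be recovered from any query via explain; a minimal sketch:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)

# An unfiltered find on a collection whose only index is _id_ produces a
# COLLSCAN winning plan, matching the planner trace above.
plan = client["local"]["replset.minvalid"].find().explain()
print(plan["queryPlanner"]["winningPlan"]["stage"])  # expect "COLLSCAN"
```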
2019-09-04T06:27:19.319+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 24 2019-09-04T06:27:19.319+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 24 2019-09-04T06:27:19.319+0000 D3 REPL [initandlisten] returning oplog truncate after point: Timestamp(0, 0) 2019-09-04T06:27:19.319+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 25 2019-09-04T06:27:19.319+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 25 2019-09-04T06:27:19.320+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 26 2019-09-04T06:27:19.320+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 26 2019-09-04T06:27:19.320+0000 D3 REPL [initandlisten] No appliedThrough OpTime set, returning empty appliedThrough OpTime. 2019-09-04T06:27:19.320+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 27 2019-09-04T06:27:19.320+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 27 2019-09-04T06:27:19.320+0000 D3 REPL [initandlisten] returning oplog truncate after point: Timestamp(0, 0) 2019-09-04T06:27:19.320+0000 I REPL [initandlisten] Recovering from stable timestamp: Timestamp(1567578428, 2) (top of oplog: { ts: Timestamp(1567578428, 2), t: 1 }, appliedThrough: { ts: Timestamp(0, 0), t: -1 }, TruncateAfter: Timestamp(0, 0)) 2019-09-04T06:27:19.320+0000 I REPL [initandlisten] Starting recovery oplog application at the stable timestamp: Timestamp(1567578428, 2) 2019-09-04T06:27:19.320+0000 I REPL [initandlisten] No oplog entries to apply for recovery. Start point is at the top of the oplog. 2019-09-04T06:27:19.320+0000 D3 STORAGE [initandlisten] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:19.320+0000 D2 COMMAND [initandlisten] run command config.$cmd { find: "transactions", filter: { state: "prepared" }, $db: "config" } 2019-09-04T06:27:19.320+0000 I SHARDING [initandlisten] Marking collection config.transactions as collection version: 2019-09-04T06:27:19.320+0000 D5 QUERY [initandlisten] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.transactionsTree: state $eq "prepared" Sort: {} Proj: {} ============================= 2019-09-04T06:27:19.320+0000 D5 QUERY [initandlisten] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" } 2019-09-04T06:27:19.320+0000 D5 QUERY [initandlisten] Predicate over field 'state' 2019-09-04T06:27:19.320+0000 D5 QUERY [initandlisten] Rated tree: state $eq "prepared" || First: notFirst: full path: state 2019-09-04T06:27:19.320+0000 D5 QUERY [initandlisten] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:19.320+0000 D5 QUERY [initandlisten] Planner: outputting a collscan: COLLSCAN ---ns = config.transactions ---filter = state $eq "prepared" ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:19.320+0000 D2 QUERY [initandlisten] Only one plan is available; it will be run but will not be cached. 
query: { state: "prepared" } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:27:19.320+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 28 2019-09-04T06:27:19.320+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 28 2019-09-04T06:27:19.320+0000 I COMMAND [initandlisten] command config.transactions command: find { find: "transactions", filter: { state: "prepared" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 queryHash:C681E5F8 planCacheKey:C681E5F8 reslen:322 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 7 } storage:{ data: { bytesRead: 360, timeReadingMicros: 15 } } protocol:op_msg 0ms 2019-09-04T06:27:19.320+0000 D3 STORAGE [initandlisten] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:19.320+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 29 2019-09-04T06:27:19.320+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 29 2019-09-04T06:27:19.320+0000 D3 REPL [initandlisten] No initial sync flag set, returning initial sync flag value of false. 2019-09-04T06:27:19.320+0000 D3 STORAGE [initandlisten] WT begin_transaction for snapshot id 30 2019-09-04T06:27:19.320+0000 D3 STORAGE [initandlisten] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:19.320+0000 I SHARDING [initandlisten] Marking collection local.oplog.rs as collection version: 2019-09-04T06:27:19.320+0000 D3 STORAGE [initandlisten] WT rollback_transaction for snapshot id 30 2019-09-04T06:27:19.320+0000 D2 - [initandlisten] Starting periodic job startPeriodicThreadToAbortExpiredTransactions 2019-09-04T06:27:19.320+0000 D1 COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor 2019-09-04T06:27:19.320+0000 D1 COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor 2019-09-04T06:27:19.320+0000 D1 COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner 2019-09-04T06:27:19.322+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:19.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions. 
2019-09-04T06:27:19.329+0000 D2 - [initandlisten] Starting periodic job startPeriodicThreadToDecreaseSnapshotHistoryCachePressure 2019-09-04T06:27:19.329+0000 D2 NETWORK [replexec-0] getBoundAddrs(): [ 127.0.0.1] [ 10.108.2.33] 2019-09-04T06:27:19.329+0000 D2 - [initandlisten] Starting periodic job LogicalSessionCacheRefresh 2019-09-04T06:27:19.329+0000 I - [FreeMonProcessor] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c347d3 0x561749c42521 0x561749f29e24 0x56174b1164a5 0x56174a11522d 0x56174a140b54 0x56174a124498 0x56174a12addb 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CAC7D3","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"FA1E24","s":"_ZN5mongo4repl19CmdReplSetGetConfig3runEPNS_16OperationContextERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjERNS_14BSONObjBuilderE"},{"b":"561748F88000","o":"218E4A5","s":"_ZN5mongo14CommandHelpers18runCommandDirectlyEPNS_16OperationContextERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"118D22D","s":"_ZN5mongo34FTDCSimpleInternalCommandCollector7collectEPNS_16OperationContextERNS_14BSONObjBuilderE"},{"b":"561748F88000","o":"11B8B54","s":"_ZN5mongo23FTDCCollectorCollection7collectEPNS_6ClientE"},{"b":"561748F88000","o":"119C498","s":"_ZN5mongo16FreeMonProcessor16doMetricsCollectEPNS_6ClientE"},{"b":"561748F88000","o":"11A2DDB","s":"_ZN5mongo16FreeMonProcessor3runEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : 
"/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) 
[0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x2AD1) [0x561749c347d3] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZN5mongo4repl19CmdReplSetGetConfig3runEPNS_16OperationContextERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjERNS_14BSONObjBuilderE+0x154) [0x561749f29e24] mongod(_ZN5mongo14CommandHelpers18runCommandDirectlyEPNS_16OperationContextERKNS_12OpMsgRequestE+0x375) [0x56174b1164a5] mongod(_ZN5mongo34FTDCSimpleInternalCommandCollector7collectEPNS_16OperationContextERNS_14BSONObjBuilderE+0x2D) [0x56174a11522d] mongod(_ZN5mongo23FTDCCollectorCollection7collectEPNS_6ClientE+0x3D4) [0x56174a140b54] mongod(_ZN5mongo16FreeMonProcessor16doMetricsCollectEPNS_6ClientE+0x48) [0x56174a124498] mongod(_ZN5mongo16FreeMonProcessor3runEv+0x24B) [0x56174a12addb] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:27:19.330+0000 D1 SH_REFR [LogicalSessionCacheRefresh] Refreshing cached database entry for config; current cached database info is {} 2019-09-04T06:27:19.330+0000 D2 - [initandlisten] Starting periodic job LogicalSessionCacheReap 2019-09-04T06:27:19.331+0000 D2 NETWORK [replexec-0] getAddrsForHost("cmodb802.togewa.com:27019"): [ 10.108.2.32] 2019-09-04T06:27:19.331+0000 D1 EXECUTOR [ConfigServerCatalogCacheLoader-0] starting thread in pool ConfigServerCatalogCacheLoader 2019-09-04T06:27:19.331+0000 D3 EXECUTOR [ConfigServerCatalogCacheLoader-0] Executing a task on behalf of pool ConfigServerCatalogCacheLoader 2019-09-04T06:27:19.331+0000 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database config from version {} to version { uuid: UUID("8a5b0d4a-e9ce-4539-b338-ab8116f5c341"), lastMod: 0 } took 1 ms 2019-09-04T06:27:19.331+0000 D3 EXECUTOR [ConfigServerCatalogCacheLoader-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:49.331+0000 2019-09-04T06:27:19.331+0000 D3 STORAGE [LogicalSessionCacheRefresh] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:27:19.331+0000 D1 - [LogicalSessionCacheRefresh] User Assertion: ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible. src/mongo/s/catalog_cache.cpp 162 2019-09-04T06:27:19.331+0000 W - [LogicalSessionCacheRefresh] DBException thrown :: caused by :: ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible. 
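The BEGIN/END BACKTRACE block is mongod tracing the handled NotYetInitialized exception from the free-monitoring collector (traceAllExceptions is on in this config), not a crash: each frame carries a module base "b", an offset "o", and an Itanium-mangled symbol "s". Piping the "s" fields through c++filt makes the call chain readable; for example, using two frames copied from the trace above (assumes binutils' c++filt is on PATH):

```python
import subprocess

# Two mangled symbols taken verbatim from the backtrace above.
symbols = [
    "_ZN5mongo16FreeMonProcessor16doMetricsCollectEPNS_6ClientE",
    "_ZN5mongo4repl19CmdReplSetGetConfig3runEPNS_16OperationContextERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjERNS_14BSONObjBuilderE",
]
out = subprocess.run(["c++filt"], input="\n".join(symbols),
                     capture_output=True, text=True, check=True)
print(out.stdout)
# -> mongo::FreeMonProcessor::doMetricsCollect(mongo::Client*)
# -> mongo::repl::CmdReplSetGetConfig::run(...)
```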
2019-09-04T06:27:19.333+0000 I NETWORK [initandlisten] Listening on /tmp/mongodb-27019.sock 2019-09-04T06:27:19.333+0000 I NETWORK [initandlisten] Listening on 0.0.0.0 2019-09-04T06:27:19.333+0000 I NETWORK [initandlisten] waiting for connections on port 27019 2019-09-04T06:27:19.333+0000 D1 NETWORK [replexec-0] connected to server cmodb802.togewa.com:27019 2019-09-04T06:27:19.333+0000 D2 NETWORK [replexec-0] getBoundAddrs(): [ 127.0.0.1] [ 10.108.2.33] 2019-09-04T06:27:19.333+0000 D2 NETWORK [replexec-0] getAddrsForHost("cmodb803.togewa.com:27019"): [ 10.108.2.33] 2019-09-04T06:27:19.334+0000 D2 NETWORK [replexec-0] getBoundAddrs(): [ 127.0.0.1] [ 10.108.2.33] 2019-09-04T06:27:19.334+0000 D2 NETWORK [replexec-0] getAddrsForHost("cmodb804.togewa.com:27019"): [ 10.108.2.34] 2019-09-04T06:27:19.334+0000 D1 NETWORK [replexec-0] connected to server cmodb804.togewa.com:27019 2019-09-04T06:27:19.334+0000 D3 STORAGE [replexec-0] WT begin_transaction for snapshot id 38 2019-09-04T06:27:19.334+0000 D3 STORAGE [replexec-0] WT rollback_transaction for snapshot id 38 2019-09-04T06:27:19.334+0000 D3 REPL [replexec-0] returning minvalid: { ts: Timestamp(1567578428, 2), t: 1 }({ ts: Timestamp(1567578428, 2), t: 1 }) 2019-09-04T06:27:19.334+0000 D2 REPL_HB [replexec-0] Cancelling all heartbeats. 2019-09-04T06:27:19.334+0000 D1 REPL [replexec-0] Updated term in topology coordinator to 0 due to new config 2019-09-04T06:27:19.334+0000 I REPL [replexec-0] New replica set config in use: { _id: "configrs", version: 2, configsvr: true, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "cmodb802.togewa.com:27019", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 2.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "cmodb803.togewa.com:27019", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "cmodb804.togewa.com:27019", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9') } } 2019-09-04T06:27:19.334+0000 I REPL [replexec-0] This node is cmodb803.togewa.com:27019 in the config 2019-09-04T06:27:19.334+0000 I REPL [replexec-0] transition to STARTUP2 from STARTUP 2019-09-04T06:27:19.334+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:19.334Z 2019-09-04T06:27:19.334+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:19.334Z 2019-09-04T06:27:19.334+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:27:19.334+0000 2019-09-04T06:27:19.334+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:27:19.334+0000 2019-09-04T06:27:19.334+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:27:19.334+0000 2019-09-04T06:27:19.334+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:27:29.334+0000 2019-09-04T06:27:19.334+0000 D1 REPL [replexec-0] Updating term from 0 to 1 2019-09-04T06:27:19.334+0000 D1 REPL [replexec-0] Current term is now 1 2019-09-04T06:27:19.334+0000 I REPL [replexec-0] Starting replication storage threads 2019-09-04T06:27:19.334+0000 D1 EXECUTOR [replexec-1] starting thread 
in pool replexec 2019-09-04T06:27:19.334+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:19.335+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 1) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:19.335+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 1 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:27:29.335+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:19.335+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:19.335+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 2) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:19.335+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 2 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:29.335+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:19.335+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:49.334+0000 2019-09-04T06:27:19.335+0000 I CONNPOOL [Replication] Connecting to cmodb802.togewa.com:27019 2019-09-04T06:27:19.335+0000 D2 ASIO [Replication] Finished connection setup. 2019-09-04T06:27:19.335+0000 I CONNPOOL [Replication] Connecting to cmodb804.togewa.com:27019 2019-09-04T06:27:19.335+0000 D2 ASIO [Replication] Finished connection setup. 2019-09-04T06:27:19.335+0000 D1 EXECUTOR [replexec-2] starting thread in pool replexec 2019-09-04T06:27:19.335+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:27:49.334+0000 2019-09-04T06:27:19.335+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:19.335+0000 D2 CONNPOOL [replexec-0] Controller for NetworkInterfaceTL-RS is LimitController 2019-09-04T06:27:19.335+0000 D2 CONNPOOL [replexec-0] Controller for NetworkInterfaceTL-RS is LimitController 2019-09-04T06:27:19.335+0000 D3 STORAGE [replexec-0] WT begin_transaction for snapshot id 39 2019-09-04T06:27:19.335+0000 D3 STORAGE [replexec-0] WT rollback_transaction for snapshot id 39 2019-09-04T06:27:19.335+0000 D3 REPL [replexec-0] No initial sync flag set, returning initial sync flag value of false. 
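Requests 1 and 2 are the first replSetHeartbeat round to the other two config servers; their responses, which follow, report cmodb802 as primary (state: 1) and cmodb804 as secondary (state: 2), and that is what drives the state transitions in the next entries. The aggregate view of the same member data is replSetGetStatus; a sketch:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)

# Summarize what the heartbeat responses below convey: each member's
# replica-set state as this node currently sees it.
status = client.admin.command("replSetGetStatus")
for m in status["members"]:
    print(m["name"], m["stateStr"])
```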
2019-09-04T06:27:19.335+0000 I REPL [replexec-0] transition to RECOVERING from STARTUP2
2019-09-04T06:27:19.335+0000 I REPL [replexec-0] Starting replication fetcher thread
2019-09-04T06:27:19.335+0000 I REPL [replexec-0] Starting replication applier thread
2019-09-04T06:27:19.335+0000 I REPL [replexec-0] Starting replication reporter thread
2019-09-04T06:27:19.335+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:49.334+0000
2019-09-04T06:27:19.335+0000 D1 EXECUTOR [repl-writer-worker-15] starting thread in pool repl writer worker Pool
2019-09-04T06:27:19.335+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:19.336+0000 D1 EXECUTOR [repl-writer-worker-5] starting thread in pool repl writer worker Pool
2019-09-04T06:27:19.336+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:19.336+0000 D1 EXECUTOR [repl-writer-worker-7] starting thread in pool repl writer worker Pool
2019-09-04T06:27:19.336+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:19.336+0000 D1 EXECUTOR [rsSync-0] starting thread in pool rsSync
2019-09-04T06:27:19.336+0000 D3 EXECUTOR [rsSync-0] Executing a task on behalf of pool rsSync
2019-09-04T06:27:19.336+0000 I REPL [rsSync-0] Starting oplog application
2019-09-04T06:27:19.336+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 41
2019-09-04T06:27:19.336+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 41
2019-09-04T06:27:19.336+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578428, 2), t: 1 }({ ts: Timestamp(1567578428, 2), t: 1 })
2019-09-04T06:27:19.336+0000 D1 EXECUTOR [replication-0] starting thread in pool replication
2019-09-04T06:27:19.336+0000 D3 EXECUTOR [replication-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1
2019-09-04T06:27:19.336+0000 D1 EXECUTOR [repl-writer-worker-1] starting thread in pool repl writer worker Pool
2019-09-04T06:27:19.336+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:19.336+0000 D1 EXECUTOR [repl-writer-worker-9] starting thread in pool repl writer worker Pool
2019-09-04T06:27:19.336+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:19.336+0000 D1 EXECUTOR [repl-writer-worker-11] starting thread in pool repl writer worker Pool
2019-09-04T06:27:19.336+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:19.336+0000 D1 EXECUTOR [repl-writer-worker-13] starting thread in pool repl writer worker Pool
2019-09-04T06:27:19.336+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:19.336+0000 D1 EXECUTOR [repl-writer-worker-3] starting thread in pool repl writer worker Pool
2019-09-04T06:27:19.336+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:19.336+0000 D2 ASIO [Replication] Request 1 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:19.336+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:27:19.336+0000 D2 ASIO [Replication] Request 2 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:19.336+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:19.336+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:19.336+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 1) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:19.336+0000 D2 REPL [replexec-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578439, 1), t: 1 }, 2019-09-04T06:27:19.111+0000
2019-09-04T06:27:19.336+0000 D2 REPL [replexec-1] Setting replication's stable optime to { ts: Timestamp(1567578428, 2), t: 1 }, 2019-09-04T06:27:08.361+0000
2019-09-04T06:27:19.336+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary
2019-09-04T06:27:19.336+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:27:19.336+0000 I REPL [replexec-1] Member cmodb802.togewa.com:27019 is now in state PRIMARY
2019-09-04T06:27:19.336+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:19.836Z
2019-09-04T06:27:19.336+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:19.336+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 2) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:19.336+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:27:19.336+0000 I REPL [replexec-1] Member cmodb804.togewa.com:27019 is now in state SECONDARY
2019-09-04T06:27:19.336+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:19.836Z
2019-09-04T06:27:19.336+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:49.334+0000
2019-09-04T06:27:19.336+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:27:49.334+0000
2019-09-04T06:27:19.336+0000 D2 ASIO [RS] The NetworkInterfaceTL reactor thread is spinning up
2019-09-04T06:27:19.337+0000 D1 EXECUTOR [repl-writer-worker-14] starting thread in pool repl writer worker Pool
2019-09-04T06:27:19.337+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:19.337+0000 D1 EXECUTOR [repl-writer-worker-12] starting thread in pool repl writer worker Pool
2019-09-04T06:27:19.337+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:19.337+0000 D1 EXECUTOR [repl-writer-worker-10] starting thread in pool repl writer worker Pool
2019-09-04T06:27:19.337+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:19.337+0000 D1 EXECUTOR [repl-writer-worker-8] starting thread in pool repl writer worker Pool
2019-09-04T06:27:19.337+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:19.337+0000 D3 STORAGE [rsBackgroundSync] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:19.337+0000 D4 - [rsBackgroundSync] Taking ticket. Available: 999999992
2019-09-04T06:27:19.338+0000 D1 EXECUTOR [repl-writer-worker-4] starting thread in pool repl writer worker Pool
2019-09-04T06:27:19.339+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:19.339+0000 D1 EXECUTOR [repl-writer-worker-6] starting thread in pool repl writer worker Pool
2019-09-04T06:27:19.339+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:19.339+0000 D1 EXECUTOR [repl-writer-worker-2] starting thread in pool repl writer worker Pool
2019-09-04T06:27:19.339+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:19.344+0000 I NETWORK [listener] connection accepted from 10.108.2.51:59034 #5 (1 connection now open)
2019-09-04T06:27:19.344+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:19.344+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:19.344+0000 I NETWORK [conn5] received client metadata from 10.108.2.51:59034 conn5: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
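The REPL entries above walk the standard member state machine (STARTUP -> STARTUP2 -> RECOVERING so far, with SECONDARY following below) and record the states the peers report over heartbeats: PRIMARY for cmodb802 and SECONDARY for cmodb804. A minimal sketch along the same lines, again assuming a one-entry-per-line log file with a hypothetical name:

    import re

    # Track this node's own state transitions plus the last reported state
    # of each peer, using the two phrasings that appear in this log:
    # "transition to X from Y" and "Member H is now in state S".
    TRANSITION = re.compile(r"\] transition to (\w+) from (\w+)")
    MEMBER = re.compile(r"Member (\S+) is now in state (\w+)")

    peers = {}
    with open("mongod.log") as log:  # hypothetical path
        for line in log:
            if m := TRANSITION.search(line):
                print(f"self: {m[2]} -> {m[1]}")
            elif m := MEMBER.search(line):
                peers[m[1]] = m[2]

    # e.g. {'cmodb802.togewa.com:27019': 'PRIMARY', 'cmodb804.togewa.com:27019': 'SECONDARY'}
    print(peers)
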
2019-09-04T06:27:19.350+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62]
mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a]
mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
mongod(+0xADB043) [0x561749a63043]
mongod(+0x13B2606) [0x56174a33a606]
mongod(+0x13B3A55) [0x56174a33ba55]
mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894]
mongod(+0x10FA899) [0x56174a082899]
mongod(+0x10FBF53) [0x56174a083f53]
mongod(+0x10FCE0E) [0x56174a084e0e]
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(+0x1FBD2EE) [0x56174af452ee]
mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa]
mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2]
mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b]
mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e]
mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc]
mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1]
mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a]
mongod(+0x28A5BBF) [0x56174b82dbbf]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:27:19.350+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. OpTime: { ts: Timestamp(1567578428, 2), t: 1 }, write concern: { w: "majority", wtimeout: 15000 }
2019-09-04T06:27:19.350+0000 D4 STORAGE [replSetDistLockPinger] flushed journal
2019-09-04T06:27:19.350+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578439317) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:27:19.350+0000 D2 ASIO [RS] The NetworkInterfaceTL reactor thread is spinning up
2019-09-04T06:27:19.350+0000 D1 EXECUTOR [repl-writer-worker-0] starting thread in pool repl writer worker Pool
2019-09-04T06:27:19.350+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:19.350+0000 I REPL [rsSync-0] transition to SECONDARY from RECOVERING
2019-09-04T06:27:19.350+0000 D4 ELECTION [rsSync-0] Scheduling election timeout callback at 2019-09-04T06:27:29.496+0000
2019-09-04T06:27:19.351+0000 I REPL [rsSync-0] Resetting sync source to empty, which was :27017
2019-09-04T06:27:19.351+0000 D3 STORAGE [rsBackgroundSync] WT begin_transaction for snapshot id 44
2019-09-04T06:27:19.351+0000 D3 STORAGE [rsBackgroundSync] WT rollback_transaction for snapshot id 44
2019-09-04T06:27:19.351+0000 D1 REPL [rsBackgroundSync] Successfully read last entry of oplog while starting bgsync: { ts: Timestamp(1567578428, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578428361), o: { $v: 1, $set: { ping: new Date(1567578428361) } } }
2019-09-04T06:27:19.351+0000 D1 REPL [rsBackgroundSync] Setting bgsync _lastOpTimeFetched={ ts: Timestamp(1567578428, 2), t: 1 }. Previous _lastOpTimeFetched: { ts: Timestamp(0, 0), t: -1 }
2019-09-04T06:27:19.351+0000 D1 REPL [rsBackgroundSync] bgsync fetch queue set to: { ts: Timestamp(1567578428, 2), t: 1 }
2019-09-04T06:27:19.351+0000 D3 STORAGE [rsBackgroundSync] WT begin_transaction for snapshot id 47
2019-09-04T06:27:19.351+0000 D3 STORAGE [rsBackgroundSync] WT rollback_transaction for snapshot id 47
2019-09-04T06:27:19.351+0000 D3 REPL [rsBackgroundSync] returning minvalid: { ts: Timestamp(1567578428, 2), t: 1 }({ ts: Timestamp(1567578428, 2), t: 1 })
2019-09-04T06:27:19.351+0000 I REPL [rsBackgroundSync] waiting for 2 pings from other members before syncing
2019-09-04T06:27:19.351+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578439317) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 32ms
2019-09-04T06:27:19.354+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:19.354+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:19.354+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578428, 2)
2019-09-04T06:27:19.354+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 43
2019-09-04T06:27:19.354+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:19.354+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:19.354+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 43
2019-09-04T06:27:19.354+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:19.354+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38542 #6 (2 connections now open)
2019-09-04T06:27:19.354+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:19.354+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:19.355+0000 I NETWORK [conn6] received client metadata from 10.108.2.44:38542 conn6: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:19.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:19.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:19.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:19.368+0000 I - [shard-registry-reload] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c353c1 0x561749c42521 0x561749a84fea 0x56174a419aad 0x56174a41a578 0x56174ae35a23 0x56174ae36660 0x56174ae3cc7c 0x56174ae3d747 0x56174ae48cb8 0x56174ae78b9e 0x56174b0f5c04 0x56174b0f5e95 0x56174b0fdb1e 0x56174ae6b84d 0x56174ae48814 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CAD3C1","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"AFCFEA"},{"b":"561748F88000","o":"1491AAD","s":"_ZN5mongo13ShardRegistry6reloadEPNS_16OperationContextE"},{"b":"561748F88000","o":"1492578","s":"_ZN5mongo13ShardRegistry15_internalReloadERKNS_8executor12TaskExecutor12CallbackArgsE"},{"b":"561748F88000","o":"1EADA23","s":"_ZN5mongo8executor22ThreadPoolTaskExecutor11runCallbackESt10shared_ptrINS1_13CallbackStateEE"},{"b":"561748F88000","o":"1EAE660"},{"b":"561748F88000","o":"1EB4C7C","s":"_ZN5mongo8executor26NetworkInterfaceThreadPool19_consumeTasksInlineESt11unique_lockISt5mutexE"},{"b":"561748F88000","o":"1EB5747"},{"b":"561748F88000","o":"1EC0CB8"},{"b":"561748F88000","o":"1EF0B9E","s":"_ZN4asio6detail11executor_opINS0_15work_dispatcherIZN5mongo9transport18TransportLayerASIO11ASIOReactor8scheduleENS3_15unique_functionIFvNS3_6StatusEEEEEUlvE_EESaIvENS0_19scheduler_operationEE11do_completeEPvPSE_RKSt10error_codem"},{"b":"561748F88000","o":"216DC04","s":"_ZN4asio6detail9scheduler10do_run_oneERNS0_27conditionally_enabled_mutex11scoped_lockERNS0_21scheduler_thread_infoERKSt10error_code"},{"b":"561748F88000","o":"216DE95","s":"_ZN4asio6detail9scheduler3runERSt10error_code"},{"b":"561748F88000","o":"2175B1E","s":"_ZN4asio10io_context3runEv"},{"b":"561748F88000","o":"1EE384D","s":"_ZN5mongo9transport18TransportLayerASIO11ASIOReactor3runEv"},{"b":"561748F88000","o":"1EC0814","s":"_ZN5mongo8executor18NetworkInterfaceTL4_runEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62]
mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x36BF) [0x561749c353c1]
mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
mongod(+0xAFCFEA) [0x561749a84fea]
mongod(_ZN5mongo13ShardRegistry6reloadEPNS_16OperationContextE+0x18D) [0x56174a419aad]
mongod(_ZN5mongo13ShardRegistry15_internalReloadERKNS_8executor12TaskExecutor12CallbackArgsE+0x248) [0x56174a41a578]
mongod(_ZN5mongo8executor22ThreadPoolTaskExecutor11runCallbackESt10shared_ptrINS1_13CallbackStateEE+0x173) [0x56174ae35a23]
mongod(+0x1EAE660) [0x56174ae36660]
mongod(_ZN5mongo8executor26NetworkInterfaceThreadPool19_consumeTasksInlineESt11unique_lockISt5mutexE+0x26C) [0x56174ae3cc7c]
mongod(+0x1EB5747) [0x56174ae3d747]
mongod(+0x1EC0CB8) [0x56174ae48cb8]
mongod(_ZN4asio6detail11executor_opINS0_15work_dispatcherIZN5mongo9transport18TransportLayerASIO11ASIOReactor8scheduleENS3_15unique_functionIFvNS3_6StatusEEEEEUlvE_EESaIvENS0_19scheduler_operationEE11do_completeEPvPSE_RKSt10error_codem+0x6E) [0x56174ae78b9e]
mongod(_ZN4asio6detail9scheduler10do_run_oneERNS0_27conditionally_enabled_mutex11scoped_lockERNS0_21scheduler_thread_infoERKSt10error_code+0x3B4) [0x56174b0f5c04]
mongod(_ZN4asio6detail9scheduler3runERSt10error_code+0x115) [0x56174b0f5e95]
mongod(_ZN4asio10io_context3runEv+0x3E) [0x56174b0fdb1e]
mongod(_ZN5mongo9transport18TransportLayerASIO11ASIOReactor3runEv+0x3D) [0x56174ae6b84d]
mongod(_ZN5mongo8executor18NetworkInterfaceTL4_runEv+0x44) [0x56174ae48814]
mongod(+0x28A5BBF) [0x56174b82dbbf]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:27:19.368+0000 I SHARDING [shard-registry-reload] Periodic reload of shard registry failed :: caused by :: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible.; will retry after 30s
2019-09-04T06:27:19.381+0000 I - [LogicalSessionCacheRefresh] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c353c1 0x561749c42521 0x561749a844e6 0x56174a40ffca 0x56174a410bcd 0x56174a410d02 0x56174a0af1c4 0x56174a0ad4ac 0x56174a0ae594 0x56174a47a143 0x56174a47d086 0x56174a22c215 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CAD3C1","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"AFC4E6"},{"b":"561748F88000","o":"1487FCA","s":"_ZN5mongo12CatalogCache27_getCollectionRoutingInfoAtEPNS_16OperationContextERKNS_15NamespaceStringEN5boost8optionalINS_9TimestampEEE"},{"b":"561748F88000","o":"1488BCD","s":"_ZN5mongo12CatalogCache24getCollectionRoutingInfoEPNS_16OperationContextERKNS_15NamespaceStringE"},{"b":"561748F88000","o":"1488D02","s":"_ZN5mongo12CatalogCache42getShardedCollectionRoutingInfoWithRefreshEPNS_16OperationContextERKNS_15NamespaceStringE"},{"b":"561748F88000","o":"11271C4","s":"_ZN5mongo25SessionsCollectionSharded32_checkCacheForSessionsCollectionEPNS_16OperationContextE"},{"b":"561748F88000","o":"11254AC","s":"_ZN5mongo30SessionsCollectionConfigServer24_shardCollectionIfNeededEPNS_16OperationContextE"},{"b":"561748F88000","o":"1126594","s":"_ZN5mongo30SessionsCollectionConfigServer23setupSessionsCollectionEPNS_16OperationContextE"},{"b":"561748F88000","o":"14F2143","s":"_ZN5mongo23LogicalSessionCacheImpl8_refreshEPNS_6ClientE"},{"b":"561748F88000","o":"14F5086","s":"_ZN5mongo23LogicalSessionCacheImpl16_periodicRefreshEPNS_6ClientE"},{"b":"561748F88000","o":"12A4215"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62]
mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x36BF) [0x561749c353c1]
mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
mongod(+0xAFC4E6) [0x561749a844e6]
mongod(_ZN5mongo12CatalogCache27_getCollectionRoutingInfoAtEPNS_16OperationContextERKNS_15NamespaceStringEN5boost8optionalINS_9TimestampEEE+0xAA) [0x56174a40ffca]
mongod(_ZN5mongo12CatalogCache24getCollectionRoutingInfoEPNS_16OperationContextERKNS_15NamespaceStringE+0x3D) [0x56174a410bcd]
mongod(_ZN5mongo12CatalogCache42getShardedCollectionRoutingInfoWithRefreshEPNS_16OperationContextERKNS_15NamespaceStringE+0x52) [0x56174a410d02]
mongod(_ZN5mongo25SessionsCollectionSharded32_checkCacheForSessionsCollectionEPNS_16OperationContextE+0x64) [0x56174a0af1c4]
mongod(_ZN5mongo30SessionsCollectionConfigServer24_shardCollectionIfNeededEPNS_16OperationContextE+0x3C) [0x56174a0ad4ac]
mongod(_ZN5mongo30SessionsCollectionConfigServer23setupSessionsCollectionEPNS_16OperationContextE+0x74) [0x56174a0ae594]
mongod(_ZN5mongo23LogicalSessionCacheImpl8_refreshEPNS_6ClientE+0x103) [0x56174a47a143]
mongod(_ZN5mongo23LogicalSessionCacheImpl16_periodicRefreshEPNS_6ClientE+0x26) [0x56174a47d086]
mongod(+0x12A4215) [0x56174a22c215]
mongod(+0x28A5BBF) [0x56174b82dbbf]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:27:19.381+0000 I CONTROL [LogicalSessionCacheRefresh] Failed to create config.system.sessions: Cannot create config.system.sessions until there are shards, will try again at the next refresh interval
2019-09-04T06:27:19.381+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Cannot create config.system.sessions until there are shards
2019-09-04T06:27:19.381+0000 D3 STORAGE [LogicalSessionCacheReap] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:27:19.381+0000 D2 COMMAND [LogicalSessionCacheReap] run command config.$cmd { find: "collections", filter: { _id: /^config\./ }, $readPreference: { mode: "nearest", tags: [] }, $db: "config" }
2019-09-04T06:27:19.381+0000 D3 STORAGE [LogicalSessionCacheReap] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:19.381+0000 I SHARDING [LogicalSessionCacheReap] Marking collection config.collections as collection version: 
2019-09-04T06:27:19.381+0000 D5 QUERY [LogicalSessionCacheReap] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.collections
Tree: _id regex /^config\./
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:19.381+0000 D5 QUERY [LogicalSessionCacheReap] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }
2019-09-04T06:27:19.381+0000 D5 QUERY [LogicalSessionCacheReap] Predicate over field '_id'
2019-09-04T06:27:19.381+0000 D2 QUERY [LogicalSessionCacheReap] Relevant index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }
2019-09-04T06:27:19.382+0000 D5 QUERY [LogicalSessionCacheReap] Rated tree: _id regex /^config\./ || First: 0 notFirst: full path: _id
2019-09-04T06:27:19.382+0000 D5 QUERY [LogicalSessionCacheReap] Tagging memoID 1
2019-09-04T06:27:19.382+0000 D5 QUERY [LogicalSessionCacheReap] Enumerator: memo just before moving: [Node #1]: AND enumstate counter 0 choice 0: subnodes: idx[0] pos 0 pred _id regex /^config\./
2019-09-04T06:27:19.382+0000 D5 QUERY [LogicalSessionCacheReap] About to build solntree from tagged tree: _id regex /^config\./ || Selected Index #0 pos 0 combine 1
2019-09-04T06:27:19.382+0000 D5 QUERY [LogicalSessionCacheReap] Planner: adding solution:
FETCH
---fetched = 1
---sortedByDiskLoc = 0
---getSort = [{ _id: 1 }, ]
---Child:
------IXSCAN
---------indexName = _id_ keyPattern = { _id: 1 }
---------direction = 1
---------bounds = field #0['_id']: ["config.", "config/"), [/^config\./, /^config\./]
---------fetched = 0
---------sortedByDiskLoc = 0
---------getSort = [{ _id: 1 }, ]
2019-09-04T06:27:19.382+0000 D5 QUERY [LogicalSessionCacheReap] Planner: outputted 1 indexed solutions.
2019-09-04T06:27:19.382+0000 D2 QUERY [LogicalSessionCacheReap] Only one plan is available; it will be run but will not be cached. query: { _id: /^config\./ } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 }
2019-09-04T06:27:19.382+0000 D3 STORAGE [LogicalSessionCacheReap] begin_transaction on local snapshot Timestamp(1567578428, 2)
2019-09-04T06:27:19.382+0000 D3 STORAGE [LogicalSessionCacheReap] WT begin_transaction for snapshot id 37
2019-09-04T06:27:19.382+0000 D3 STORAGE [LogicalSessionCacheReap] WT rollback_transaction for snapshot id 37
2019-09-04T06:27:19.382+0000 I COMMAND [LogicalSessionCacheReap] command config.collections command: find { find: "collections", filter: { _id: /^config\./ }, $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: IXSCAN { _id: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:4A611094 planCacheKey:B6794B7A reslen:469 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 120, timeReadingMicros: 1 } } protocol:op_msg 0ms
2019-09-04T06:27:19.382+0000 D1 SH_REFR [LogicalSessionCacheReap] Refreshing chunks for collection config.system.sessions; current collection version is 0|0||000000000000000000000000
2019-09-04T06:27:19.382+0000 D3 EXECUTOR [ConfigServerCatalogCacheLoader-0] Executing a task on behalf of pool ConfigServerCatalogCacheLoader
2019-09-04T06:27:19.382+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-0] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:27:19.382+0000 D2 COMMAND [ConfigServerCatalogCacheLoader-0] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, ntoreturn: 1, singleBatch: true, $readPreference: { mode: "nearest", tags: [] }, $db: "config" }
2019-09-04T06:27:19.382+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-0] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:19.382+0000 D2 QUERY [ConfigServerCatalogCacheLoader-0] Using idhack: query: { _id: "config.system.sessions" } sort: {} projection: {} ntoreturn=1
2019-09-04T06:27:19.383+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-0] begin_transaction on local snapshot Timestamp(1567578428, 2)
2019-09-04T06:27:19.383+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-0] WT begin_transaction for snapshot id 54
2019-09-04T06:27:19.383+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-0] WT rollback_transaction for snapshot id 54
2019-09-04T06:27:19.383+0000 I COMMAND [ConfigServerCatalogCacheLoader-0] command config.collections command: find { find: "collections", filter: { _id: "config.system.sessions" }, ntoreturn: 1, singleBatch: true, $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: IDHACK keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:469 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:27:19.383+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-0] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:27:19.383+0000 D2 COMMAND [ConfigServerCatalogCacheLoader-0] run command config.$cmd { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(0, 0) } }, sort: { lastmod: 1 }, $readPreference: { mode: "nearest", tags: [] }, $db: "config" }
2019-09-04T06:27:19.383+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-0] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:19.383+0000 I SHARDING [ConfigServerCatalogCacheLoader-0] Marking collection config.chunks as collection version: 
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: $and
    ns $eq "config.system.sessions"
    lastmod $gte Timestamp(0, 0)
Sort: { lastmod: 1 }
Proj: {}
=============================
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Predicate over field 'lastmod'
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Predicate over field 'ns'
2019-09-04T06:27:19.383+0000 D2 QUERY [ConfigServerCatalogCacheLoader-0] Relevant index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:27:19.383+0000 D2 QUERY [ConfigServerCatalogCacheLoader-0] Relevant index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:27:19.383+0000 D2 QUERY [ConfigServerCatalogCacheLoader-0] Relevant index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Rated tree: $and
    ns $eq "config.system.sessions" || First: 0 1 2 notFirst: full path: ns
    lastmod $gte Timestamp(0, 0) || First: notFirst: 2 full path: lastmod
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Tagging memoID 1
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Enumerator: memo just before moving: [Node #1]: AND enumstate counter 0 choice 0: subnodes: idx[2] pos 0 pred ns $eq "config.system.sessions" pos 1 pred lastmod $gte Timestamp(0, 0) choice 1: subnodes: idx[0] pos 0 pred ns $eq "config.system.sessions" choice 2: subnodes: idx[1] pos 0 pred ns $eq "config.system.sessions"
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] About to build solntree from tagged tree: $and ns $eq "config.system.sessions" || Selected Index #2 pos 0 combine 1 lastmod $gte Timestamp(0, 0) || Selected Index #2 pos 1 combine 1
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Planner: adding solution:
FETCH
---fetched = 1
---sortedByDiskLoc = 0
---getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ]
---Child:
------IXSCAN
---------indexName = ns_1_lastmod_1 keyPattern = { ns: 1, lastmod: 1 }
---------direction = 1
---------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['lastmod']: [Timestamp(0, 0), Timestamp(4294967295, 4294967295)]
---------fetched = 0
---------sortedByDiskLoc = 0
---------getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ]
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Tagging memoID 1
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Enumerator: memo just before moving: [Node #1]: AND enumstate counter 1 choice 0: subnodes: idx[2] pos 0 pred ns $eq "config.system.sessions" pos 1 pred lastmod $gte Timestamp(0, 0) choice 1: subnodes: idx[0] pos 0 pred ns $eq "config.system.sessions" choice 2: subnodes: idx[1] pos 0 pred ns $eq "config.system.sessions"
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] About to build solntree from tagged tree: $and ns $eq "config.system.sessions" || Selected Index #0 pos 0 combine 1 lastmod $gte Timestamp(0, 0)
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Planner: adding solution:
SORT
---pattern = { lastmod: 1 }
---limit = 0
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
---Child:
------SORT_KEY_GENERATOR
---------sortSpec = { lastmod: 1 }
---------fetched = 1
---------sortedByDiskLoc = 0
---------getSort = [{ min: 1 }, { ns: 1 }, { ns: 1, min: 1 }, ]
---------Child:
------------FETCH
---------------filter: lastmod $gte Timestamp(0, 0)
---------------fetched = 1
---------------sortedByDiskLoc = 0
---------------getSort = [{ min: 1 }, { ns: 1 }, { ns: 1, min: 1 }, ]
---------------Child:
------------------IXSCAN
---------------------indexName = ns_1_min_1 keyPattern = { ns: 1, min: 1 }
---------------------direction = 1
---------------------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['min']: [MinKey, MaxKey]
---------------------fetched = 0
---------------------sortedByDiskLoc = 0
---------------------getSort = [{ min: 1 }, { ns: 1 }, { ns: 1, min: 1 }, ]
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Tagging memoID 1
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Enumerator: memo just before moving: [Node #1]: AND enumstate counter 2 choice 0: subnodes: idx[2] pos 0 pred ns $eq "config.system.sessions" pos 1 pred lastmod $gte Timestamp(0, 0) choice 1: subnodes: idx[0] pos 0 pred ns $eq "config.system.sessions" choice 2: subnodes: idx[1] pos 0 pred ns $eq "config.system.sessions"
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] About to build solntree from tagged tree: $and ns $eq "config.system.sessions" || Selected Index #1 pos 0 combine 1 lastmod $gte Timestamp(0, 0)
2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Planner: adding solution:
SORT
---pattern = { lastmod: 1 }
---limit = 0
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
---Child:
------SORT_KEY_GENERATOR
---------sortSpec = { lastmod: 1 }
---------fetched = 1
---------sortedByDiskLoc = 0
---------getSort = [{ ns: 1 }, { ns: 1, shard: 1 }, { ns: 1, shard: 1, min: 1 }, { shard: 1 }, { shard: 1, min: 1 }, ]
---------Child:
------------FETCH
---------------filter: lastmod $gte Timestamp(0, 0)
---------------fetched = 1
---------------sortedByDiskLoc = 0
---------------getSort = [{ 
ns: 1 }, { ns: 1, shard: 1 }, { ns: 1, shard: 1, min: 1 }, { shard: 1 }, { shard: 1, min: 1 }, ] ---------------Child: ------------------IXSCAN ---------------------indexName = ns_1_shard_1_min_1 keyPattern = { ns: 1, shard: 1, min: 1 } ---------------------direction = 1 ---------------------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['shard']: [MinKey, MaxKey], field #2['min']: [MinKey, MaxKey] ---------------------fetched = 0 ---------------------sortedByDiskLoc = 0 ---------------------getSort = [{ ns: 1 }, { ns: 1, shard: 1 }, { ns: 1, shard: 1, min: 1 }, { shard: 1 }, { shard: 1, min: 1 }, ] 2019-09-04T06:27:19.383+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Planner: outputted 3 indexed solutions. 2019-09-04T06:27:19.383+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-0] begin_transaction on local snapshot Timestamp(1567578428, 2) 2019-09-04T06:27:19.383+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-0] WT begin_transaction for snapshot id 55 2019-09-04T06:27:19.384+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Scoring plan 0: FETCH ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ] ---Child: ------IXSCAN ---------indexName = ns_1_lastmod_1 keyPattern = { ns: 1, lastmod: 1 } ---------direction = 1 ---------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['lastmod']: [Timestamp(0, 0), Timestamp(4294967295, 4294967295)] ---------fetched = 0 ---------sortedByDiskLoc = 0 ---------getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ] Stats: { "stage" : "FETCH", "nReturned" : 1, "executionTimeMillisEstimate" : 0, "works" : 2, "advanced" : 1, "needTime" : 0, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 1, "docsExamined" : 1, "alreadyHasObj" : 0, "inputStage" : { "stage" : "IXSCAN", "nReturned" : 1, "executionTimeMillisEstimate" : 0, "works" : 2, "advanced" : 1, "needTime" : 0, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 1, "keyPattern" : { "ns" : 1, "lastmod" : 1 }, "indexName" : "ns_1_lastmod_1", "isMultiKey" : false, "multiKeyPaths" : { "ns" : [], "lastmod" : [] }, "isUnique" : true, "isSparse" : false, "isPartial" : false, "indexVersion" : 2, "direction" : "forward", "indexBounds" : { "ns" : [ "[\"config.system.sessions\", \"config.system.sessions\"]" ], "lastmod" : [ "[Timestamp(0, 0), Timestamp(4294967295, 4294967295)]" ] }, "keysExamined" : 1, "seeks" : 1, "dupsTested" : 0, "dupsDropped" : 0 } } 2019-09-04T06:27:19.384+0000 D2 QUERY [ConfigServerCatalogCacheLoader-0] Scoring query plan: IXSCAN { ns: 1, lastmod: 1 } planHitEOF=1 2019-09-04T06:27:19.384+0000 D2 QUERY [ConfigServerCatalogCacheLoader-0] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) 2019-09-04T06:27:19.384+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] score = 1.5003 2019-09-04T06:27:19.384+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Adding +1 EOF bonus to score. 
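[annotation] The D2 QUERY "Scoring query plan" lines above spell out the multi-planner's ranking formula: each candidate plan is trial-run, and its score is baseScore(1) plus productivity (documents advanced per unit of work) plus small tie-breaker bonuses, with an extra +1 added when the plan reached EOF during the trial. Below is a minimal Python sketch that only reproduces the arithmetic as logged; the function and parameter names are illustrative, not the server's plan-ranker implementation.

    # Minimal sketch of the plan-ranking arithmetic shown in the
    # D2 QUERY "Scoring query plan" lines above. It reproduces the
    # logged formula only; the authoritative logic lives in the server.

    def plan_score(advanced, works, fetch_bonus, sort_bonus, ixisect_bonus,
                   hit_eof=False):
        base_score = 1.0
        productivity = advanced / works      # (1 advanced)/(2 works) = 0.5
        tie_breakers = fetch_bonus + sort_bonus + ixisect_bonus
        score = base_score + productivity + tie_breakers
        if hit_eof:                          # "Adding +1 EOF bonus to score."
            score += 1.0
        return score

    # Plan 0 (IXSCAN on ns_1_lastmod_1): 1 advanced in 2 works, all three
    # 0.0001 tie-breaker bonuses -> the logged score(1.5003). It also hit
    # EOF, so the value used for comparison gains the extra +1.
    print(round(plan_score(1, 2, 0.0001, 0.0001, 0.0001), 4))                # 1.5003
    print(round(plan_score(1, 2, 0.0001, 0.0001, 0.0001, hit_eof=True), 4))  # 2.5003

    # The two blocking-SORT alternatives scored next advance 0 documents in
    # 2 works and forfeit noSortBonus, matching their logged score(1.0002).
    print(round(plan_score(0, 2, 0.0001, 0.0, 0.0001), 4))                   # 1.0002

The IXSCAN on { ns: 1, lastmod: 1 } wins because it returns results already in the requested { lastmod: 1 } order, so it needs no blocking SORT stage and reaches EOF within the trial period.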
2019-09-04T06:27:19.384+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Scoring plan 1: SORT ---pattern = { lastmod: 1 } ---limit = 0 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] ---Child: ------SORT_KEY_GENERATOR ---------sortSpec = { lastmod: 1 } ---------fetched = 1 ---------sortedByDiskLoc = 0 ---------getSort = [{ min: 1 }, { ns: 1 }, { ns: 1, min: 1 }, ] ---------Child: ------------FETCH ---------------filter: lastmod $gte Timestamp(0, 0) ---------------fetched = 1 ---------------sortedByDiskLoc = 0 ---------------getSort = [{ min: 1 }, { ns: 1 }, { ns: 1, min: 1 }, ] ---------------Child: ------------------IXSCAN ---------------------indexName = ns_1_min_1 keyPattern = { ns: 1, min: 1 } ---------------------direction = 1 ---------------------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['min']: [MinKey, MaxKey] ---------------------fetched = 0 ---------------------sortedByDiskLoc = 0 ---------------------getSort = [{ min: 1 }, { ns: 1 }, { ns: 1, min: 1 }, ] Stats: { "stage" : "SORT", "nReturned" : 0, "executionTimeMillisEstimate" : 0, "works" : 2, "advanced" : 0, "needTime" : 2, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "sortPattern" : { "lastmod" : 1 }, "memUsage" : 244, "memLimit" : 33554432, "inputStage" : { "stage" : "SORT_KEY_GENERATOR", "nReturned" : 1, "executionTimeMillisEstimate" : 0, "works" : 2, "advanced" : 1, "needTime" : 1, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "inputStage" : { "stage" : "FETCH", "filter" : { "lastmod" : { "$gte" : { "$timestamp" : { "t" : 0, "i" : 0 } } } }, "nReturned" : 1, "executionTimeMillisEstimate" : 0, "works" : 1, "advanced" : 1, "needTime" : 0, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "docsExamined" : 1, "alreadyHasObj" : 0, "inputStage" : { "stage" : "IXSCAN", "nReturned" : 1, "executionTimeMillisEstimate" : 0, "works" : 1, "advanced" : 1, "needTime" : 0, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "keyPattern" : { "ns" : 1, "min" : 1 }, "indexName" : "ns_1_min_1", "isMultiKey" : false, "multiKeyPaths" : { "ns" : [], "min" : [] }, "isUnique" : true, "isSparse" : false, "isPartial" : false, "indexVersion" : 2, "direction" : "forward", "indexBounds" : { "ns" : [ "[\"config.system.sessions\", \"config.system.sessions\"]" ], "min" : [ "[MinKey, MaxKey]" ] }, "keysExamined" : 1, "seeks" : 1, "dupsTested" : 0, "dupsDropped" : 0 } } } } 2019-09-04T06:27:19.384+0000 D2 QUERY [ConfigServerCatalogCacheLoader-0] Scoring query plan: IXSCAN { ns: 1, min: 1 } planHitEOF=0 2019-09-04T06:27:19.384+0000 D2 QUERY [ConfigServerCatalogCacheLoader-0] score(1.0002) = baseScore(1) + productivity((0 advanced)/(2 works) = 0) + tieBreakers(0.0001 noFetchBonus + 0 noSortBonus + 0.0001 noIxisectBonus = 0.0002) 2019-09-04T06:27:19.384+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] score = 1.0002 2019-09-04T06:27:19.384+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Scoring plan 2: SORT ---pattern = { lastmod: 1 } ---limit = 0 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] ---Child: ------SORT_KEY_GENERATOR ---------sortSpec = { lastmod: 1 } ---------fetched = 1 ---------sortedByDiskLoc = 0 ---------getSort = [{ ns: 1 }, { ns: 1, shard: 1 }, { ns: 1, shard: 1, min: 1 }, { shard: 1 }, { shard: 1, min: 1 }, ] ---------Child: ------------FETCH ---------------filter: lastmod $gte Timestamp(0, 0) ---------------fetched = 1 ---------------sortedByDiskLoc = 0 ---------------getSort = [{ ns: 1 }, { ns: 1, shard: 1 
}, { ns: 1, shard: 1, min: 1 }, { shard: 1 }, { shard: 1, min: 1 }, ] ---------------Child: ------------------IXSCAN ---------------------indexName = ns_1_shard_1_min_1 keyPattern = { ns: 1, shard: 1, min: 1 } ---------------------direction = 1 ---------------------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['shard']: [MinKey, MaxKey], field #2['min']: [MinKey, MaxKey] ---------------------fetched = 0 ---------------------sortedByDiskLoc = 0 ---------------------getSort = [{ ns: 1 }, { ns: 1, shard: 1 }, { ns: 1, shard: 1, min: 1 }, { shard: 1 }, { shard: 1, min: 1 }, ] Stats: { "stage" : "SORT", "nReturned" : 0, "executionTimeMillisEstimate" : 0, "works" : 2, "advanced" : 0, "needTime" : 2, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "sortPattern" : { "lastmod" : 1 }, "memUsage" : 244, "memLimit" : 33554432, "inputStage" : { "stage" : "SORT_KEY_GENERATOR", "nReturned" : 1, "executionTimeMillisEstimate" : 0, "works" : 2, "advanced" : 1, "needTime" : 1, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "inputStage" : { "stage" : "FETCH", "filter" : { "lastmod" : { "$gte" : { "$timestamp" : { "t" : 0, "i" : 0 } } } }, "nReturned" : 1, "executionTimeMillisEstimate" : 0, "works" : 1, "advanced" : 1, "needTime" : 0, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "docsExamined" : 1, "alreadyHasObj" : 0, "inputStage" : { "stage" : "IXSCAN", "nReturned" : 1, "executionTimeMillisEstimate" : 0, "works" : 1, "advanced" : 1, "needTime" : 0, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "keyPattern" : { "ns" : 1, "shard" : 1, "min" : 1 }, "indexName" : "ns_1_shard_1_min_1", "isMultiKey" : false, "multiKeyPaths" : { "ns" : [], "shard" : [], "min" : [] }, "isUnique" : true, "isSparse" : false, "isPartial" : false, "indexVersion" : 2, "direction" : "forward", "indexBounds" : { "ns" : [ "[\"config.system.sessions\", \"config.system.sessions\"]" ], "shard" : [ "[MinKey, MaxKey]" ], "min" : [ "[MinKey, MaxKey]" ] }, "keysExamined" : 1, "seeks" : 1, "dupsTested" : 0, "dupsDropped" : 0 } } } } 2019-09-04T06:27:19.384+0000 D2 QUERY [ConfigServerCatalogCacheLoader-0] Scoring query plan: IXSCAN { ns: 1, shard: 1, min: 1 } planHitEOF=0 2019-09-04T06:27:19.384+0000 D2 QUERY [ConfigServerCatalogCacheLoader-0] score(1.0002) = baseScore(1) + productivity((0 advanced)/(2 works) = 0) + tieBreakers(0.0001 noFetchBonus + 0 noSortBonus + 0.0001 noIxisectBonus = 0.0002) 2019-09-04T06:27:19.384+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] score = 1.0002 2019-09-04T06:27:19.384+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Winning solution: FETCH ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ] ---Child: ------IXSCAN ---------indexName = ns_1_lastmod_1 keyPattern = { ns: 1, lastmod: 1 } ---------direction = 1 ---------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['lastmod']: [Timestamp(0, 0), Timestamp(4294967295, 4294967295)] ---------fetched = 0 ---------sortedByDiskLoc = 0 ---------getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ] 2019-09-04T06:27:19.384+0000 D2 QUERY [ConfigServerCatalogCacheLoader-0] Winning plan: IXSCAN { ns: 1, lastmod: 1 } 2019-09-04T06:27:19.384+0000 D1 QUERY [ConfigServerCatalogCacheLoader-0] Creating inactive cache entry for query shape query: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(0, 0) } } sort: { lastmod: 1 } projection: {} queryHash 
1DDA71BE planCacheKey 167D77D5 with works value 2 2019-09-04T06:27:19.384+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-0] WT rollback_transaction for snapshot id 55 2019-09-04T06:27:19.384+0000 I COMMAND [ConfigServerCatalogCacheLoader-0] command config.chunks command: find { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(0, 0) } }, sort: { lastmod: 1 }, $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 fromMultiPlanner:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:1DDA71BE planCacheKey:167D77D5 reslen:555 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 396, timeReadingMicros: 30 } } protocol:op_msg 1ms 2019-09-04T06:27:19.384+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-0] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:27:19.384+0000 D2 COMMAND [ConfigServerCatalogCacheLoader-0] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } 2019-09-04T06:27:19.384+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-0] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:19.384+0000 I SHARDING [ConfigServerCatalogCacheLoader-0] Marking collection config.shards as collection version: 2019-09-04T06:27:19.384+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:27:19.384+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:27:19.384+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:27:19.384+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Rated tree: $and 2019-09-04T06:27:19.384+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:19.384+0000 D5 QUERY [ConfigServerCatalogCacheLoader-0] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:19.384+0000 D2 QUERY [ConfigServerCatalogCacheLoader-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:27:19.384+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-0] begin_transaction on local snapshot Timestamp(1567578428, 2) 2019-09-04T06:27:19.384+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-0] WT begin_transaction for snapshot id 56 2019-09-04T06:27:19.384+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-0] WT rollback_transaction for snapshot id 56 2019-09-04T06:27:19.384+0000 I COMMAND [ConfigServerCatalogCacheLoader-0] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:646 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:27:19.384+0000 D1 SHARDING [ConfigServerCatalogCacheLoader-0] found 3 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578428, 2), t: 1 } 2019-09-04T06:27:19.385+0000 D1 NETWORK [ConfigServerCatalogCacheLoader-0] Starting up task executor for monitoring replica sets in response to request to monitor set: shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:27:19.385+0000 D2 CONNPOOL [ConfigServerCatalogCacheLoader-0] Controller for NetworkInterfaceTL-ReplicaSetMonitor-TaskExecutor is LimitController 2019-09-04T06:27:19.385+0000 I NETWORK [ConfigServerCatalogCacheLoader-0] Starting new replica set monitor for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:27:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] The NetworkInterfaceTL reactor thread is spinning up 2019-09-04T06:27:19.385+0000 D2 NETWORK [ConfigServerCatalogCacheLoader-0] Signaling found set shard0000 2019-09-04T06:27:19.385+0000 D1 NETWORK [ConfigServerCatalogCacheLoader-0] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:27:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000 2019-09-04T06:27:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 3 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:27:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:27:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 4 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:27:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:27:19.385+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to cmodb806.togewa.com:27018 2019-09-04T06:27:19.385+0000 D3 SHARDING [ConfigServerCatalogCacheLoader-0] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:27:19.385+0000 I NETWORK [ConfigServerCatalogCacheLoader-0] Starting new replica set monitor for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:27:19.385+0000 D2 NETWORK [ConfigServerCatalogCacheLoader-0] Signaling found set shard0001 2019-09-04T06:27:19.385+0000 D1 NETWORK [ConfigServerCatalogCacheLoader-0] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:27:19.385+0000 D3 SHARDING [ConfigServerCatalogCacheLoader-0] Adding shard shard0001, with CS 
shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:27:19.385+0000 I NETWORK [ConfigServerCatalogCacheLoader-0] Starting new replica set monitor for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:27:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Finished connection setup. 2019-09-04T06:27:19.385+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to cmodb807.togewa.com:27018 2019-09-04T06:27:19.385+0000 D2 NETWORK [ConfigServerCatalogCacheLoader-0] Signaling found set shard0002 2019-09-04T06:27:19.385+0000 D1 NETWORK [ConfigServerCatalogCacheLoader-0] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:27:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Finished connection setup. 2019-09-04T06:27:19.385+0000 D3 SHARDING [ConfigServerCatalogCacheLoader-0] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:27:19.385+0000 D3 SHARDING [ConfigServerCatalogCacheLoader-0] Adding shard config, with CS 2019-09-04T06:27:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001 2019-09-04T06:27:19.385+0000 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection config.system.sessions to version 1|0||5d5e4a1c7fc690fd4e5fb282 took 2 ms 2019-09-04T06:27:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 5 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:27:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:27:19.385+0000 D3 EXECUTOR [ConfigServerCatalogCacheLoader-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:49.382+0000 2019-09-04T06:27:19.385+0000 D2 WRITE [LogicalSessionCacheReap] Beginning scanSessions. Scanning 0 sessions. 2019-09-04T06:27:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 6 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:27:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:27:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002 2019-09-04T06:27:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 7 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:27:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:27:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 8 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:27:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:27:19.385+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to cmodb809.togewa.com:27018 2019-09-04T06:27:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Finished connection setup. 2019-09-04T06:27:19.385+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to cmodb808.togewa.com:27018 2019-09-04T06:27:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Finished connection setup. 2019-09-04T06:27:19.385+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to cmodb810.togewa.com:27018 2019-09-04T06:27:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Finished connection setup. 2019-09-04T06:27:19.385+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to cmodb811.togewa.com:27018 2019-09-04T06:27:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Finished connection setup. 
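[annotation] RemoteCommand 3 through 8 above are the ReplicaSetMonitor fanning an isMaster probe out to every member of shard0000, shard0001 and shard0002; the replies that follow carry setName, hosts, arbiters and primary, which is everything the monitor needs to confirm each set and locate its primary. A single probe can be reproduced by hand; this is a sketch assuming a recent PyMongo and network reachability to the member named in the log, not a rendition of the server-internal executor:

    # Hand-rolled version of one monitor probe (cf. RemoteCommand 3 targeting
    # cmodb806.togewa.com:27018). Host/port come from the log above; adjust
    # them to something reachable from where you run this. Requires PyMongo.
    from pymongo import MongoClient

    # directConnection=True talks to this single member instead of
    # discovering the whole set, which is what each monitor probe does.
    client = MongoClient("cmodb806.togewa.com", 27018,
                         directConnection=True,
                         serverSelectionTimeoutMS=5000)

    reply = client.admin.command("isMaster")
    # Fields the monitor reads to confirm the set and find the primary:
    print(reply["setName"], reply.get("primary"))
    print("members:", reply["hosts"], "arbiters:", reply.get("arbiters", []))

The "Confirmed replica set for shardNNNN is ..." and "Signaling confirmed set ... with primary ..." lines further down are the monitor digesting exactly these fields from each reply.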
2019-09-04T06:27:19.386+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 4 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578433, 1), t: 1 }, lastWriteDate: new Date(1567578433000), majorityOpTime: { ts: Timestamp(1567578433, 1), t: 1 }, majorityWriteDate: new Date(1567578433000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578439387), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578433, 1), $configServerState: { opTime: { ts: Timestamp(1567578428, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578434, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578433, 1) } 2019-09-04T06:27:19.386+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578433, 1), t: 1 }, lastWriteDate: new Date(1567578433000), majorityOpTime: { ts: Timestamp(1567578433, 1), t: 1 }, majorityWriteDate: new Date(1567578433000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578439387), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578433, 1), $configServerState: { opTime: { ts: Timestamp(1567578428, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578434, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578433, 1) } target: cmodb807.togewa.com:27018 2019-09-04T06:27:19.387+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 5 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578437, 1), t: 1 }, lastWriteDate: new Date(1567578437000), majorityOpTime: { ts: Timestamp(1567578437, 1), t: 1 }, majorityWriteDate: new Date(1567578437000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578439387), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578437, 1), 
$configServerState: { opTime: { ts: Timestamp(1567578419, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578437, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578437, 1) } 2019-09-04T06:27:19.387+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578437, 1), t: 1 }, lastWriteDate: new Date(1567578437000), majorityOpTime: { ts: Timestamp(1567578437, 1), t: 1 }, majorityWriteDate: new Date(1567578437000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578439387), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578437, 1), $configServerState: { opTime: { ts: Timestamp(1567578419, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578437, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578437, 1) } target: cmodb809.togewa.com:27018 2019-09-04T06:27:19.387+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 6 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578437, 1), t: 1 }, lastWriteDate: new Date(1567578437000), majorityOpTime: { ts: Timestamp(1567578437, 1), t: 1 }, majorityWriteDate: new Date(1567578437000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578439387), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578437, 1), $configServerState: { opTime: { ts: Timestamp(1567578434, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578437, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578437, 1) } 2019-09-04T06:27:19.387+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578437, 1), t: 1 }, lastWriteDate: new Date(1567578437000), majorityOpTime: { ts: Timestamp(1567578437, 1), t: 1 }, majorityWriteDate: new Date(1567578437000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, 
maxWriteBatchSize: 100000, localTime: new Date(1567578439387), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578437, 1), $configServerState: { opTime: { ts: Timestamp(1567578434, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578437, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578437, 1) } target: cmodb808.togewa.com:27018 2019-09-04T06:27:19.387+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard0001 is shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:27:19.387+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Signaling confirmed set shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 with primary cmodb808.togewa.com:27018 2019-09-04T06:27:19.387+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 1ms 2019-09-04T06:27:19.387+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 7 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578429, 1), t: 1 }, lastWriteDate: new Date(1567578429000), majorityOpTime: { ts: Timestamp(1567578429, 1), t: 1 }, majorityWriteDate: new Date(1567578429000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578439387), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578429, 1), $configServerState: { opTime: { ts: Timestamp(1567578434, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578434, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578429, 1) } 2019-09-04T06:27:19.387+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578429, 1), t: 1 }, lastWriteDate: new Date(1567578429000), majorityOpTime: { ts: Timestamp(1567578429, 1), t: 1 }, majorityWriteDate: new Date(1567578429000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578439387), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578429, 1), $configServerState: { opTime: { ts: 
Timestamp(1567578434, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578434, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578429, 1) } target: cmodb810.togewa.com:27018 2019-09-04T06:27:19.387+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard0002 is shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:27:19.387+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Signaling confirmed set shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 with primary cmodb810.togewa.com:27018 2019-09-04T06:27:19.387+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 3 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578433, 1), t: 1 }, lastWriteDate: new Date(1567578433000), majorityOpTime: { ts: Timestamp(1567578433, 1), t: 1 }, majorityWriteDate: new Date(1567578433000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578439387), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578433, 1), $configServerState: { opTime: { ts: Timestamp(1567578434, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578434, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578433, 1) } 2019-09-04T06:27:19.387+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578433, 1), t: 1 }, lastWriteDate: new Date(1567578433000), majorityOpTime: { ts: Timestamp(1567578433, 1), t: 1 }, majorityWriteDate: new Date(1567578433000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578439387), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578433, 1), $configServerState: { opTime: { ts: Timestamp(1567578434, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578434, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578433, 1) } target: cmodb806.togewa.com:27018 2019-09-04T06:27:19.387+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard0000 is shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:27:19.387+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Signaling confirmed set 
shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 with primary cmodb806.togewa.com:27018 2019-09-04T06:27:19.387+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 2ms 2019-09-04T06:27:19.400+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 8 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578429, 1), t: 1 }, lastWriteDate: new Date(1567578429000), majorityOpTime: { ts: Timestamp(1567578429, 1), t: 1 }, majorityWriteDate: new Date(1567578429000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578439397), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578429, 1), $configServerState: { opTime: { ts: Timestamp(1567578428, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578434, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578429, 1) } 2019-09-04T06:27:19.400+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578429, 1), t: 1 }, lastWriteDate: new Date(1567578429000), majorityOpTime: { ts: Timestamp(1567578429, 1), t: 1 }, majorityWriteDate: new Date(1567578429000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578439397), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578429, 1), $configServerState: { opTime: { ts: Timestamp(1567578428, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578434, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578429, 1) } target: cmodb811.togewa.com:27018 2019-09-04T06:27:19.400+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 15ms 2019-09-04T06:27:19.424+0000 I NETWORK [listener] connection accepted from 10.108.2.62:53320 #13 (3 connections now open) 2019-09-04T06:27:19.424+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:19.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: 
"local.__system", $db: "admin" } 2019-09-04T06:27:19.424+0000 I NETWORK [conn13] received client metadata from 10.108.2.62:53320 conn13: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:19.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:19.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:19.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:19.435+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:19.490+0000 I NETWORK [listener] connection accepted from 10.108.2.50:49982 #14 (4 connections now open) 2019-09-04T06:27:19.490+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:19.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:19.490+0000 I NETWORK [conn14] received client metadata from 10.108.2.50:49982 conn14: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:19.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:19.491+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:19.491+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:19.501+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52008 #15 (5 connections now open) 2019-09-04T06:27:19.502+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:19.502+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 
3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:19.502+0000 I NETWORK [conn15] received client metadata from 10.108.2.58:52008 conn15: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:19.502+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:19.502+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:19.502+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:19.506+0000 I NETWORK [listener] connection accepted from 10.108.2.33:45438 #16 (6 connections now open) 2019-09-04T06:27:19.506+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:19.506+0000 D2 COMMAND [conn16] run command admin.$cmd { getnonce: 1, $db: "admin" } 2019-09-04T06:27:19.506+0000 I COMMAND [conn16] command admin.$cmd command: getnonce { getnonce: 1, $db: "admin" } numYields:0 reslen:295 locks:{} protocol:op_query 0ms 2019-09-04T06:27:19.506+0000 D2 COMMAND [conn16] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:27:19.506+0000 I COMMAND [conn16] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:27:19.518+0000 D3 STORAGE [monitoring-keys-for-HMAC] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:27:19.518+0000 D2 COMMAND [monitoring-keys-for-HMAC] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(0, 0) } }, sort: { expiresAt: 1 }, $readPreference: { mode: "nearest", tags: [] }, $db: "admin" } 2019-09-04T06:27:19.518+0000 D3 STORAGE [monitoring-keys-for-HMAC] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:19.518+0000 I SHARDING [monitoring-keys-for-HMAC] Marking collection admin.system.keys as collection version: 2019-09-04T06:27:19.518+0000 D5 QUERY [monitoring-keys-for-HMAC] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=admin.system.keysTree: $and purpose $eq "HMAC" expiresAt $gt Timestamp(0, 0) Sort: { expiresAt: 1 } Proj: {} ============================= 2019-09-04T06:27:19.518+0000 D5 QUERY [monitoring-keys-for-HMAC] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" } 2019-09-04T06:27:19.518+0000 D5 QUERY [monitoring-keys-for-HMAC] Predicate over field 'expiresAt' 2019-09-04T06:27:19.518+0000 D5 QUERY [monitoring-keys-for-HMAC] Predicate over field 'purpose' 2019-09-04T06:27:19.518+0000 D5 QUERY [monitoring-keys-for-HMAC] Rated tree: $and purpose $eq "HMAC" || First: notFirst: full path: purpose expiresAt $gt Timestamp(0, 0) || First: notFirst: full path: expiresAt 2019-09-04T06:27:19.518+0000 D5 QUERY [monitoring-keys-for-HMAC] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:19.518+0000 D5 QUERY [monitoring-keys-for-HMAC] Planner: outputting a collscan: SORT ---pattern = { expiresAt: 1 } ---limit = 0 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] ---Child: ------SORT_KEY_GENERATOR ---------sortSpec = { expiresAt: 1 } ---------fetched = 1 ---------sortedByDiskLoc = 0 ---------getSort = [] ---------Child: ------------COLLSCAN ---------------ns = admin.system.keys ---------------filter = $and purpose $eq "HMAC" expiresAt $gt Timestamp(0, 0) ---------------fetched = 1 ---------------sortedByDiskLoc = 0 ---------------getSort = [] 2019-09-04T06:27:19.518+0000 D2 QUERY [monitoring-keys-for-HMAC] Only one plan is available; it will be run but will not be cached. query: { purpose: "HMAC", expiresAt: { $gt: Timestamp(0, 0) } } sort: { expiresAt: 1 } projection: {}, planSummary: COLLSCAN 2019-09-04T06:27:19.518+0000 D3 STORAGE [monitoring-keys-for-HMAC] begin_transaction on local snapshot Timestamp(1567578428, 2) 2019-09-04T06:27:19.518+0000 D3 STORAGE [monitoring-keys-for-HMAC] WT begin_transaction for snapshot id 66 2019-09-04T06:27:19.518+0000 D3 STORAGE [monitoring-keys-for-HMAC] WT rollback_transaction for snapshot id 66 2019-09-04T06:27:19.518+0000 I COMMAND [monitoring-keys-for-HMAC] command admin.system.keys command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(0, 0) } }, sort: { expiresAt: 1 }, $readPreference: { mode: "nearest", tags: [] }, $db: "admin" } planSummary: COLLSCAN keysExamined:0 docsExamined:2 hasSortStage:1 cursorExhausted:1 numYields:0 nreturned:2 queryHash:6DC32749 planCacheKey:6DC32749 reslen:496 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:27:19.522+0000 I NETWORK [listener] connection accepted from 10.108.2.61:37810 #17 (7 connections now open) 2019-09-04T06:27:19.522+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:19.522+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:19.522+0000 I NETWORK [conn17] 
received client metadata from 10.108.2.61:37810 conn17: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:19.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:19.526+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:19.527+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:19.535+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:19.597+0000 I NETWORK [listener] connection accepted from 10.108.2.53:50588 #18 (8 connections now open) 2019-09-04T06:27:19.597+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:19.597+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:19.598+0000 I NETWORK [conn18] received client metadata from 10.108.2.53:50588 conn18: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:19.598+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:19.622+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46502 #19 (9 connections now open) 2019-09-04T06:27:19.622+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:19.622+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:19.622+0000 I NETWORK [conn19] received client metadata from 10.108.2.64:46502 conn19: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, 
os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:19.622+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:19.625+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:19.625+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:19.627+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:19.627+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:19.630+0000 I NETWORK [listener] connection accepted from 10.108.2.15:39012 #20 (10 connections now open) 2019-09-04T06:27:19.630+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:19.630+0000 D2 COMMAND [conn20] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:19.630+0000 I NETWORK [conn20] received client metadata from 10.108.2.15:39012 conn20: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:19.630+0000 I COMMAND [conn20] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:19.635+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:19.640+0000 I NETWORK [listener] connection accepted from 10.108.2.15:39014 #21 (11 connections now open) 2019-09-04T06:27:19.640+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:19.640+0000 D2 COMMAND [conn21] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 
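[annotation] Each "connection accepted" / "received client metadata" pair above is one driver handshake: the first isMaster on a new socket carries a client document (driver name and version, OS details) plus the connection's wire-version range, and the internalClient field marks these particular connections as intra-cluster peers rather than external drivers. The metadata document itself is fixed by the driver, but an application can tag it with an appname, which then shows up in these log lines and in currentOp output. A small illustration, assuming a reachable server; host, port and the appname value are placeholders, not taken from this log:

    # Illustration of tagging the handshake metadata that produces the
    # "received client metadata" lines above. Host/port are placeholders.
    # Requires PyMongo (the appname option exists since PyMongo 3.4).
    from pymongo import MongoClient

    client = MongoClient("localhost", 27017, appname="session-refresher")

    # Any first command forces the handshake; the server-side log line then
    # includes { application: { name: "session-refresher" } } in the
    # client metadata for this connection.
    print(client.admin.command("ping"))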
2019-09-04T06:27:19.640+0000 I NETWORK [conn21] received client metadata from 10.108.2.15:39014 conn21: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:19.640+0000 I COMMAND [conn21] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:19.651+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36426 #22 (12 connections now open)
2019-09-04T06:27:19.651+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:19.651+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49052 #23 (13 connections now open)
2019-09-04T06:27:19.651+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:19.651+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:19.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:19.651+0000 I NETWORK [conn22] received client metadata from 10.108.2.45:36426 conn22: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:19.651+0000 I NETWORK [conn23] received client metadata from 10.108.2.54:49052 conn23: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:19.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:19.651+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:19.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:19.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:19.694+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:19.694+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:19.735+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:19.745+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46504 #24 (14 connections now open)
2019-09-04T06:27:19.745+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:19.745+0000 D2 COMMAND [conn24] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:19.745+0000 I NETWORK [conn24] received client metadata from 10.108.2.64:46504 conn24: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:19.745+0000 I COMMAND [conn24] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:19.765+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36520 #25 (15 connections now open)
2019-09-04T06:27:19.765+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:19.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:19.765+0000 I NETWORK [conn25] received client metadata from 10.108.2.55:36520 conn25: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:19.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:19.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:19.766+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:19.780+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34128 #26 (16 connections now open)
2019-09-04T06:27:19.780+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:19.780+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:19.780+0000 I NETWORK [conn26] received client metadata from 10.108.2.57:34128 conn26: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:19.780+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:19.784+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:19.784+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:19.836+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:19.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:19.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 9) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:19.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 9 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:27:29.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:19.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:19.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 10) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:19.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 10 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:29.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:19.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:49.334+0000
2019-09-04T06:27:19.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:49.334+0000
2019-09-04T06:27:19.836+0000 D2 ASIO [Replication] Request 9 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:19.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:27:19.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:19.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 9) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:19.836+0000 D4 ELECTION [replexec-2] Postponing election timeout due to heartbeat from primary
2019-09-04T06:27:19.836+0000 D4 REPL [replexec-2] Canceling election timeout callback at 2019-09-04T06:27:29.496+0000
2019-09-04T06:27:19.836+0000 D4 ELECTION [replexec-2] Scheduling election timeout callback at 2019-09-04T06:27:30.752+0000
2019-09-04T06:27:19.836+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:27:19.836+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:20.336Z
2019-09-04T06:27:19.836+0000 D2 ASIO [Replication] Request 10 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:19.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:19.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:19.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:49.334+0000
2019-09-04T06:27:19.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:27:49.334+0000
2019-09-04T06:27:19.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:19.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 10) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:19.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:27:19.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:20.336Z
2019-09-04T06:27:19.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:49.334+0000
2019-09-04T06:27:19.854+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:19.854+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:19.891+0000 I NETWORK [listener] connection accepted from 10.108.2.49:53258 #27 (17 connections now open)
2019-09-04T06:27:19.891+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:19.891+0000 D2 COMMAND [conn27] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:19.891+0000 I NETWORK [conn27] received client metadata from 10.108.2.49:53258 conn27: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:19.891+0000 I COMMAND [conn27] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:19.920+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:19.920+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:19.935+0000 I NETWORK [listener] connection accepted from 10.108.2.34:38146 #28 (18 connections now open)
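The REPL_HB/ASIO entries above are one full round of replica-set heartbeats: this node (fromId: 1, cmodb803.togewa.com:27019) pings cmodb802, which answers state: 1 (PRIMARY), and cmodb804, which answers state: 2 (SECONDARY) with syncingTo pointing at the primary; both responses are marked good, the next heartbeats are scheduled, and the election timeout is pushed back because the primary is healthy. replSetHeartbeat itself is internal-only, but the same member states are visible through replSetGetStatus; a small pymongo sketch (field names as returned by a 4.2 server):

    # Sketch: observe the member states the heartbeats above report.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        # stateStr is "PRIMARY" for state 1, "SECONDARY" for state 2.
        print(m["name"], m["state"], m["stateStr"], m.get("syncingTo", ""))
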
2019-09-04T06:27:19.935+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:19.935+0000 D2 COMMAND [conn28] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:19.935+0000 I NETWORK [conn28] received client metadata from 10.108.2.34:38146 conn28: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:19.935+0000 I COMMAND [conn28] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:19.936+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:19.978+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47046 #29 (19 connections now open)
2019-09-04T06:27:19.978+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:19.978+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:19.978+0000 I NETWORK [conn29] received client metadata from 10.108.2.52:47046 conn29: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:19.978+0000 I NETWORK [listener] connection accepted from 10.108.2.47:56426 #30 (20 connections now open)
2019-09-04T06:27:19.978+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:19.978+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:19.978+0000 D2 COMMAND [conn30] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:19.978+0000 I NETWORK [conn30] received client metadata from 10.108.2.47:56426 conn30: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:19.978+0000 I COMMAND [conn30] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:19.979+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:19.979+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:19.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:19.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:19.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:19.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:20.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:20.005+0000 D2 COMMAND [conn16] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:27:20.005+0000 D1 ACCESS [conn16] Getting user dba_root@admin from disk
2019-09-04T06:27:20.005+0000 D3 STORAGE [conn16] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:20.005+0000 I SHARDING [conn16] Marking collection admin.system.users as collection version:
2019-09-04T06:27:20.005+0000 D5 QUERY [conn16] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=admin.system.users
Tree: $and
    db $eq "admin"
    user $eq "dba_root"
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:20.005+0000 D5 QUERY [conn16] Index 0 is kp: { user: 1, db: 1 } unique name: '(user_1_db_1, )' io: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }
2019-09-04T06:27:20.005+0000 D5 QUERY [conn16] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }
2019-09-04T06:27:20.005+0000 D5 QUERY [conn16] Predicate over field 'db'
2019-09-04T06:27:20.005+0000 D5 QUERY [conn16] Predicate over field 'user'
2019-09-04T06:27:20.005+0000 D2 QUERY [conn16] Relevant index 0 is kp: { user: 1, db: 1 } unique name: '(user_1_db_1, )' io: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }
2019-09-04T06:27:20.005+0000 D5 QUERY [conn16] Rated tree: $and
    db $eq "admin" || First: notFirst: 0 full path: db
    user $eq "dba_root" || First: 0 notFirst: full path: user
2019-09-04T06:27:20.005+0000 D5 QUERY [conn16] Tagging memoID 1
2019-09-04T06:27:20.005+0000 D5 QUERY [conn16] Enumerator: memo just before moving: [Node #1]: AND enumstate counter 0
choice 0:
    subnodes: idx[0]
        pos 0 pred user $eq "dba_root"
        pos 1 pred db $eq "admin"
2019-09-04T06:27:20.005+0000 D5 QUERY [conn16] About to build solntree from tagged tree: $and
    db $eq "admin" || Selected Index #0 pos 1 combine 1
    user $eq "dba_root" || Selected Index #0 pos 0 combine 1
2019-09-04T06:27:20.005+0000 D5 QUERY [conn16] Planner: adding solution: FETCH
---fetched = 1
---sortedByDiskLoc = 1
---getSort = [{ db: 1 }, { user: 1 }, { user: 1, db: 1 }, ]
---Child:
------IXSCAN
---------indexName = user_1_db_1 keyPattern = { user: 1, db: 1 }
---------direction = 1
---------bounds = field #0['user']: ["dba_root", "dba_root"], field #1['db']: ["admin", "admin"]
---------fetched = 0
---------sortedByDiskLoc = 1
---------getSort = [{ db: 1 }, { user: 1 }, { user: 1, db: 1 }, ]
2019-09-04T06:27:20.005+0000 D5 QUERY [conn16] Planner: outputted 1 indexed solutions.
2019-09-04T06:27:20.005+0000 D2 QUERY [conn16] Only one plan is available; it will be run but will not be cached. query: { user: "dba_root", db: "admin" } sort: {} projection: {}, planSummary: IXSCAN { user: 1, db: 1 }
2019-09-04T06:27:20.005+0000 D3 STORAGE [conn16] begin_transaction on local snapshot Timestamp(1567578428, 2)
2019-09-04T06:27:20.005+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 97
2019-09-04T06:27:20.005+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 97
2019-09-04T06:27:20.006+0000 I COMMAND [conn16] command admin.system.users command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 queryHash:0A298B98 planCacheKey:C2D1BA7E reslen:385 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 113, timeReadingMicros: 7 } } protocol:op_query 1ms
2019-09-04T06:27:20.017+0000 D2 COMMAND [conn16] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:27:20.017+0000 I COMMAND [conn16] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:27:20.017+0000 D2 COMMAND [conn16] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:27:20.017+0000 D1 ACCESS [conn16] Returning user dba_root@admin from cache
2019-09-04T06:27:20.017+0000 I ACCESS [conn16] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45438
2019-09-04T06:27:20.017+0000 I COMMAND [conn16] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:27:20.019+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.019+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.036+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:20.039+0000 D2 COMMAND [conn16] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:27:20.040+0000 I COMMAND [conn16] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35035 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:20.041+0000 D2 COMMAND [conn16] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:27:20.041+0000 D2 COMMAND [conn16] command: replSetGetStatus
2019-09-04T06:27:20.041+0000 I COMMAND [conn16] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2150 locks:{} protocol:op_query 0ms
2019-09-04T06:27:20.041+0000 D2 COMMAND [conn16] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:27:20.041+0000 D3 STORAGE [conn16] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:20.041+0000 D5 QUERY [conn16] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:20.041+0000 D5 QUERY [conn16] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:27:20.041+0000 D5 QUERY [conn16] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:27:20.041+0000 D5 QUERY [conn16] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:27:20.041+0000 D5 QUERY [conn16] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:27:20.041+0000 D5 QUERY [conn16] Predicate over field 'jumbo'
2019-09-04T06:27:20.041+0000 D5 QUERY [conn16] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:27:20.041+0000 D5 QUERY [conn16] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:20.041+0000 D5 QUERY [conn16] Planner: outputting a collscan: COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:20.041+0000 D2 QUERY [conn16] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:27:20.041+0000 D3 STORAGE [conn16] begin_transaction on local snapshot Timestamp(1567578428, 2)
2019-09-04T06:27:20.041+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 104
2019-09-04T06:27:20.041+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 104
2019-09-04T06:27:20.041+0000 I COMMAND [conn16] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:20.041+0000 D2 COMMAND [conn16] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:27:20.041+0000 I COMMAND [conn16] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:27:20.042+0000 D2 COMMAND [conn16] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:27:20.042+0000 D3 STORAGE [conn16] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:20.042+0000 D5 QUERY [conn16] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:27:20.042+0000 D5 QUERY [conn16] Forcing a table scan due to hinted $natural
2019-09-04T06:27:20.042+0000 D2 QUERY [conn16] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:27:20.042+0000 D3 STORAGE [conn16] begin_transaction on local snapshot Timestamp(1567578428, 2)
2019-09-04T06:27:20.042+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 107
2019-09-04T06:27:20.042+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 107
2019-09-04T06:27:20.042+0000 I COMMAND [conn16] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:20.042+0000 D2 COMMAND [conn16] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:27:20.042+0000 D3 STORAGE [conn16] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:20.042+0000 D5 QUERY [conn16] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:27:20.042+0000 D5 QUERY [conn16] Forcing a table scan due to hinted $natural
2019-09-04T06:27:20.042+0000 D2 QUERY [conn16] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:27:20.042+0000 D3 STORAGE [conn16] begin_transaction on local snapshot Timestamp(1567578428, 2)
2019-09-04T06:27:20.042+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 109
2019-09-04T06:27:20.042+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 109
2019-09-04T06:27:20.042+0000 I COMMAND [conn16] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:20.042+0000 D2 COMMAND [conn16] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:27:20.042+0000 I SHARDING [conn16] Marking collection local.oplog.$main as collection version:
2019-09-04T06:27:20.042+0000 D2 QUERY [conn16] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:27:20.042+0000 I COMMAND [conn16] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
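After authenticating, conn16 runs what looks like a routine monitoring sweep: serverStatus, replSetGetStatus, a count of jumbo chunks in config.chunks (a COLLSCAN, since jumbo has no index), the first and last oplog entries via $natural-sorted finds, and a probe of the pre-replica-set oplog.$main, which yields an EOF plan because the collection does not exist. The equivalent probes in pymongo, restricted to commands already visible in the log (a sketch):

    # Sketch: replay the monitoring probes logged on conn16.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    jumbo = client.config.chunks.count_documents({"jumbo": True})
    first = client.local["oplog.rs"].find_one(sort=[("$natural", 1)])
    last = client.local["oplog.rs"].find_one(sort=[("$natural", -1)])
    # first["ts"] .. last["ts"] bound this member's replication window.
    print(jumbo, first["ts"], last["ts"])
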
"admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:27:20.042+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 112 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 113 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 113 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 114 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, 
multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 114 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 115 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 115 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 116 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 116
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 117
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 117
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 118
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 118
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 119
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 119
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 120
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 120
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 121
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 121
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 122
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 122
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 123
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 123
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 124
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 124
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 125
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 125
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 126
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
2019-09-04T06:27:20.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 126
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 127
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 127
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 128
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 128
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 129
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 129
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 130
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 130
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 131
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 131
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 132
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 132
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 133
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 133
2019-09-04T06:27:20.044+0000 I COMMAND [conn16] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:27:20.044+0000 D2 COMMAND [conn16] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 135
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 135
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 136
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 136
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 137
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 137
2019-09-04T06:27:20.044+0000 I COMMAND [conn16] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:20.044+0000 D2 COMMAND [conn16] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 139
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 139
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 140
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 140
2019-09-04T06:27:20.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 141
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 141
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 142
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 142
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 143
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 143
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 144
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 144
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 145
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 145
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 146
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 146
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 147
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 147
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 148
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 148
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 149
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 149
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 150
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 150
2019-09-04T06:27:20.045+0000 I COMMAND [conn16] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:20.045+0000 D2 COMMAND [conn16] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 152
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 152
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 153
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 153
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 154
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 154
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 155
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 155
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 156
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 156
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 157
2019-09-04T06:27:20.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 157
2019-09-04T06:27:20.045+0000 I COMMAND [conn16] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:20.052+0000 I NETWORK [listener] connection accepted from 10.108.2.47:56430 #31 (21 connections now open)
2019-09-04T06:27:20.052+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:20.052+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:20.052+0000 I NETWORK [conn31] received client metadata from 10.108.2.47:56430 conn31: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:20.052+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:20.119+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.119+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.127+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.127+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.136+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:20.141+0000 I NETWORK [listener] connection accepted from 10.108.2.46:40854 #32 (22 connections now open)
2019-09-04T06:27:20.141+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:20.141+0000 I NETWORK [listener] connection accepted from 10.108.2.46:40856 #33 (23 connections now open)
2019-09-04T06:27:20.141+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:20.141+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:20.141+0000 D2 COMMAND [conn32] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:20.141+0000 I NETWORK [conn33] received client metadata from 10.108.2.46:40856 conn33: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:20.141+0000 I NETWORK [conn32] received client metadata from 10.108.2.46:40854 conn32: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:20.141+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:20.141+0000 I COMMAND [conn32] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:20.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.170+0000 I NETWORK [listener] connection accepted from 10.108.2.32:35488 #34 (24 connections now open)
2019-09-04T06:27:20.170+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:20.170+0000 D2 COMMAND [conn34] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:20.170+0000 I NETWORK [conn34] received client metadata from 10.108.2.32:35488 conn34: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:20.171+0000 I COMMAND [conn34] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:20.194+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.194+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 1D27681C2750AA809FB778BC96721FF8411E267D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:20.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:27:20.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 1D27681C2750AA809FB778BC96721FF8411E267D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:20.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 1D27681C2750AA809FB778BC96721FF8411E267D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:20.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578428, 2), t: 1 }, durableWallTime: new Date(1567578428361), opTime: { ts: Timestamp(1567578428, 2), t: 1 }, wallTime: new Date(1567578428361) }
2019-09-04T06:27:20.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 1D27681C2750AA809FB778BC96721FF8411E267D), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:676 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:27:20.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999991 Now: 1000000000
2019-09-04T06:27:20.236+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:20.245+0000 D4 - [WT-OplogTruncaterThread-local.oplog.rs] Taking ticket. Available: 1000000000
2019-09-04T06:27:20.254+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36430 #35 (25 connections now open)
2019-09-04T06:27:20.254+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:20.254+0000 D2 COMMAND [conn35] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:20.254+0000 I NETWORK [conn35] received client metadata from 10.108.2.45:36430 conn35: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:20.254+0000 I COMMAND [conn35] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:20.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.276+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.276+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.336+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:20.336+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:20.336+0000 D2 REPL_HB [replexec-2] Sending heartbeat (requestId: 11) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:20.336+0000 D3 EXECUTOR [replexec-2] Scheduling remote command request: RemoteCommand 11 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:27:30.336+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:20.336+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:27:49.334+0000
2019-09-04T06:27:20.336+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 12) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:20.336+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 12 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:30.336+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:20.336+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:49.334+0000
2019-09-04T06:27:20.336+0000 I NETWORK [listener] connection accepted from 10.108.2.53:50592 #36 (26 connections now open)
2019-09-04T06:27:20.336+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:20.336+0000 D2 COMMAND [conn36] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:20.336+0000 D2 ASIO [Replication] Request 11 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:20.336+0000 I NETWORK [conn36] received client metadata from 10.108.2.53:50592 conn36: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:20.336+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:27:20.336+0000 I COMMAND [conn36] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:20.336+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:20.336+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 11) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:20.336+0000 D2 ASIO [Replication] Request 12 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:20.336+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:27:20.336+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:27:30.752+0000
2019-09-04T06:27:20.336+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:20.336+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:27:30.474+0000
2019-09-04T06:27:20.336+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:20.336+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:27:20.336+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:20.836Z
2019-09-04T06:27:20.336+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:20.336+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:50.336+0000
2019-09-04T06:27:20.336+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:20.336+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:50.336+0000
2019-09-04T06:27:20.336+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 12) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:20.336+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:27:20.337+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:20.836Z
2019-09-04T06:27:20.337+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:27:50.336+0000
2019-09-04T06:27:20.351+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 174
2019-09-04T06:27:20.351+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 174
2019-09-04T06:27:20.351+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578428, 2), t: 1 }({ ts: Timestamp(1567578428, 2), t: 1 })
2019-09-04T06:27:20.351+0000 D3 STORAGE [rsBackgroundSync] WT begin_transaction for snapshot id 177
2019-09-04T06:27:20.351+0000 D3 STORAGE [rsBackgroundSync] WT rollback_transaction for snapshot id 177
2019-09-04T06:27:20.351+0000 D3 REPL [rsBackgroundSync] returning minvalid: { ts: Timestamp(1567578428, 2), t: 1 }({ ts: Timestamp(1567578428, 2), t: 1 })
2019-09-04T06:27:20.351+0000 I REPL [rsBackgroundSync] sync source candidate: cmodb804.togewa.com:27019
2019-09-04T06:27:20.351+0000 D3 EXECUTOR [rsBackgroundSync] Scheduling remote command request: RemoteCommand 13 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:50.351+0000 cmd:{ find: "oplog.rs", limit: 1, sort: { $natural: 1 }, projection: { ts: 1, t: 1 } }
2019-09-04T06:27:20.351+0000 I CONNPOOL [RS] Connecting to cmodb804.togewa.com:27019
2019-09-04T06:27:20.351+0000 D2 ASIO [RS] Finished connection setup.
2019-09-04T06:27:20.352+0000 D2 ASIO [RS] Request 13 finished with response: { cursor: { firstBatch: [ { ts: Timestamp(1566459291, 1) } ], id: 0, ns: "local.oplog.rs" }, ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:20.352+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { firstBatch: [ { ts: Timestamp(1566459291, 1) } ], id: 0, ns: "local.oplog.rs" }, ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:20.353+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:20.353+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 14 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:50.353+0000 cmd:{ replSetGetRBID: 1 }
2019-09-04T06:27:20.353+0000 D3 EXECUTOR [replication-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1
2019-09-04T06:27:20.353+0000 D2 ASIO [RS] Request 14 finished with response: { rbid: 1, ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:20.353+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ rbid: 1, ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:20.353+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:20.353+0000 D3 EXECUTOR [replication-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1
2019-09-04T06:27:20.353+0000 I REPL [rsBackgroundSync] Changed sync source from empty to cmodb804.togewa.com:27019
2019-09-04T06:27:20.353+0000 D1 REPL [SyncSourceFeedback] setting syncSourceFeedback to cmodb804.togewa.com:27019
2019-09-04T06:27:20.353+0000 D3 STORAGE [rsBackgroundSync] WT begin_transaction for snapshot id 179
2019-09-04T06:27:20.353+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:20.353+0000 D3 STORAGE [rsBackgroundSync] WT rollback_transaction for snapshot id 179
2019-09-04T06:27:20.353+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578428, 2), t: 1 }, durableWallTime: new Date(1567578428361), appliedOpTime: { ts: Timestamp(1567578428, 2), t: 1 }, appliedWallTime: new Date(1567578428361), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578428, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:20.353+0000 D3 REPL [rsBackgroundSync] No appliedThrough OpTime set, returning empty appliedThrough OpTime.
2019-09-04T06:27:20.353+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 15 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:50.353+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578428, 2), t: 1 }, durableWallTime: new Date(1567578428361), appliedOpTime: { ts: Timestamp(1567578428, 2), t: 1 }, appliedWallTime: new Date(1567578428361), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578428, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:20.353+0000 D3 REPL [rsBackgroundSync] setting appliedThrough to: { ts: Timestamp(1567578428, 2), t: 1 }({ ts: Timestamp(1567578428, 2), t: 1 })
2019-09-04T06:27:20.353+0000 D3 EXECUTOR [replication-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1
2019-09-04T06:27:20.353+0000 D4 - [rsBackgroundSync] Taking ticket. Available: 999999999
2019-09-04T06:27:20.353+0000 D3 STORAGE [rsBackgroundSync] WT set timestamp of future write operations to Timestamp(1567578428, 2)
2019-09-04T06:27:20.353+0000 D3 STORAGE [rsBackgroundSync] WT begin_transaction for snapshot id 180
2019-09-04T06:27:20.353+0000 D5 QUERY [rsBackgroundSync] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:20.353+0000 D5 QUERY [rsBackgroundSync] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:20.353+0000 D5 QUERY [rsBackgroundSync] Rated tree: $and
2019-09-04T06:27:20.353+0000 D5 QUERY [rsBackgroundSync] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:20.353+0000 D5 QUERY [rsBackgroundSync] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:20.353+0000 D2 QUERY [rsBackgroundSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:27:20.353+0000 D3 STORAGE [rsBackgroundSync] WT commit_transaction for snapshot id 180
2019-09-04T06:27:20.353+0000 D1 REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on cmodb804.togewa.com:27019 starting at filter: { ts: { $gte: Timestamp(1567578428, 2) } }
2019-09-04T06:27:20.353+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:20.353+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 16 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:25.353+0000 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp(1567578428, 2) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, batchSize: 13981010, term: 1, readConcern: { afterClusterTime: Timestamp(1567578428, 2) } }
2019-09-04T06:27:20.353+0000 D3 EXECUTOR [replication-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1
2019-09-04T06:27:20.353+0000 D2 CONNPOOL [RS] Connecting to cmodb804.togewa.com:27019
2019-09-04T06:27:20.354+0000 D2 ASIO [RS] Finished connection setup.
2019-09-04T06:27:20.354+0000 D2 ASIO [RS] Request 15 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:20.354+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:20.354+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:20.354+0000 D3 EXECUTOR [replication-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1
2019-09-04T06:27:20.354+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:20.354+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:20.354+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578428, 2)
2019-09-04T06:27:20.354+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 182
2019-09-04T06:27:20.354+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:20.354+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:20.354+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 182
2019-09-04T06:27:20.354+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.354+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.354+0000 D2 ASIO [RS] Request 16 finished with response: { cursor: { firstBatch: [ { ts: Timestamp(1567578428, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578428361), o: { $v: 1, $set: { ping: new Date(1567578428361) } } }, { ts: Timestamp(1567578434, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578434783), o: { $v: 1, $set: { ping: new Date(1567578434780), up: 2335 } } }, { ts: Timestamp(1567578437, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578437423), o: { $v: 1, $set: { ping: new Date(1567578437422) } } }, { ts: Timestamp(1567578437, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578437647), o: { $v: 1, $set: { ping: new Date(1567578437641) } } }, { ts: Timestamp(1567578439, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578439111), o: { $v: 1, $set: { ping: new Date(1567578439108) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpApplied: { ts: Timestamp(1567578439, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:20.354+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { firstBatch: [ { ts: Timestamp(1567578428, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578428361), o: { $v: 1, $set: { ping: new Date(1567578428361) } } }, { ts: Timestamp(1567578434, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578434783), o: { $v: 1, $set: { ping: new Date(1567578434780), up: 2335 } } }, { ts: Timestamp(1567578437, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578437423), o: { $v: 1, $set: { ping: new Date(1567578437422) } } }, { ts: Timestamp(1567578437, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578437647), o: { $v: 1, $set: { ping: new Date(1567578437641) } } }, { ts: Timestamp(1567578439, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578439111), o: { $v: 1, $set: { ping: new Date(1567578439108) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpApplied: { ts: Timestamp(1567578439, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:20.354+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:20.354+0000 D2 REPL [replication-0] oplog fetcher read 5 operations from remote oplog starting at ts: Timestamp(1567578428, 2) and ending at ts: Timestamp(1567578439, 1)
2019-09-04T06:27:20.354+0000 D1 REPL [replication-0] oplog fetcher successfully fetched from cmodb804.togewa.com:27019
2019-09-04T06:27:20.354+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:27:30.474+0000
2019-09-04T06:27:20.354+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:27:31.562+0000
2019-09-04T06:27:20.354+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:20.354+0000 D2 REPL [replication-0] oplog buffer has 0 bytes
2019-09-04T06:27:20.354+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:50.336+0000
2019-09-04T06:27:20.354+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578439, 1), t: 1 }
2019-09-04T06:27:20.354+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:20.355+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:20.355+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578428, 2)
2019-09-04T06:27:20.355+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 187
2019-09-04T06:27:20.355+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:20.355+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:20.355+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 187
2019-09-04T06:27:20.355+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:20.355+0000 D2 REPL [rsSync-0] replication batch size is 4
2019-09-04T06:27:20.355+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:20.355+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578428, 2)
2019-09-04T06:27:20.355+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578434, 1) }
2019-09-04T06:27:20.355+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 190
2019-09-04T06:27:20.355+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:20.355+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:20.355+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 190
2019-09-04T06:27:20.355+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 175
2019-09-04T06:27:20.355+0000 I SHARDING [rsSync-0] Marking collection local.replset.oplogTruncateAfterPoint as collection version:
2019-09-04T06:27:20.355+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 175
2019-09-04T06:27:20.355+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 193
2019-09-04T06:27:20.355+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 193
2019-09-04T06:27:20.355+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:27:20.355+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 195
2019-09-04T06:27:20.355+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578434, 1)
2019-09-04T06:27:20.355+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578434, 1)
2019-09-04T06:27:20.355+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578437, 1)
2019-09-04T06:27:20.355+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578437, 1)
2019-09-04T06:27:20.355+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578437, 2)
2019-09-04T06:27:20.355+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578437, 2)
2019-09-04T06:27:20.355+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578439, 1)
2019-09-04T06:27:20.355+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578439, 1)
2019-09-04T06:27:20.355+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 195
2019-09-04T06:27:20.355+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:20.355+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:27:20.355+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 194
2019-09-04T06:27:20.355+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 194
2019-09-04T06:27:20.355+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 197
2019-09-04T06:27:20.355+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 197
2019-09-04T06:27:20.355+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578439, 1), t: 1 }({ ts: Timestamp(1567578439, 1), t: 1 })
2019-09-04T06:27:20.355+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578439, 1)
2019-09-04T06:27:20.355+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 198
2019-09-04T06:27:20.355+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578439, 1) } } ] } sort: {} projection: {}
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578439, 1) Sort: {} Proj: {} =============================
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578439, 1) || First: notFirst: full path: ts
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578439, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578439, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578439, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
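Note: each planner trace above bottoms out in a COLLSCAN because local.replset.minvalid carries only the default _id index, so the { t, ts } predicates of the minvalid check cannot be answered from an index; the collection holds a single consistency-marker document, which makes the scan trivial. A minimal sketch of reproducing the trace from a shell on this node (illustrative only):

    var local = db.getSiblingDB("local");
    printjson(local.replset.minvalid.findOne());         // the single { ts, t } marker document
    var plan = local.replset.minvalid.find({
        $or: [ { t: { $lt: 1 } },
               { t: 1, ts: { $lt: Timestamp(1567578439, 1) } } ]
    }).explain().queryPlanner.winningPlan;
    printjson(plan);                                     // expect stage: "COLLSCAN", matching the trace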
2019-09-04T06:27:20.355+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578439, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:20.355+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 198
2019-09-04T06:27:20.355+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:27:20.355+0000 D3 STORAGE [repl-writer-worker-5] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:20.355+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:27:20.356+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:20.356+0000 I SHARDING [repl-writer-worker-5] Marking collection config.lockpings as collection version:
2019-09-04T06:27:20.356+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578439, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578439111), o: { $v: 1, $set: { ping: new Date(1567578439108) } } }, oplog application mode: Secondary
2019-09-04T06:27:20.356+0000 D3 REPL [repl-writer-worker-5] applying op: { ts: Timestamp(1567578437, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578437647), o: { $v: 1, $set: { ping: new Date(1567578437641) } } }, oplog application mode: Secondary
2019-09-04T06:27:20.356+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:27:20.356+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578437, 2)
2019-09-04T06:27:20.356+0000 D3 STORAGE [repl-writer-worker-1] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:20.356+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 200
2019-09-04T06:27:20.356+0000 D3 REPL [repl-writer-worker-1] applying op: { ts: Timestamp(1567578437, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578437423), o: { $v: 1, $set: { ping: new Date(1567578437422) } } }, oplog application mode: Secondary
2019-09-04T06:27:20.356+0000 D2 QUERY [repl-writer-worker-5] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }
2019-09-04T06:27:20.356+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578437, 1)
2019-09-04T06:27:20.356+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 202
2019-09-04T06:27:20.356+0000 D2 QUERY [repl-writer-worker-1] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }
2019-09-04T06:27:20.356+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:27:20.356+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:20.356+0000 I SHARDING [repl-writer-worker-9] Marking collection config.mongos as collection version:
2019-09-04T06:27:20.356+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578434, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578434783), o: { $v: 1, $set: { ping: new Date(1567578434780), up: 2335 } } }, oplog application mode: Secondary
2019-09-04T06:27:20.356+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578434, 1)
2019-09-04T06:27:20.356+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 203
2019-09-04T06:27:20.356+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:27:20.356+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578439, 1)
2019-09-04T06:27:20.356+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 201
2019-09-04T06:27:20.356+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }
2019-09-04T06:27:20.356+0000 D4 WRITE [repl-writer-worker-1] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:27:20.356+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 202
2019-09-04T06:27:20.356+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:20.356+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578439, 1), t: 1 }
2019-09-04T06:27:20.356+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 17 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:30.356+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578439, 1), t: 1 } }
2019-09-04T06:27:20.357+0000 D3 EXECUTOR [replication-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1
2019-09-04T06:27:20.357+0000 D4 WRITE [repl-writer-worker-5] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:27:20.357+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 200
2019-09-04T06:27:20.357+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:20.357+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:27:20.357+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 203
2019-09-04T06:27:20.357+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:20.357+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:27:20.357+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 201
2019-09-04T06:27:20.357+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:20.357+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578439, 1), t: 1 }({ ts: Timestamp(1567578439, 1), t: 1 })
2019-09-04T06:27:20.357+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578439, 1)
2019-09-04T06:27:20.357+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 199
2019-09-04T06:27:20.357+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:27:20.357+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:20.357+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:27:20.357+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:20.357+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:20.357+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:27:20.357+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 199
2019-09-04T06:27:20.357+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578439, 1)
2019-09-04T06:27:20.357+0000 D2 REPL [rsSync-0] Setting replication's stable optime to { ts: Timestamp(1567578439, 1), t: 1 }, 2019-09-04T06:27:19.111+0000
2019-09-04T06:27:20.357+0000 D2 STORAGE [rsSync-0] oldest_timestamp set to Timestamp(1567578434, 1)
2019-09-04T06:27:20.357+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 210
2019-09-04T06:27:20.357+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 210
2019-09-04T06:27:20.357+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578439, 1), t: 1 }({ ts: Timestamp(1567578439, 1), t: 1 })
2019-09-04T06:27:20.357+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:20.357+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578428, 2), t: 1 }, durableWallTime: new Date(1567578428361), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:20.357+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 18 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:50.357+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578428, 2), t: 1 }, durableWallTime: new Date(1567578428361), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:20.357+0000 D3 EXECUTOR [replication-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1
2019-09-04T06:27:20.357+0000 D2 ASIO [RS] Request 18 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:20.357+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:20.358+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:20.358+0000 D3 EXECUTOR [replication-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1
2019-09-04T06:27:20.420+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.420+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.420+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:27:20.420+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:20.420+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:20.420+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 19 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:50.420+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:20.420+0000 D3 EXECUTOR [replication-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1
2019-09-04T06:27:20.421+0000 D2 ASIO [RS] Request 19 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:20.421+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:20.421+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:20.421+0000 D3 EXECUTOR [replication-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1
2019-09-04T06:27:20.437+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:20.456+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578439, 1)
2019-09-04T06:27:20.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.518+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.518+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.537+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:20.619+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.619+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.627+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.627+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.637+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:20.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.694+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.694+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.737+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:20.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.776+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.776+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:20.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:20.836+0000 D2 REPL_HB [replexec-2] Sending heartbeat (requestId: 20) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:20.836+0000 D3 EXECUTOR [replexec-2] Scheduling remote command request: RemoteCommand 20 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:27:30.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:20.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:27:50.336+0000
2019-09-04T06:27:20.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 21) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:20.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 21 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:30.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:20.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:50.336+0000
2019-09-04T06:27:20.836+0000 D2 ASIO [Replication] Request 20 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:20.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:27:20.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:20.836+0000 D2 ASIO [Replication] Request 21 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:20.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 20) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:20.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:20.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:27:20.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:27:31.562+0000
2019-09-04T06:27:20.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:20.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:27:32.248+0000
2019-09-04T06:27:20.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:27:20.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:22.836Z
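Note: the cadence above -- heartbeats rescheduled two seconds out, and the election timeout pushed to a jittered deadline roughly 10-11.5 seconds ahead whenever the primary is heard from -- follows the replica-set configuration defaults (heartbeatIntervalMillis: 2000, electionTimeoutMillis: 10000). A minimal check from a shell, as a sketch only (settings may be partly absent when pure defaults are in force):

    var cfg = db.adminCommand({ replSetGetConfig: 1 }).config;
    printjson(cfg.settings);   // look for heartbeatIntervalMillis / electionTimeoutMillis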
2019-09-04T06:27:20.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:20.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:50.836+0000
2019-09-04T06:27:20.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 21) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) }
2019-09-04T06:27:20.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:50.836+0000
2019-09-04T06:27:20.836+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:27:20.836+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:22.836Z
2019-09-04T06:27:20.837+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:27:50.836+0000
2019-09-04T06:27:20.837+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:20.854+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.854+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.920+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.920+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.937+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:20.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:20.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:20.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:21.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:21.002+0000 D4 FTDC [ftdc] full-time diagnostic data capture schema change: currrent document is longer than reference document
2019-09-04T06:27:21.018+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:21.018+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:21.037+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:21.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 1D27681C2750AA809FB778BC96721FF8411E267D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:21.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:27:21.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 1D27681C2750AA809FB778BC96721FF8411E267D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:21.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 1D27681C2750AA809FB778BC96721FF8411E267D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:21.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111) }
2019-09-04T06:27:21.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578439, 1), signature: { hash: BinData(0, 1D27681C2750AA809FB778BC96721FF8411E267D), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:21.118+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35580 #39 (27 connections now open)
2019-09-04T06:27:21.118+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:21.118+0000 D2 COMMAND [conn39] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:21.118+0000 I NETWORK [conn39] received client metadata from 10.108.2.56:35580 conn39: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:21.118+0000 I COMMAND [conn39] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:21.119+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:21.119+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:21.123+0000 D2 COMMAND [conn39] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578431, 1), signature: { hash: BinData(0, D0A0BEB0F4BE06FB58EFD649F48F77452778EFF8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:27:21.123+0000 D1 REPL [conn39] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578439, 1), t: 1 }
2019-09-04T06:27:21.123+0000 D3 REPL [conn39] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.133+0000
2019-09-04T06:27:21.127+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:21.127+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:21.138+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:21.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:21.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:21.194+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:21.194+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:21.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:27:21.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999998 Now: 1000000000
2019-09-04T06:27:21.238+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:21.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:21.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:21.276+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:21.276+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:21.338+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:21.354+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:21.354+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:21.355+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:21.355+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:21.355+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578439, 1)
2019-09-04T06:27:21.355+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 241
2019-09-04T06:27:21.355+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:21.355+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:21.355+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 241
2019-09-04T06:27:21.357+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 244
2019-09-04T06:27:21.357+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 244
2019-09-04T06:27:21.357+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578439, 1), t: 1 }({ ts: Timestamp(1567578439, 1), t: 1 })
2019-09-04T06:27:21.438+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:21.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:21.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:21.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:21.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:21.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:21.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:21.518+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:21.518+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:21.538+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:21.619+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:21.619+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:21.627+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:21.627+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:21.635+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38548 #40 (28 connections now open)
2019-09-04T06:27:21.635+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:21.635+0000 D2 COMMAND [conn40] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:21.635+0000 I NETWORK [conn40] received client metadata from 10.108.2.44:38548 conn40: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:21.635+0000 I COMMAND [conn40] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:21.635+0000 D2 COMMAND [conn40] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578438, 1), signature: { hash: BinData(0, A66536B9583A771E48634DEEE05F33A2CC1B0D56), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:27:21.635+0000 D1 REPL [conn40] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578439, 1), t: 1 }
2019-09-04T06:27:21.635+0000 D3 REPL [conn40] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.645+0000
2019-09-04T06:27:21.638+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:21.650+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49058 #41 (29 connections now open)
2019-09-04T06:27:21.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:21.650+0000 D2 COMMAND [conn41] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:21.650+0000 I NETWORK [conn41] received client metadata from 10.108.2.54:49058 conn41: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:21.651+0000 I COMMAND [conn41] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:21.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:21.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:21.651+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52020 #42 (30 connections now open)
2019-09-04T06:27:21.651+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:21.651+0000 D2 COMMAND [conn41] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578438, 1), signature: { hash: BinData(0, A66536B9583A771E48634DEEE05F33A2CC1B0D56), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:27:21.651+0000 D1 REPL [conn41] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578439, 1), t: 1 }
2019-09-04T06:27:21.651+0000 D3 REPL [conn41] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.661+0000
2019-09-04T06:27:21.651+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:21.651+0000 I NETWORK [conn42] received client metadata from 10.108.2.73:52020 conn42: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:21.651+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel
3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:21.651+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:21.651+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:21.694+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:21.694+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:21.739+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:21.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47050 #43 (31 connections now open) 2019-09-04T06:27:21.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:21.743+0000 D2 COMMAND [conn43] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:21.743+0000 I NETWORK [conn43] received client metadata from 10.108.2.52:47050 conn43: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:21.743+0000 I COMMAND [conn43] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:21.743+0000 D2 COMMAND [conn43] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578432, 1), signature: { hash: BinData(0, F498DAB403BE8C54293FE1F2B3BB1ADB75100204), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:27:21.743+0000 D1 REPL [conn43] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578439, 1), t: 1 } 2019-09-04T06:27:21.743+0000 D3 REPL [conn43] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.753+0000 2019-09-04T06:27:21.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:21.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:27:21.776+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:21.776+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:21.839+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:21.854+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:21.854+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:21.939+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:21.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:21.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:21.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:21.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:21.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:21.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:22.018+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.018+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.039+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:22.119+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.119+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.127+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.127+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.139+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:22.150+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.151+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.194+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.194+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 
2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 30AEC21AF8D3554C0A284F5F57B7D6A2B6BE3728), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:22.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:27:22.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 30AEC21AF8D3554C0A284F5F57B7D6A2B6BE3728), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:22.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 30AEC21AF8D3554C0A284F5F57B7D6A2B6BE3728), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:22.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111) } 2019-09-04T06:27:22.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 30AEC21AF8D3554C0A284F5F57B7D6A2B6BE3728), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:22.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:27:22.239+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:22.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.276+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.276+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.340+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:22.354+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.354+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.355+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:22.355+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:22.355+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578439, 1) 2019-09-04T06:27:22.355+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 280 2019-09-04T06:27:22.355+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:22.355+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:22.355+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 280 2019-09-04T06:27:22.358+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 283 2019-09-04T06:27:22.358+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 283 2019-09-04T06:27:22.358+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578439, 1), t: 1 }({ ts: Timestamp(1567578439, 1), t: 1 }) 2019-09-04T06:27:22.440+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:22.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.518+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.518+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.540+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:22.619+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.619+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.627+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.627+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.640+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:22.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.694+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.694+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.740+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:22.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.776+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.776+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:22.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:22.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 22) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:22.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 22 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:27:32.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:22.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:50.836+0000 2019-09-04T06:27:22.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 23) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:22.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 23 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:32.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:22.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:50.836+0000 2019-09-04T06:27:22.836+0000 D2 ASIO [Replication] Request 22 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), 
state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } 2019-09-04T06:27:22.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:27:22.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:22.836+0000 D2 ASIO [Replication] Request 23 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } 2019-09-04T06:27:22.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, 
configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:22.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 22) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578439, 1) } 2019-09-04T06:27:22.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:22.836+0000 D4 ELECTION [replexec-2] Postponing election timeout due to heartbeat from primary 2019-09-04T06:27:22.836+0000 D4 REPL [replexec-2] Canceling election timeout callback at 2019-09-04T06:27:32.248+0000 2019-09-04T06:27:22.836+0000 D4 ELECTION [replexec-2] Scheduling election timeout callback at 2019-09-04T06:27:33.901+0000 2019-09-04T06:27:22.836+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:27:22.836+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:24.836Z 2019-09-04T06:27:22.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:22.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:27:52.836+0000 2019-09-04T06:27:22.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:52.836+0000 2019-09-04T06:27:22.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 23) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } 
}, operationTime: Timestamp(1567578439, 1) } 2019-09-04T06:27:22.836+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:27:22.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:24.836Z 2019-09-04T06:27:22.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:52.836+0000 2019-09-04T06:27:22.840+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:22.854+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.854+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.941+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:22.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:22.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:22.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:23.018+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.018+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.041+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:23.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 30AEC21AF8D3554C0A284F5F57B7D6A2B6BE3728), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:23.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:27:23.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 30AEC21AF8D3554C0A284F5F57B7D6A2B6BE3728), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:23.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 30AEC21AF8D3554C0A284F5F57B7D6A2B6BE3728), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:23.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: 
"cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111) } 2019-09-04T06:27:23.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 30AEC21AF8D3554C0A284F5F57B7D6A2B6BE3728), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.119+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.119+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.127+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.127+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.141+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:23.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.194+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.194+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:23.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:27:23.241+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:23.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.276+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.276+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.341+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:23.354+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.354+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.355+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:23.355+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:23.355+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578439, 1) 2019-09-04T06:27:23.355+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 311 2019-09-04T06:27:23.355+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:23.355+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:23.355+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 311 2019-09-04T06:27:23.358+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 314 2019-09-04T06:27:23.358+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 314 2019-09-04T06:27:23.358+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578439, 1), t: 1 }({ ts: Timestamp(1567578439, 1), t: 1 }) 2019-09-04T06:27:23.441+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:23.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.518+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.518+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.542+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:23.619+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.619+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.627+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.627+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.642+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:23.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.694+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.694+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.742+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:23.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.776+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.776+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.842+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:23.854+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.854+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.942+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:23.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:23.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:23.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:24.018+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.018+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.042+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
2019-09-04T06:27:24.119+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.119+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.127+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.127+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.141+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.141+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.142+0000 D2 COMMAND [conn32] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578435, 1), signature: { hash: BinData(0, 7FECAD142B7D0C48104AAD6F508D560DEF34F4D8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:27:24.142+0000 D1 REPL [conn32] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578439, 1), t: 1 } 2019-09-04T06:27:24.142+0000 D3 REPL [conn32] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:54.152+0000 2019-09-04T06:27:24.142+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:24.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 30AEC21AF8D3554C0A284F5F57B7D6A2B6BE3728), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:24.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:27:24.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 30AEC21AF8D3554C0A284F5F57B7D6A2B6BE3728), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:24.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 30AEC21AF8D3554C0A284F5F57B7D6A2B6BE3728), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:24.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578439, 1), t: 
1 }, durableWallTime: new Date(1567578439111), opTime: { ts: Timestamp(1567578439, 1), t: 1 }, wallTime: new Date(1567578439111) } 2019-09-04T06:27:24.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578440, 1), signature: { hash: BinData(0, 30AEC21AF8D3554C0A284F5F57B7D6A2B6BE3728), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:24.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:27:24.242+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:24.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.276+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.276+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.343+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:24.354+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.354+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.356+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:24.356+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:24.356+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578439, 1) 2019-09-04T06:27:24.356+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 343 2019-09-04T06:27:24.356+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:24.356+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:24.356+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 343 2019-09-04T06:27:24.358+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 346 2019-09-04T06:27:24.358+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 346 2019-09-04T06:27:24.358+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578439, 1), t: 1 }({ ts: Timestamp(1567578439, 1), t: 1 }) 2019-09-04T06:27:24.443+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:24.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.490+0000 D2 COMMAND 
[conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.518+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.518+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.543+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:24.619+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.619+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.627+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.627+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.641+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.642+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.643+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:24.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.743+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:24.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.776+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.776+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.793+0000 D2 ASIO [RS] Request 17 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578444, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578444789), o: { $v: 1, $set: { ping: new Date(1567578444786), up: 2345 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpApplied: { ts: Timestamp(1567578444, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, 
$gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578444, 1) } 2019-09-04T06:27:24.793+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578444, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578444789), o: { $v: 1, $set: { ping: new Date(1567578444786), up: 2345 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpApplied: { ts: Timestamp(1567578444, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578444, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:24.793+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:24.793+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578444, 1) and ending at ts: Timestamp(1567578444, 1) 2019-09-04T06:27:24.793+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:27:33.901+0000 2019-09-04T06:27:24.793+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:27:35.741+0000 2019-09-04T06:27:24.793+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:24.793+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:27:52.836+0000 2019-09-04T06:27:24.793+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578444, 1), t: 1 } 2019-09-04T06:27:24.793+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:24.793+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:24.793+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578439, 1) 2019-09-04T06:27:24.793+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 359 2019-09-04T06:27:24.793+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:24.793+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 
2019-09-04T06:27:24.793+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 359 2019-09-04T06:27:24.793+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:24.793+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:27:24.793+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:24.793+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578439, 1) 2019-09-04T06:27:24.793+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 362 2019-09-04T06:27:24.793+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578444, 1) } 2019-09-04T06:27:24.793+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:24.793+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:24.793+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 362 2019-09-04T06:27:24.793+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 347 2019-09-04T06:27:24.793+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 347 2019-09-04T06:27:24.793+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 365 2019-09-04T06:27:24.793+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 365 2019-09-04T06:27:24.793+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:24.793+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 367 2019-09-04T06:27:24.793+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578444, 1) 2019-09-04T06:27:24.793+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578444, 1) 2019-09-04T06:27:24.794+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 367 2019-09-04T06:27:24.794+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:24.794+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:27:24.794+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 366 2019-09-04T06:27:24.794+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 366 2019-09-04T06:27:24.794+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 369 2019-09-04T06:27:24.794+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 369 2019-09-04T06:27:24.794+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578444, 1), t: 1 }({ ts: Timestamp(1567578444, 1), t: 1 }) 2019-09-04T06:27:24.794+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578444, 1) 2019-09-04T06:27:24.794+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 370 2019-09-04T06:27:24.794+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578444, 1) } } ] } sort: {} projection: {} 
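
The batch is applied in three beats here: the oplog truncate-after point is set to the batch's last timestamp before any write, a repl-writer-worker inserts the fetched entry into the local oplog at Timestamp(1567578444, 1), and once that write is down the truncate point is reset to Timestamp(0, 0) and minvalid is advanced. Both pieces of bookkeeping live in unreplicated collections in the local database (local.replset.minvalid appears verbatim in the planner trace that follows; in the 4.2 layout the truncate point sits alongside it in local.replset.oplogTruncateAfterPoint). A read-only sketch for inspecting them; never write to either by hand:

    // Recovery bookkeeping shown above; both documents are maintained
    // by the server itself.
    db.getSiblingDB("local").replset.oplogTruncateAfterPoint.findOne()
    db.getSiblingDB("local").replset.minvalid.findOne()
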
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and t $eq 1 ts $lt Timestamp(1567578444, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578444, 1) || First: notFirst: full path: ts
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and t $eq 1 ts $lt Timestamp(1567578444, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or $and t $eq 1 ts $lt Timestamp(1567578444, 1) t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578444, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
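
Each "Beginning planning..." banner above is one pass of the subplanner: the two $or branches are planned individually, then the rooted $or as a whole. With only the _id index on local.replset.minvalid, every pass rates the tree, outputs zero indexed solutions, and falls back to the collection scan emitted just below. The same plan selection can be reproduced offline with explain; a sketch mirroring the logged predicate exactly:

    // Expect a COLLSCAN (or a SUBPLAN over collscans) in winningPlan,
    // since only the _id index exists on this one-document collection.
    db.getSiblingDB("local").replset.minvalid.find({
      $or: [
        { t: { $lt: 1 } },
        { t: 1, ts: { $lt: Timestamp(1567578444, 1) } }
      ]
    }).explain("queryPlanner")
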
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or $and t $eq 1 ts $lt Timestamp(1567578444, 1) t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:24.794+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 370
2019-09-04T06:27:24.794+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:27:24.794+0000 D3 STORAGE [repl-writer-worker-13] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:24.794+0000 D3 REPL [repl-writer-worker-13] applying op: { ts: Timestamp(1567578444, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578444789), o: { $v: 1, $set: { ping: new Date(1567578444786), up: 2345 } } }, oplog application mode: Secondary
2019-09-04T06:27:24.794+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578444, 1)
2019-09-04T06:27:24.794+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 372
2019-09-04T06:27:24.794+0000 D2 QUERY [repl-writer-worker-13] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:27:24.794+0000 D4 WRITE [repl-writer-worker-13] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:27:24.794+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 372
2019-09-04T06:27:24.794+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:24.794+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578444, 1), t: 1 }({ ts: Timestamp(1567578444, 1), t: 1 })
2019-09-04T06:27:24.794+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578444, 1)
2019-09-04T06:27:24.794+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 371
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:24.794+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:24.794+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:27:24.795+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 371 2019-09-04T06:27:24.795+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578444, 1) 2019-09-04T06:27:24.795+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 375 2019-09-04T06:27:24.795+0000 D1 EXECUTOR [replication-1] starting thread in pool replication 2019-09-04T06:27:24.795+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:24.795+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, appliedWallTime: new Date(1567578444789), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:24.795+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 24 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:54.795+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, appliedWallTime: new Date(1567578444789), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:24.795+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.795+0000 2019-09-04T06:27:24.795+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 375 2019-09-04T06:27:24.795+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578444, 1), t: 1 }({ ts: Timestamp(1567578444, 1), t: 1 }) 2019-09-04T06:27:24.795+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578444, 1), t: 1 } 2019-09-04T06:27:24.795+0000 D2 ASIO [RS] Request 24 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: 
Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578444, 1) } 2019-09-04T06:27:24.795+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 25 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:34.795+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578439, 1), t: 1 } } 2019-09-04T06:27:24.795+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578439, 1), $clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578444, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:24.795+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:24.795+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.795+0000 2019-09-04T06:27:24.795+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.795+0000 2019-09-04T06:27:24.802+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:27:24.802+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:24.802+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), appliedOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, appliedWallTime: new Date(1567578444789), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:24.802+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 26 -- target:[cmodb804.togewa.com:27019] db:admin 
expDate:2019-09-04T06:27:54.802+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), appliedOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, appliedWallTime: new Date(1567578444789), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, durableWallTime: new Date(1567578439111), appliedOpTime: { ts: Timestamp(1567578439, 1), t: 1 }, appliedWallTime: new Date(1567578439111), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578439, 1), t: 1 }, lastCommittedWall: new Date(1567578439111), lastOpVisible: { ts: Timestamp(1567578439, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:24.802+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.795+0000 2019-09-04T06:27:24.803+0000 D2 ASIO [RS] Request 26 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, lastCommittedWall: new Date(1567578444789), lastOpVisible: { ts: Timestamp(1567578444, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578444, 1), $clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578444, 1) } 2019-09-04T06:27:24.803+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, lastCommittedWall: new Date(1567578444789), lastOpVisible: { ts: Timestamp(1567578444, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578444, 1), $clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578444, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:24.803+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:24.803+0000 D2 ASIO [RS] Request 25 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, lastCommittedWall: new Date(1567578444789), lastOpVisible: { ts: Timestamp(1567578444, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, lastCommittedWall: new Date(1567578444789), lastOpApplied: { ts: Timestamp(1567578444, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578444, 1), 
$clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578444, 1) } 2019-09-04T06:27:24.803+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.795+0000 2019-09-04T06:27:24.803+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, lastCommittedWall: new Date(1567578444789), lastOpVisible: { ts: Timestamp(1567578444, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, lastCommittedWall: new Date(1567578444789), lastOpApplied: { ts: Timestamp(1567578444, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578444, 1), $clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578444, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:24.803+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:24.803+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:27:24.803+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578444, 1), t: 1 }, 2019-09-04T06:27:24.789+0000 2019-09-04T06:27:24.803+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578444, 1), t: 1 }, 2019-09-04T06:27:24.789+0000 2019-09-04T06:27:24.803+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578439, 1) 2019-09-04T06:27:24.803+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:27:35.741+0000 2019-09-04T06:27:24.803+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:27:35.819+0000 2019-09-04T06:27:24.803+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:24.803+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 27 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:34.803+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578444, 1), t: 1 } } 2019-09-04T06:27:24.803+0000 D3 REPL [conn40] Got notified of new snapshot: { ts: Timestamp(1567578444, 1), t: 1 }, 2019-09-04T06:27:24.789+0000 2019-09-04T06:27:24.803+0000 D3 REPL [conn40] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.645+0000 2019-09-04T06:27:24.803+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.795+0000 2019-09-04T06:27:24.803+0000 D3 REPL [conn32] Got notified of new snapshot: { ts: Timestamp(1567578444, 1), t: 1 }, 2019-09-04T06:27:24.789+0000 2019-09-04T06:27:24.803+0000 D3 REPL [conn32] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:54.152+0000 2019-09-04T06:27:24.803+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:52.836+0000 
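
With the getMore on request 25 returning an empty batch, the node learns the primary has majority-committed Timestamp(1567578444, 1): it updates _lastCommittedOpTimeAndWallTime, moves replication's stable optime, pins oldest_timestamp to the previous stable value, and wakes the readers parked in waitUntilOpTime (conn40 and conn32 above; conn43, conn39 and conn41 just below) on the new majority snapshot. The same optimes this log is tracking are visible at a glance from the shell; a sketch:

    // lastCommittedOpTime is the commit point being advanced above;
    // readConcernMajorityOpTime backs the snapshot the conn* waiters get.
    rs.status().optimes
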
2019-09-04T06:27:24.803+0000 D3 REPL [conn43] Got notified of new snapshot: { ts: Timestamp(1567578444, 1), t: 1 }, 2019-09-04T06:27:24.789+0000 2019-09-04T06:27:24.803+0000 D3 REPL [conn43] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.753+0000 2019-09-04T06:27:24.803+0000 D3 REPL [conn39] Got notified of new snapshot: { ts: Timestamp(1567578444, 1), t: 1 }, 2019-09-04T06:27:24.789+0000 2019-09-04T06:27:24.803+0000 D3 REPL [conn39] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.133+0000 2019-09-04T06:27:24.803+0000 D3 REPL [conn41] Got notified of new snapshot: { ts: Timestamp(1567578444, 1), t: 1 }, 2019-09-04T06:27:24.789+0000 2019-09-04T06:27:24.803+0000 D3 REPL [conn41] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.661+0000 2019-09-04T06:27:24.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:24.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:24.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 28) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:24.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 28 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:27:34.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:24.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:52.836+0000 2019-09-04T06:27:24.836+0000 D2 REPL_HB [replexec-2] Sending heartbeat (requestId: 29) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:24.836+0000 D3 EXECUTOR [replexec-2] Scheduling remote command request: RemoteCommand 29 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:34.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:24.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:27:52.836+0000 2019-09-04T06:27:24.836+0000 D2 ASIO [Replication] Request 28 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), opTime: { ts: Timestamp(1567578444, 1), t: 1 }, wallTime: new Date(1567578444789), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, lastCommittedWall: new Date(1567578444789), lastOpVisible: { ts: Timestamp(1567578444, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578444, 1), $clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578444, 1) } 2019-09-04T06:27:24.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, 
primaryId: 0, durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), opTime: { ts: Timestamp(1567578444, 1), t: 1 }, wallTime: new Date(1567578444789), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, lastCommittedWall: new Date(1567578444789), lastOpVisible: { ts: Timestamp(1567578444, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578444, 1), $clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578444, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:27:24.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:24.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 28) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), opTime: { ts: Timestamp(1567578444, 1), t: 1 }, wallTime: new Date(1567578444789), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, lastCommittedWall: new Date(1567578444789), lastOpVisible: { ts: Timestamp(1567578444, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578444, 1), $clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578444, 1) } 2019-09-04T06:27:24.836+0000 D2 ASIO [Replication] Request 29 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), opTime: { ts: Timestamp(1567578444, 1), t: 1 }, wallTime: new Date(1567578444789), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, lastCommittedWall: new Date(1567578444789), lastOpVisible: { ts: Timestamp(1567578444, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578444, 1), $clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578444, 1) } 2019-09-04T06:27:24.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:27:24.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), opTime: { ts: Timestamp(1567578444, 1), t: 1 }, wallTime: new Date(1567578444789), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, 
lastCommittedWall: new Date(1567578444789), lastOpVisible: { ts: Timestamp(1567578444, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578444, 1), $clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578444, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:24.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:27:35.819+0000 2019-09-04T06:27:24.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:24.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:27:35.811+0000 2019-09-04T06:27:24.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:27:24.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:26.836Z 2019-09-04T06:27:24.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:24.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.836+0000 2019-09-04T06:27:24.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.836+0000 2019-09-04T06:27:24.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 29) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), opTime: { ts: Timestamp(1567578444, 1), t: 1 }, wallTime: new Date(1567578444789), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, lastCommittedWall: new Date(1567578444789), lastOpVisible: { ts: Timestamp(1567578444, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578444, 1), $clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578444, 1) } 2019-09-04T06:27:24.836+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:27:24.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:26.836Z 2019-09-04T06:27:24.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.836+0000 2019-09-04T06:27:24.843+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:24.854+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.854+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.893+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578444, 1) 2019-09-04T06:27:24.944+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:24.976+0000 D2 COMMAND [conn29] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:24.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:24.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:25.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:25.018+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:25.018+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:25.044+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:25.049+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36526 #44 (32 connections now open) 2019-09-04T06:27:25.049+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:25.049+0000 D2 COMMAND [conn44] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:25.049+0000 I NETWORK [conn44] received client metadata from 10.108.2.55:36526 conn44: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:25.049+0000 I COMMAND [conn44] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:25.050+0000 D2 COMMAND [conn44] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578441, 1), signature: { hash: BinData(0, BCD1176E340592B9823D25B02E3C0813C2D0EE74), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:27:25.050+0000 D1 REPL [conn44] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578444, 1), t: 1 } 2019-09-04T06:27:25.050+0000 D3 REPL [conn44] waitUntilOpTime: waiting for a new 
snapshot until 2019-09-04T06:27:55.060+0000 2019-09-04T06:27:25.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 2290C6E38C10D644D69E68833C2ADF7BCDF4DD1E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:25.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:27:25.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 2290C6E38C10D644D69E68833C2ADF7BCDF4DD1E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:25.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 2290C6E38C10D644D69E68833C2ADF7BCDF4DD1E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:25.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), opTime: { ts: Timestamp(1567578444, 1), t: 1 }, wallTime: new Date(1567578444789) } 2019-09-04T06:27:25.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578444, 1), signature: { hash: BinData(0, 2290C6E38C10D644D69E68833C2ADF7BCDF4DD1E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:25.119+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:25.119+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:25.127+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:25.127+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:25.144+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:25.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:25.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:25.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:25.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:27:25.244+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:25.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:25.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:25.276+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:25.276+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:25.344+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:25.354+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:25.354+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:25.444+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:25.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:25.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:25.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:25.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:25.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:25.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:25.518+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:25.518+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:25.545+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:25.569+0000 D2 ASIO [RS] Request 27 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578445, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578445567), o: { $v: 1, $set: { ping: new Date(1567578445566) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, lastCommittedWall: new Date(1567578444789), lastOpVisible: { ts: Timestamp(1567578444, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, lastCommittedWall: new Date(1567578444789), lastOpApplied: { ts: Timestamp(1567578445, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578444, 1), $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) } 2019-09-04T06:27:25.569+0000 D3 EXECUTOR [RS] 
Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578445, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578445567), o: { $v: 1, $set: { ping: new Date(1567578445566) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, lastCommittedWall: new Date(1567578444789), lastOpVisible: { ts: Timestamp(1567578444, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, lastCommittedWall: new Date(1567578444789), lastOpApplied: { ts: Timestamp(1567578445, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578444, 1), $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:25.569+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:25.569+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578445, 1) and ending at ts: Timestamp(1567578445, 1) 2019-09-04T06:27:25.569+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:27:35.811+0000 2019-09-04T06:27:25.569+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:27:35.758+0000 2019-09-04T06:27:25.569+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:25.569+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.836+0000 2019-09-04T06:27:25.569+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578445, 1), t: 1 } 2019-09-04T06:27:25.569+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:25.569+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:25.569+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578444, 1) 2019-09-04T06:27:25.569+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 399 2019-09-04T06:27:25.569+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:25.569+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:25.569+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 399 2019-09-04T06:27:25.569+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:25.569+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:27:25.569+0000 D3 STORAGE [ReplBatcher] looking 
up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:25.569+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578444, 1) 2019-09-04T06:27:25.569+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578445, 1) } 2019-09-04T06:27:25.569+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 402 2019-09-04T06:27:25.569+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:25.569+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:25.569+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 402 2019-09-04T06:27:25.569+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 377 2019-09-04T06:27:25.569+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 377 2019-09-04T06:27:25.569+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 405 2019-09-04T06:27:25.569+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 405 2019-09-04T06:27:25.569+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:25.570+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 407 2019-09-04T06:27:25.570+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578445, 1) 2019-09-04T06:27:25.570+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578445, 1) 2019-09-04T06:27:25.570+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 407 2019-09-04T06:27:25.570+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:25.570+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:27:25.570+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 406 2019-09-04T06:27:25.570+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 406 2019-09-04T06:27:25.570+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 409 2019-09-04T06:27:25.570+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 409 2019-09-04T06:27:25.570+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578445, 1), t: 1 }({ ts: Timestamp(1567578445, 1), t: 1 }) 2019-09-04T06:27:25.570+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578445, 1) 2019-09-04T06:27:25.570+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 410 2019-09-04T06:27:25.570+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578445, 1) } } ] } sort: {} projection: {} 2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Beginning planning... 
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and t $eq 1 ts $lt Timestamp(1567578445, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578445, 1) || First: notFirst: full path: ts
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and t $eq 1 ts $lt Timestamp(1567578445, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or $and t $eq 1 ts $lt Timestamp(1567578445, 1) t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578445, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
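
The planner trace above is the same enumeration as the previous batch, one timestamp later: at this verbosity every applied oplog entry pays for three full planning banners plus the collscan dump that follows. If the D2-D5 detail is only needed briefly, component verbosity can be dialed back at runtime without a restart or a config change; a hedged sketch (adjust the component list to taste):

    // Return the chattiest components seen in this log to the default
    // level (0 = informational); takes effect immediately.
    db.adminCommand({
      setParameter: 1,
      logComponentVerbosity: {
        query: { verbosity: 0 },
        replication: { verbosity: 0 },
        storage: { verbosity: 0 }
      }
    })
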
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or $and t $eq 1 ts $lt Timestamp(1567578445, 1) t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:25.570+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 410
2019-09-04T06:27:25.570+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:27:25.570+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:25.570+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578445, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578445567), o: { $v: 1, $set: { ping: new Date(1567578445566) } } }, oplog application mode: Secondary
2019-09-04T06:27:25.570+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578445, 1)
2019-09-04T06:27:25.570+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 412
2019-09-04T06:27:25.570+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }
2019-09-04T06:27:25.570+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:27:25.570+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 412
2019-09-04T06:27:25.570+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:25.570+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578445, 1), t: 1 }({ ts: Timestamp(1567578445, 1), t: 1 })
2019-09-04T06:27:25.570+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578445, 1)
2019-09-04T06:27:25.570+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 411
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:25.570+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:25.570+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:27:25.570+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 411 2019-09-04T06:27:25.570+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578445, 1) 2019-09-04T06:27:25.571+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 416 2019-09-04T06:27:25.571+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 416 2019-09-04T06:27:25.571+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578445, 1), t: 1 }({ ts: Timestamp(1567578445, 1), t: 1 }) 2019-09-04T06:27:25.571+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:25.571+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), appliedOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, appliedWallTime: new Date(1567578444789), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), appliedOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, appliedWallTime: new Date(1567578444789), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, lastCommittedWall: new Date(1567578444789), lastOpVisible: { ts: Timestamp(1567578444, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:25.571+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 30 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:55.571+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), appliedOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, appliedWallTime: new Date(1567578444789), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), appliedOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, appliedWallTime: new Date(1567578444789), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578444, 1), t: 1 }, lastCommittedWall: new Date(1567578444789), lastOpVisible: { ts: Timestamp(1567578444, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:25.571+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:55.571+0000 2019-09-04T06:27:25.571+0000 D2 ASIO [RS] Request 30 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) } 2019-09-04T06:27:25.571+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:25.571+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:25.571+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:55.571+0000 2019-09-04T06:27:25.571+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578445, 1), t: 1 } 2019-09-04T06:27:25.571+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 31 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:35.571+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578444, 1), t: 1 } } 2019-09-04T06:27:25.571+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:55.571+0000 2019-09-04T06:27:25.571+0000 D2 ASIO [RS] Request 31 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpApplied: { ts: Timestamp(1567578445, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) } 2019-09-04T06:27:25.571+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new 
Date(1567578445567), lastOpApplied: { ts: Timestamp(1567578445, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:25.571+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:25.571+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:27:25.571+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578445, 1), t: 1 }, 2019-09-04T06:27:25.567+0000 2019-09-04T06:27:25.571+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578445, 1), t: 1 }, 2019-09-04T06:27:25.567+0000 2019-09-04T06:27:25.572+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578440, 1) 2019-09-04T06:27:25.572+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:27:35.758+0000 2019-09-04T06:27:25.572+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:27:36.623+0000 2019-09-04T06:27:25.572+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:25.572+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 32 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:35.572+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578445, 1), t: 1 } } 2019-09-04T06:27:25.572+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.836+0000 2019-09-04T06:27:25.572+0000 D3 REPL [conn32] Got notified of new snapshot: { ts: Timestamp(1567578445, 1), t: 1 }, 2019-09-04T06:27:25.567+0000 2019-09-04T06:27:25.572+0000 D3 REPL [conn32] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:54.152+0000 2019-09-04T06:27:25.572+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:55.571+0000 2019-09-04T06:27:25.572+0000 D3 REPL [conn41] Got notified of new snapshot: { ts: Timestamp(1567578445, 1), t: 1 }, 2019-09-04T06:27:25.567+0000 2019-09-04T06:27:25.572+0000 D3 REPL [conn41] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.661+0000 2019-09-04T06:27:25.572+0000 D3 REPL [conn40] Got notified of new snapshot: { ts: Timestamp(1567578445, 1), t: 1 }, 2019-09-04T06:27:25.567+0000 2019-09-04T06:27:25.572+0000 D3 REPL [conn40] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.645+0000 2019-09-04T06:27:25.572+0000 D3 REPL [conn43] Got notified of new snapshot: { ts: Timestamp(1567578445, 1), t: 1 }, 2019-09-04T06:27:25.567+0000 2019-09-04T06:27:25.572+0000 D3 REPL [conn43] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.753+0000 2019-09-04T06:27:25.572+0000 D3 REPL [conn39] Got notified of new snapshot: { ts: Timestamp(1567578445, 1), t: 1 }, 2019-09-04T06:27:25.567+0000 2019-09-04T06:27:25.572+0000 D3 REPL [conn39] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.133+0000 2019-09-04T06:27:25.572+0000 D3 REPL [conn44] Got notified of new snapshot: { ts: 
Timestamp(1567578445, 1), t: 1 }, 2019-09-04T06:27:25.567+0000 2019-09-04T06:27:25.572+0000 D3 REPL [conn44] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:55.060+0000 2019-09-04T06:27:25.572+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:27:25.572+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:25.572+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), appliedOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, appliedWallTime: new Date(1567578444789), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), appliedOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, appliedWallTime: new Date(1567578444789), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:25.572+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 33 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:55.572+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), appliedOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, appliedWallTime: new Date(1567578444789), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, durableWallTime: new Date(1567578444789), appliedOpTime: { ts: Timestamp(1567578444, 1), t: 1 }, appliedWallTime: new Date(1567578444789), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:25.572+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:55.571+0000 2019-09-04T06:27:25.572+0000 D2 ASIO [RS] Request 33 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) } 
2019-09-04T06:27:25.572+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:25.572+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:25.573+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:55.571+0000
2019-09-04T06:27:25.619+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:25.619+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:25.627+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:25.627+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:25.645+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:25.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:25.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:25.669+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578445, 1)
2019-09-04T06:27:25.745+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:25.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:25.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:25.776+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:25.776+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:25.845+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:25.854+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:25.854+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:25.945+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:25.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:25.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:25.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:25.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:25.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:25.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:26.018+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.018+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.045+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:26.069+0000 I NETWORK [listener] connection accepted from 10.108.2.63:36180 #45 (33 connections now open)
2019-09-04T06:27:26.069+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:26.069+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:26.069+0000 I NETWORK [conn45] received client metadata from 10.108.2.63:36180 conn45: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:26.069+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:26.074+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.074+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.119+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.119+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.127+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.127+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.145+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:26.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 81B6ACC5FFC560EB0D9D3C516281D09A0DE0A095), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:26.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:27:26.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 81B6ACC5FFC560EB0D9D3C516281D09A0DE0A095), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:26.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 81B6ACC5FFC560EB0D9D3C516281D09A0DE0A095), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:26.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), opTime: { ts: Timestamp(1567578445, 1), t: 1 }, wallTime: new Date(1567578445567) }
2019-09-04T06:27:26.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 81B6ACC5FFC560EB0D9D3C516281D09A0DE0A095), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:27:26.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:27:26.246+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:26.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.276+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.276+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.346+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:26.354+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.354+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.359+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48220 #46 (34 connections now open)
2019-09-04T06:27:26.359+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:26.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:26.359+0000 I NETWORK [conn46] received client metadata from 10.108.2.59:48220 conn46: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:26.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:26.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.446+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:26.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.489+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.518+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.518+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.546+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:26.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.569+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:26.570+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:26.570+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578445, 1)
2019-09-04T06:27:26.570+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 447
2019-09-04T06:27:26.570+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:26.570+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:26.570+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 447
2019-09-04T06:27:26.571+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 450
2019-09-04T06:27:26.571+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 450
2019-09-04T06:27:26.571+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578445, 1), t: 1 }({ ts: Timestamp(1567578445, 1), t: 1 })
2019-09-04T06:27:26.619+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.619+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.627+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.627+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.646+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:26.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.746+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:26.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.776+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.776+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:26.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:26.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 34) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:26.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 34 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:27:36.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:26.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.836+0000
2019-09-04T06:27:26.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 35) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:26.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 35 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:36.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:26.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.836+0000
2019-09-04T06:27:26.836+0000 D2 ASIO [Replication] Request 34 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), opTime: { ts: Timestamp(1567578445, 1), t: 1 }, wallTime: new Date(1567578445567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) }
2019-09-04T06:27:26.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), opTime: { ts: Timestamp(1567578445, 1), t: 1 }, wallTime: new Date(1567578445567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:27:26.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:26.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 34) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), opTime: { ts: Timestamp(1567578445, 1), t: 1 }, wallTime: new Date(1567578445567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) }
2019-09-04T06:27:26.836+0000 D4 ELECTION [replexec-2] Postponing election timeout due to heartbeat from primary
2019-09-04T06:27:26.836+0000 D4 REPL [replexec-2] Canceling election timeout callback at 2019-09-04T06:27:36.623+0000
2019-09-04T06:27:26.836+0000 D4 ELECTION [replexec-2] Scheduling election timeout callback at 2019-09-04T06:27:36.872+0000
2019-09-04T06:27:26.836+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:27:26.836+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:28.836Z
2019-09-04T06:27:26.836+0000 D2 ASIO [Replication] Request 35 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), opTime: { ts: Timestamp(1567578445, 1), t: 1 }, wallTime: new Date(1567578445567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) }
2019-09-04T06:27:26.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), opTime: { ts: Timestamp(1567578445, 1), t: 1 }, wallTime: new Date(1567578445567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:26.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:26.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.836+0000
2019-09-04T06:27:26.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:26.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.836+0000
2019-09-04T06:27:26.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 35) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), opTime: { ts: Timestamp(1567578445, 1), t: 1 }, wallTime: new Date(1567578445567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) }
2019-09-04T06:27:26.836+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:27:26.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:28.836Z
2019-09-04T06:27:26.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.836+0000
2019-09-04T06:27:26.846+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:26.854+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.854+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.876+0000 I NETWORK [listener] connection accepted from 10.108.2.38:57536 #47 (35 connections now open)
2019-09-04T06:27:26.876+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:26.877+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:26.877+0000 I NETWORK [conn47] received client metadata from 10.108.2.38:57536 conn47: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:26.877+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:26.877+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.877+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.911+0000 I NETWORK [listener] connection accepted from 10.108.2.39:51424 #48 (36 connections now open)
2019-09-04T06:27:26.911+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:26.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:26.911+0000 I NETWORK [conn48] received client metadata from 10.108.2.39:51424 conn48: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:26.912+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:26.912+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.912+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.946+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:26.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:26.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:26.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:27.018+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.018+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.047+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:27.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 81B6ACC5FFC560EB0D9D3C516281D09A0DE0A095), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:27.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:27:27.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 81B6ACC5FFC560EB0D9D3C516281D09A0DE0A095), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:27.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 81B6ACC5FFC560EB0D9D3C516281D09A0DE0A095), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:27.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), opTime: { ts: Timestamp(1567578445, 1), t: 1 }, wallTime: new Date(1567578445567) }
2019-09-04T06:27:27.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 81B6ACC5FFC560EB0D9D3C516281D09A0DE0A095), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.119+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.119+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.127+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.127+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.147+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:27.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:27:27.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:27:27.247+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:27.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.347+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:27.354+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.354+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.447+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:27.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.518+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.518+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.547+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:27.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.570+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:27.570+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:27.570+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578445, 1)
2019-09-04T06:27:27.570+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 485
2019-09-04T06:27:27.570+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:27.570+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:27.570+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 485
2019-09-04T06:27:27.571+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 488
2019-09-04T06:27:27.571+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 488
2019-09-04T06:27:27.571+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578445, 1), t: 1 }({ ts: Timestamp(1567578445, 1), t: 1 })
2019-09-04T06:27:27.601+0000 I NETWORK [listener] connection accepted from 10.108.2.38:57538 #49 (37 connections now open)
2019-09-04T06:27:27.601+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:27.601+0000 D2 COMMAND [conn49] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:27.602+0000 I NETWORK [conn49] received client metadata from 10.108.2.38:57538 conn49: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:27.602+0000 I COMMAND [conn49] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:27.619+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.619+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.627+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.627+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.648+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:27.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.748+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:27.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.848+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:27.854+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.854+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.948+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:27.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:27.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:27.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:28.018+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.018+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.019+0000 I NETWORK [listener] connection accepted from 10.108.2.39:51426 #50 (38 connections now open)
2019-09-04T06:27:28.019+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:28.019+0000 D2 COMMAND [conn50] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:28.019+0000 I NETWORK [conn50] received client metadata from 10.108.2.39:51426 conn50: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:28.019+0000 I COMMAND [conn50] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:28.020+0000 D2 COMMAND [conn50] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578419, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578447, 1), signature: { hash: BinData(0, D1DA825ABF478292B1257C73A8A3167A69C6EA46), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578419, 3), t: 1 } }, $db: "config" }
2019-09-04T06:27:28.020+0000 D1 COMMAND [conn50] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578419, 3), t: 1 } } }
2019-09-04T06:27:28.020+0000 D3 STORAGE [conn50] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:27:28.020+0000 D1 COMMAND [conn50] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578419, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578447, 1), signature: { hash: BinData(0, D1DA825ABF478292B1257C73A8A3167A69C6EA46), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578419, 3), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578445, 1)
2019-09-04T06:27:28.020+0000 D5 QUERY [conn50] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shards Tree: $and Sort: {} Proj: {} =============================
2019-09-04T06:27:28.020+0000 D5 QUERY [conn50] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:27:28.020+0000 D5 QUERY [conn50] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:27:28.020+0000 D5 QUERY [conn50] Rated tree: $and
2019-09-04T06:27:28.020+0000 D5 QUERY [conn50] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:28.020+0000 D5 QUERY [conn50] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:28.020+0000 D2 QUERY [conn50] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:27:28.020+0000 D3 STORAGE [conn50] WT begin_transaction for snapshot id 505
2019-09-04T06:27:28.020+0000 D3 STORAGE [conn50] WT rollback_transaction for snapshot id 505
2019-09-04T06:27:28.020+0000 I COMMAND [conn50] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578419, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578447, 1), signature: { hash: BinData(0, D1DA825ABF478292B1257C73A8A3167A69C6EA46), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578419, 3), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:27:28.048+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:28.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.119+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.119+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.127+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.127+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.148+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:28.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
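[annotation] conn50's find on config.shards shows how a cluster node reads sharding metadata from this config server: readConcern "majority" with afterOpTime makes the server wait for a committed snapshot at least as recent as the given opTime, and because the filter is empty neither the host_1 nor the _id_ index helps, so the planner emits the COLLSCAN seen above (3 documents examined, 3 returned). A hedged client-side equivalent in pymongo (the internal wait-for-opTime plumbing is not reproducible from a driver):

    # Sketch: majority-read the shard registry the way the log's conn50 does.
    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    shards = client.config.get_collection("shards", read_concern=ReadConcern("majority"))
    for doc in shards.find():  # unfiltered find => COLLSCAN, matching the plan above
        print(doc["_id"], doc["host"])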
2019-09-04T06:27:28.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 81B6ACC5FFC560EB0D9D3C516281D09A0DE0A095), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:28.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:27:28.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 81B6ACC5FFC560EB0D9D3C516281D09A0DE0A095), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:28.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 81B6ACC5FFC560EB0D9D3C516281D09A0DE0A095), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:28.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), opTime: { ts: Timestamp(1567578445, 1), t: 1 }, wallTime: new Date(1567578445567) }
2019-09-04T06:27:28.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578445, 1), signature: { hash: BinData(0, 81B6ACC5FFC560EB0D9D3C516281D09A0DE0A095), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:27:28.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:27:28.249+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:28.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.349+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:28.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.449+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:28.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
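[annotation] The conn28 exchange above is the internal replSetHeartbeat protocol: the peer reports its config version and term, and this node answers with its own state (state: 2 = SECONDARY), sync source, and durable/applied opTimes. The closest supported way to see the same membership view from outside is replSetGetStatus; a minimal sketch:

    # Sketch: inspect the member states the heartbeat responses above report.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        print(m["name"], m["stateStr"], m.get("optimeDate"))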
2019-09-04T06:27:28.518+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.518+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.549+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:28.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.570+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:28.570+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:28.570+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578445, 1)
2019-09-04T06:27:28.570+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 522
2019-09-04T06:27:28.570+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:28.570+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:28.570+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 522
2019-09-04T06:27:28.571+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 525
2019-09-04T06:27:28.571+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 525
2019-09-04T06:27:28.571+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578445, 1), t: 1 }({ ts: Timestamp(1567578445, 1), t: 1 })
2019-09-04T06:27:28.619+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.619+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.627+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.627+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.649+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:28.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.749+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:28.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
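[annotation] The ReplBatcher/rsSync lines repeat a once-per-second idle loop: open a WT snapshot on local.oplog.rs, find nothing new to batch, roll back, and re-read the minvalid document that marks how far this secondary must apply before its data is consistent. That document is ordinary BSON in local.replset.minvalid and can be read directly (read-only; a sketch, assuming a deployment like this one with authorization disabled):

    # Sketch: read the minvalid document the log keeps returning.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    print(client.local["replset.minvalid"].find_one())
    # expected shape, per the log: { ..., 'ts': Timestamp(1567578445, 1), 't': 1 }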
2019-09-04T06:27:28.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:28.836+0000 D2 REPL_HB [replexec-2] Sending heartbeat (requestId: 36) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:28.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:28.836+0000 D3 EXECUTOR [replexec-2] Scheduling remote command request: RemoteCommand 36 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:27:38.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:28.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.836+0000
2019-09-04T06:27:28.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 37) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:28.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 37 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:38.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:28.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:54.836+0000
2019-09-04T06:27:28.836+0000 D2 ASIO [Replication] Request 36 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), opTime: { ts: Timestamp(1567578445, 1), t: 1 }, wallTime: new Date(1567578445567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578447, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) }
2019-09-04T06:27:28.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), opTime: { ts: Timestamp(1567578445, 1), t: 1 }, wallTime: new Date(1567578445567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578447, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:27:28.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:28.836+0000 D2 ASIO [Replication] Request 37 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), opTime: { ts: Timestamp(1567578445, 1), t: 1 }, wallTime: new Date(1567578445567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578447, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) }
2019-09-04T06:27:28.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 36) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), opTime: { ts: Timestamp(1567578445, 1), t: 1 }, wallTime: new Date(1567578445567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578447, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) }
2019-09-04T06:27:28.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), opTime: { ts: Timestamp(1567578445, 1), t: 1 }, wallTime: new Date(1567578445567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578447, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:28.836+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary
2019-09-04T06:27:28.836+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:27:36.872+0000
2019-09-04T06:27:28.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:28.836+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:27:39.037+0000
2019-09-04T06:27:28.836+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:27:28.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:30.836Z
2019-09-04T06:27:28.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:28.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:58.836+0000
2019-09-04T06:27:28.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 37) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), opTime: { ts: Timestamp(1567578445, 1), t: 1 }, wallTime: new Date(1567578445567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578447, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578445, 1) }
2019-09-04T06:27:28.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:58.836+0000
2019-09-04T06:27:28.836+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:27:28.836+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:30.836Z
2019-09-04T06:27:28.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:27:58.836+0000
2019-09-04T06:27:28.849+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:28.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.950+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:28.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:28.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:28.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
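[annotation] "Postponing election timeout due to heartbeat from primary" is Raft-style liveness: each healthy heartbeat from the primary (state: 1, cmodb802) cancels the pending election callback and reschedules it one randomized timeout later; here the callback moves from 06:27:36.872 to 06:27:39.037, about 10.2 s out, consistent with the default electionTimeoutMillis of 10000 plus jitter. The configured value can be confirmed with replSetGetConfig; a sketch:

    # Sketch: confirm the election timeout behind the callback rescheduling above.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    cfg = client.admin.command("replSetGetConfig")["config"]
    print(cfg["settings"]["electionTimeoutMillis"])  # default 10000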
2019-09-04T06:27:29.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:29.018+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.018+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.050+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:29.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578447, 1), signature: { hash: BinData(0, D1DA825ABF478292B1257C73A8A3167A69C6EA46), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:29.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:27:29.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578447, 1), signature: { hash: BinData(0, D1DA825ABF478292B1257C73A8A3167A69C6EA46), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:29.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578447, 1), signature: { hash: BinData(0, D1DA825ABF478292B1257C73A8A3167A69C6EA46), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:29.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), opTime: { ts: Timestamp(1567578445, 1), t: 1 }, wallTime: new Date(1567578445567) }
2019-09-04T06:27:29.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578447, 1), signature: { hash: BinData(0, D1DA825ABF478292B1257C73A8A3167A69C6EA46), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.119+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.119+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.127+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.127+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.150+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:29.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:27:29.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:27:29.250+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:29.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.334+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:29.334+0000 D3 REPL [replexec-1] memberData lastupdate is: 2019-09-04T06:27:29.061+0000
2019-09-04T06:27:29.334+0000 D3 REPL [replexec-1] memberData lastupdate is: 2019-09-04T06:27:28.836+0000
2019-09-04T06:27:29.334+0000 D3 REPL [replexec-1] stalest member MemberId(2) date: 2019-09-04T06:27:28.836+0000
2019-09-04T06:27:29.334+0000 D3 REPL [replexec-1] scheduling next check at 2019-09-04T06:27:38.836+0000
2019-09-04T06:27:29.334+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:58.836+0000
2019-09-04T06:27:29.350+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:29.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.450+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:29.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.518+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.518+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.551+0000 D4 STORAGE [WTJournalFlusher] flushed journal
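[annotation] The FlowControlRefresher fires once per second; with the majority commit point keeping up, the ticket pool stays pinned at 1000000000, i.e. flow control is effectively not throttling writes. Flow control state is surfaced in serverStatus on 4.2; a sketch:

    # Sketch: watch the same ticket/throttle state the FlowControlRefresher logs.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    fc = client.admin.command("serverStatus")["flowControl"]
    print(fc["enabled"], fc.get("targetRateLimit"), fc.get("isLagged"))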
2019-09-04T06:27:29.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.570+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:29.570+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:29.570+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578445, 1)
2019-09-04T06:27:29.570+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 555
2019-09-04T06:27:29.570+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:29.570+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:29.570+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 555
2019-09-04T06:27:29.571+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 558
2019-09-04T06:27:29.571+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 558
2019-09-04T06:27:29.571+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578445, 1), t: 1 }({ ts: Timestamp(1567578445, 1), t: 1 })
2019-09-04T06:27:29.619+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.619+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.627+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.627+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.651+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:29.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.697+0000 I NETWORK [listener] connection accepted from 10.108.2.60:44744 #51 (39 connections now open)
2019-09-04T06:27:29.697+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:29.698+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:29.698+0000 I NETWORK [conn51] received client metadata from 10.108.2.60:44744 conn51: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:29.698+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:29.703+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.703+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.751+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:29.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.851+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:29.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.951+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:29.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.985+0000 D2 ASIO [RS] Request 32 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578449, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578449977), o: { $v: 1, $set: { ping: new Date(1567578449977) } } }, { ts: Timestamp(1567578449, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578449977), o: { $v: 1, $set: { ping: new Date(1567578449976) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpApplied: { ts: Timestamp(1567578449, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 2) }
2019-09-04T06:27:29.985+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578449, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578449977), o: { $v: 1, $set: { ping: new Date(1567578449977) } } }, { ts: Timestamp(1567578449, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578449977), o: { $v: 1, $set: { ping: new Date(1567578449976) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpApplied: { ts: Timestamp(1567578449, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:29.985+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:27:29.985+0000 D2 REPL [replication-1] oplog fetcher read 2 operations from remote oplog starting at ts: Timestamp(1567578449, 1) and ending at ts: Timestamp(1567578449, 2)
2019-09-04T06:27:29.985+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:27:39.037+0000
2019-09-04T06:27:29.985+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:27:40.735+0000
2019-09-04T06:27:29.985+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:29.985+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:58.836+0000
2019-09-04T06:27:29.985+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578449, 2), t: 1 }
2019-09-04T06:27:29.985+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:29.985+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:29.985+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578445, 1)
2019-09-04T06:27:29.985+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 571
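[annotation] Requests 32 and 39 show the oplog fetcher's steady state: a tailable, awaitData getMore against the sync source's local.oplog.rs returns batches of ops (here "u" updates to config.lockpings) together with $replData/$oplogQueryData metadata, and the fetcher advances _lastOpTimeFetched to the batch's last timestamp. The same kind of cursor can be opened from a client; a sketch (pointing at cmodb804, the sync source in this log):

    # Sketch: tail the oplog the way the fetcher's getMore loop does.
    from pymongo import MongoClient, CursorType

    client = MongoClient("mongodb://cmodb804.togewa.com:27019")
    oplog = client.local["oplog.rs"]
    last = oplog.find_one(sort=[("$natural", -1)])     # start from the newest entry
    cursor = oplog.find({"ts": {"$gte": last["ts"]}},
                        cursor_type=CursorType.TAILABLE_AWAIT)
    for op in cursor:                                   # blocks briefly awaiting new ops
        print(op["ts"], op["op"], op["ns"])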
UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:29.985+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:29.985+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 571 2019-09-04T06:27:29.985+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:29.985+0000 D2 REPL [rsSync-0] replication batch size is 2 2019-09-04T06:27:29.985+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:29.985+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578445, 1) 2019-09-04T06:27:29.985+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 574 2019-09-04T06:27:29.985+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578449, 1) } 2019-09-04T06:27:29.985+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:29.985+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:29.985+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 574 2019-09-04T06:27:29.985+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 559 2019-09-04T06:27:29.985+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 559 2019-09-04T06:27:29.985+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 577 2019-09-04T06:27:29.985+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 577 2019-09-04T06:27:29.985+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:29.985+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 579 2019-09-04T06:27:29.985+0000 D4 STORAGE [repl-writer-worker-12] inserting record with timestamp Timestamp(1567578449, 1) 2019-09-04T06:27:29.985+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578449, 1) 2019-09-04T06:27:29.985+0000 D4 STORAGE [repl-writer-worker-12] inserting record with timestamp Timestamp(1567578449, 2) 2019-09-04T06:27:29.985+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578449, 2) 2019-09-04T06:27:29.985+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 579 2019-09-04T06:27:29.985+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:29.985+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:27:29.985+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 578 2019-09-04T06:27:29.985+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 578 2019-09-04T06:27:29.985+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 581 2019-09-04T06:27:29.985+0000 
2019-09-04T06:27:29.985+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 581
2019-09-04T06:27:29.986+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578449, 2), t: 1 }({ ts: Timestamp(1567578449, 2), t: 1 })
2019-09-04T06:27:29.986+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578449, 2)
2019-09-04T06:27:29.986+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 582
2019-09-04T06:27:29.986+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578449, 2) } } ] } sort: {} projection: {}
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and t $eq 1 ts $lt Timestamp(1567578449, 2) Sort: {} Proj: {} =============================
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578449, 2) || First: notFirst: full path: ts
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578449, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $or $and t $eq 1 ts $lt Timestamp(1567578449, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578449, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578449, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:29.986+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 582
2019-09-04T06:27:29.986+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:27:29.986+0000 D3 STORAGE [repl-writer-worker-10] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:29.986+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:27:29.986+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:29.986+0000 D3 REPL [repl-writer-worker-10] applying op: { ts: Timestamp(1567578449, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578449977), o: { $v: 1, $set: { ping: new Date(1567578449976) } } }, oplog application mode: Secondary
2019-09-04T06:27:29.986+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578449, 2)
2019-09-04T06:27:29.986+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 584
2019-09-04T06:27:29.986+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578449, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578449977), o: { $v: 1, $set: { ping: new Date(1567578449977) } } }, oplog application mode: Secondary
2019-09-04T06:27:29.986+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578449, 1)
2019-09-04T06:27:29.986+0000 D2 QUERY [repl-writer-worker-10] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }
2019-09-04T06:27:29.986+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 585
2019-09-04T06:27:29.986+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }
2019-09-04T06:27:29.986+0000 D4 WRITE [repl-writer-worker-10] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:27:29.986+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 584
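[annotation] The subplanner trace above is the standard $or planning path: each $or branch is planned separately (child 0: { t: 1, ts: { $lt: ... } }; child 1: { t: { $lt: 1 } }), each branch yields only a collection scan since local.replset.minvalid has just the _id index, and the union therefore collapses to a single COLLSCAN. The same decision can be reproduced with explain; a sketch:

    # Sketch: reproduce the subplanner's COLLSCAN decision with explain().
    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    mv = client.local["replset.minvalid"]
    q = {"$or": [{"t": {"$lt": 1}},
                 {"t": 1, "ts": {"$lt": Timestamp(1567578449, 2)}}]}
    plan = mv.find(q).explain()
    print(plan["queryPlanner"]["winningPlan"]["stage"])  # expect COLLSCAN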
2019-09-04T06:27:29.986+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:27:29.986+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:29.986+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 585
2019-09-04T06:27:29.986+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:29.986+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578449, 2), t: 1 }({ ts: Timestamp(1567578449, 2), t: 1 })
2019-09-04T06:27:29.986+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578449, 2)
2019-09-04T06:27:29.986+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 583
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and Sort: {} Proj: {} =============================
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:29.986+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:29.986+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:27:29.986+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 583
2019-09-04T06:27:29.986+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578449, 2)
2019-09-04T06:27:29.986+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 589
2019-09-04T06:27:29.986+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:29.986+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 589
2019-09-04T06:27:29.986+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578449, 2), t: 1 }({ ts: Timestamp(1567578449, 2), t: 1 })
2019-09-04T06:27:29.986+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578449, 2), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
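[annotation] After applying the batch, the node advances appliedThrough and its oplogReadTimestamp, then the Reporter pushes replSetUpdatePosition to the sync source with per-member durable/applied opTimes; this is how the primary learns commit progress. The same optimes are visible in replSetGetStatus, from which per-member lag can be derived; a sketch:

    # Sketch: derive per-member replication lag from the optimes the Reporter forwards.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    members = client.admin.command("replSetGetStatus")["members"]
    newest = max(m["optimeDate"] for m in members)
    for m in members:
        print(m["name"], (newest - m["optimeDate"]).total_seconds(), "s behind")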
2019-09-04T06:27:29.986+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 38 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:59.986+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578449, 2), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:29.986+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:59.986+0000
2019-09-04T06:27:29.987+0000 D2 ASIO [RS] Request 38 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 2) }
2019-09-04T06:27:29.987+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:29.987+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:29.987+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:59.987+0000
2019-09-04T06:27:29.987+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578449, 2), t: 1 }
2019-09-04T06:27:29.987+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 39 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:39.987+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578445, 1), t: 1 } }
2019-09-04T06:27:29.987+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:59.987+0000
2019-09-04T06:27:29.989+0000 D2 ASIO [RS] Request 39 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578449, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578449977), o: { $v: 1, $set: { ping: new Date(1567578449977) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpApplied: { ts: Timestamp(1567578449, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) }
2019-09-04T06:27:29.989+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578449, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578449977), o: { $v: 1, $set: { ping: new Date(1567578449977) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpApplied: { ts: Timestamp(1567578449, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:29.989+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:29.989+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578449, 3) and ending at ts: Timestamp(1567578449, 3)
2019-09-04T06:27:29.989+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:27:40.735+0000
2019-09-04T06:27:29.989+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:27:40.830+0000
2019-09-04T06:27:29.989+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:29.989+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:27:58.836+0000
2019-09-04T06:27:58.836+0000 2019-09-04T06:27:29.989+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578449, 3), t: 1 } 2019-09-04T06:27:29.989+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:29.989+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:29.989+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578449, 2) 2019-09-04T06:27:29.989+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 593 2019-09-04T06:27:29.989+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:29.989+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:29.989+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 593 2019-09-04T06:27:29.989+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:29.989+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:27:29.989+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:29.989+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578449, 3) } 2019-09-04T06:27:29.989+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578449, 2) 2019-09-04T06:27:29.989+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 596 2019-09-04T06:27:29.989+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:29.989+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 590 2019-09-04T06:27:29.989+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:29.989+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 596 2019-09-04T06:27:29.989+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 590 2019-09-04T06:27:29.989+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 599 2019-09-04T06:27:29.989+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 599 2019-09-04T06:27:29.989+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:29.989+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 601 2019-09-04T06:27:29.989+0000 D4 STORAGE [repl-writer-worker-4] inserting record with timestamp Timestamp(1567578449, 3) 2019-09-04T06:27:29.989+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578449, 3) 2019-09-04T06:27:29.989+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 601 2019-09-04T06:27:29.989+0000 D3 
EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:29.989+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:27:29.989+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 600 2019-09-04T06:27:29.989+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 600 2019-09-04T06:27:29.989+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 603 2019-09-04T06:27:29.989+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 603 2019-09-04T06:27:29.989+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578449, 3), t: 1 }({ ts: Timestamp(1567578449, 3), t: 1 }) 2019-09-04T06:27:29.989+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578449, 3) 2019-09-04T06:27:29.989+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 604 2019-09-04T06:27:29.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:29.989+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578449, 3) } } ] } sort: {} projection: {} 2019-09-04T06:27:29.989+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:29.989+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578449, 3) Sort: {} Proj: {} ============================= 2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578449, 3) || First: notFirst: full path: ts 2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578449, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
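
For reference, the subplanning sequence around this point (rsSync-0 planning each child of the minvalid $or, with the merged $or plan following below) always ends in the same COLLSCAN fallback, because local.replset.minvalid carries only the _id_ index and nothing covers the { t, ts } predicates. A minimal hand-run sketch of the same query, assuming shell access to this node's local database (not part of the log; variable names are illustrative):

    // Reproduces the query rsSync-0 plans here; values copied from the records above.
    var mv = db.getSiblingDB("local").replset.minvalid;
    mv.getIndexes();   // only { _id: 1 } is defined on this collection
    mv.find({ $or: [ { t: { $lt: 1 } },
                     { t: 1, ts: { $lt: Timestamp(1567578449, 3) } } ] })
      .explain().queryPlanner.winningPlan;
    // The winning plan is a collection scan (for a rooted $or it may be reported
    // wrapped in a SUBPLAN stage), matching "outputted 0 indexed solutions" above.
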
2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $or $and t $eq 1 ts $lt Timestamp(1567578449, 3) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578449, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578449, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:29.990+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 604
2019-09-04T06:27:29.990+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:27:29.990+0000 D3 STORAGE [repl-writer-worker-6] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:29.990+0000 D3 REPL [repl-writer-worker-6] applying op: { ts: Timestamp(1567578449, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578449977), o: { $v: 1, $set: { ping: new Date(1567578449977) } } }, oplog application mode: Secondary
2019-09-04T06:27:29.990+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578449, 3)
2019-09-04T06:27:29.990+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 607
2019-09-04T06:27:29.990+0000 D2 QUERY [repl-writer-worker-6] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }
2019-09-04T06:27:29.990+0000 D4 WRITE [repl-writer-worker-6] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:27:29.990+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 607
2019-09-04T06:27:29.990+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:29.990+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578449, 3), t: 1 }({ ts: Timestamp(1567578449, 3), t: 1 })
2019-09-04T06:27:29.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.990+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578449, 3)
2019-09-04T06:27:29.990+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 606
2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and Sort: {} Proj: {} =============================
2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:29.990+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:29.990+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:27:29.990+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 606
2019-09-04T06:27:29.990+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578449, 3)
2019-09-04T06:27:29.990+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 610
2019-09-04T06:27:29.990+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 610
2019-09-04T06:27:29.990+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578449, 3), t: 1 }({ ts: Timestamp(1567578449, 3), t: 1 })
2019-09-04T06:27:29.990+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:27:29.990+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:29.990+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 40 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:59.990+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:29.990+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:59.990+0000
2019-09-04T06:27:29.990+0000 D2 ASIO [RS] Request 40 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) }
2019-09-04T06:27:29.990+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:29.990+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:27:29.990+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:59.990+0000
2019-09-04T06:27:29.991+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578449, 3), t: 1 }
2019-09-04T06:27:29.991+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 41 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:39.991+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578445, 1), t: 1 } }
2019-09-04T06:27:29.991+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:59.990+0000
2019-09-04T06:27:29.994+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:27:29.994+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:27:29.994+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578449, 2), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:29.994+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 42 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:59.994+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578449, 2), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:29.995+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:59.990+0000
2019-09-04T06:27:29.995+0000 D2 ASIO [RS] Request 42 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) }
2019-09-04T06:27:29.995+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578445, 1), t: 1 }, lastCommittedWall: new Date(1567578445567), lastOpVisible: { ts: Timestamp(1567578445, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578445, 1), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:29.995+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:29.995+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:59.990+0000
2019-09-04T06:27:29.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:29.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:29.995+0000 D2 ASIO [RS] Request 41 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 2), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578449, 2), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpApplied: { ts: Timestamp(1567578449, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 2), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) }
2019-09-04T06:27:29.996+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 2), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578449, 2), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpApplied: { ts: Timestamp(1567578449, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 2), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:29.996+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:27:29.996+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:27:29.996+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578449, 2), t: 1 }, 2019-09-04T06:27:29.977+0000
2019-09-04T06:27:29.996+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578449, 2), t: 1 }, 2019-09-04T06:27:29.977+0000
2019-09-04T06:27:29.996+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578444, 2)
2019-09-04T06:27:29.996+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:27:40.830+0000
2019-09-04T06:27:29.996+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:27:40.186+0000
2019-09-04T06:27:29.996+0000 D3 REPL [conn41] Got notified of new snapshot: { ts: Timestamp(1567578449, 2), t: 1 }, 2019-09-04T06:27:29.977+0000
2019-09-04T06:27:29.996+0000 D3 REPL [conn41] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.661+0000
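
RemoteCommands 39 and 41 above (and 43 just below) are successive getMore calls on one tailable, awaitData cursor, id 2779728788818727477, which the oplog fetcher keeps open against the sync source cmodb804.togewa.com:27019. A rough sketch of the same exchange as plain shell commands (not part of the log; the term and lastKnownCommittedOpTime fields seen in the log are added internally by the replication subsystem, not by ordinary clients):

    // Open a tailable, awaitData cursor on the sync source's oplog...
    var lcl = db.getSiblingDB("local");
    var r = lcl.runCommand({ find: "oplog.rs",
                             filter: { ts: { $gte: Timestamp(1567578449, 2) } },
                             tailable: true, awaitData: true });
    // ...then poll it with getMore, as in RemoteCommand 41. An empty nextBatch,
    // like the one Request 41 returns, only means nothing arrived within maxTimeMS.
    lcl.runCommand({ getMore: r.cursor.id, collection: "oplog.rs",
                     batchSize: 13981010, maxTimeMS: 5000 });
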
2019-09-04T06:27:29.996+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:29.996+0000 D3 REPL [conn43] Got notified of new snapshot: { ts: Timestamp(1567578449, 2), t: 1 }, 2019-09-04T06:27:29.977+0000
2019-09-04T06:27:29.996+0000 D3 REPL [conn43] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.753+0000
2019-09-04T06:27:29.996+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:58.836+0000
2019-09-04T06:27:29.996+0000 D3 REPL [conn44] Got notified of new snapshot: { ts: Timestamp(1567578449, 2), t: 1 }, 2019-09-04T06:27:29.977+0000
2019-09-04T06:27:29.996+0000 D3 REPL [conn44] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:55.060+0000
2019-09-04T06:27:29.996+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 43 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:39.996+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578449, 2), t: 1 } }
2019-09-04T06:27:29.996+0000 D3 REPL [conn32] Got notified of new snapshot: { ts: Timestamp(1567578449, 2), t: 1 }, 2019-09-04T06:27:29.977+0000
2019-09-04T06:27:29.996+0000 D3 REPL [conn32] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:54.152+0000
2019-09-04T06:27:29.996+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:59.990+0000
2019-09-04T06:27:29.996+0000 D3 REPL [conn40] Got notified of new snapshot: { ts: Timestamp(1567578449, 2), t: 1 }, 2019-09-04T06:27:29.977+0000
2019-09-04T06:27:29.996+0000 D3 REPL [conn40] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.645+0000
2019-09-04T06:27:29.996+0000 D3 REPL [conn39] Got notified of new snapshot: { ts: Timestamp(1567578449, 2), t: 1 }, 2019-09-04T06:27:29.977+0000
2019-09-04T06:27:29.996+0000 D3 REPL [conn39] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.133+0000
2019-09-04T06:27:30.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:30.000+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:27:30.000+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:30.000+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 2), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:30.000+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 44 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:00.000+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, durableWallTime: new Date(1567578445567), appliedOpTime: { ts: Timestamp(1567578445, 1), t: 1 }, appliedWallTime: new Date(1567578445567), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 2), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:30.000+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:59.990+0000
2019-09-04T06:27:30.000+0000 D2 COMMAND [conn16] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:27:30.001+0000 I COMMAND [conn16] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:30.001+0000 D2 COMMAND [conn16] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:27:30.002+0000 D2 COMMAND [conn16] command: replSetGetStatus
2019-09-04T06:27:30.002+0000 I COMMAND [conn16] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:27:30.002+0000 D4 FTDC [ftdc] full-time diagnostic data capture schema change: currrent document is longer than reference document
2019-09-04T06:27:30.002+0000 D2 COMMAND [conn16] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:27:30.002+0000 D3 STORAGE [conn16] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:30.002+0000 D5 QUERY [conn16] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunks Tree: jumbo $eq true Sort: {} Proj: {} =============================
2019-09-04T06:27:30.002+0000 D5 QUERY [conn16] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:27:30.002+0000 D5 QUERY [conn16] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:27:30.002+0000 D5 QUERY [conn16] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:27:30.002+0000 D5 QUERY [conn16] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:27:30.002+0000 D5 QUERY [conn16] Predicate over field 'jumbo'
2019-09-04T06:27:30.002+0000 D5 QUERY [conn16] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:27:30.002+0000 D5 QUERY [conn16] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:30.002+0000 D5 QUERY [conn16] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:30.002+0000 D2 QUERY [conn16] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:27:30.002+0000 D3 STORAGE [conn16] begin_transaction on local snapshot Timestamp(1567578449, 3)
2019-09-04T06:27:30.002+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 617
2019-09-04T06:27:30.002+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 617
2019-09-04T06:27:30.002+0000 I COMMAND [conn16] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:30.002+0000 D2 COMMAND [conn16] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:27:30.002+0000 I COMMAND [conn16] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:27:30.003+0000 D2 COMMAND [conn16] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:27:30.003+0000 D3 STORAGE [conn16] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:30.003+0000 D5 QUERY [conn16] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1 Tree: ts exists Sort: { $natural: 1 } Proj: {} =============================
2019-09-04T06:27:30.003+0000 D5 QUERY [conn16] Forcing a table scan due to hinted $natural
2019-09-04T06:27:30.003+0000 D2 QUERY [conn16] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:27:30.003+0000 D3 STORAGE [conn16] begin_transaction on local snapshot Timestamp(1567578449, 3)
2019-09-04T06:27:30.003+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 620
2019-09-04T06:27:30.003+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 620
2019-09-04T06:27:30.003+0000 I COMMAND [conn16] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:30.003+0000 D2 COMMAND [conn16] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:27:30.003+0000 D3 STORAGE [conn16] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:30.003+0000 D5 QUERY [conn16] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1 Tree: ts exists Sort: { $natural: -1 } Proj: {} =============================
2019-09-04T06:27:30.003+0000 D5 QUERY [conn16] Forcing a table scan due to hinted $natural
2019-09-04T06:27:30.003+0000 D2 QUERY [conn16] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:27:30.003+0000 D3 STORAGE [conn16] begin_transaction on local snapshot Timestamp(1567578449, 3)
2019-09-04T06:27:30.003+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 622
2019-09-04T06:27:30.003+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 622
2019-09-04T06:27:30.003+0000 I COMMAND [conn16] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:30.003+0000 D2 COMMAND [conn16] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:27:30.003+0000 D2 QUERY [conn16] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:27:30.003+0000 I COMMAND [conn16] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2019-09-04T06:27:30.003+0000 D2 COMMAND [conn16] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:27:30.003+0000 D2 COMMAND [conn16] command: listDatabases
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 625
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 625
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 626
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 626
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 627
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 627
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 628
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 628
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 629
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: config.lockpings @ RecordId(15)
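
The conn16 sequence above (serverStatus, replSetGetStatus, a jumbo-chunk count, head and tail reads of the oplog, then listDatabases, which drives the per-collection metadata walk that continues below) is the signature of a monitoring probe polling this config server with secondaryPreferred reads over op_query. Approximate shell equivalents (not part of the log):

    db.adminCommand({ serverStatus: 1, recordStats: 0 });
    db.adminCommand({ replSetGetStatus: 1 });
    db.getSiblingDB("config").chunks.count({ jumbo: true });  // COLLSCAN: no index on 'jumbo'
    // Oldest and newest oplog entries; the $natural sort forces a table scan.
    db.getSiblingDB("local").oplog.rs.find({ ts: { $exists: true } }).sort({ $natural: 1 }).limit(1);
    db.getSiblingDB("local").oplog.rs.find({ ts: { $exists: true } }).sort({ $natural: -1 }).limit(1);
    db.adminCommand({ listDatabases: 1 });
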
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 629
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 630
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 630
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 631
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 631
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 632
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 632
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 633
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 633
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 634
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:27:30.004+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 634
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 635
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 635
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 636
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 636
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 637
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, {
spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 637 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 638 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 638 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up 
metadata for: config.tags @ RecordId(25) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 639 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 639 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 640 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] returning metadata: md: { ns: 
"local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 640 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 641 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 641 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 642 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 642 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: 
local.system.replset @ RecordId(7) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 643 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 643 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 644 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 644 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 645 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.minvalid @ RecordId(4) 
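NOTE: The D3 STORAGE trace above shows conn16 walking every collection entry in the durable catalog (_mdb_catalog.wt): for each namespace it opens a short-lived WiredTiger snapshot (begin_transaction), reads the cached catalog metadata (fetched/returning metadata), and then releases the snapshot via rollback_transaction, which is normal for pure reads since nothing needs to commit. This is the storage-level work behind the listDatabases command that completes just below. A minimal pymongo sketch that would trigger the same scan against this node; the host and port come from the log, everything else is illustrative:

    from pymongo import MongoClient

    # Connect to the config server that produced this log (cmodb803, port 27019).
    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         readPreference="secondaryPreferred")

    # listDatabases with secondaryPreferred, as issued by conn16 in this log.
    result = client.admin.command("listDatabases")
    for db in result["databases"]:
        print(db["name"], db["sizeOnDisk"])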
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 645
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 646
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:30.005+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:27:30.006+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 646
2019-09-04T06:27:30.006+0000 I COMMAND [conn16] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 2ms
2019-09-04T06:27:30.006+0000 D2 ASIO [RS] Request 44 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 2), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 2), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) }
2019-09-04T06:27:30.006+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 2), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 2), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:30.006+0000 D2 ASIO [RS] Request 43 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpApplied: { ts: Timestamp(1567578449, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) }
2019-09-04T06:27:30.006+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpApplied: { ts: Timestamp(1567578449, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:30.006+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:30.006+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:30.006+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:27:30.006+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578449, 3), t: 1 }, 2019-09-04T06:27:29.977+0000
2019-09-04T06:27:30.006+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578449, 3), t: 1 }, 2019-09-04T06:27:29.977+0000
2019-09-04T06:27:30.006+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578444, 3)
2019-09-04T06:27:30.006+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:27:40.186+0000
2019-09-04T06:27:30.006+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:27:40.410+0000
2019-09-04T06:27:30.006+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 45 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:40.006+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578449, 3), t: 1 } }
2019-09-04T06:27:30.006+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:00.006+0000
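NOTE: Requests 43 through 45 above are the replication oplog fetcher at work: this secondary tails local.oplog.rs on its sync source (cmodb804.togewa.com:27019) with awaitData getMore calls that carry its term and lastKnownCommittedOpTime. An empty nextBatch simply means no new writes arrived within the 5-second maxTimeMS window, so the fetcher immediately schedules the next getMore (Request 45). A rough pymongo equivalent of tailing an oplog, assuming direct access to the source node; the timestamps and host are taken from the log, the client code itself is illustrative, not the server's internal fetcher:

    from pymongo import MongoClient, CursorType
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb804.togewa.com:27019/")
    oplog = client.local["oplog.rs"]

    # Resume after the last applied optime seen in the log.
    last_ts = Timestamp(1567578449, 3)
    cursor = oplog.find({"ts": {"$gt": last_ts}},
                        cursor_type=CursorType.TAILABLE_AWAIT,
                        oplog_replay=True,
                        max_await_time_ms=5000)  # mirrors maxTimeMS: 5000 on the getMore
    for entry in cursor:
        print(entry["ts"], entry["op"], entry.get("ns"))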
2019-09-04T06:27:30.006+0000 D3 REPL [conn39] Got notified of new snapshot: { ts: Timestamp(1567578449, 3), t: 1 }, 2019-09-04T06:27:29.977+0000
2019-09-04T06:27:30.006+0000 D3 REPL [conn39] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.133+0000
2019-09-04T06:27:30.006+0000 D3 REPL [conn40] Got notified of new snapshot: { ts: Timestamp(1567578449, 3), t: 1 }, 2019-09-04T06:27:29.977+0000
2019-09-04T06:27:30.006+0000 D3 REPL [conn40] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.645+0000
2019-09-04T06:27:30.006+0000 D3 REPL [conn32] Got notified of new snapshot: { ts: Timestamp(1567578449, 3), t: 1 }, 2019-09-04T06:27:29.977+0000
2019-09-04T06:27:30.006+0000 D3 REPL [conn32] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:54.152+0000
2019-09-04T06:27:30.007+0000 D3 REPL [conn44] Got notified of new snapshot: { ts: Timestamp(1567578449, 3), t: 1 }, 2019-09-04T06:27:29.977+0000
2019-09-04T06:27:30.007+0000 D3 REPL [conn44] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:55.060+0000
2019-09-04T06:27:30.007+0000 D3 REPL [conn43] Got notified of new snapshot: { ts: Timestamp(1567578449, 3), t: 1 }, 2019-09-04T06:27:29.977+0000
2019-09-04T06:27:30.007+0000 D3 REPL [conn43] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.753+0000
2019-09-04T06:27:30.007+0000 D3 REPL [conn41] Got notified of new snapshot: { ts: Timestamp(1567578449, 3), t: 1 }, 2019-09-04T06:27:29.977+0000
2019-09-04T06:27:30.007+0000 D3 REPL [conn41] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.661+0000
2019-09-04T06:27:30.007+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:30.007+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:27:58.836+0000
2019-09-04T06:27:30.007+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:00.006+0000
2019-09-04T06:27:30.007+0000 D2 COMMAND [conn16] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:27:30.007+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 648
2019-09-04T06:27:30.007+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 648
2019-09-04T06:27:30.008+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 649
2019-09-04T06:27:30.008+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 649
2019-09-04T06:27:30.008+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 650
2019-09-04T06:27:30.008+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 650
2019-09-04T06:27:30.008+0000 I COMMAND [conn16] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:30.019+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.019+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.021+0000 D2 COMMAND [conn16] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 653
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 653
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 654
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 654
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 655
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 655
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 656
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 656
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 657
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 657
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 658
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 658
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 659
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 659
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 660
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 660
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 661
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 661
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 662
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 662
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 663
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 663
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 664
2019-09-04T06:27:30.021+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 664
2019-09-04T06:27:30.021+0000 I COMMAND [conn16] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:30.037+0000 D2 COMMAND [conn16] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:27:30.037+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 666
2019-09-04T06:27:30.037+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 666
2019-09-04T06:27:30.037+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 667
2019-09-04T06:27:30.037+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 667
2019-09-04T06:27:30.037+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 668
2019-09-04T06:27:30.037+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 668
2019-09-04T06:27:30.037+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 669
2019-09-04T06:27:30.037+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 669
2019-09-04T06:27:30.037+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 670
2019-09-04T06:27:30.037+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 670
2019-09-04T06:27:30.037+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 671
2019-09-04T06:27:30.037+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 671
2019-09-04T06:27:30.037+0000 I COMMAND [conn16] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:30.051+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:30.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.067+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45616 #52 (40 connections now open)
2019-09-04T06:27:30.067+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:30.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:30.067+0000 I NETWORK [conn52] received client metadata from 10.108.2.72:45616 conn52: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:30.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:30.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.085+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578449, 3)
2019-09-04T06:27:30.119+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.119+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
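NOTE: The same monitoring client on conn16 now runs dbStats against admin, config and local in turn; each call opens and rolls back one WT snapshot per underlying table, which is why the snapshot ids (648 through 671) climb in bursts, and the 0ms runtimes with read-only acquireCount locks show these are cheap probes. An illustrative pymongo loop that produces the same command stream; only the host and port are taken from the log:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         readPreference="secondaryPreferred")

    for name in client.list_database_names():    # listDatabases, as above
        stats = client[name].command("dbStats")  # one dbStats per database
        print(name, stats["objects"], stats["dataSize"])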
2019-09-04T06:27:30.127+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.127+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.152+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:30.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.162+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.231+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 8761C4BAA5834C196B95C6027744813B32B1469D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:30.231+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:27:30.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 8761C4BAA5834C196B95C6027744813B32B1469D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:30.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 8761C4BAA5834C196B95C6027744813B32B1469D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:30.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), opTime: { ts: Timestamp(1567578449, 3), t: 1 }, wallTime: new Date(1567578449977) }
2019-09-04T06:27:30.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 8761C4BAA5834C196B95C6027744813B32B1469D), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:27:30.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:27:30.252+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:30.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.352+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:30.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.452+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:30.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.518+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.518+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.552+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:30.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.619+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.619+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.627+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.627+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
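NOTE: Two background rhythms are visible above. First, a replSetHeartbeat from cmodb804.togewa.com:27019 (fromId: 2) is received, processed and answered on conn28; the response reports this node's state: 2 (SECONDARY), its sync source and its durable/applied opTimes. Second, the pooled intracluster connections (conn14 through conn52) re-issue isMaster roughly every 500ms, which is how peers and routers keep their view of the topology fresh. A hedged pymongo sketch for observing the same member states from outside; replSetGetStatus is a standard command, the rest is illustrative:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # stateStr is SECONDARY for this node; cmodb802 answers as PRIMARY below.
        print(member["name"], member["stateStr"], member["optimeDate"])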
2019-09-04T06:27:30.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.652+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:30.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47056 #53 (41 connections now open)
2019-09-04T06:27:30.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:30.743+0000 D2 COMMAND [conn53] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:30.743+0000 I NETWORK [conn53] received client metadata from 10.108.2.52:47056 conn53: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:30.743+0000 I COMMAND [conn53] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:30.743+0000 D2 COMMAND [conn53] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578442, 1), signature: { hash: BinData(0, 9D53AB1FB4ED3281D2F535162F2E651844E31225), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:27:30.744+0000 D1 REPL [conn53] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578449, 3), t: 1 }
2019-09-04T06:27:30.744+0000 D3 REPL [conn53] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.753+0000
2019-09-04T06:27:30.752+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:30.753+0000 I NETWORK [listener] connection accepted from 10.108.2.50:49992 #54 (42 connections now open)
2019-09-04T06:27:30.753+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:30.753+0000 D2 COMMAND [conn54] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:30.753+0000 I NETWORK [conn54] received client metadata from 10.108.2.50:49992 conn54: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:30.753+0000 I COMMAND [conn54] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:30.754+0000 D2 COMMAND [conn54] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578449, 1), signature: { hash: BinData(0, 19051D282256DCC551BFFE29F82E237D248A825C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:27:30.754+0000 D1 REPL [conn54] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578449, 3), t: 1 }
2019-09-04T06:27:30.754+0000 D3 REPL [conn54] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.764+0000
2019-09-04T06:27:30.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:30.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:30.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:30.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:30.836+0000 D2 REPL_HB [replexec-2] Sending heartbeat (requestId: 46) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:30.836+0000 D3 EXECUTOR [replexec-2] Scheduling remote command request: RemoteCommand 46 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:27:40.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:30.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:27:58.836+0000
2019-09-04T06:27:30.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 47) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:30.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 47 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:40.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:30.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:27:58.836+0000
2019-09-04T06:27:30.836+0000 D2 ASIO [Replication] Request 46 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), opTime: { ts: Timestamp(1567578449, 3), t: 1 }, wallTime: new Date(1567578449977), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) }
2019-09-04T06:27:30.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), opTime: { ts: Timestamp(1567578449, 3), t: 1 }, wallTime: new Date(1567578449977), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) } target: cmodb802.togewa.com:27019
2019-09-04T06:27:30.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:30.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 46) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), opTime: { ts: Timestamp(1567578449, 3), t: 1 }, wallTime: new Date(1567578449977), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578449, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) }
2019-09-04T06:27:30.836+0000 D2 ASIO [Replication] Request 47 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), opTime: { ts: Timestamp(1567578449, 3), t: 1 }, wallTime: new Date(1567578449977), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) }
2019-09-04T06:27:30.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:27:30.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), opTime: { ts: Timestamp(1567578449, 3), t: 1 }, wallTime: new Date(1567578449977), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:30.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:27:40.410+0000
2019-09-04T06:27:30.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:30.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:27:42.127+0000
2019-09-04T06:27:30.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:27:30.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:32.836Z
2019-09-04T06:27:30.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:30.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:00.836+0000
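NOTE: Heartbeats 46 and 47 go out to the other two members; cmodb802 answers with state: 1 (PRIMARY) and cmodb804 with state: 2 (SECONDARY). Because a healthy heartbeat from the primary arrived, the ELECTION component cancels the pending election timeout and reschedules it to 06:27:42.127, roughly the default 10s electionTimeoutMillis plus a randomized offset from now, which is how a Raft-style secondary avoids calling an election while the primary is alive. Meanwhile conn53/conn54 above (and conn55/conn56 below) are cluster-internal clients reading config.shards with readConcern { level: "majority", afterOpTime: ... }, so they block in waitUntilOpTime until the requested optime is majority-committed in a snapshot; note the requested optime carries t: 92 while this set is in term 1, so these waits likely cannot be satisfied and will run into their 30s maxTimeMS. An illustrative majority read in pymongo; the afterOpTime clause in the log is internal driver/mongos machinery with no direct public equivalent:

    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    config_db = client.get_database("config",
                                    read_concern=ReadConcern("majority"))
    # Rough equivalent of the logged find on config.shards under majority read concern.
    for shard in config_db["shards"].find({}, max_time_ms=30000):
        print(shard["_id"], shard["host"])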
date is 2019-09-04T06:28:00.836+0000 2019-09-04T06:27:30.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 47) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), opTime: { ts: Timestamp(1567578449, 3), t: 1 }, wallTime: new Date(1567578449977), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) } 2019-09-04T06:27:30.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:00.836+0000 2019-09-04T06:27:30.836+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:27:30.836+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:32.836Z 2019-09-04T06:27:30.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:00.836+0000 2019-09-04T06:27:30.852+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:30.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:30.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:30.915+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52028 #55 (43 connections now open) 2019-09-04T06:27:30.915+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:30.915+0000 D2 COMMAND [conn55] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:30.915+0000 I NETWORK [conn55] received client metadata from 10.108.2.73:52028 conn55: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:30.915+0000 I COMMAND [conn55] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:30.916+0000 D2 COMMAND [conn55] run command config.$cmd { 
find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578442, 1), signature: { hash: BinData(0, 9D53AB1FB4ED3281D2F535162F2E651844E31225), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:27:30.916+0000 D1 REPL [conn55] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578449, 3), t: 1 } 2019-09-04T06:27:30.916+0000 D3 REPL [conn55] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.926+0000 2019-09-04T06:27:30.952+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52018 #56 (44 connections now open) 2019-09-04T06:27:30.952+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:30.952+0000 D2 COMMAND [conn56] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:30.952+0000 I NETWORK [conn56] received client metadata from 10.108.2.58:52018 conn56: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:30.952+0000 I COMMAND [conn56] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:30.953+0000 D2 COMMAND [conn56] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578443, 1), signature: { hash: BinData(0, BBDB9A29B5D8765DFC9912618BC8EC281097B96B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:27:30.953+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:30.953+0000 D1 REPL [conn56] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578449, 3), t: 1 } 2019-09-04T06:27:30.953+0000 D3 REPL [conn56] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.963+0000 2019-09-04T06:27:30.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:30.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:30.989+0000 D3 STORAGE [ReplBatcher] setting timestamp read 
source: 4, provided timestamp: none 2019-09-04T06:27:30.989+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:30.989+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578449, 3) 2019-09-04T06:27:30.989+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 711 2019-09-04T06:27:30.989+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:30.989+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:30.989+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 711 2019-09-04T06:27:30.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:30.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:30.990+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 715 2019-09-04T06:27:30.990+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 715 2019-09-04T06:27:30.990+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578449, 3), t: 1 }({ ts: Timestamp(1567578449, 3), t: 1 }) 2019-09-04T06:27:30.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:30.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:31.018+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.018+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.053+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:31.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 71ED14A177294ADE18841F3DAEE3FE14D1344000), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:31.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:27:31.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 71ED14A177294ADE18841F3DAEE3FE14D1344000), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:31.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 
71ED14A177294ADE18841F3DAEE3FE14D1344000), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:31.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), opTime: { ts: Timestamp(1567578449, 3), t: 1 }, wallTime: new Date(1567578449977) } 2019-09-04T06:27:31.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 71ED14A177294ADE18841F3DAEE3FE14D1344000), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.119+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.119+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.127+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.127+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.153+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:31.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:31.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:27:31.253+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:31.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.353+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:31.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.453+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:31.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.518+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.518+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.553+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:31.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.619+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.619+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:27:31.627+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.627+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.654+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:31.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.754+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:31.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.854+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:31.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.954+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:31.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.990+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:31.990+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:31.990+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578449, 3) 2019-09-04T06:27:31.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: 
"admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:31.990+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 753 2019-09-04T06:27:31.990+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:31.990+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:31.990+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 753 2019-09-04T06:27:31.990+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 756 2019-09-04T06:27:31.990+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 756 2019-09-04T06:27:31.990+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578449, 3), t: 1 }({ ts: Timestamp(1567578449, 3), t: 1 }) 2019-09-04T06:27:31.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:31.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:32.018+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.018+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.054+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:32.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.119+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.119+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.127+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.127+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.154+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:32.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:27:32.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 71ED14A177294ADE18841F3DAEE3FE14D1344000), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:32.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:27:32.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 71ED14A177294ADE18841F3DAEE3FE14D1344000), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:32.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 71ED14A177294ADE18841F3DAEE3FE14D1344000), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:32.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), opTime: { ts: Timestamp(1567578449, 3), t: 1 }, wallTime: new Date(1567578449977) } 2019-09-04T06:27:32.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 71ED14A177294ADE18841F3DAEE3FE14D1344000), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:32.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:27:32.255+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:32.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.355+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:32.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.455+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:32.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.518+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.518+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.555+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:32.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.619+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.619+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:27:32.627+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.627+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.655+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:32.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.755+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:32.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:32.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 48) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:32.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:32.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 48 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:27:42.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:32.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:00.836+0000 2019-09-04T06:27:32.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 49) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:32.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 49 -- 
target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:42.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:32.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:00.836+0000 2019-09-04T06:27:32.836+0000 D2 ASIO [Replication] Request 48 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), opTime: { ts: Timestamp(1567578449, 3), t: 1 }, wallTime: new Date(1567578449977), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) } 2019-09-04T06:27:32.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), opTime: { ts: Timestamp(1567578449, 3), t: 1 }, wallTime: new Date(1567578449977), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:27:32.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:32.836+0000 D2 ASIO [Replication] Request 49 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), opTime: { ts: Timestamp(1567578449, 3), t: 1 }, wallTime: new Date(1567578449977), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) } 2019-09-04T06:27:32.836+0000 D3 EXECUTOR [Replication] Received remote response: 
RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), opTime: { ts: Timestamp(1567578449, 3), t: 1 }, wallTime: new Date(1567578449977), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:32.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 48) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), opTime: { ts: Timestamp(1567578449, 3), t: 1 }, wallTime: new Date(1567578449977), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) } 2019-09-04T06:27:32.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:32.836+0000 D4 ELECTION [replexec-2] Postponing election timeout due to heartbeat from primary 2019-09-04T06:27:32.836+0000 D4 REPL [replexec-2] Canceling election timeout callback at 2019-09-04T06:27:42.127+0000 2019-09-04T06:27:32.836+0000 D4 ELECTION [replexec-2] Scheduling election timeout callback at 2019-09-04T06:27:43.966+0000 2019-09-04T06:27:32.836+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:27:32.836+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:34.836Z 2019-09-04T06:27:32.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:32.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:02.836+0000 2019-09-04T06:27:32.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 49) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), opTime: { ts: Timestamp(1567578449, 3), t: 1 }, wallTime: new Date(1567578449977), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578449, 3) } 2019-09-04T06:27:32.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:02.836+0000 2019-09-04T06:27:32.836+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:27:32.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:34.836Z 2019-09-04T06:27:32.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:02.836+0000 2019-09-04T06:27:32.855+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:32.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.956+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:32.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:32.990+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:32.990+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:32.990+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578449, 3) 2019-09-04T06:27:32.990+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 793 2019-09-04T06:27:32.990+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:32.990+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:32.990+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 793 2019-09-04T06:27:32.991+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 796 2019-09-04T06:27:32.991+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 796 2019-09-04T06:27:32.991+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578449, 3), t: 1 }({ ts: Timestamp(1567578449, 3), t: 1 }) 2019-09-04T06:27:32.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:32.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:33.018+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.018+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.056+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:33.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 71ED14A177294ADE18841F3DAEE3FE14D1344000), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:33.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:27:33.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 71ED14A177294ADE18841F3DAEE3FE14D1344000), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:33.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 71ED14A177294ADE18841F3DAEE3FE14D1344000), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:33.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), opTime: { ts: Timestamp(1567578449, 3), t: 1 }, wallTime: new Date(1567578449977) } 2019-09-04T06:27:33.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578450, 1), signature: { hash: BinData(0, 71ED14A177294ADE18841F3DAEE3FE14D1344000), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.119+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.119+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.127+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.127+0000 I COMMAND [conn18] 
command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.156+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:33.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:33.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:27:33.256+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:33.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.345+0000 D2 ASIO [RS] Request 45 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578453, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578453340), o: { $v: 1, $set: { ping: new Date(1567578453335) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpApplied: { ts: Timestamp(1567578453, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578453, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578453, 1) } 2019-09-04T06:27:33.345+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578453, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578453340), o: { $v: 1, $set: { ping: new Date(1567578453335) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpApplied: { ts: Timestamp(1567578453, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578453, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578453, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:33.345+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:33.345+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578453, 1) and ending at ts: Timestamp(1567578453, 1) 2019-09-04T06:27:33.345+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:27:43.966+0000 2019-09-04T06:27:33.345+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:27:43.662+0000 2019-09-04T06:27:33.345+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:33.345+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578453, 1), t: 1 } 2019-09-04T06:27:33.345+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:02.836+0000 2019-09-04T06:27:33.346+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:33.346+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:33.346+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578449, 3) 2019-09-04T06:27:33.346+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 815 2019-09-04T06:27:33.346+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:33.346+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:33.346+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 815 2019-09-04T06:27:33.346+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: 
none 2019-09-04T06:27:33.346+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:33.346+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578449, 3) 2019-09-04T06:27:33.346+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 818 2019-09-04T06:27:33.346+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:27:33.346+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:33.346+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:33.346+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578453, 1) } 2019-09-04T06:27:33.346+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 818 2019-09-04T06:27:33.346+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 797 2019-09-04T06:27:33.346+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 797 2019-09-04T06:27:33.346+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 821 2019-09-04T06:27:33.346+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 821 2019-09-04T06:27:33.346+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:33.346+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 823 2019-09-04T06:27:33.346+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578453, 1) 2019-09-04T06:27:33.346+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578453, 1) 2019-09-04T06:27:33.346+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 823 2019-09-04T06:27:33.346+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:33.346+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:27:33.346+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 822 2019-09-04T06:27:33.346+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 822 2019-09-04T06:27:33.346+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 825 2019-09-04T06:27:33.346+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 825 2019-09-04T06:27:33.346+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578453, 1), t: 1 }({ ts: Timestamp(1567578453, 1), t: 1 }) 2019-09-04T06:27:33.346+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578453, 1) 2019-09-04T06:27:33.346+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 826 2019-09-04T06:27:33.346+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578453, 1) } } ] } sort: {} projection: {} 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:33.346+0000 D5 
QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578453, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578453, 1) || First: notFirst: full path: ts 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578453, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578453, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578453, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:27:33.346+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578453, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:33.346+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 826 2019-09-04T06:27:33.347+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:33.347+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:33.347+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578453, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578453340), o: { $v: 1, $set: { ping: new Date(1567578453335) } } }, oplog application mode: Secondary 2019-09-04T06:27:33.347+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578453, 1) 2019-09-04T06:27:33.347+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 828 2019-09-04T06:27:33.347+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" } 2019-09-04T06:27:33.347+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:27:33.347+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 828 2019-09-04T06:27:33.347+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:33.347+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578453, 1), t: 1 }({ ts: Timestamp(1567578453, 1), t: 1 }) 2019-09-04T06:27:33.347+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578453, 1) 2019-09-04T06:27:33.347+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 827 2019-09-04T06:27:33.347+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:27:33.347+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:33.347+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:27:33.347+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:33.347+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:33.347+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:27:33.347+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 827 2019-09-04T06:27:33.347+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578453, 1) 2019-09-04T06:27:33.347+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 831 2019-09-04T06:27:33.347+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 831 2019-09-04T06:27:33.347+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578453, 1), t: 1 }({ ts: Timestamp(1567578453, 1), t: 1 }) 2019-09-04T06:27:33.347+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:33.347+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578453, 1), t: 1 }, appliedWallTime: new Date(1567578453340), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:33.347+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 50 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:03.347+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578453, 1), t: 1 }, appliedWallTime: new Date(1567578453340), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:33.347+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:03.347+0000 2019-09-04T06:27:33.347+0000 D2 ASIO [RS] Request 50 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578453, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578453, 1) } 2019-09-04T06:27:33.347+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578453, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578453, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:33.347+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:33.347+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:03.347+0000 2019-09-04T06:27:33.348+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578453, 1), t: 1 } 2019-09-04T06:27:33.348+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 51 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:43.348+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578449, 3), t: 1 } } 2019-09-04T06:27:33.348+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:03.347+0000 2019-09-04T06:27:33.348+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:27:33.348+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:33.348+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578453, 1), t: 1 }, durableWallTime: new Date(1567578453340), appliedOpTime: { ts: Timestamp(1567578453, 1), t: 1 }, appliedWallTime: new Date(1567578453340), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:33.348+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 52 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:03.348+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: 
Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578453, 1), t: 1 }, durableWallTime: new Date(1567578453340), appliedOpTime: { ts: Timestamp(1567578453, 1), t: 1 }, appliedWallTime: new Date(1567578453340), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:33.348+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:03.347+0000 2019-09-04T06:27:33.348+0000 D2 ASIO [RS] Request 52 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578453, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578453, 1) } 2019-09-04T06:27:33.348+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578449, 3), t: 1 }, lastCommittedWall: new Date(1567578449977), lastOpVisible: { ts: Timestamp(1567578449, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578449, 3), $clusterTime: { clusterTime: Timestamp(1567578453, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578453, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:33.348+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:33.348+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:03.347+0000 2019-09-04T06:27:33.349+0000 D2 ASIO [RS] Request 51 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578453, 1), t: 1 }, lastCommittedWall: new Date(1567578453340), lastOpVisible: { ts: Timestamp(1567578453, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578453, 1), t: 1 }, lastCommittedWall: new Date(1567578453340), lastOpApplied: { ts: Timestamp(1567578453, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578453, 1), $clusterTime: { clusterTime: Timestamp(1567578453, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578453, 1) } 2019-09-04T06:27:33.349+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578453, 1), t: 1 }, lastCommittedWall: new Date(1567578453340), lastOpVisible: { ts: Timestamp(1567578453, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578453, 1), t: 1 }, lastCommittedWall: new Date(1567578453340), lastOpApplied: { ts: Timestamp(1567578453, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578453, 1), $clusterTime: { clusterTime: Timestamp(1567578453, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578453, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:33.349+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:33.349+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:27:33.349+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578453, 1), t: 1 }, 2019-09-04T06:27:33.340+0000 2019-09-04T06:27:33.349+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578453, 1), t: 1 }, 2019-09-04T06:27:33.340+0000 2019-09-04T06:27:33.349+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578448, 1) 2019-09-04T06:27:33.349+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:27:43.662+0000 2019-09-04T06:27:33.349+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:27:43.745+0000 2019-09-04T06:27:33.349+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:33.349+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:02.836+0000 2019-09-04T06:27:33.349+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 53 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:43.349+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578453, 1), t: 1 } } 2019-09-04T06:27:33.349+0000 D3 REPL [conn44] Got notified of new snapshot: { ts: Timestamp(1567578453, 1), t: 1 }, 2019-09-04T06:27:33.340+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn44] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:55.060+0000 2019-09-04T06:27:33.349+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:03.347+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn43] Got notified of new snapshot: { ts: Timestamp(1567578453, 1), t: 1 }, 2019-09-04T06:27:33.340+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn43] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.753+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn32] Got notified of new snapshot: { ts: Timestamp(1567578453, 
1), t: 1 }, 2019-09-04T06:27:33.340+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn32] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:54.152+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn54] Got notified of new snapshot: { ts: Timestamp(1567578453, 1), t: 1 }, 2019-09-04T06:27:33.340+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn54] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.764+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn55] Got notified of new snapshot: { ts: Timestamp(1567578453, 1), t: 1 }, 2019-09-04T06:27:33.340+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn55] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.926+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn56] Got notified of new snapshot: { ts: Timestamp(1567578453, 1), t: 1 }, 2019-09-04T06:27:33.340+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn56] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.963+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn39] Got notified of new snapshot: { ts: Timestamp(1567578453, 1), t: 1 }, 2019-09-04T06:27:33.340+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn39] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.133+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn40] Got notified of new snapshot: { ts: Timestamp(1567578453, 1), t: 1 }, 2019-09-04T06:27:33.340+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn40] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.645+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn41] Got notified of new snapshot: { ts: Timestamp(1567578453, 1), t: 1 }, 2019-09-04T06:27:33.340+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn41] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.661+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn53] Got notified of new snapshot: { ts: Timestamp(1567578453, 1), t: 1 }, 2019-09-04T06:27:33.340+0000 2019-09-04T06:27:33.349+0000 D3 REPL [conn53] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.753+0000 2019-09-04T06:27:33.356+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:33.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.446+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578453, 1) 2019-09-04T06:27:33.456+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:33.467+0000 I NETWORK [listener] connection accepted from 10.108.2.63:36186 #57 (45 connections now open) 2019-09-04T06:27:33.467+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:33.467+0000 D2 COMMAND [conn57] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:33.467+0000 I NETWORK [conn57] received client metadata from 10.108.2.63:36186 conn57: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: 
"x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:33.467+0000 I COMMAND [conn57] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:33.470+0000 D2 COMMAND [conn57] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:27:33.470+0000 D1 REPL [conn57] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578453, 1), t: 1 } 2019-09-04T06:27:33.470+0000 D3 REPL [conn57] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:03.480+0000 2019-09-04T06:27:33.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.506+0000 D2 COMMAND [conn16] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:27:33.506+0000 I COMMAND [conn16] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:27:33.506+0000 D2 COMMAND [conn16] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:27:33.506+0000 I COMMAND [conn16] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:27:33.518+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.518+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.557+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:33.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 
1, $db: "admin" } 2019-09-04T06:27:33.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.619+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.619+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.627+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.627+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.657+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:33.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.757+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:33.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.857+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:33.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.957+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:33.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 
2019-09-04T06:27:33.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:33.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:33.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:34.018+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.018+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.057+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:34.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.119+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.119+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.127+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.127+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.157+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:34.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578453, 1), signature: { hash: BinData(0, B86271281EE1EB30B2052771AD7E14269BDF3893), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:34.231+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:27:34.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { 
clusterTime: Timestamp(1567578453, 1), signature: { hash: BinData(0, B86271281EE1EB30B2052771AD7E14269BDF3893), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:34.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578453, 1), signature: { hash: BinData(0, B86271281EE1EB30B2052771AD7E14269BDF3893), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:34.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578453, 1), t: 1 }, durableWallTime: new Date(1567578453340), opTime: { ts: Timestamp(1567578453, 1), t: 1 }, wallTime: new Date(1567578453340) } 2019-09-04T06:27:34.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578453, 1), signature: { hash: BinData(0, B86271281EE1EB30B2052771AD7E14269BDF3893), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:34.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:27:34.258+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:34.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.346+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:34.346+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:34.346+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578453, 1) 2019-09-04T06:27:34.346+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 874 2019-09-04T06:27:34.346+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:34.346+0000 D3 
STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:34.346+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 874 2019-09-04T06:27:34.347+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 877 2019-09-04T06:27:34.347+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 877 2019-09-04T06:27:34.347+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578453, 1), t: 1 }({ ts: Timestamp(1567578453, 1), t: 1 }) 2019-09-04T06:27:34.358+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:34.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.458+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:34.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.518+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.518+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.558+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:34.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.619+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.619+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.627+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.627+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:27:34.658+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:34.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.758+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:34.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:34.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:34.808+0000 D2 ASIO [RS] Request 53 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578454, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578454805), o: { $v: 1, $set: { ping: new Date(1567578454802), up: 2355 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578453, 1), t: 1 }, lastCommittedWall: new Date(1567578453340), lastOpVisible: { ts: Timestamp(1567578453, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578453, 1), t: 1 }, lastCommittedWall: new Date(1567578453340), lastOpApplied: { ts: Timestamp(1567578454, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578453, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) } 2019-09-04T06:27:34.808+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578454, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578454805), o: { $v: 1, $set: { ping: new Date(1567578454802), up: 2355 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578453, 1), t: 1 }, lastCommittedWall: new Date(1567578453340), lastOpVisible: { ts: Timestamp(1567578453, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578453, 1), t: 1 }, lastCommittedWall: new Date(1567578453340), lastOpApplied: { ts: Timestamp(1567578454, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578453, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:34.808+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:34.808+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578454, 1) and ending at ts: Timestamp(1567578454, 1) 2019-09-04T06:27:34.808+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:27:43.745+0000 2019-09-04T06:27:34.808+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:27:45.656+0000 2019-09-04T06:27:34.808+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:34.808+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578454, 1), t: 1 } 2019-09-04T06:27:34.808+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:34.808+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:34.808+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578453, 1) 2019-09-04T06:27:34.808+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 894 2019-09-04T06:27:34.808+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:34.808+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:34.808+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 894 2019-09-04T06:27:34.808+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:34.808+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:27:34.808+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:34.808+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578453, 1) 2019-09-04T06:27:34.808+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 897 2019-09-04T06:27:34.808+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578454, 1) } 2019-09-04T06:27:34.808+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:34.808+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:34.808+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 897 2019-09-04T06:27:34.808+0000 D3 EXECUTOR [replexec-1] Not reaping because the 
earliest retirement date is 2019-09-04T06:28:02.836+0000 2019-09-04T06:27:34.808+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 878 2019-09-04T06:27:34.808+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 878 2019-09-04T06:27:34.808+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 900 2019-09-04T06:27:34.808+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 900 2019-09-04T06:27:34.808+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:34.809+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 902 2019-09-04T06:27:34.809+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578454, 1) 2019-09-04T06:27:34.809+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578454, 1) 2019-09-04T06:27:34.809+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 902 2019-09-04T06:27:34.809+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:34.809+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:27:34.809+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 901 2019-09-04T06:27:34.809+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 901 2019-09-04T06:27:34.809+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 904 2019-09-04T06:27:34.809+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 904 2019-09-04T06:27:34.809+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578454, 1), t: 1 }({ ts: Timestamp(1567578454, 1), t: 1 }) 2019-09-04T06:27:34.809+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578454, 1) 2019-09-04T06:27:34.809+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 905 2019-09-04T06:27:34.809+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578454, 1) } } ] } sort: {} projection: {} 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578454, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578454, 1) || First: notFirst: full path: ts 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578454, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578454, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578454, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578454, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:34.809+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 905
2019-09-04T06:27:34.809+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:27:34.809+0000 D3 STORAGE [repl-writer-worker-1] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:34.809+0000 D3 REPL [repl-writer-worker-1] applying op: { ts: Timestamp(1567578454, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578454805), o: { $v: 1, $set: { ping: new Date(1567578454802), up: 2355 } } }, oplog application mode: Secondary
2019-09-04T06:27:34.809+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578454, 1)
2019-09-04T06:27:34.809+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 907
2019-09-04T06:27:34.809+0000 D2 QUERY [repl-writer-worker-1] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:27:34.809+0000 D4 WRITE [repl-writer-worker-1] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:27:34.809+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 907
2019-09-04T06:27:34.809+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:34.809+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578454, 1), t: 1 }({ ts: Timestamp(1567578454, 1), t: 1 })
2019-09-04T06:27:34.809+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578454, 1)
2019-09-04T06:27:34.809+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 906
2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:34.809+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:34.809+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:27:34.809+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 906
2019-09-04T06:27:34.809+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578454, 1)
2019-09-04T06:27:34.809+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 911
2019-09-04T06:27:34.809+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 911
2019-09-04T06:27:34.809+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578454, 1), t: 1 }({ ts: Timestamp(1567578454, 1), t: 1 })
2019-09-04T06:27:34.809+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:27:34.810+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578453, 1), t: 1 }, durableWallTime: new Date(1567578453340), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578453, 1), t: 1 }, lastCommittedWall: new Date(1567578453340), lastOpVisible: { ts: Timestamp(1567578453, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:34.810+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 54 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:04.810+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578453, 1), t: 1 }, durableWallTime: new Date(1567578453340), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578453, 1), t: 1 }, lastCommittedWall: new Date(1567578453340), lastOpVisible: { ts: Timestamp(1567578453, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:34.810+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:04.809+0000
2019-09-04T06:27:34.810+0000 D2 ASIO [RS] Request 54 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) }
2019-09-04T06:27:34.810+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:34.810+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:27:34.810+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:04.810+0000
2019-09-04T06:27:34.810+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578454, 1), t: 1 }
2019-09-04T06:27:34.810+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 55 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:44.810+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578453, 1), t: 1 } }
2019-09-04T06:27:34.810+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:04.810+0000
2019-09-04T06:27:34.810+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:27:34.810+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:27:34.810+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578453, 1), t: 1 }, lastCommittedWall: new Date(1567578453340), lastOpVisible: { ts: Timestamp(1567578453, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:34.810+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 56 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:04.810+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, durableWallTime: new Date(1567578449977), appliedOpTime: { ts: Timestamp(1567578449, 3), t: 1 }, appliedWallTime: new Date(1567578449977), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578453, 1), t: 1 }, lastCommittedWall: new Date(1567578453340), lastOpVisible: { ts: Timestamp(1567578453, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:34.810+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:04.810+0000
2019-09-04T06:27:34.810+0000 D2 ASIO [RS] Request 55 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpApplied: { ts: Timestamp(1567578454, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) }
2019-09-04T06:27:34.810+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpApplied: { ts: Timestamp(1567578454, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:34.810+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:34.810+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:27:34.810+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578454, 1), t: 1 }, 2019-09-04T06:27:34.805+0000
2019-09-04T06:27:34.810+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578454, 1), t: 1 }, 2019-09-04T06:27:34.805+0000
2019-09-04T06:27:34.810+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578449, 1)
2019-09-04T06:27:34.811+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:27:45.656+0000
2019-09-04T06:27:34.811+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:27:45.372+0000
2019-09-04T06:27:34.811+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 57 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:44.811+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578454, 1), t: 1 } }
2019-09-04T06:27:34.811+0000 D3 REPL [conn32] Got notified of new snapshot: { ts: Timestamp(1567578454, 1), t: 1 }, 2019-09-04T06:27:34.805+0000
2019-09-04T06:27:34.811+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:04.810+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn32] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:54.152+0000
2019-09-04T06:27:34.811+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:34.811+0000 D3 REPL [conn43] Got notified of new snapshot: { ts: Timestamp(1567578454, 1), t: 1 }, 2019-09-04T06:27:34.805+0000
2019-09-04T06:27:34.811+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:02.836+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn43] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.753+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn57] Got notified of new snapshot: { ts: Timestamp(1567578454, 1), t: 1 }, 2019-09-04T06:27:34.805+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn57] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:03.480+0000
2019-09-04T06:27:34.811+0000 D2 ASIO [RS] Request 56 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) }
2019-09-04T06:27:34.811+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:34.811+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:27:34.811+0000 D3 REPL [conn56] Got notified of new snapshot: { ts: Timestamp(1567578454, 1), t: 1 }, 2019-09-04T06:27:34.805+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn56] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.963+0000
2019-09-04T06:27:34.811+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:04.810+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn40] Got notified of new snapshot: { ts: Timestamp(1567578454, 1), t: 1 }, 2019-09-04T06:27:34.805+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn40] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.645+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn53] Got notified of new snapshot: { ts: Timestamp(1567578454, 1), t: 1 }, 2019-09-04T06:27:34.805+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn53] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.753+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn44] Got notified of new snapshot: { ts: Timestamp(1567578454, 1), t: 1 }, 2019-09-04T06:27:34.805+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn44] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:55.060+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn55] Got notified of new snapshot: { ts: Timestamp(1567578454, 1), t: 1 }, 2019-09-04T06:27:34.805+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn55] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.926+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn54] Got notified of new snapshot: { ts: Timestamp(1567578454, 1), t: 1 }, 2019-09-04T06:27:34.805+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn54] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.764+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn39] Got notified of new snapshot: { ts: Timestamp(1567578454, 1), t: 1 }, 2019-09-04T06:27:34.805+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn39] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.133+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn41] Got notified of new snapshot: { ts: Timestamp(1567578454, 1), t: 1 }, 2019-09-04T06:27:34.805+0000
2019-09-04T06:27:34.811+0000 D3 REPL [conn41] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.661+0000
2019-09-04T06:27:34.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:34.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:34.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:34.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:34.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:34.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:34.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 58) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:34.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 58 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:27:44.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:34.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:02.836+0000
2019-09-04T06:27:34.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 59) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:34.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 59 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:44.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:34.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:02.836+0000
2019-09-04T06:27:34.836+0000 D2 ASIO [Replication] Request 58 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), opTime: { ts: Timestamp(1567578454, 1), t: 1 }, wallTime: new Date(1567578454805), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) }
2019-09-04T06:27:34.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), opTime: { ts: Timestamp(1567578454, 1), t: 1 }, wallTime: new Date(1567578454805), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:27:34.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:34.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 58) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), opTime: { ts: Timestamp(1567578454, 1), t: 1 }, wallTime: new Date(1567578454805), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) }
2019-09-04T06:27:34.836+0000 D4 ELECTION [replexec-2] Postponing election timeout due to heartbeat from primary
2019-09-04T06:27:34.836+0000 D4 REPL [replexec-2] Canceling election timeout callback at 2019-09-04T06:27:45.372+0000
2019-09-04T06:27:34.836+0000 D2 ASIO [Replication] Request 59 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), opTime: { ts: Timestamp(1567578454, 1), t: 1 }, wallTime: new Date(1567578454805), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) }
2019-09-04T06:27:34.836+0000 D4 ELECTION [replexec-2] Scheduling election timeout callback at 2019-09-04T06:27:46.223+0000
2019-09-04T06:27:34.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:34.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), opTime: { ts: Timestamp(1567578454, 1), t: 1 }, wallTime: new Date(1567578454805), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:34.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:34.836+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:27:34.836+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:36.836Z
2019-09-04T06:27:34.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:04.836+0000
2019-09-04T06:27:34.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:04.836+0000
2019-09-04T06:27:34.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 59) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), opTime: { ts: Timestamp(1567578454, 1), t: 1 }, wallTime: new Date(1567578454805), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) }
2019-09-04T06:27:34.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:27:34.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:36.836Z
2019-09-04T06:27:34.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:04.836+0000
2019-09-04T06:27:34.858+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:34.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:34.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:34.908+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578454, 1)
2019-09-04T06:27:34.959+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:34.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:34.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:34.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:34.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:34.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:34.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:35.018+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.018+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.059+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:35.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, D53A73044B7C93F3B2DF914D8424947C47E19988), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:35.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:27:35.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, D53A73044B7C93F3B2DF914D8424947C47E19988), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:35.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, D53A73044B7C93F3B2DF914D8424947C47E19988), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:35.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), opTime: { ts: Timestamp(1567578454, 1), t: 1 }, wallTime: new Date(1567578454805) }
2019-09-04T06:27:35.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, D53A73044B7C93F3B2DF914D8424947C47E19988), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.119+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.119+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.127+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.127+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.159+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:35.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:27:35.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:27:35.259+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:35.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.359+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:35.459+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:35.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.518+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.518+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.560+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:35.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
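The span above is one complete secondary apply cycle: repl-writer-worker-1 applies the config.mongos ping update at Timestamp(1567578454, 1), rsSync-0 advances appliedThrough and minvalid, the Reporter pushes replSetUpdatePosition upstream to cmodb804, and the oplog fetcher immediately parks another awaitable getMore (maxTimeMS: 5000) on the sync source's local.oplog.rs. The sketch below tails an oplog with the same await semantics from a client; the host and resume timestamp are assumptions, and it only approximates the server's internal fetcher, it is not the same code path.

from pymongo import MongoClient, CursorType
from bson.timestamp import Timestamp

client = MongoClient("mongodb://cmodb804.togewa.com:27019/?directConnection=true")
oplog = client.local["oplog.rs"]

last_ts = Timestamp(1567578454, 1)  # resume point: the last optime we applied
cursor = oplog.find(
    {"ts": {"$gt": last_ts}},
    cursor_type=CursorType.TAILABLE_AWAIT,  # server parks the getMore, as in the log
    max_await_time_ms=5000,                 # mirrors the fetcher's maxTimeMS: 5000
)
for entry in cursor:  # iteration blocks while the server awaits new entries
    # Entries have the shape of the "applying op:" document above (ts, t, op, ns, o, o2).
    print(entry["ts"], entry["op"], entry.get("ns"))
    last_ts = entry["ts"]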
2019-09-04T06:27:35.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.619+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.619+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.627+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.627+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.660+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:35.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.760+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:35.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.808+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:35.809+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:35.809+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578454, 1)
2019-09-04T06:27:35.809+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 949
2019-09-04T06:27:35.809+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:35.809+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:35.809+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 949
2019-09-04T06:27:35.810+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 952
2019-09-04T06:27:35.810+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 952
2019-09-04T06:27:35.810+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578454, 1), t: 1 }({ ts: Timestamp(1567578454, 1), t: 1 })
2019-09-04T06:27:35.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.860+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:35.960+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:35.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:35.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:35.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:36.060+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:36.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.161+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:36.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, D53A73044B7C93F3B2DF914D8424947C47E19988), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:36.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:27:36.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, D53A73044B7C93F3B2DF914D8424947C47E19988), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:36.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, D53A73044B7C93F3B2DF914D8424947C47E19988), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:36.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), opTime: { ts: Timestamp(1567578454, 1), t: 1 }, wallTime: new Date(1567578454805) }
2019-09-04T06:27:36.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, D53A73044B7C93F3B2DF914D8424947C47E19988), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:27:36.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:27:36.261+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:36.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.361+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:36.461+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:36.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.551+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51662 #58 (46 connections now open)
2019-09-04T06:27:36.551+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:36.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:36.552+0000 I NETWORK [conn58] received client metadata from 10.108.2.74:51662 conn58: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:36.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:36.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.561+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:36.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.661+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:36.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.761+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:36.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.809+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:36.809+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:36.809+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578454, 1)
2019-09-04T06:27:36.809+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 985
2019-09-04T06:27:36.809+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:36.809+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:36.809+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 985
2019-09-04T06:27:36.810+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 988
2019-09-04T06:27:36.810+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 988
2019-09-04T06:27:36.810+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578454, 1), t: 1 }({ ts: Timestamp(1567578454, 1), t: 1 })
2019-09-04T06:27:36.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:36.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:36.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 60) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:36.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 60 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:27:46.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:36.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:36.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 61) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:36.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 61 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:46.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:36.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:04.836+0000
2019-09-04T06:27:36.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:04.836+0000
2019-09-04T06:27:36.836+0000 D2 ASIO [Replication] Request 60 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), opTime: { ts: Timestamp(1567578454, 1), t: 1 }, wallTime: new Date(1567578454805), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) }
2019-09-04T06:27:36.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), opTime: { ts: Timestamp(1567578454, 1), t: 1 }, wallTime: new Date(1567578454805), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:27:36.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:36.836+0000 D2 ASIO [Replication] Request 61 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), opTime: { ts: Timestamp(1567578454, 1), t: 1 }, wallTime: new Date(1567578454805), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) }
2019-09-04T06:27:36.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 60) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), opTime: { ts: Timestamp(1567578454, 1), t: 1 }, wallTime: new Date(1567578454805), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) }
2019-09-04T06:27:36.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), opTime: { ts: Timestamp(1567578454, 1), t: 1 }, wallTime: new Date(1567578454805), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:36.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:27:36.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:27:46.223+0000
2019-09-04T06:27:36.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:27:48.280+0000
2019-09-04T06:27:36.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:27:36.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:38.836Z
2019-09-04T06:27:36.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:36.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 61) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), opTime: { ts: Timestamp(1567578454, 1), t: 1 }, wallTime: new Date(1567578454805), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578454, 1) }
2019-09-04T06:27:36.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:27:36.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:38.836Z
2019-09-04T06:27:36.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:36.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:06.836+0000
2019-09-04T06:27:36.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:06.836+0000
2019-09-04T06:27:36.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:06.836+0000
2019-09-04T06:27:36.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:36.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1,
$db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:36.861+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:36.962+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:36.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:36.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:36.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:36.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:36.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:36.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:37.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, D53A73044B7C93F3B2DF914D8424947C47E19988), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:37.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:27:37.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, D53A73044B7C93F3B2DF914D8424947C47E19988), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:37.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, D53A73044B7C93F3B2DF914D8424947C47E19988), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:37.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), opTime: { ts: Timestamp(1567578454, 1), t: 1 }, wallTime: new Date(1567578454805) } 2019-09-04T06:27:37.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578454, 1), signature: { hash: BinData(0, D53A73044B7C93F3B2DF914D8424947C47E19988), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.062+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:27:37.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.065+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.162+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:37.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:37.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:27:37.262+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:37.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.362+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:37.462+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:37.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.490+0000 D2 COMMAND [conn14] run command 
admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.562+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:37.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.662+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:37.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.763+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:37.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.809+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:37.809+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:37.809+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578454, 1) 2019-09-04T06:27:37.809+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1021 2019-09-04T06:27:37.809+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 
1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:37.809+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:37.809+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1021 2019-09-04T06:27:37.810+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1024 2019-09-04T06:27:37.810+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1024 2019-09-04T06:27:37.810+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578454, 1), t: 1 }({ ts: Timestamp(1567578454, 1), t: 1 }) 2019-09-04T06:27:37.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.863+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:37.963+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:37.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:37.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:37.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:38.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.063+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:38.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.065+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.161+0000 I NETWORK [listener] 
connection accepted from 10.108.2.48:41974 #59 (47 connections now open) 2019-09-04T06:27:38.161+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:38.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:38.161+0000 I NETWORK [conn59] received client metadata from 10.108.2.48:41974 conn59: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:38.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:38.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.163+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:38.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.227+0000 D2 ASIO [RS] Request 57 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578458, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578458225), o: { $v: 1, $set: { ping: new Date(1567578458225) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpApplied: { ts: Timestamp(1567578458, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 1) } 2019-09-04T06:27:38.227+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578458, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578458225), o: { $v: 1, $set: { ping: new Date(1567578458225) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpApplied: { ts: Timestamp(1567578458, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578454, 1), $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:38.227+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:38.227+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578458, 1) and ending at ts: Timestamp(1567578458, 1) 2019-09-04T06:27:38.227+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:27:48.280+0000 2019-09-04T06:27:38.227+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:27:49.536+0000 2019-09-04T06:27:38.227+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:38.228+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578458, 1), t: 1 } 2019-09-04T06:27:38.228+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:38.228+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:38.228+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578454, 1) 2019-09-04T06:27:38.228+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1041 2019-09-04T06:27:38.228+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:38.228+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:38.228+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1041 2019-09-04T06:27:38.228+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:38.228+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:27:38.228+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs 
@ RecordId(10) 2019-09-04T06:27:38.228+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578458, 1) } 2019-09-04T06:27:38.228+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578454, 1) 2019-09-04T06:27:38.228+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1044 2019-09-04T06:27:38.228+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:38.228+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:38.228+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1044 2019-09-04T06:27:38.228+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1025 2019-09-04T06:27:38.228+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:06.836+0000 2019-09-04T06:27:38.228+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1025 2019-09-04T06:27:38.228+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1047 2019-09-04T06:27:38.228+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1047 2019-09-04T06:27:38.228+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:38.228+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 1049 2019-09-04T06:27:38.228+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578458, 1) 2019-09-04T06:27:38.228+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578458, 1) 2019-09-04T06:27:38.228+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 1049 2019-09-04T06:27:38.228+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:38.228+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:27:38.228+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1048 2019-09-04T06:27:38.228+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1048 2019-09-04T06:27:38.228+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1051 2019-09-04T06:27:38.228+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1051 2019-09-04T06:27:38.228+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578458, 1), t: 1 }({ ts: Timestamp(1567578458, 1), t: 1 }) 2019-09-04T06:27:38.228+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578458, 1) 2019-09-04T06:27:38.228+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1052 2019-09-04T06:27:38.228+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578458, 1) } } ] } sort: {} projection: {} 2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Subplanner: 
planning child 0 of 2
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578458, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Rated tree:
$and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578458, 1) || First: notFirst: full path: ts
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578458, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Rated tree:
t $lt 1 || First: notFirst: full path: t
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578458, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Rated tree:
$or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578458, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:38.228+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578458, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:38.228+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1052
2019-09-04T06:27:38.228+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:27:38.228+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:38.228+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578458, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578458225), o: { $v: 1, $set: { ping: new Date(1567578458225) } } }, oplog application mode: Secondary
2019-09-04T06:27:38.228+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578458, 1)
2019-09-04T06:27:38.228+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 1054
2019-09-04T06:27:38.228+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "ConfigServer" }
2019-09-04T06:27:38.229+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:27:38.229+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 1054
2019-09-04T06:27:38.229+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:38.229+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578458, 1), t: 1 }({ ts: Timestamp(1567578458, 1), t: 1 })
2019-09-04T06:27:38.229+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578458, 1)
2019-09-04T06:27:38.229+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1053
2019-09-04T06:27:38.229+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:38.229+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:38.229+0000 D5 QUERY [rsSync-0] Rated tree:
$and
2019-09-04T06:27:38.229+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:38.229+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:38.229+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:27:38.229+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1053 2019-09-04T06:27:38.229+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578458, 1) 2019-09-04T06:27:38.229+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1057 2019-09-04T06:27:38.229+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1057 2019-09-04T06:27:38.229+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578458, 1), t: 1 }({ ts: Timestamp(1567578458, 1), t: 1 }) 2019-09-04T06:27:38.229+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:38.229+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578458, 1), t: 1 }, appliedWallTime: new Date(1567578458225), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:38.229+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 62 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:08.229+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578458, 1), t: 1 }, appliedWallTime: new Date(1567578458225), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578454, 1), t: 1 }, lastCommittedWall: new Date(1567578454805), lastOpVisible: { ts: Timestamp(1567578454, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:38.229+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.229+0000 2019-09-04T06:27:38.229+0000 D2 ASIO [RS] Request 62 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 1), t: 1 }, lastCommittedWall: new Date(1567578458225), lastOpVisible: { ts: Timestamp(1567578458, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 1), $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 1) } 2019-09-04T06:27:38.229+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 1), t: 1 }, lastCommittedWall: new Date(1567578458225), lastOpVisible: { ts: Timestamp(1567578458, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 1), $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:38.229+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:38.229+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.229+0000 2019-09-04T06:27:38.230+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578458, 1), t: 1 } 2019-09-04T06:27:38.230+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 63 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:48.230+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578454, 1), t: 1 } } 2019-09-04T06:27:38.230+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.229+0000 2019-09-04T06:27:38.230+0000 D2 ASIO [RS] Request 63 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 1), t: 1 }, lastCommittedWall: new Date(1567578458225), lastOpVisible: { ts: Timestamp(1567578458, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578458, 1), t: 1 }, lastCommittedWall: new Date(1567578458225), lastOpApplied: { ts: Timestamp(1567578458, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 1), $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 1) } 2019-09-04T06:27:38.230+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 1), t: 1 }, lastCommittedWall: new Date(1567578458225), lastOpVisible: { ts: Timestamp(1567578458, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578458, 1), t: 1 }, lastCommittedWall: new 
Date(1567578458225), lastOpApplied: { ts: Timestamp(1567578458, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 1), $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:38.230+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:38.230+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:27:38.230+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578458, 1), t: 1 }, 2019-09-04T06:27:38.225+0000 2019-09-04T06:27:38.230+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578458, 1), t: 1 }, 2019-09-04T06:27:38.225+0000 2019-09-04T06:27:38.230+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578453, 1) 2019-09-04T06:27:38.230+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:27:49.536+0000 2019-09-04T06:27:38.230+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:27:48.342+0000 2019-09-04T06:27:38.230+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:38.230+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 64 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:48.230+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578458, 1), t: 1 } } 2019-09-04T06:27:38.230+0000 D3 REPL [conn56] Got notified of new snapshot: { ts: Timestamp(1567578458, 1), t: 1 }, 2019-09-04T06:27:38.225+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn56] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.963+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn43] Got notified of new snapshot: { ts: Timestamp(1567578458, 1), t: 1 }, 2019-09-04T06:27:38.225+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn43] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.753+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn44] Got notified of new snapshot: { ts: Timestamp(1567578458, 1), t: 1 }, 2019-09-04T06:27:38.225+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn44] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:55.060+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn55] Got notified of new snapshot: { ts: Timestamp(1567578458, 1), t: 1 }, 2019-09-04T06:27:38.225+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn55] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.926+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn54] Got notified of new snapshot: { ts: Timestamp(1567578458, 1), t: 1 }, 2019-09-04T06:27:38.225+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn54] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.764+0000 2019-09-04T06:27:38.230+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:06.836+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn41] Got notified of new snapshot: { ts: Timestamp(1567578458, 1), t: 1 }, 2019-09-04T06:27:38.225+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn41] waitUntilOpTime: waiting for a new snapshot 
until 2019-09-04T06:27:51.661+0000 2019-09-04T06:27:38.230+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.229+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn32] Got notified of new snapshot: { ts: Timestamp(1567578458, 1), t: 1 }, 2019-09-04T06:27:38.225+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn32] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:54.152+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn40] Got notified of new snapshot: { ts: Timestamp(1567578458, 1), t: 1 }, 2019-09-04T06:27:38.225+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn40] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.645+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn57] Got notified of new snapshot: { ts: Timestamp(1567578458, 1), t: 1 }, 2019-09-04T06:27:38.225+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn57] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:03.480+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn53] Got notified of new snapshot: { ts: Timestamp(1567578458, 1), t: 1 }, 2019-09-04T06:27:38.225+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn53] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.753+0000 2019-09-04T06:27:38.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:38.230+0000 D3 REPL [conn39] Got notified of new snapshot: { ts: Timestamp(1567578458, 1), t: 1 }, 2019-09-04T06:27:38.225+0000 2019-09-04T06:27:38.230+0000 D3 REPL [conn39] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.133+0000 2019-09-04T06:27:38.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:27:38.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:38.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:38.230+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), opTime: { ts: Timestamp(1567578458, 1), t: 1 }, wallTime: new Date(1567578458225) } 2019-09-04T06:27:38.230+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 
6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.234+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:27:38.234+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:38.234+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578458, 1), t: 1 }, durableWallTime: new Date(1567578458225), appliedOpTime: { ts: Timestamp(1567578458, 1), t: 1 }, appliedWallTime: new Date(1567578458225), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 1), t: 1 }, lastCommittedWall: new Date(1567578458225), lastOpVisible: { ts: Timestamp(1567578458, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:38.234+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 65 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:08.234+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578458, 1), t: 1 }, durableWallTime: new Date(1567578458225), appliedOpTime: { ts: Timestamp(1567578458, 1), t: 1 }, appliedWallTime: new Date(1567578458225), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 1), t: 1 }, lastCommittedWall: new Date(1567578458225), lastOpVisible: { ts: Timestamp(1567578458, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:38.234+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.229+0000 2019-09-04T06:27:38.234+0000 D2 ASIO [RS] Request 65 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 1), t: 1 }, lastCommittedWall: new Date(1567578458225), lastOpVisible: { ts: Timestamp(1567578458, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 1), $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 1) } 2019-09-04T06:27:38.234+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 
1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 1), t: 1 }, lastCommittedWall: new Date(1567578458225), lastOpVisible: { ts: Timestamp(1567578458, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 1), $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:38.234+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:38.234+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.229+0000 2019-09-04T06:27:38.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:38.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:27:38.263+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:38.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.328+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578458, 1) 2019-09-04T06:27:38.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.363+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:38.374+0000 D2 ASIO [RS] Request 64 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578458, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578458365), o: { $v: 1, $set: { ping: new Date(1567578458365) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 1), t: 1 }, lastCommittedWall: new Date(1567578458225), lastOpVisible: { ts: Timestamp(1567578458, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578458, 1), t: 1 }, lastCommittedWall: 
new Date(1567578458225), lastOpApplied: { ts: Timestamp(1567578458, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 1), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } 2019-09-04T06:27:38.374+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578458, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578458365), o: { $v: 1, $set: { ping: new Date(1567578458365) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 1), t: 1 }, lastCommittedWall: new Date(1567578458225), lastOpVisible: { ts: Timestamp(1567578458, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578458, 1), t: 1 }, lastCommittedWall: new Date(1567578458225), lastOpApplied: { ts: Timestamp(1567578458, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 1), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:38.374+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:38.374+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578458, 2) and ending at ts: Timestamp(1567578458, 2) 2019-09-04T06:27:38.374+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:27:48.342+0000 2019-09-04T06:27:38.374+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:27:49.191+0000 2019-09-04T06:27:38.374+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:38.374+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:06.836+0000 2019-09-04T06:27:38.374+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578458, 2), t: 1 } 2019-09-04T06:27:38.374+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:38.374+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:38.374+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578458, 1) 2019-09-04T06:27:38.374+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1068 2019-09-04T06:27:38.374+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:38.374+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: 
"local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:38.374+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1068 2019-09-04T06:27:38.374+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:38.374+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:27:38.374+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:38.374+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578458, 1) 2019-09-04T06:27:38.374+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1071 2019-09-04T06:27:38.374+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578458, 2) } 2019-09-04T06:27:38.374+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:38.374+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:38.374+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1071 2019-09-04T06:27:38.374+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1058 2019-09-04T06:27:38.375+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1058 2019-09-04T06:27:38.375+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1074 2019-09-04T06:27:38.375+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1074 2019-09-04T06:27:38.375+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:38.375+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 1076 2019-09-04T06:27:38.375+0000 D4 STORAGE [repl-writer-worker-7] inserting record with timestamp Timestamp(1567578458, 2) 2019-09-04T06:27:38.375+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578458, 2) 2019-09-04T06:27:38.375+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 1076 2019-09-04T06:27:38.375+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:38.375+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:27:38.375+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1075 2019-09-04T06:27:38.375+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1075 2019-09-04T06:27:38.375+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1078 2019-09-04T06:27:38.375+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1078 2019-09-04T06:27:38.375+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578458, 2), t: 1 }({ ts: Timestamp(1567578458, 2), t: 1 }) 2019-09-04T06:27:38.375+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578458, 2) 2019-09-04T06:27:38.375+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1079 
2019-09-04T06:27:38.375+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578458, 2) } } ] } sort: {} projection: {} 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578458, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578458, 2) || First: notFirst: full path: ts 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578458, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578458, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578458, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:38.375+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578458, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:38.375+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1079 2019-09-04T06:27:38.375+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:38.375+0000 D3 STORAGE [repl-writer-worker-11] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:38.375+0000 D3 REPL [repl-writer-worker-11] applying op: { ts: Timestamp(1567578458, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578458365), o: { $v: 1, $set: { ping: new Date(1567578458365) } } }, oplog application mode: Secondary 2019-09-04T06:27:38.375+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578458, 2) 2019-09-04T06:27:38.375+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 1081 2019-09-04T06:27:38.375+0000 D2 QUERY [repl-writer-worker-11] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" } 2019-09-04T06:27:38.375+0000 D4 WRITE [repl-writer-worker-11] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:27:38.375+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 1081 2019-09-04T06:27:38.375+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:38.375+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578458, 2), t: 1 }({ ts: Timestamp(1567578458, 2), t: 1 }) 2019-09-04T06:27:38.375+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578458, 2) 2019-09-04T06:27:38.376+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1080 2019-09-04T06:27:38.376+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:27:38.376+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:38.376+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:27:38.376+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:38.376+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:38.376+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:27:38.376+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1080 2019-09-04T06:27:38.376+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578458, 2) 2019-09-04T06:27:38.376+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1085 2019-09-04T06:27:38.376+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1085 2019-09-04T06:27:38.376+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578458, 2), t: 1 }({ ts: Timestamp(1567578458, 2), t: 1 }) 2019-09-04T06:27:38.376+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:38.376+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578458, 1), t: 1 }, durableWallTime: new Date(1567578458225), appliedOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, appliedWallTime: new Date(1567578458365), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 1), t: 1 }, lastCommittedWall: new Date(1567578458225), lastOpVisible: { ts: Timestamp(1567578458, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:38.376+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 66 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:08.376+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578458, 1), t: 1 }, durableWallTime: new Date(1567578458225), appliedOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, appliedWallTime: new Date(1567578458365), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 2, cfgver: 2 } ], 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 1), t: 1 }, lastCommittedWall: new Date(1567578458225), lastOpVisible: { ts: Timestamp(1567578458, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:38.376+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.376+0000 2019-09-04T06:27:38.376+0000 D2 ASIO [RS] Request 66 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } 2019-09-04T06:27:38.376+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:38.376+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:38.376+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.376+0000 2019-09-04T06:27:38.376+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578458, 2), t: 1 } 2019-09-04T06:27:38.376+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 67 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:48.376+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578458, 1), t: 1 } } 2019-09-04T06:27:38.376+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.376+0000 2019-09-04T06:27:38.377+0000 D2 ASIO [RS] Request 67 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpApplied: { ts: Timestamp(1567578458, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } 2019-09-04T06:27:38.377+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpApplied: { ts: Timestamp(1567578458, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:38.377+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:38.377+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:27:38.377+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578458, 2), t: 1 }, 2019-09-04T06:27:38.365+0000 2019-09-04T06:27:38.377+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578458, 2), t: 1 }, 2019-09-04T06:27:38.365+0000 2019-09-04T06:27:38.377+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578453, 2) 2019-09-04T06:27:38.377+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:27:49.191+0000 2019-09-04T06:27:38.377+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:27:48.561+0000 2019-09-04T06:27:38.377+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:38.377+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:06.836+0000 2019-09-04T06:27:38.377+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 68 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:48.377+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578458, 2), t: 1 } } 2019-09-04T06:27:38.377+0000 D3 REPL [conn54] Got notified of new snapshot: { ts: Timestamp(1567578458, 2), t: 1 }, 2019-09-04T06:27:38.365+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn54] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.764+0000 2019-09-04T06:27:38.377+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.376+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn32] Got notified of new snapshot: { ts: Timestamp(1567578458, 2), t: 1 }, 2019-09-04T06:27:38.365+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn32] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:54.152+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn55] Got notified of new snapshot: { ts: Timestamp(1567578458, 
2), t: 1 }, 2019-09-04T06:27:38.365+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn55] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.926+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn39] Got notified of new snapshot: { ts: Timestamp(1567578458, 2), t: 1 }, 2019-09-04T06:27:38.365+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn39] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.133+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn53] Got notified of new snapshot: { ts: Timestamp(1567578458, 2), t: 1 }, 2019-09-04T06:27:38.365+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn53] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.753+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn57] Got notified of new snapshot: { ts: Timestamp(1567578458, 2), t: 1 }, 2019-09-04T06:27:38.365+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn57] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:03.480+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn56] Got notified of new snapshot: { ts: Timestamp(1567578458, 2), t: 1 }, 2019-09-04T06:27:38.365+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn56] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.963+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn44] Got notified of new snapshot: { ts: Timestamp(1567578458, 2), t: 1 }, 2019-09-04T06:27:38.365+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn44] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:55.060+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn43] Got notified of new snapshot: { ts: Timestamp(1567578458, 2), t: 1 }, 2019-09-04T06:27:38.365+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn43] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.753+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn40] Got notified of new snapshot: { ts: Timestamp(1567578458, 2), t: 1 }, 2019-09-04T06:27:38.365+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn40] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.645+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn41] Got notified of new snapshot: { ts: Timestamp(1567578458, 2), t: 1 }, 2019-09-04T06:27:38.365+0000 2019-09-04T06:27:38.377+0000 D3 REPL [conn41] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.661+0000 2019-09-04T06:27:38.381+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:27:38.381+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:38.381+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), appliedOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, appliedWallTime: new Date(1567578458365), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, 
configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:38.381+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 69 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:08.381+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), appliedOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, appliedWallTime: new Date(1567578458365), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, durableWallTime: new Date(1567578454805), appliedOpTime: { ts: Timestamp(1567578454, 1), t: 1 }, appliedWallTime: new Date(1567578454805), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:38.381+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.376+0000 2019-09-04T06:27:38.381+0000 D2 ASIO [RS] Request 69 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } 2019-09-04T06:27:38.381+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:38.381+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:38.381+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.376+0000 2019-09-04T06:27:38.463+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:38.474+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578458, 2) 2019-09-04T06:27:38.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: 
"admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.564+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:38.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.565+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.664+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:38.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.764+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:38.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.836+0000 D3 
EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:38.836+0000 D3 REPL [replexec-2] memberData lastupdate is: 2019-09-04T06:27:37.061+0000 2019-09-04T06:27:38.836+0000 D3 REPL [replexec-2] memberData lastupdate is: 2019-09-04T06:27:38.230+0000 2019-09-04T06:27:38.836+0000 D3 REPL [replexec-2] stalest member MemberId(0) date: 2019-09-04T06:27:37.061+0000 2019-09-04T06:27:38.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:38.836+0000 D3 REPL [replexec-2] scheduling next check at 2019-09-04T06:27:47.061+0000 2019-09-04T06:27:38.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:38.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.836+0000 2019-09-04T06:27:38.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 70) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:38.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 70 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:27:48.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:38.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.836+0000 2019-09-04T06:27:38.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 71) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:38.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 71 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:48.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:38.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.836+0000 2019-09-04T06:27:38.836+0000 D2 ASIO [Replication] Request 70 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } 2019-09-04T06:27:38.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new 
Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:27:38.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:38.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 70) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } 2019-09-04T06:27:38.836+0000 D2 ASIO [Replication] Request 71 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } 2019-09-04T06:27:38.836+0000 D4 ELECTION [replexec-2] Postponing election timeout due to heartbeat from primary 2019-09-04T06:27:38.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), 
primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:38.836+0000 D4 REPL [replexec-2] Canceling election timeout callback at 2019-09-04T06:27:48.561+0000 2019-09-04T06:27:38.836+0000 D4 ELECTION [replexec-2] Scheduling election timeout callback at 2019-09-04T06:27:49.954+0000 2019-09-04T06:27:38.836+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:27:38.836+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:40.836Z 2019-09-04T06:27:38.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:38.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 71) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } 2019-09-04T06:27:38.836+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:27:38.836+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:40.836Z 2019-09-04T06:27:38.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:38.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.836+0000 2019-09-04T06:27:38.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.836+0000 2019-09-04T06:27:38.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.836+0000 2019-09-04T06:27:38.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.864+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:38.964+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:38.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:38.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:38.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 
2019-09-04T06:27:38.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:39.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:39.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:27:39.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:39.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:39.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365) } 2019-09-04T06:27:39.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.064+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:39.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.065+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:27:39.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.164+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:39.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.203+0000 I NETWORK [listener] connection accepted from 10.108.2.49:53270 #60 (48 connections now open) 2019-09-04T06:27:39.203+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:39.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:39.203+0000 I NETWORK [conn60] received client metadata from 10.108.2.49:53270 conn60: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:39.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:39.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:39.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:27:39.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.265+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:39.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.365+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:39.375+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:39.375+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:39.375+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578458, 2) 2019-09-04T06:27:39.375+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1122 2019-09-04T06:27:39.375+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:39.375+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:39.375+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1122 2019-09-04T06:27:39.376+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1125 2019-09-04T06:27:39.376+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1125 2019-09-04T06:27:39.376+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578458, 2), t: 1 }({ ts: Timestamp(1567578458, 2), t: 1 }) 2019-09-04T06:27:39.465+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:39.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.565+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:39.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.649+0000 I NETWORK [listener] connection accepted from 10.108.2.40:38646 #61 (49 connections now open) 2019-09-04T06:27:39.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:39.650+0000 D2 COMMAND [conn61] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:39.650+0000 I NETWORK [conn61] received client metadata from 10.108.2.40:38646 conn61: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:39.650+0000 I COMMAND [conn61] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:39.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.660+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52032 #62 (50 connections now open) 2019-09-04T06:27:39.660+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:39.660+0000 D2 COMMAND [conn62] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", 
"zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:39.660+0000 I NETWORK [conn62] received client metadata from 10.108.2.73:52032 conn62: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:39.660+0000 I COMMAND [conn62] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:39.661+0000 D2 COMMAND [conn62] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:27:39.661+0000 D1 REPL [conn62] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578458, 2), t: 1 } 2019-09-04T06:27:39.661+0000 D3 REPL [conn62] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.671+0000 2019-09-04T06:27:39.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.665+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:39.665+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36534 #63 (51 connections now open) 2019-09-04T06:27:39.665+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:39.666+0000 D2 COMMAND [conn63] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:39.666+0000 I NETWORK [conn63] received client metadata from 10.108.2.55:36534 conn63: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:39.666+0000 I COMMAND [conn63] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 
3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:39.666+0000 D2 COMMAND [conn63] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578451, 1), signature: { hash: BinData(0, DEC934211A5E877D872CD481B087C488B1F8FB5C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:27:39.666+0000 D1 REPL [conn63] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578458, 2), t: 1 } 2019-09-04T06:27:39.666+0000 D3 REPL [conn63] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:27:39.666+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51666 #64 (52 connections now open) 2019-09-04T06:27:39.666+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:39.666+0000 D2 COMMAND [conn64] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:39.666+0000 I NETWORK [conn64] received client metadata from 10.108.2.74:51666 conn64: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:39.666+0000 I COMMAND [conn64] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:39.666+0000 D2 COMMAND [conn64] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578455, 1), signature: { hash: BinData(0, B608BC040ACD32DAE9997A8CBB1A717399CC35A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:27:39.666+0000 D1 REPL [conn64] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578458, 2), t: 1 } 
2019-09-04T06:27:39.666+0000 D3 REPL [conn64] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:27:39.668+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45620 #65 (53 connections now open) 2019-09-04T06:27:39.668+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:39.668+0000 D2 COMMAND [conn65] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:39.669+0000 I NETWORK [conn65] received client metadata from 10.108.2.72:45620 conn65: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:39.669+0000 I COMMAND [conn65] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:39.669+0000 D2 COMMAND [conn65] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:27:39.669+0000 D1 REPL [conn65] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578458, 2), t: 1 } 2019-09-04T06:27:39.669+0000 D3 REPL [conn65] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.679+0000 2019-09-04T06:27:39.672+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35590 #66 (54 connections now open) 2019-09-04T06:27:39.672+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:39.672+0000 D2 COMMAND [conn66] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:39.672+0000 I NETWORK [conn66] received client metadata from 10.108.2.56:35590 conn66: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: 
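
Note the shape of every waitUntilOpTime pair above: the shards ask for readConcern majority with afterOpTime term 92, while this node's newest majority snapshot carries term 1 and a much newer timestamp. The requesters evidently remember a different incarnation of this config replica set, and each read parks in "waiting for a new snapshot" with a deadline of roughly arrival time plus the request's maxTimeMS of 30000 ms (06:27:39.666 + ~30 s = 06:28:09.676). A hedged sketch of issuing such an optime-gated read yourself, with values copied from the log (afterOpTime is primarily an internal option used against config servers):

    // Sketch: a majority read gated on an explicit optime, as the shards do here.
    db.getSiblingDB("admin").runCommand({
      find: "system.keys",
      filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } },
      sort: { expiresAt: 1 },
      readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: NumberLong(92) } },
      maxTimeMS: 30000
    })
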
"x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:39.672+0000 I COMMAND [conn66] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:39.676+0000 D2 COMMAND [conn66] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578451, 1), signature: { hash: BinData(0, DEC934211A5E877D872CD481B087C488B1F8FB5C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:27:39.676+0000 D1 REPL [conn66] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578458, 2), t: 1 } 2019-09-04T06:27:39.676+0000 D3 REPL [conn66] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.686+0000 2019-09-04T06:27:39.683+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34142 #67 (55 connections now open) 2019-09-04T06:27:39.683+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:39.684+0000 D2 COMMAND [conn67] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:39.684+0000 I NETWORK [conn67] received client metadata from 10.108.2.57:34142 conn67: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:39.684+0000 I COMMAND [conn67] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:39.688+0000 D2 COMMAND [conn67] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 
1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:27:39.688+0000 D1 REPL [conn67] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578458, 2), t: 1 } 2019-09-04T06:27:39.689+0000 D3 REPL [conn67] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.698+0000 2019-09-04T06:27:39.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:39.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:39.717+0000 I NETWORK [listener] connection accepted from 10.108.2.61:37822 #68 (56 connections now open) 2019-09-04T06:27:39.717+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:39.717+0000 D2 COMMAND [conn68] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:39.717+0000 I NETWORK [conn68] received client metadata from 10.108.2.61:37822 conn68: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:39.717+0000 I COMMAND [conn68] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:39.720+0000 D2 COMMAND [conn68] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578451, 1), signature: { hash: BinData(0, DEC934211A5E877D872CD481B087C488B1F8FB5C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:27:39.720+0000 D1 REPL [conn68] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578458, 2), t: 1 } 2019-09-04T06:27:39.720+0000 D3 REPL [conn68] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.730+0000 2019-09-04T06:27:39.765+0000 D4 STORAGE 
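
Each of these handshakes also advertises compression: [ "snappy", "zstd", "zlib" ]; wire compression is negotiated per connection, and ordinary clients can opt in the same way through the connection string. A hypothetical example, reusing this deployment's host name:

    // Sketch: requesting the same compressors an internal client advertises.
    // mongo "mongodb://cmodb803.togewa.com:27019/?compressors=snappy,zstd,zlib"
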
2019-09-04T06:27:39.765+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:39.766+0000 D2 COMMAND [conn61] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578458, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578458, 2), t: 1 } }, $db: "config" }
2019-09-04T06:27:39.766+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578458, 2), t: 1 } } }
2019-09-04T06:27:39.766+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:27:39.766+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578458, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578458, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578458, 2)
2019-09-04T06:27:39.766+0000 D5 QUERY [conn61] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:39.766+0000 D5 QUERY [conn61] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:27:39.766+0000 D5 QUERY [conn61] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:27:39.766+0000 D5 QUERY [conn61] Rated tree: $and
2019-09-04T06:27:39.766+0000 D5 QUERY [conn61] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:39.766+0000 D5 QUERY [conn61] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:39.766+0000 D2 QUERY [conn61] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:27:39.766+0000 D3 STORAGE [conn61] WT begin_transaction for snapshot id 1152
2019-09-04T06:27:39.766+0000 D3 STORAGE [conn61] WT rollback_transaction for snapshot id 1152
2019-09-04T06:27:39.766+0000 I COMMAND [conn61] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578458, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578458, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
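
The D5 QUERY lines are the planner's trace for conn61's find on config.shards: an empty filter rates no usable indexes (host_1 and _id_ both inapplicable), so the only candidate plan is a collection scan, confirmed by planSummary: COLLSCAN with docsExamined:3 and nreturned:3 (three shards). The same plan can be inspected on demand:

    // Sketch: reproducing the planner's choice for the shards query.
    db.getSiblingDB("config").shards.find({}).explain("queryPlanner")
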
payload: "xxx", $db: "admin" } 2019-09-04T06:27:40.018+0000 I COMMAND [conn16] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:27:40.038+0000 D2 COMMAND [conn16] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:27:40.038+0000 D1 ACCESS [conn16] Returning user dba_root@admin from cache 2019-09-04T06:27:40.038+0000 I ACCESS [conn16] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45438 2019-09-04T06:27:40.038+0000 I COMMAND [conn16] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:27:40.038+0000 I NETWORK [listener] connection accepted from 10.108.2.40:38648 #69 (57 connections now open) 2019-09-04T06:27:40.039+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:40.039+0000 D2 COMMAND [conn16] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:27:40.039+0000 I COMMAND [conn16] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:27:40.040+0000 D2 COMMAND [conn16] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:27:40.040+0000 D2 COMMAND [conn16] command: replSetGetStatus 2019-09-04T06:27:40.040+0000 I COMMAND [conn16] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:27:40.040+0000 D2 COMMAND [conn16] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:27:40.040+0000 D3 STORAGE [conn16] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:40.040+0000 D5 QUERY [conn16] Beginning planning... 
2019-09-04T06:27:40.039+0000 D2 COMMAND [conn16] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:27:40.039+0000 I COMMAND [conn16] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:40.040+0000 D2 COMMAND [conn16] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:27:40.040+0000 D2 COMMAND [conn16] command: replSetGetStatus
2019-09-04T06:27:40.040+0000 I COMMAND [conn16] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:27:40.040+0000 D2 COMMAND [conn16] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:27:40.040+0000 D3 STORAGE [conn16] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:40.040+0000 D5 QUERY [conn16] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:40.040+0000 D5 QUERY [conn16] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:27:40.040+0000 D5 QUERY [conn16] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:27:40.040+0000 D5 QUERY [conn16] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:27:40.040+0000 D5 QUERY [conn16] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:27:40.040+0000 D5 QUERY [conn16] Predicate over field 'jumbo'
2019-09-04T06:27:40.040+0000 D5 QUERY [conn16] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:27:40.040+0000 D5 QUERY [conn16] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:40.040+0000 D5 QUERY [conn16] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:40.040+0000 D2 QUERY [conn16] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:27:40.040+0000 D3 STORAGE [conn16] begin_transaction on local snapshot Timestamp(1567578458, 2)
2019-09-04T06:27:40.040+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1166
2019-09-04T06:27:40.041+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1166
2019-09-04T06:27:40.041+0000 I COMMAND [conn16] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:40.041+0000 D2 COMMAND [conn16] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:27:40.041+0000 I COMMAND [conn16] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
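
conn16 now looks like a monitoring probe: serverStatus, replSetGetStatus, and a count of jumbo chunks, all sent with $readPreference secondaryPreferred. The count also plans as a COLLSCAN, since config.chunks has no index with jumbo as a leading field (only ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1, and _id_). The probe's jumbo check, as a shell one-liner:

    // Sketch: the monitor's jumbo-chunk count (count() matches the log;
    // countDocuments() is the modern spelling).
    db.getSiblingDB("config").chunks.count({ jumbo: true })
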
2019-09-04T06:27:40.041+0000 D2 COMMAND [conn16] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:27:40.041+0000 D3 STORAGE [conn16] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:40.041+0000 D5 QUERY [conn16] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:27:40.041+0000 D5 QUERY [conn16] Forcing a table scan due to hinted $natural
2019-09-04T06:27:40.041+0000 D2 QUERY [conn16] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:27:40.041+0000 D3 STORAGE [conn16] begin_transaction on local snapshot Timestamp(1567578458, 2)
2019-09-04T06:27:40.041+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1169
2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1169
2019-09-04T06:27:40.042+0000 I COMMAND [conn16] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:40.042+0000 D2 COMMAND [conn16] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:40.042+0000 D5 QUERY [conn16] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:27:40.042+0000 D5 QUERY [conn16] Forcing a table scan due to hinted $natural
2019-09-04T06:27:40.042+0000 D2 QUERY [conn16] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] begin_transaction on local snapshot Timestamp(1567578458, 2)
2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1171
2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1171
2019-09-04T06:27:40.042+0000 I COMMAND [conn16] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:40.042+0000 D2 COMMAND [conn16] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:27:40.042+0000 D2 QUERY [conn16] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:27:40.042+0000 I COMMAND [conn16] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2019-09-04T06:27:40.042+0000 D2 COMMAND [conn16] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:27:40.042+0000 D2 COMMAND [conn16] command: listDatabases
2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1174
2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
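
The paired oplog reads (first entry with sort { $natural: 1 }, last entry with { $natural: -1 }, both limit 1) are how tools measure the replication window; the follow-up probe of local.oplog.$main plans as EOF because that namespace only exists on the long-obsolete master/slave replication scheme. A hedged sketch of the window computation (ts.t is the seconds part of the Timestamp in the legacy shell):

    // Sketch: oldest/newest oplog entries and the window between them.
    var oplog = db.getSiblingDB("local").oplog.rs;
    var first = oplog.find().sort({ $natural: 1 }).limit(1).next();
    var last = oplog.find().sort({ $natural: -1 }).limit(1).next();
    print("window (hours): " + (last.ts.t - first.ts.t) / 3600);
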
[conn16] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1174 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1175 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1175 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1176 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1176 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1177 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:27:40.042+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1177 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1178 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.lockpings @ RecordId(15) 
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1178 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1179 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1179 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1180 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1180 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1181 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1181 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1182 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1182 
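
Everything from the listDatabases call onward is conn16 walking the durable catalog (_mdb_catalog): for each collection it opens a short WiredTiger read transaction, fetches the CCE metadata document (namespace, options with the collection UUID, index specs, and the on-disk idents such as config/collection/42--6194257481163143499), and rolls the transaction back. The same per-collection view is available without reading the log:

    // Sketch: user-facing equivalents of the catalog walk above.
    db.getSiblingDB("admin").runCommand({ listDatabases: 1 })
    db.getSiblingDB("config").getCollectionInfos()  // options + UUID per collection
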
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1183
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1183
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1185
2019-09-04T06:27:40.043+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:27:40.043+0000 I NETWORK [conn69] received client metadata from 10.108.2.40:38648 conn69: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:40.043+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1185
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1186
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1186
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1187
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:27:40.043+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1187
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:27:40.043+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1189
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1189
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1190
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:27:40.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1190
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1191
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1191
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1192
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1192
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1193
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1193
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1194
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1194
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1195
2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:40.044+0000
D3 STORAGE [conn16] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1195 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1196 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1196 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1197 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1197 2019-09-04T06:27:40.044+0000 I COMMAND [conn16] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, 
ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:27:40.044+0000 D2 COMMAND [conn16] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1199 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1199 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1200 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1200 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1201 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1201 2019-09-04T06:27:40.044+0000 I COMMAND [conn16] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:27:40.044+0000 D2 COMMAND [conn16] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1203 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1203 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1204 2019-09-04T06:27:40.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1204 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1205 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1205 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1206 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1206 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1207 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1207 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1208 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1208 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1209 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1209 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1210 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1210 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1211 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1211 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1212 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1212 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1213 
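The conn16 entries above and immediately below show a client reading with readPreference secondaryPreferred (typical of a monitoring agent) walking the whole instance: a single listDatabases, then one dbStats per database (admin above; config and local follow), each served by short read-only WiredTiger transactions that begin and immediately roll back, which is why begin_transaction/rollback_transaction pairs appear with no writes in between. A minimal mongo-shell sketch of the same command sequence, assuming a direct connection to this node (the loop itself is an illustration, not part of the log):

    // Mirror the read preference seen on conn16.
    db.getMongo().setReadPref("secondaryPreferred");
    // One listDatabases, then one dbStats per database, as logged.
    db.adminCommand({ listDatabases: 1 }).databases.forEach(function (d) {
        printjson(db.getSiblingDB(d.name).runCommand({ dbStats: 1 }));
    });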
2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1213 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1214 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1214 2019-09-04T06:27:40.045+0000 I COMMAND [conn16] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:27:40.045+0000 D2 COMMAND [conn16] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1216 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1216 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1217 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1217 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1218 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1218 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1219 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1219 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1220 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1220 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1221 2019-09-04T06:27:40.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1221 2019-09-04T06:27:40.045+0000 I COMMAND [conn16] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:27:40.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.061+0000 I NETWORK [listener] connection accepted from 10.108.2.41:53480 #70 (58 connections now open) 2019-09-04T06:27:40.061+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:40.061+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:40.061+0000 I NETWORK [conn70] 
received client metadata from 10.108.2.41:53480 conn70: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:40.061+0000 I COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:40.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.065+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.066+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.066+0000 I COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.066+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:40.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.166+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:40.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:40.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:27:40.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: 
Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:40.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:40.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365) } 2019-09-04T06:27:40.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:40.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:27:40.266+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:40.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.366+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:40.375+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:40.375+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:40.375+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578458, 2) 2019-09-04T06:27:40.375+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1238 2019-09-04T06:27:40.375+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false 
}, indexes: [], prefix: -1 } } 2019-09-04T06:27:40.375+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:40.375+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1238 2019-09-04T06:27:40.376+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1241 2019-09-04T06:27:40.376+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1241 2019-09-04T06:27:40.376+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578458, 2), t: 1 }({ ts: Timestamp(1567578458, 2), t: 1 }) 2019-09-04T06:27:40.466+0000 I NETWORK [listener] connection accepted from 10.108.2.42:40930 #71 (59 connections now open) 2019-09-04T06:27:40.466+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:40.466+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:40.466+0000 D2 COMMAND [conn71] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:40.466+0000 I NETWORK [conn71] received client metadata from 10.108.2.42:40930 conn71: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:40.466+0000 I COMMAND [conn71] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:40.467+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578458, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578458, 2), t: 1 } }, $db: "config" } 2019-09-04T06:27:40.467+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578458, 2), t: 1 } } } 2019-09-04T06:27:40.467+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:27:40.467+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578458, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { 
hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578458, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578458, 2) 2019-09-04T06:27:40.467+0000 I SHARDING [conn71] Marking collection config.settings as collection version: 2019-09-04T06:27:40.467+0000 D2 QUERY [conn71] Collection config.settings does not exist. Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:27:40.467+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578458, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578458, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:27:40.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.566+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:40.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.666+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:40.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.698+0000 I COMMAND 
[conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.721+0000 I NETWORK [listener] connection accepted from 10.108.2.36:37444 #72 (60 connections now open) 2019-09-04T06:27:40.721+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:40.722+0000 D2 COMMAND [conn72] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:40.722+0000 I NETWORK [conn72] received client metadata from 10.108.2.36:37444 conn72: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:40.722+0000 I COMMAND [conn72] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:40.722+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578458, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578458, 2), t: 1 } }, $db: "config" } 2019-09-04T06:27:40.722+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578458, 2), t: 1 } } } 2019-09-04T06:27:40.722+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:27:40.722+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578458, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578458, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578458, 2) 2019-09-04T06:27:40.722+0000 D2 QUERY [conn72] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:27:40.722+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578458, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578458, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:27:40.766+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:40.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:40.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:40.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 72) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:40.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 72 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:27:50.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:40.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.836+0000 2019-09-04T06:27:40.836+0000 D2 REPL_HB [replexec-2] Sending heartbeat (requestId: 73) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:40.836+0000 D3 EXECUTOR [replexec-2] Scheduling remote command request: RemoteCommand 73 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:50.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:40.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.836+0000 2019-09-04T06:27:40.836+0000 D2 ASIO [Replication] Request 72 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: 
"configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } 2019-09-04T06:27:40.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:27:40.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:40.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 72) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } 2019-09-04T06:27:40.836+0000 D2 ASIO [Replication] Request 73 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), 
t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } 2019-09-04T06:27:40.836+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:27:40.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:40.836+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:27:49.954+0000 2019-09-04T06:27:40.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:40.836+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:27:51.058+0000 2019-09-04T06:27:40.836+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:27:40.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:42.836Z 2019-09-04T06:27:40.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:40.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:10.836+0000 2019-09-04T06:27:40.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 73) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } 2019-09-04T06:27:40.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 
2019-09-04T06:28:10.836+0000 2019-09-04T06:27:40.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:27:40.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:42.836Z 2019-09-04T06:27:40.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:10.836+0000 2019-09-04T06:27:40.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.867+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:40.945+0000 I NETWORK [listener] connection accepted from 10.108.2.42:40932 #73 (61 connections now open) 2019-09-04T06:27:40.945+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:40.945+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:40.945+0000 I NETWORK [conn73] received client metadata from 10.108.2.42:40932 conn73: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:40.945+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:40.946+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.946+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:40.967+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:40.972+0000 I NETWORK [listener] connection accepted from 10.108.2.43:33742 #74 (62 connections now open) 2019-09-04T06:27:40.972+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:40.972+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:40.972+0000 I NETWORK [conn74] received client metadata from 10.108.2.43:33742 conn74: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux 
release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:40.972+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:40.976+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:40.976+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:41.016+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35594 #75 (63 connections now open) 2019-09-04T06:27:41.016+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:41.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:41.016+0000 I NETWORK [conn75] received client metadata from 10.108.2.56:35594 conn75: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:41.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:41.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.040+0000 D2 COMMAND [conn72] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578458, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578458, 2), t: 1 } }, $db: "config" } 2019-09-04T06:27:41.040+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578458, 2), t: 1 } } } 
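The conn71/conn72 traffic above comes from an internal client (driver name NetworkInterfaceTL; most likely a mongos refreshing its balancer configuration) reading the "autosplit" and "chunksize" documents from config.settings with readConcern { level: "majority", afterOpTime: ... }: the server first blocks until its majority-committed snapshot covers the requested opTime ("Waiting for 'committed' snapshot"), then serves the find at that read timestamp. The EOF plan with nreturned:0 means config.settings does not exist yet, so the cluster is running on default balancer settings. A minimal shell sketch of the equivalent read, assuming a connection to this config server (afterOpTime is omitted because it is an internal, cluster-supplied field):

    // Causally gated read of the balancer settings, as logged on conn72.
    printjson(db.getSiblingDB("config").runCommand({
        find: "settings",
        filter: { _id: "chunksize" },
        limit: 1,
        readConcern: { level: "majority" } // server waits for a majority snapshot
    }));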
2019-09-04T06:27:41.040+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:27:41.040+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578458, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578458, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578458, 2) 2019-09-04T06:27:41.040+0000 D5 QUERY [conn72] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
============================= 2019-09-04T06:27:41.040+0000 D5 QUERY [conn72] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:27:41.040+0000 D5 QUERY [conn72] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:27:41.040+0000 D5 QUERY [conn72] Rated tree: $and 2019-09-04T06:27:41.040+0000 D5 QUERY [conn72] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:41.040+0000 D5 QUERY [conn72] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = [] 2019-09-04T06:27:41.040+0000 D2 QUERY [conn72] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:27:41.040+0000 D3 STORAGE [conn72] WT begin_transaction for snapshot id 1266 2019-09-04T06:27:41.040+0000 D3 STORAGE [conn72] WT rollback_transaction for snapshot id 1266 2019-09-04T06:27:41.040+0000 I COMMAND [conn72] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578458, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578458, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:27:41.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:41.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:27:41.061+0000 D2 REPL_HB [conn34] Received
2019-09-04T06:27:41.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:41.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:41.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:41.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:27:41.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:41.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:41.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365) }
2019-09-04T06:27:41.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:41.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:41.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:41.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:41.065+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:41.067+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:41.081+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48232 #76 (64 connections now open)
2019-09-04T06:27:41.081+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:41.081+0000 D2 COMMAND [conn76] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:41.082+0000 I NETWORK [conn76] received client metadata from 10.108.2.59:48232 conn76: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:41.082+0000 I COMMAND [conn76] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:41.082+0000 D2 COMMAND [conn76] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578451, 1), signature: { hash: BinData(0, DEC934211A5E877D872CD481B087C488B1F8FB5C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:27:41.082+0000 D1 REPL [conn76] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578458, 2), t: 1 }
2019-09-04T06:27:41.082+0000 D3 REPL [conn76] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.092+0000
"x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:41.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:41.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:27:41.267+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:41.274+0000 I NETWORK [listener] connection accepted from 10.108.2.37:59710 #78 (66 connections now open) 2019-09-04T06:27:41.274+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:41.274+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:41.274+0000 I NETWORK [conn78] received client metadata from 10.108.2.37:59710 conn78: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:41.274+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:41.274+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.274+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.278+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47064 #79 (67 connections now open) 2019-09-04T06:27:41.278+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:41.278+0000 D2 COMMAND [conn79] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:41.278+0000 I NETWORK [conn79] received client metadata from 10.108.2.52:47064 conn79: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS 
Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:41.278+0000 I COMMAND [conn79] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:41.279+0000 D2 COMMAND [conn79] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:27:41.279+0000 D1 REPL [conn79] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578458, 2), t: 1 } 2019-09-04T06:27:41.279+0000 D3 REPL [conn79] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.289+0000 2019-09-04T06:27:41.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.296+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49070 #80 (68 connections now open) 2019-09-04T06:27:41.297+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:41.297+0000 D2 COMMAND [conn80] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:41.297+0000 I NETWORK [conn80] received client metadata from 10.108.2.54:49070 conn80: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:41.297+0000 I COMMAND [conn80] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:41.297+0000 D2 COMMAND [conn80] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: 
Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:27:41.297+0000 D1 REPL [conn80] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578458, 2), t: 1 } 2019-09-04T06:27:41.297+0000 D3 REPL [conn80] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.307+0000 2019-09-04T06:27:41.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.367+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:41.375+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:41.375+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:41.375+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578458, 2) 2019-09-04T06:27:41.375+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1291 2019-09-04T06:27:41.375+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:41.375+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:41.375+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1291 2019-09-04T06:27:41.376+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1294 2019-09-04T06:27:41.376+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1294 2019-09-04T06:27:41.376+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578458, 2), t: 1 }({ ts: Timestamp(1567578458, 2), t: 1 }) 2019-09-04T06:27:41.467+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:41.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: 
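[annotation] conn76, conn79, and conn80 are all asking for admin.system.keys, the collection of HMAC signing keys used to sign and validate $clusterTime in sharded clusters; the filter keeps only keys with purpose "HMAC" that expire after the requested cluster time. Each request carries the same stale term-92 afterOpTime, so each one parks in waitUntilOpTime the same way as conn76 above. A sketch of the equivalent query, assuming direct access and PyMongo (illustrative, not how the internal client issues it; the Timestamp is copied from the log's filter):

    # Illustrative: listing the cluster's HMAC signing keys.
    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("cmodb803.togewa.com", 27019)
    keys = client.admin["system.keys"].find(
        {"purpose": "HMAC", "expiresAt": {"$gt": Timestamp(1579858365, 0)}},
        sort=[("expiresAt", 1)],
    )
    for key in keys:
        print(key["_id"], key["expiresAt"])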
"admin" } 2019-09-04T06:27:41.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.565+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.568+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:41.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.668+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:41.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.768+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:41.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.794+0000 I NETWORK [listener] connection accepted from 10.108.2.37:59712 #81 (69 connections now open) 2019-09-04T06:27:41.794+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:41.794+0000 D2 COMMAND [conn81] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:41.794+0000 I NETWORK [conn81] received client metadata from 10.108.2.37:59712 conn81: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:41.794+0000 I COMMAND [conn81] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", 
version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:41.794+0000 D2 COMMAND [conn81] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578445, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578445, 1), t: 1 } }, $db: "config" } 2019-09-04T06:27:41.794+0000 D1 COMMAND [conn81] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578445, 1), t: 1 } } } 2019-09-04T06:27:41.794+0000 D3 STORAGE [conn81] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:27:41.794+0000 D1 COMMAND [conn81] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578445, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578445, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578458, 2) 2019-09-04T06:27:41.794+0000 D5 QUERY [conn81] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:27:41.794+0000 D5 QUERY [conn81] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:27:41.794+0000 D5 QUERY [conn81] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:27:41.794+0000 D5 QUERY [conn81] Rated tree: $and 2019-09-04T06:27:41.794+0000 D5 QUERY [conn81] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:41.794+0000 D5 QUERY [conn81] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:41.794+0000 D2 QUERY [conn81] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:27:41.794+0000 D3 STORAGE [conn81] WT begin_transaction for snapshot id 1306 2019-09-04T06:27:41.794+0000 D3 STORAGE [conn81] WT rollback_transaction for snapshot id 1306 2019-09-04T06:27:41.794+0000 I COMMAND [conn81] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578445, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 2), signature: { hash: BinData(0, 7C64845FBB162C87C6C1C7BD9EF381FD5E443A7C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578445, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:27:41.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:41.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:41.868+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:41.968+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:41.983+0000 I NETWORK [listener] connection accepted from 10.108.2.62:53342 #82 (70 connections now open) 2019-09-04T06:27:41.983+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:41.984+0000 D2 COMMAND [conn82] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:41.984+0000 I NETWORK [conn82] received client metadata from 10.108.2.62:53342 conn82: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:41.984+0000 I COMMAND [conn82] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 
numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:41.988+0000 D2 COMMAND [conn82] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:27:41.988+0000 D1 REPL [conn82] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578458, 2), t: 1 } 2019-09-04T06:27:41.988+0000 D3 REPL [conn82] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.998+0000 2019-09-04T06:27:42.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:42.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:42.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:42.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:42.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:42.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:42.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:42.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:42.065+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:42.068+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:42.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:42.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:42.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:42.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:42.168+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:42.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:42.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:42.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:42.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:42.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 4A182969CAD8127586E69A8E1B36ADE61B73589D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:42.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:27:42.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 4A182969CAD8127586E69A8E1B36ADE61B73589D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:42.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 4A182969CAD8127586E69A8E1B36ADE61B73589D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:42.230+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365) } 2019-09-04T06:27:42.230+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 4A182969CAD8127586E69A8E1B36ADE61B73589D), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:42.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:42.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:27:42.268+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:42.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:42.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:42.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:42.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:42.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:42.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:42.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:42.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:42.369+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:42.375+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:42.375+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:42.375+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578458, 2) 2019-09-04T06:27:42.375+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1328 2019-09-04T06:27:42.375+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:42.375+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:42.375+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1328 2019-09-04T06:27:42.376+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1331 2019-09-04T06:27:42.376+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1331 2019-09-04T06:27:42.376+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578458, 2), t: 1 }({ ts: Timestamp(1567578458, 2), t: 1 }) 2019-09-04T06:27:42.469+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:42.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:42.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:42.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:42.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:42.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:42.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
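[annotation] The steady drumbeat of isMaster commands (conn22, conn31, conn33, conn42, conn45, conn46, conn51, conn52, conn58, conn59, conn60, conn75) repeats per connection almost exactly every 500 ms, consistent with the other cluster nodes' replica-set monitors polling this config server's topology; at network verbosity 2 every poll is logged twice, once as the D2 "run command" line and once as the I COMMAND completion. A throwaway script to confirm the cadence from a log like this one; the regex and the file name are assumptions about the reflowed file at hand, not anything MongoDB guarantees:

    # Throwaway log analysis: measure the isMaster polling interval
    # per connection. Assumes one log entry per line, as reflowed above.
    import re
    from collections import defaultdict
    from datetime import datetime

    PAT = re.compile(
        r"^(\S+) D2 COMMAND \[(conn\d+)\] run command admin\.\$cmd \{ isMaster: 1, \$db"
    )

    last, gaps = {}, defaultdict(list)
    with open("mongod.log") as fh:
        for line in fh:
            m = PAT.match(line)
            if not m:
                continue
            ts = datetime.strptime(m.group(1), "%Y-%m-%dT%H:%M:%S.%f%z")
            conn = m.group(2)
            if conn in last:
                gaps[conn].append((ts - last[conn]).total_seconds())
            last[conn] = ts

    for conn, gs in sorted(gaps.items()):
        print(conn, f"mean interval {sum(gs) / len(gs):.3f}s over {len(gs)} polls")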
2019-09-04T06:27:42.469+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:42.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:42.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:42.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:42.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:42.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:42.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:42.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:42.565+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:42.569+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:42.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:42.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:42.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:42.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:42.669+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:42.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:42.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:42.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:42.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:42.769+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:42.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:42.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:42.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:42.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:42.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:42.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:42.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:42.836+0000 D2 REPL_HB [replexec-2] Sending heartbeat (requestId: 74) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:42.836+0000 D3 EXECUTOR [replexec-2] Scheduling remote command request: RemoteCommand 74 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:27:52.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:42.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:42.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 75) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:42.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 75 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:52.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:42.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:10.836+0000
2019-09-04T06:27:42.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:10.836+0000
2019-09-04T06:27:42.836+0000 D2 ASIO [Replication] Request 74 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) }
2019-09-04T06:27:42.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:27:42.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:42.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 74) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) }
2019-09-04T06:27:42.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:27:42.836+0000 D2 ASIO [Replication] Request 75 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) }
2019-09-04T06:27:42.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:27:51.058+0000
2019-09-04T06:27:42.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:42.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:42.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:27:53.890+0000
2019-09-04T06:27:42.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:27:42.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:44.836Z
2019-09-04T06:27:42.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:42.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:12.836+0000
2019-09-04T06:27:42.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:12.836+0000
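[annotation] Requests 74 and 75 are this node's own outbound heartbeats: cmodb802 answers as primary (state: 1, with an electionTime), cmodb804 as a secondary syncing from the primary (state: 2, syncSourceIndex: 0), and each good response schedules the next heartbeat 2 s out and pushes the election timeout forward. The new deadline lands at 06:27:53.890, about 11.05 s after the heartbeat, consistent with the default electionTimeoutMillis of 10 s plus a per-schedule random offset of up to a configurable fraction of the timeout (0.15 by default), which keeps the secondaries from all calling elections at the same instant. A sketch of that scheduling rule; the exact formula is an assumption inferred from the spacing observed in this log:

    # Schematic of randomized election-timeout scheduling; 10 s and 0.15
    # are the documented defaults, the formula itself is an assumption.
    import random

    ELECTION_TIMEOUT_MS = 10_000
    OFFSET_LIMIT_FRACTION = 0.15

    def next_election_deadline_ms(now_ms: float) -> float:
        offset = random.uniform(0, ELECTION_TIMEOUT_MS * OFFSET_LIMIT_FRACTION)
        return now_ms + ELECTION_TIMEOUT_MS + offset

    # Heartbeat handled at t = 42.836 s -> deadline in [52.836, 54.336] s,
    # bracketing the 53.890 s callback scheduled above.
    print(next_election_deadline_ms(42_836.0) / 1000.0)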
2019-09-04T06:27:42.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 75) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) }
2019-09-04T06:27:42.836+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:27:42.836+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:44.836Z
2019-09-04T06:27:42.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:12.836+0000
2019-09-04T06:27:42.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:42.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:42.869+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:42.969+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:43.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:43.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:43.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:43.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:43.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:43.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 4A182969CAD8127586E69A8E1B36ADE61B73589D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:43.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:27:43.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 4A182969CAD8127586E69A8E1B36ADE61B73589D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:43.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 4A182969CAD8127586E69A8E1B36ADE61B73589D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:43.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365) }
2019-09-04T06:27:43.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 4A182969CAD8127586E69A8E1B36ADE61B73589D), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:43.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:43.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:43.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:43.065+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:43.069+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:43.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:43.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:43.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:43.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:43.169+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:43.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:43.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:43.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:43.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:43.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:27:43.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:27:43.270+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:43.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:43.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:43.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:43.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:43.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:43.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:43.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:43.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:43.370+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:43.376+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:43.376+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:43.376+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578458, 2)
2019-09-04T06:27:43.376+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1361
2019-09-04T06:27:43.376+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:43.376+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:43.376+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1361
2019-09-04T06:27:43.376+0000 D2 ASIO [RS] Request 68 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpApplied: { ts: Timestamp(1567578458, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) }
2019-09-04T06:27:43.376+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpApplied: { ts: Timestamp(1567578458, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:43.376+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:43.376+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:27:43.376+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:27:53.890+0000
2019-09-04T06:27:43.376+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:27:54.441+0000
2019-09-04T06:27:43.376+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:43.376+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:12.836+0000
2019-09-04T06:27:43.376+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 76 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:53.376+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578458, 2), t: 1 } }
2019-09-04T06:27:43.377+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.376+0000
2019-09-04T06:27:43.377+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1364
2019-09-04T06:27:43.377+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1364
2019-09-04T06:27:43.377+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578458, 2), t: 1 }({ ts: Timestamp(1567578458, 2), t: 1 })
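[annotation] Requests 68 and 76 show replication's idle steady state: the oplog fetcher holds a tailable awaitData cursor (id 2779728788818727477) on the sync source's local.oplog.rs, each getMore blocks server-side for up to maxTimeMS (5000) waiting for new entries, and an empty nextBatch ("read 0 operations") simply triggers the next getMore; the lastKnownCommittedOpTime in the request lets the sync source answer early when the commit point moves. A rough PyMongo imitation of that tail, with connection details taken from this log; the real fetcher also passes term and batch-size parameters a driver does not expose:

    # Illustrative tail of a sync source's oplog, approximating the
    # fetcher's awaitData getMore loop.
    from pymongo import MongoClient, CursorType

    client = MongoClient("cmodb804.togewa.com", 27019)
    oplog = client.local["oplog.rs"]

    cursor = oplog.find(
        {},
        cursor_type=CursorType.TAILABLE_AWAIT,
        max_await_time_ms=5000,  # mirrors maxTimeMS: 5000 on the getMore
    )
    for entry in cursor:  # blocks up to 5 s per empty batch, like request 76
        print(entry["ts"], entry["op"], entry.get("ns", ""))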
replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:43.381+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 77 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:13.381+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), appliedOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, appliedWallTime: new Date(1567578458365), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), appliedOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, appliedWallTime: new Date(1567578458365), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), appliedOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, appliedWallTime: new Date(1567578458365), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:43.381+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.376+0000 2019-09-04T06:27:43.381+0000 D2 ASIO [RS] Request 77 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } 2019-09-04T06:27:43.381+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578458, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:43.381+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:43.381+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:08.376+0000 2019-09-04T06:27:43.470+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:43.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:43.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:43.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, 
$db: "admin" } 2019-09-04T06:27:43.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:43.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:43.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:43.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:43.565+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:43.570+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:43.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:43.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:43.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:43.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:43.670+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:43.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:43.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:43.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:43.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:43.770+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:43.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:43.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:43.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:43.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:43.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:43.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:43.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:43.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:43.870+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:43.971+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:44.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:44.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:44.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, 
$db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:44.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:44.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:44.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:44.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:44.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:44.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:44.071+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:44.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:44.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:44.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:44.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:44.171+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:44.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:44.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:44.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:44.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:44.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 4A182969CAD8127586E69A8E1B36ADE61B73589D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:44.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:27:44.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 4A182969CAD8127586E69A8E1B36ADE61B73589D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:44.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 4A182969CAD8127586E69A8E1B36ADE61B73589D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:44.230+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: 
"cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), opTime: { ts: Timestamp(1567578458, 2), t: 1 }, wallTime: new Date(1567578458365) } 2019-09-04T06:27:44.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 4A182969CAD8127586E69A8E1B36ADE61B73589D), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:44.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:44.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:27:44.271+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:44.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:44.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:44.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:44.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:44.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:44.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:44.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:44.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:44.371+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:44.376+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:44.376+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:44.376+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578458, 2) 2019-09-04T06:27:44.376+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1394 2019-09-04T06:27:44.376+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:44.376+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:44.376+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1394 2019-09-04T06:27:44.377+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1397 2019-09-04T06:27:44.377+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1397 2019-09-04T06:27:44.377+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578458, 2), t: 1 }({ ts: Timestamp(1567578458, 2), t: 1 }) 
2019-09-04T06:27:44.418+0000 I NETWORK [listener] connection accepted from 10.108.2.62:53344 #83 (71 connections now open)
2019-09-04T06:27:44.418+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:44.418+0000 D2 COMMAND [conn83] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:44.418+0000 I NETWORK [conn83] received client metadata from 10.108.2.62:53344 conn83: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:44.418+0000 I COMMAND [conn83] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:44.421+0000 D2 COMMAND [conn83] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578462, 1), signature: { hash: BinData(0, B297735DD8383F1BAD6EB580CC3032FDC0BD6BA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:27:44.422+0000 D1 REPL [conn83] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578458, 2), t: 1 }
2019-09-04T06:27:44.422+0000 D3 REPL [conn83] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.431+0000
2019-09-04T06:27:44.471+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:44.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:44.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:44.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:44.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:44.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:44.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:44.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:44.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:44.571+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:44.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:44.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:44.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:44.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:44.671+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:44.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:44.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:44.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:44.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:44.772+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:44.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:44.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:44.811+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38568 #84 (72 connections now open)
2019-09-04T06:27:44.811+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:44.811+0000 D2 COMMAND [conn84] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:44.811+0000 I NETWORK [conn84] received client metadata from 10.108.2.44:38568 conn84: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:44.811+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52024 #85 (73 connections now open)
2019-09-04T06:27:44.811+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:44.811+0000 I COMMAND [conn84] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:44.812+0000 D2 COMMAND [conn85] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:44.812+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50000 #86 (74 connections now open)
2019-09-04T06:27:44.812+0000 I NETWORK [conn85] received client metadata from 10.108.2.58:52024 conn85: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:44.812+0000 I COMMAND [conn85] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:44.812+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:44.812+0000 D2 COMMAND [conn84] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:27:44.812+0000 D1 REPL [conn84] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578458, 2), t: 1 }
2019-09-04T06:27:44.812+0000 D3 REPL [conn84] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000
2019-09-04T06:27:44.812+0000 D2 COMMAND [conn86] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:44.812+0000 I NETWORK [conn86] received client metadata from 10.108.2.50:50000 conn86: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:44.812+0000 D2 COMMAND [conn85] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578463, 1), signature: { hash: BinData(0, C1B62A1C3071B6FFFBA1E3E4221F429C7BC6B94F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:27:44.812+0000 D1 REPL [conn85] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578458, 2), t: 1 }
2019-09-04T06:27:44.812+0000 I COMMAND [conn86] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:44.812+0000 D3 REPL [conn85] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000
2019-09-04T06:27:44.812+0000 D2 COMMAND [conn86] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 59D37A2B4CECE73D530761BA1DED9CE509A0BE65), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:27:44.812+0000 D1 REPL [conn86] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578458, 2), t: 1 }
2019-09-04T06:27:44.812+0000 D3 REPL [conn86] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000
2019-09-04T06:27:44.813+0000 I NETWORK [listener] connection accepted from 10.108.2.46:40868 #87 (75 connections now open)
2019-09-04T06:27:44.813+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:44.813+0000 D2 ASIO [RS] Request 76 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578464, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578464811), o: { $v: 1, $set: { ping: new Date(1567578464808), up: 2365 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpApplied: { ts: Timestamp(1567578464, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) }
2019-09-04T06:27:44.813+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578464, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578464811), o: { $v: 1, $set: { ping: new Date(1567578464808), up: 2365 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpApplied: { ts: Timestamp(1567578464, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578458, 2), $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:44.813+0000 D2 COMMAND [conn87] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:44.813+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:27:44.813+0000 I NETWORK [conn87] received client metadata from 10.108.2.46:40868 conn87: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:44.813+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578464, 1) and ending at ts: Timestamp(1567578464, 1)
2019-09-04T06:27:44.813+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:27:54.441+0000
2019-09-04T06:27:44.813+0000 I COMMAND [conn87] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:44.813+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:27:55.606+0000
2019-09-04T06:27:44.813+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:44.813+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578464, 1), t: 1 }
2019-09-04T06:27:44.813+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:44.813+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:44.813+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578458, 2)
2019-09-04T06:27:44.813+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1418
2019-09-04T06:27:44.813+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:44.813+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:44.813+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1418
2019-09-04T06:27:44.813+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:27:44.813+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:12.836+0000
2019-09-04T06:27:44.813+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578464, 1) }
2019-09-04T06:27:44.813+0000 D2 COMMAND [conn87] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578455, 1), signature: { hash: BinData(0, B608BC040ACD32DAE9997A8CBB1A717399CC35A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:27:44.813+0000 D1 REPL [conn87] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578458, 2), t: 1 }
2019-09-04T06:27:44.813+0000 D3 REPL [conn87] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.823+0000
2019-09-04T06:27:44.814+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1398
2019-09-04T06:27:44.814+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1398
2019-09-04T06:27:44.814+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1423
2019-09-04T06:27:44.814+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1423
2019-09-04T06:27:44.814+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:27:44.813+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:44.814+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 1425
2019-09-04T06:27:44.814+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:44.814+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578464, 1)
2019-09-04T06:27:44.814+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578464, 1)
2019-09-04T06:27:44.814+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578458, 2)
2019-09-04T06:27:44.814+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1421
2019-09-04T06:27:44.814+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 1425
2019-09-04T06:27:44.814+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:44.814+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:27:44.814+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:44.814+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1421
2019-09-04T06:27:44.814+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:27:44.814+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1424
2019-09-04T06:27:44.814+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1424
2019-09-04T06:27:44.814+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1429
2019-09-04T06:27:44.814+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1429
2019-09-04T06:27:44.814+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578464, 1), t: 1 }({ ts: Timestamp(1567578464, 1), t: 1 })
2019-09-04T06:27:44.814+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578464, 1)
2019-09-04T06:27:44.814+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1430
2019-09-04T06:27:44.814+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578464, 1) } } ] } sort: {} projection: {}
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578464, 1) Sort: {} Proj: {} =============================
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578464, 1) || First: notFirst: full path: ts
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578464, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578464, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578464, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:44.814+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578464, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:44.814+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1430 2019-09-04T06:27:44.814+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:44.814+0000 D3 STORAGE [repl-writer-worker-3] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:44.815+0000 D3 REPL [repl-writer-worker-3] applying op: { ts: Timestamp(1567578464, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578464811), o: { $v: 1, $set: { ping: new Date(1567578464808), up: 2365 } } }, oplog application mode: Secondary 2019-09-04T06:27:44.815+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578464, 1) 2019-09-04T06:27:44.815+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 1432 2019-09-04T06:27:44.815+0000 D2 QUERY [repl-writer-worker-3] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:27:44.815+0000 D4 WRITE [repl-writer-worker-3] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:27:44.815+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 1432 2019-09-04T06:27:44.815+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:44.815+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578464, 1), t: 1 }({ ts: Timestamp(1567578464, 1), t: 1 }) 2019-09-04T06:27:44.815+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578464, 1) 2019-09-04T06:27:44.815+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1431 2019-09-04T06:27:44.815+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:27:44.815+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:44.815+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:27:44.815+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:44.815+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:44.815+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:27:44.815+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1431 2019-09-04T06:27:44.815+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578464, 1) 2019-09-04T06:27:44.815+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1435 2019-09-04T06:27:44.815+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1435 2019-09-04T06:27:44.815+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578464, 1), t: 1 }({ ts: Timestamp(1567578464, 1), t: 1 }) 2019-09-04T06:27:44.815+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:44.815+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), appliedOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, appliedWallTime: new Date(1567578458365), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), appliedOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, appliedWallTime: new Date(1567578458365), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:44.815+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 78 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:14.815+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), appliedOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, appliedWallTime: new Date(1567578458365), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), appliedOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, appliedWallTime: new Date(1567578458365), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578458, 2), t: 1 }, lastCommittedWall: new Date(1567578458365), lastOpVisible: { ts: Timestamp(1567578458, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:44.815+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:14.815+0000 2019-09-04T06:27:44.815+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578464, 1), t: 1 } 2019-09-04T06:27:44.815+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 79 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:54.815+0000 cmd:{ getMore: 2779728788818727477, 
collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578458, 2), t: 1 } } 2019-09-04T06:27:44.815+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:14.815+0000 2019-09-04T06:27:44.815+0000 D2 ASIO [RS] Request 78 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) } 2019-09-04T06:27:44.815+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:44.816+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:44.816+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:14.815+0000 2019-09-04T06:27:44.816+0000 D2 ASIO [RS] Request 79 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpApplied: { ts: Timestamp(1567578464, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) } 2019-09-04T06:27:44.816+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new 
Date(1567578464811), lastOpApplied: { ts: Timestamp(1567578464, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:44.816+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:44.816+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:27:44.816+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578459, 1) 2019-09-04T06:27:44.816+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:27:55.606+0000 2019-09-04T06:27:44.816+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:27:55.190+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn32] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn32] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:54.152+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn44] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn44] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:55.060+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn56] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn56] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.963+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn43] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn43] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.753+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn41] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn41] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.661+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn80] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn80] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.307+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn83] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn83] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.431+0000 2019-09-04T06:27:44.816+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:44.816+0000 D3 REPL [conn86] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn86] 
waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:27:44.816+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:12.836+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn87] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn87] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.823+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn54] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn54] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.764+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn55] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 80 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:54.816+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578464, 1), t: 1 } } 2019-09-04T06:27:44.816+0000 D3 REPL [conn55] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.926+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn57] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn57] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:03.480+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn53] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn53] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.753+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn39] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn39] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.133+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn40] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:14.815+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn40] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.645+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn62] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn62] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.671+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn63] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn63] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn64] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn64] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:27:44.816+0000 D3 REPL [conn65] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 
2019-09-04T06:27:44.816+0000 D3 REPL [conn65] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000
2019-09-04T06:27:44.816+0000 D3 REPL [conn65] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.679+0000
2019-09-04T06:27:44.816+0000 D3 REPL [conn66] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000
2019-09-04T06:27:44.816+0000 D3 REPL [conn66] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.686+0000
2019-09-04T06:27:44.816+0000 D3 REPL [conn67] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000
2019-09-04T06:27:44.816+0000 D3 REPL [conn67] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.698+0000
2019-09-04T06:27:44.816+0000 D3 REPL [conn68] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000
2019-09-04T06:27:44.816+0000 D3 REPL [conn68] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.730+0000
2019-09-04T06:27:44.816+0000 D3 REPL [conn79] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000
2019-09-04T06:27:44.816+0000 D3 REPL [conn79] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.289+0000
2019-09-04T06:27:44.816+0000 D3 REPL [conn76] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000
2019-09-04T06:27:44.816+0000 D3 REPL [conn76] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.092+0000
2019-09-04T06:27:44.816+0000 D3 REPL [conn82] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000
2019-09-04T06:27:44.816+0000 D3 REPL [conn82] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.998+0000
2019-09-04T06:27:44.816+0000 D3 REPL [conn84] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000
2019-09-04T06:27:44.816+0000 D3 REPL [conn84] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000
2019-09-04T06:27:44.816+0000 D3 REPL [conn85] Got notified of new snapshot: { ts: Timestamp(1567578464, 1), t: 1 }, 2019-09-04T06:27:44.811+0000
2019-09-04T06:27:44.817+0000 D3 REPL [conn85] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000
2019-09-04T06:27:44.819+0000 I NETWORK [listener] connection accepted from 10.108.2.51:59050 #88 (76 connections now open)
2019-09-04T06:27:44.819+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:44.819+0000 D2 COMMAND [conn88] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:44.819+0000 I NETWORK [conn88] received client metadata from 10.108.2.51:59050 conn88: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
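Connection #88 is another cluster node dialing in: the first command on a fresh connection is isMaster, carrying a client metadata document ("NetworkInterfaceTL" is the server's own driver name for intra-cluster connections). Ordinary drivers perform the same handshake; a sketch, assuming pymongo:

    from pymongo import MongoClient

    # appname is embedded in the handshake metadata, so it shows up in
    # "received client metadata" log lines like the one above.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019",
                         appname="log-reading-demo")
    reply = client.admin.command("isMaster")
    print(reply["ismaster"], reply.get("secondary"), reply.get("setName"))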
architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:44.819+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:27:44.819+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:44.819+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), appliedOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, appliedWallTime: new Date(1567578458365), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), appliedOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, appliedWallTime: new Date(1567578458365), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:44.819+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 81 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:14.819+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), appliedOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, appliedWallTime: new Date(1567578458365), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, durableWallTime: new Date(1567578458365), appliedOpTime: { ts: Timestamp(1567578458, 2), t: 1 }, appliedWallTime: new Date(1567578458365), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:44.819+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:14.815+0000 2019-09-04T06:27:44.819+0000 D2 COMMAND [conn27] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578460, 1), signature: { hash: BinData(0, 6FADD4E1F9FE163B4F89453E13CBEE9116958205), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: 
"admin" } 2019-09-04T06:27:44.819+0000 D1 REPL [conn27] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578464, 1), t: 1 } 2019-09-04T06:27:44.819+0000 D3 REPL [conn27] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.829+0000 2019-09-04T06:27:44.819+0000 D2 ASIO [RS] Request 81 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) } 2019-09-04T06:27:44.819+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:44.819+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:44.819+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:14.815+0000 2019-09-04T06:27:44.823+0000 D2 COMMAND [conn88] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 59D37A2B4CECE73D530761BA1DED9CE509A0BE65), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:27:44.823+0000 D1 REPL [conn88] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578464, 1), t: 1 } 2019-09-04T06:27:44.823+0000 D3 REPL [conn88] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:27:44.823+0000 D2 COMMAND [conn24] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578455, 1), signature: { hash: BinData(0, B608BC040ACD32DAE9997A8CBB1A717399CC35A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, 
$db: "admin" } 2019-09-04T06:27:44.823+0000 D1 REPL [conn24] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578464, 1), t: 1 } 2019-09-04T06:27:44.823+0000 D3 REPL [conn24] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:27:44.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:44.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:44.825+0000 D2 COMMAND [conn36] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578462, 1), signature: { hash: BinData(0, B297735DD8383F1BAD6EB580CC3032FDC0BD6BA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:27:44.825+0000 D1 REPL [conn36] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578464, 1), t: 1 } 2019-09-04T06:27:44.825+0000 D3 REPL [conn36] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.835+0000 2019-09-04T06:27:44.826+0000 D2 COMMAND [conn30] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578455, 1), signature: { hash: BinData(0, B608BC040ACD32DAE9997A8CBB1A717399CC35A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:27:44.826+0000 D1 REPL [conn30] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578464, 1), t: 1 } 2019-09-04T06:27:44.826+0000 D3 REPL [conn30] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:27:44.826+0000 D2 COMMAND [conn35] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:27:44.826+0000 D1 REPL [conn35] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578464, 1), t: 1 } 2019-09-04T06:27:44.826+0000 D3 REPL [conn35] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:27:44.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:44.827+0000 I COMMAND [conn22] 
2019-09-04T06:27:44.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:44.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:44.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:44.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 82) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:44.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 82 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:27:54.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:44.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:12.836+0000
2019-09-04T06:27:44.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 83) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:44.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 83 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:54.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:44.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:12.836+0000
2019-09-04T06:27:44.836+0000 D2 ASIO [Replication] Request 82 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), opTime: { ts: Timestamp(1567578464, 1), t: 1 }, wallTime: new Date(1567578464811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) }
2019-09-04T06:27:44.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), opTime: { ts: Timestamp(1567578464, 1), t: 1 }, wallTime: new Date(1567578464811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:27:44.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:44.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 82) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), opTime: { ts: Timestamp(1567578464, 1), t: 1 }, wallTime: new Date(1567578464811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) }
2019-09-04T06:27:44.836+0000 D4 ELECTION [replexec-2] Postponing election timeout due to heartbeat from primary
2019-09-04T06:27:44.836+0000 D4 REPL [replexec-2] Canceling election timeout callback at 2019-09-04T06:27:55.190+0000
2019-09-04T06:27:44.836+0000 D4 ELECTION [replexec-2] Scheduling election timeout callback at 2019-09-04T06:27:56.171+0000
2019-09-04T06:27:44.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:44.836+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:27:44.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:12.836+0000
2019-09-04T06:27:44.836+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:46.836Z
2019-09-04T06:27:44.836+0000 D2 ASIO [Replication] Request 83 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), opTime: { ts: Timestamp(1567578464, 1), t: 1 }, wallTime: new Date(1567578464811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) }
2019-09-04T06:27:44.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:12.836+0000
2019-09-04T06:27:44.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), opTime: { ts: Timestamp(1567578464, 1), t: 1 }, wallTime: new Date(1567578464811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:44.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:44.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 83) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), opTime: { ts: Timestamp(1567578464, 1), t: 1 }, wallTime: new Date(1567578464811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) }
2019-09-04T06:27:44.836+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:27:44.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:46.836Z
2019-09-04T06:27:44.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:12.836+0000
2019-09-04T06:27:44.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:44.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:44.872+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:44.914+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578464, 1)
2019-09-04T06:27:44.972+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:45.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:45.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
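Requests 82/83 and their responses make up one full heartbeat round: this node (fromId: 1) pings both peers, learns that cmodb802 is still primary (state: 1) and cmodb804 a secondary (state: 2), postpones its own election timeout, and schedules the next round two seconds out. The same liveness data is exposed to clients; a sketch, assuming pymongo:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    for m in client.admin.command("replSetGetStatus")["members"]:
        if m.get("self"):
            continue  # heartbeat fields exist only for remote members
        print(m["name"], "state:", m["stateStr"],
              "lastHeartbeat:", m.get("lastHeartbeat"),
              "ping(ms):", m.get("pingMs"))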
2019-09-04T06:27:45.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 766E27FE965C4CD2B3F0E3C4520230BAC9E52D4A), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:45.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:27:45.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 766E27FE965C4CD2B3F0E3C4520230BAC9E52D4A), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:45.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 766E27FE965C4CD2B3F0E3C4520230BAC9E52D4A), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:45.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), opTime: { ts: Timestamp(1567578464, 1), t: 1 }, wallTime: new Date(1567578464811) }
2019-09-04T06:27:45.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 766E27FE965C4CD2B3F0E3C4520230BAC9E52D4A), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.072+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:45.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.172+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:45.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:27:45.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:27:45.272+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:45.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.372+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:45.473+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:45.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.565+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.573+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:45.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.673+0000 D4 STORAGE [WTJournalFlusher] flushed journal
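The once-a-second FlowControlRefresher lines show MongoDB 4.2's flow control idling: 1000000000 tickets before and after means the mechanism is not throttling writes (it only clamps down when majority-commit lag grows). Its state can be read from serverStatus; a sketch, assuming pymongo against a 4.2+ server:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    fc = client.admin.command("serverStatus")["flowControl"]
    print("enabled:", fc["enabled"],
          "targetRateLimit:", fc["targetRateLimit"],
          "isLagged:", fc["isLagged"])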
2019-09-04T06:27:45.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.773+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:45.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.814+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:45.814+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:45.814+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578464, 1)
2019-09-04T06:27:45.814+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1472
2019-09-04T06:27:45.814+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:45.814+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:45.814+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1472
2019-09-04T06:27:45.815+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1475
2019-09-04T06:27:45.815+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1475
2019-09-04T06:27:45.815+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578464, 1), t: 1 }({ ts: Timestamp(1567578464, 1), t: 1 })
2019-09-04T06:27:45.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:45.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:45.873+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:45.973+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:46.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:46.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
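Each batch cycle the ReplBatcher re-reads the local.oplog.rs catalog entry (a capped collection of size 1073741824 bytes, matching the configured oplogSizeMB: 1024) and rsSync-0 re-reads minvalid, the durable marker for how far applied data may be trusted. Both live in the local database and can be inspected directly; a sketch, assuming pymongo:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    local = client.local

    # The minvalid document rsSync-0 keeps returning above.
    print(local["replset.minvalid"].find_one())

    # Newest entry in the capped local.oplog.rs collection.
    print(next(local["oplog.rs"].find().sort("$natural", -1).limit(1)))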
2019-09-04T06:27:46.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.063+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.063+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.065+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.073+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:46.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.174+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:46.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.231+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 766E27FE965C4CD2B3F0E3C4520230BAC9E52D4A), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:46.231+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:27:46.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 766E27FE965C4CD2B3F0E3C4520230BAC9E52D4A), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:46.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 766E27FE965C4CD2B3F0E3C4520230BAC9E52D4A), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:46.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), opTime: { ts: Timestamp(1567578464, 1), t: 1 }, wallTime: new Date(1567578464811) }
2019-09-04T06:27:46.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 766E27FE965C4CD2B3F0E3C4520230BAC9E52D4A), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:27:46.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:27:46.274+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:46.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.374+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:46.402+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34146 #89 (77 connections now open)
2019-09-04T06:27:46.402+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:46.402+0000 D2 COMMAND [conn89] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:46.402+0000 I NETWORK [conn89] received client metadata from 10.108.2.57:34146 conn89: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:46.403+0000 I COMMAND [conn89] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:46.407+0000 D2 COMMAND [conn89] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:27:46.408+0000 D1 REPL [conn89] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578464, 1), t: 1 }
2019-09-04T06:27:46.408+0000 D3 REPL [conn89] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:16.417+0000
2019-09-04T06:27:46.474+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:46.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.563+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.563+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.574+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:46.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.674+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:46.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.774+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:46.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.814+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:46.814+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:46.814+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578464, 1)
2019-09-04T06:27:46.814+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1506
2019-09-04T06:27:46.814+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:46.814+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:46.814+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1506
2019-09-04T06:27:46.815+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1509
2019-09-04T06:27:46.815+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1509
2019-09-04T06:27:46.815+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578464, 1), t: 1 }({ ts: Timestamp(1567578464, 1), t: 1 })
2019-09-04T06:27:46.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:46.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:46.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 84) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:46.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 84 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:27:56.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:46.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:12.836+0000
2019-09-04T06:27:46.836+0000 D2 REPL_HB [replexec-2] Sending heartbeat (requestId: 85) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:46.836+0000 D3 EXECUTOR [replexec-2] Scheduling remote command request: RemoteCommand 85 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:56.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:27:46.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:12.836+0000
2019-09-04T06:27:46.836+0000 D2 ASIO [Replication] Request 84 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), opTime: { ts: Timestamp(1567578464, 1), t: 1 }, wallTime: new Date(1567578464811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578465, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) }
2019-09-04T06:27:46.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), opTime: { ts: Timestamp(1567578464, 1), t: 1 }, wallTime: new Date(1567578464811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578465, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:27:46.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:27:46.836+0000 D2 ASIO [Replication] Request 85 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), opTime: { ts: Timestamp(1567578464, 1), t: 1 }, wallTime: new Date(1567578464811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) }
2019-09-04T06:27:46.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 84) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), opTime: { ts: Timestamp(1567578464, 1), t: 1 }, wallTime: new Date(1567578464811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578465, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) }
2019-09-04T06:27:46.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), opTime: { ts: Timestamp(1567578464, 1), t: 1 }, wallTime: new Date(1567578464811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:46.836+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary
2019-09-04T06:27:46.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:27:46.836+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:27:56.171+0000
2019-09-04T06:27:46.836+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:27:57.885+0000
2019-09-04T06:27:46.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:46.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:16.836+0000
2019-09-04T06:27:46.836+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:27:46.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:48.836Z
2019-09-04T06:27:46.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:16.836+0000
2019-09-04T06:27:46.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 85) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), opTime: { ts: Timestamp(1567578464, 1), t: 1 }, wallTime: new Date(1567578464811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578464, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578464, 1) }
2019-09-04T06:27:46.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:27:46.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:48.836Z
2019-09-04T06:27:46.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:16.836+0000
2019-09-04T06:27:46.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:46.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:46.875+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:46.975+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:47.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:47.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:47.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:47.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:47.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:47.061+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:47.061+0000 D3 REPL [replexec-2] memberData lastupdate is: 2019-09-04T06:27:46.836+0000
2019-09-04T06:27:47.061+0000 D3 REPL [replexec-2] memberData lastupdate is: 2019-09-04T06:27:46.836+0000
2019-09-04T06:27:47.061+0000 D3 REPL [replexec-2] stalest member MemberId(0) date: 2019-09-04T06:27:46.836+0000
2019-09-04T06:27:47.061+0000 D3 REPL [replexec-2] scheduling next check at 2019-09-04T06:27:56.836+0000
2019-09-04T06:27:47.061+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:16.836+0000
2019-09-04T06:27:47.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578465, 2), signature: { hash: BinData(0, 93D17C88A00BAB819F7592A90E6ECC9C52CF2EA1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:47.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:27:47.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578465, 2), signature: { hash: BinData(0, 93D17C88A00BAB819F7592A90E6ECC9C52CF2EA1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:47.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578465, 2), signature: { hash: BinData(0, 93D17C88A00BAB819F7592A90E6ECC9C52CF2EA1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:27:47.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), opTime: { ts: Timestamp(1567578464, 1), t: 1 }, wallTime: new Date(1567578464811) }
2019-09-04T06:27:47.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578465, 2), signature: { hash: BinData(0, 93D17C88A00BAB819F7592A90E6ECC9C52CF2EA1), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:47.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:47.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:47.075+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:47.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:47.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:47.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:47.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:47.175+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:47.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:47.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:47.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:47.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:47.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
Before: 1000000000 Now: 1000000000 2019-09-04T06:27:47.275+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:47.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:47.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:47.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:47.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:47.375+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:47.436+0000 D2 ASIO [RS] Request 80 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578467, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578467433), o: { $v: 1, $set: { ping: new Date(1567578467433) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpApplied: { ts: Timestamp(1567578467, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578467, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 1) } 2019-09-04T06:27:47.436+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578467, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578467433), o: { $v: 1, $set: { ping: new Date(1567578467433) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpApplied: { ts: Timestamp(1567578467, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578464, 1), $clusterTime: { clusterTime: Timestamp(1567578467, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:47.436+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:47.436+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from 
remote oplog starting at ts: Timestamp(1567578467, 1) and ending at ts: Timestamp(1567578467, 1) 2019-09-04T06:27:47.436+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:27:57.885+0000 2019-09-04T06:27:47.437+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:27:57.523+0000 2019-09-04T06:27:47.437+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:47.437+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578467, 1), t: 1 } 2019-09-04T06:27:47.437+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:16.836+0000 2019-09-04T06:27:47.437+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:47.437+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:47.437+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578464, 1) 2019-09-04T06:27:47.437+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1528 2019-09-04T06:27:47.437+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:47.437+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:47.437+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1528 2019-09-04T06:27:47.437+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:47.437+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:27:47.437+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:47.437+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578464, 1) 2019-09-04T06:27:47.437+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578467, 1) } 2019-09-04T06:27:47.437+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1531 2019-09-04T06:27:47.437+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:47.437+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:47.437+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1531 2019-09-04T06:27:47.437+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1510 2019-09-04T06:27:47.437+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1510 2019-09-04T06:27:47.437+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1534 2019-09-04T06:27:47.437+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1534 2019-09-04T06:27:47.437+0000 D3 EXECUTOR [repl-writer-worker-14] 
Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:47.437+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 1536 2019-09-04T06:27:47.437+0000 D4 STORAGE [repl-writer-worker-14] inserting record with timestamp Timestamp(1567578467, 1) 2019-09-04T06:27:47.437+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578467, 1) 2019-09-04T06:27:47.437+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 1536 2019-09-04T06:27:47.437+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:47.437+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:27:47.437+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1535 2019-09-04T06:27:47.437+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1535 2019-09-04T06:27:47.437+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1538 2019-09-04T06:27:47.437+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1538 2019-09-04T06:27:47.437+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578467, 1), t: 1 }({ ts: Timestamp(1567578467, 1), t: 1 }) 2019-09-04T06:27:47.437+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578467, 1) 2019-09-04T06:27:47.437+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1539 2019-09-04T06:27:47.437+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578467, 1) } } ] } sort: {} projection: {} 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578467, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578467, 1) || First: notFirst: full path: ts 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578467, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578467, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578467, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
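The D5 QUERY entries above, which continue just below with the winning COLLSCAN plan, show rsSync-0 planning its read of local.replset.minvalid: the filter { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578467, 1) } } ] } is split into sub-queries, each $or branch is rated against the only index on the collection (_id_), no indexed solution exists for either branch, and the subplanner falls back to a collection scan for each child. The same plan can be reproduced from a mongo shell against this node; this is a minimal sketch only, with the filter values copied from the log entry above (reads from the local database are permitted on a secondary without setting slaveOk):

    // Sketch: replay the minvalid read that rsSync-0 plans in this trace.
    // explain("queryPlanner") prints the chosen plan without executing it.
    var local = db.getSiblingDB("local");
    local.replset.minvalid.find({
      $or: [
        { t: { $lt: 1 } },                               // earlier term, or
        { t: 1, ts: { $lt: Timestamp(1567578467, 1) } }  // same term, earlier ts
      ]
    }).explain("queryPlanner");
    // Expect a collection scan (surfaced under a SUBPLAN stage for the
    // rooted $or), since replset.minvalid carries only the default _id_
    // index and neither branch's predicates on t/ts can use it.
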
2019-09-04T06:27:47.437+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578467, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:47.437+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1539 2019-09-04T06:27:47.437+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:47.438+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:47.438+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578467, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578467433), o: { $v: 1, $set: { ping: new Date(1567578467433) } } }, oplog application mode: Secondary 2019-09-04T06:27:47.438+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578467, 1) 2019-09-04T06:27:47.438+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 1541 2019-09-04T06:27:47.438+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" } 2019-09-04T06:27:47.438+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:27:47.438+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 1541 2019-09-04T06:27:47.438+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:47.438+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578467, 1), t: 1 }({ ts: Timestamp(1567578467, 1), t: 1 }) 2019-09-04T06:27:47.438+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578467, 1) 2019-09-04T06:27:47.438+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1540 2019-09-04T06:27:47.438+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:27:47.438+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:47.438+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:27:47.438+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:47.438+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:47.438+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:27:47.438+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1540 2019-09-04T06:27:47.438+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578467, 1) 2019-09-04T06:27:47.438+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:47.438+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1544 2019-09-04T06:27:47.438+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578467, 1), t: 1 }, appliedWallTime: new Date(1567578467433), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:47.438+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 86 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:17.438+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578467, 1), t: 1 }, appliedWallTime: new Date(1567578467433), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578464, 1), t: 1 }, lastCommittedWall: new Date(1567578464811), lastOpVisible: { ts: Timestamp(1567578464, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:47.438+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:17.438+0000 2019-09-04T06:27:47.438+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1544 2019-09-04T06:27:47.438+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578467, 1), t: 1 }({ ts: Timestamp(1567578467, 1), t: 1 }) 2019-09-04T06:27:47.438+0000 D2 ASIO [RS] Request 86 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 1), $clusterTime: { clusterTime: Timestamp(1567578467, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 1) } 2019-09-04T06:27:47.438+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 1), $clusterTime: { clusterTime: Timestamp(1567578467, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:47.438+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:47.438+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:17.438+0000 2019-09-04T06:27:47.439+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578467, 1), t: 1 } 2019-09-04T06:27:47.439+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 87 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:57.439+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578464, 1), t: 1 } } 2019-09-04T06:27:47.439+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:17.438+0000 2019-09-04T06:27:47.439+0000 D2 ASIO [RS] Request 87 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpApplied: { ts: Timestamp(1567578467, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 1), $clusterTime: { clusterTime: Timestamp(1567578467, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 1) } 2019-09-04T06:27:47.439+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new 
Date(1567578467433), lastOpApplied: { ts: Timestamp(1567578467, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 1), $clusterTime: { clusterTime: Timestamp(1567578467, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:47.439+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:47.439+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:27:47.439+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578462, 1) 2019-09-04T06:27:47.439+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:27:57.523+0000 2019-09-04T06:27:47.439+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:27:58.062+0000 2019-09-04T06:27:47.439+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:47.439+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 88 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:57.439+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578467, 1), t: 1 } } 2019-09-04T06:27:47.439+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:16.836+0000 2019-09-04T06:27:47.439+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:17.438+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn56] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn56] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.963+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn57] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn57] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:03.480+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn41] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn41] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.661+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn65] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn65] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.679+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn67] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn67] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.698+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn79] Got notified of new snapshot: { ts: 
Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn79] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.289+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn64] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn64] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn89] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn89] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:16.417+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn68] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn68] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.730+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn76] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn76] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.092+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn84] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn84] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn27] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn27] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.829+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn24] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn24] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn36] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn36] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.835+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn30] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn30] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn35] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn35] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn87] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.439+0000 D3 REPL [conn87] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.823+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn32] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn32] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:54.152+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn44] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 
2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn44] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:55.060+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn80] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn80] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.307+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn54] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn54] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.764+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn55] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn55] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.926+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn43] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn43] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.753+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn53] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.440+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:27:47.440+0000 D3 REPL [conn53] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.753+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn39] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn39] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.133+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn40] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn40] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.645+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn63] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn63] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn83] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn83] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.431+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn86] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn86] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn62] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn62] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.671+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn82] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn82] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.998+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn85] Got notified of new 
snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn85] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn88] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn88] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn66] Got notified of new snapshot: { ts: Timestamp(1567578467, 1), t: 1 }, 2019-09-04T06:27:47.433+0000 2019-09-04T06:27:47.440+0000 D3 REPL [conn66] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.686+0000 2019-09-04T06:27:47.440+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:47.440+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578467, 1), t: 1 }, durableWallTime: new Date(1567578467433), appliedOpTime: { ts: Timestamp(1567578467, 1), t: 1 }, appliedWallTime: new Date(1567578467433), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:47.440+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 89 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:17.440+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578467, 1), t: 1 }, durableWallTime: new Date(1567578467433), appliedOpTime: { ts: Timestamp(1567578467, 1), t: 1 }, appliedWallTime: new Date(1567578467433), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:47.440+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:17.438+0000 2019-09-04T06:27:47.440+0000 D2 ASIO [RS] Request 89 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: 
Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 1), $clusterTime: { clusterTime: Timestamp(1567578467, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 1) } 2019-09-04T06:27:47.440+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 1), $clusterTime: { clusterTime: Timestamp(1567578467, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:47.440+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:47.440+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:17.438+0000 2019-09-04T06:27:47.475+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:47.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:47.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:47.537+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578467, 1) 2019-09-04T06:27:47.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:47.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:47.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:47.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:47.576+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:47.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:47.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:47.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:47.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:47.676+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:47.679+0000 D2 ASIO [RS] Request 88 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578467, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578467674), o: { $v: 1, $set: { ping: new Date(1567578467668) } 
} } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpApplied: { ts: Timestamp(1567578467, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 1), $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 2) } 2019-09-04T06:27:47.679+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578467, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578467674), o: { $v: 1, $set: { ping: new Date(1567578467668) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpApplied: { ts: Timestamp(1567578467, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 1), $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:47.679+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:47.679+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578467, 2) and ending at ts: Timestamp(1567578467, 2) 2019-09-04T06:27:47.679+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:27:58.062+0000 2019-09-04T06:27:47.679+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:27:58.096+0000 2019-09-04T06:27:47.679+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:47.679+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578467, 2), t: 1 } 2019-09-04T06:27:47.679+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:47.679+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:47.679+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578467, 1) 2019-09-04T06:27:47.679+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1553 2019-09-04T06:27:47.679+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: 
"local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:47.679+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:47.679+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1553 2019-09-04T06:27:47.679+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:47.679+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:27:47.679+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:47.679+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578467, 2) } 2019-09-04T06:27:47.679+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:16.836+0000 2019-09-04T06:27:47.679+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578467, 1) 2019-09-04T06:27:47.679+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1556 2019-09-04T06:27:47.679+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:47.679+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:47.679+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1556 2019-09-04T06:27:47.679+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1546 2019-09-04T06:27:47.679+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1546 2019-09-04T06:27:47.679+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1559 2019-09-04T06:27:47.679+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1559 2019-09-04T06:27:47.679+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:47.679+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 1561 2019-09-04T06:27:47.679+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578467, 2) 2019-09-04T06:27:47.679+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578467, 2) 2019-09-04T06:27:47.679+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 1561 2019-09-04T06:27:47.679+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:47.679+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:27:47.679+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1560 2019-09-04T06:27:47.679+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1560 2019-09-04T06:27:47.679+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1563 2019-09-04T06:27:47.679+0000 D3 STORAGE [rsSync-0] 
WT rollback_transaction for snapshot id 1563 2019-09-04T06:27:47.679+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578467, 2), t: 1 }({ ts: Timestamp(1567578467, 2), t: 1 }) 2019-09-04T06:27:47.679+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578467, 2) 2019-09-04T06:27:47.679+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1564 2019-09-04T06:27:47.679+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578467, 2) } } ] } sort: {} projection: {} 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578467, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578467, 2) || First: notFirst: full path: ts 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578467, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578467, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578467, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578467, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:47.680+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1564 2019-09-04T06:27:47.680+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:47.680+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:47.680+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578467, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578467674), o: { $v: 1, $set: { ping: new Date(1567578467668) } } }, oplog application mode: Secondary 2019-09-04T06:27:47.680+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578467, 2) 2019-09-04T06:27:47.680+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 1566 2019-09-04T06:27:47.680+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" } 2019-09-04T06:27:47.680+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:27:47.680+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 1566 2019-09-04T06:27:47.680+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:47.680+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578467, 2), t: 1 }({ ts: Timestamp(1567578467, 2), t: 1 }) 2019-09-04T06:27:47.680+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578467, 2) 2019-09-04T06:27:47.680+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1565 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:47.680+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:47.680+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:27:47.680+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1565 2019-09-04T06:27:47.680+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578467, 2) 2019-09-04T06:27:47.680+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1570 2019-09-04T06:27:47.680+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1570 2019-09-04T06:27:47.680+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578467, 2), t: 1 }({ ts: Timestamp(1567578467, 2), t: 1 }) 2019-09-04T06:27:47.680+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:47.680+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578467, 1), t: 1 }, durableWallTime: new Date(1567578467433), appliedOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, appliedWallTime: new Date(1567578467674), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:47.680+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 90 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:17.680+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578467, 1), t: 1 }, durableWallTime: new Date(1567578467433), appliedOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, appliedWallTime: new Date(1567578467674), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 2, cfgver: 2 } ], 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:47.680+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:17.680+0000 2019-09-04T06:27:47.681+0000 D2 ASIO [RS] Request 90 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 1), $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 2) } 2019-09-04T06:27:47.681+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 1), $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:47.681+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:47.681+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:17.681+0000 2019-09-04T06:27:47.681+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578467, 2), t: 1 } 2019-09-04T06:27:47.681+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 91 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:57.681+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578467, 1), t: 1 } } 2019-09-04T06:27:47.681+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:17.681+0000 2019-09-04T06:27:47.689+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:27:47.689+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:47.689+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), appliedOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, appliedWallTime: new 
Date(1567578467674), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:47.689+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 92 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:17.689+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), appliedOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, appliedWallTime: new Date(1567578467674), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, durableWallTime: new Date(1567578464811), appliedOpTime: { ts: Timestamp(1567578464, 1), t: 1 }, appliedWallTime: new Date(1567578464811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:47.689+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:17.681+0000 2019-09-04T06:27:47.689+0000 D2 ASIO [RS] Request 92 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 1), $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 2) } 2019-09-04T06:27:47.689+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 1), t: 1 }, lastCommittedWall: new Date(1567578467433), lastOpVisible: { ts: Timestamp(1567578467, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 1), $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:47.689+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:47.689+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date 
is 2019-09-04T06:28:17.681+0000 2019-09-04T06:27:47.690+0000 D2 ASIO [RS] Request 91 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 2), t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpVisible: { ts: Timestamp(1567578467, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578467, 2), t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpApplied: { ts: Timestamp(1567578467, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 2), $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 2) } 2019-09-04T06:27:47.690+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 2), t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpVisible: { ts: Timestamp(1567578467, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578467, 2), t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpApplied: { ts: Timestamp(1567578467, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 2), $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:47.690+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:47.690+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:27:47.690+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578462, 2) 2019-09-04T06:27:47.690+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:27:58.096+0000 2019-09-04T06:27:47.690+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:27:59.029+0000 2019-09-04T06:27:47.690+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:47.690+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 93 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:57.690+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578467, 2), t: 1 } } 2019-09-04T06:27:47.690+0000 D3 REPL [conn41] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 
2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn41] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.661+0000 2019-09-04T06:27:47.690+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:17.681+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn64] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn64] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:27:47.690+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:16.836+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn79] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn79] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.289+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn65] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn65] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.679+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn24] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn24] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn30] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn30] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn87] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn87] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.823+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn44] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn44] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:55.060+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn55] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn55] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.926+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn43] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn43] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.753+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn39] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn39] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.133+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn63] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn63] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn82] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 
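[Annotation] The burst of "Got notified of new snapshot" / "waitUntilOpTime: waiting for a new snapshot" entries above, which continues below for the remaining connections, is the secondary waking up sessions that asked to read at (or wait for) a cluster time it had not yet applied. A minimal pymongo sketch of one way such a wait arises, assuming the hostname and set name from this log are reachable, that no authentication is in the way, and using config.lockpings purely as an illustration:

from pymongo import MongoClient
from pymongo.read_preferences import ReadPreference

# Host and replica set name are taken from this log; connectivity, driver
# version, and the absence of auth are assumptions for this sketch.
client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")

with client.start_session(causal_consistency=True) as session:
    coll = client.get_database(
        "config", read_preference=ReadPreference.SECONDARY_PREFERRED
    ).lockpings
    coll.find_one({}, session=session)   # establishes operationTime on the session
    # A later read in the same session carries afterClusterTime; a secondary
    # that has not yet applied that optime parks the operation until a new
    # snapshot arrives -- the waitUntilOpTime entries in this log.
    coll.find_one({}, session=session)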
2019-09-04T06:27:47.690+0000 D3 REPL [conn82] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.998+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn88] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn88] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn86] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn86] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn56] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn56] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.963+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn68] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn68] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.730+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn84] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn84] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn67] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn67] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.698+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn57] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn57] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:03.480+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn40] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn40] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.645+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn85] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn85] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn36] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn36] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.835+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn27] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn27] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.829+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn83] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.690+0000 D3 REPL [conn83] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.431+0000 2019-09-04T06:27:47.691+0000 D3 REPL [conn76] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.691+0000 D3 REPL 
[conn76] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.092+0000 2019-09-04T06:27:47.691+0000 D3 REPL [conn66] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.691+0000 D3 REPL [conn66] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.686+0000 2019-09-04T06:27:47.691+0000 D3 REPL [conn53] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.691+0000 D3 REPL [conn53] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.753+0000 2019-09-04T06:27:47.691+0000 D3 REPL [conn62] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.691+0000 D3 REPL [conn62] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.671+0000 2019-09-04T06:27:47.691+0000 D3 REPL [conn35] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.691+0000 D3 REPL [conn35] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:27:47.691+0000 D3 REPL [conn32] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.691+0000 D3 REPL [conn32] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:54.152+0000 2019-09-04T06:27:47.691+0000 D3 REPL [conn54] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.691+0000 D3 REPL [conn54] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.764+0000 2019-09-04T06:27:47.691+0000 D3 REPL [conn80] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.691+0000 D3 REPL [conn80] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.307+0000 2019-09-04T06:27:47.691+0000 D3 REPL [conn89] Got notified of new snapshot: { ts: Timestamp(1567578467, 2), t: 1 }, 2019-09-04T06:27:47.674+0000 2019-09-04T06:27:47.691+0000 D3 REPL [conn89] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:16.417+0000 2019-09-04T06:27:47.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:47.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:47.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:47.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:47.776+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:47.779+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578467, 2) 2019-09-04T06:27:47.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:47.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:47.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:47.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
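[Annotation] The steady { isMaster: 1 } traffic above (one command per connection roughly every 500 ms, always reslen:907) is driver and mongos topology monitoring, not application load. A hedged way to issue the same probe by hand; the directConnection flag requires a reasonably recent pymongo and is an assumption here:

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
reply = client.admin.command("isMaster")  # legacy spelling, as in this 4.2 log; "hello" on newer servers
print(reply.get("ismaster"), reply.get("secondary"), reply.get("setName"))

On a healthy secondary of this set the reply would show ismaster=False, secondary=True, setName="configrs", consistent with the 907-byte responses logged above.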
2019-09-04T06:27:47.876+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:47.976+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:48.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:48.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.076+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:48.130+0000 D2 COMMAND [conn20] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.130+0000 I COMMAND [conn20] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.176+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:48.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 39CA4ECD9822B8750EA957C2F8D1BDF1ADA59AE9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:48.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:27:48.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 39CA4ECD9822B8750EA957C2F8D1BDF1ADA59AE9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:48.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: 
"configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 39CA4ECD9822B8750EA957C2F8D1BDF1ADA59AE9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:48.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), opTime: { ts: Timestamp(1567578467, 2), t: 1 }, wallTime: new Date(1567578467674) } 2019-09-04T06:27:48.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 39CA4ECD9822B8750EA957C2F8D1BDF1ADA59AE9), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:48.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:27:48.277+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:48.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.377+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:48.477+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:48.506+0000 D2 COMMAND [conn16] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:27:48.506+0000 I COMMAND [conn16] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:27:48.506+0000 D2 COMMAND [conn16] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:27:48.506+0000 I COMMAND [conn16] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:27:48.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.577+0000 D4 
STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:48.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.677+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:48.679+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:48.679+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:48.679+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578467, 2) 2019-09-04T06:27:48.679+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1596 2019-09-04T06:27:48.679+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:48.679+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:48.679+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1596 2019-09-04T06:27:48.680+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1599 2019-09-04T06:27:48.680+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1599 2019-09-04T06:27:48.680+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578467, 2), t: 1 }({ ts: Timestamp(1567578467, 2), t: 1 }) 2019-09-04T06:27:48.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.777+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:48.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:48.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:48.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:48.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:48.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 94) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:48.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 94 -- target:[cmodb802.togewa.com:27019] 
db:admin expDate:2019-09-04T06:27:58.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:48.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:16.836+0000 2019-09-04T06:27:48.836+0000 D2 REPL_HB [replexec-2] Sending heartbeat (requestId: 95) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:48.836+0000 D3 EXECUTOR [replexec-2] Scheduling remote command request: RemoteCommand 95 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:27:58.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:48.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:16.836+0000 2019-09-04T06:27:48.836+0000 D2 ASIO [Replication] Request 94 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), opTime: { ts: Timestamp(1567578467, 2), t: 1 }, wallTime: new Date(1567578467674), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 2), t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpVisible: { ts: Timestamp(1567578467, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578467, 2), $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 2) } 2019-09-04T06:27:48.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), opTime: { ts: Timestamp(1567578467, 2), t: 1 }, wallTime: new Date(1567578467674), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 2), t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpVisible: { ts: Timestamp(1567578467, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578467, 2), $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:27:48.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:48.836+0000 D2 ASIO [Replication] Request 95 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), opTime: { ts: Timestamp(1567578467, 2), t: 1 }, wallTime: new Date(1567578467674), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 2), 
t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpVisible: { ts: Timestamp(1567578467, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 2), $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 2) } 2019-09-04T06:27:48.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 94) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), opTime: { ts: Timestamp(1567578467, 2), t: 1 }, wallTime: new Date(1567578467674), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 2), t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpVisible: { ts: Timestamp(1567578467, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578467, 2), $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 2) } 2019-09-04T06:27:48.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), opTime: { ts: Timestamp(1567578467, 2), t: 1 }, wallTime: new Date(1567578467674), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 2), t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpVisible: { ts: Timestamp(1567578467, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 2), $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:48.836+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:27:48.836+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:27:59.029+0000 2019-09-04T06:27:48.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:48.836+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:28:00.160+0000 2019-09-04T06:27:48.836+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:27:48.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:50.836Z 2019-09-04T06:27:48.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:48.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:18.836+0000 
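[Annotation] Requests 94 and 95 and their responses above are the routine replSetHeartbeat exchange: the node hears from the primary (state: 1) and a fellow secondary (state: 2), then postpones its election timeout because the primary is alive. The same member states and optimes can be read from outside with replSetGetStatus, and the rescheduled timeout corresponds to settings.electionTimeoutMillis in the replica set config. A sketch under the usual connectivity/auth assumptions:

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)

status = client.admin.command("replSetGetStatus")
for m in status["members"]:
    print(m["name"], m["stateStr"], m["optime"]["ts"])

cfg = client.admin.command("replSetGetConfig")["config"]
# Default is 10000 ms; the ~10-11 s horizons on the "Scheduling election
# timeout callback" entries above are this value plus a randomized offset.
print(cfg["settings"]["electionTimeoutMillis"])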
2019-09-04T06:27:48.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:18.836+0000 2019-09-04T06:27:48.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 95) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), opTime: { ts: Timestamp(1567578467, 2), t: 1 }, wallTime: new Date(1567578467674), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 2), t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpVisible: { ts: Timestamp(1567578467, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 2), $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 2) } 2019-09-04T06:27:48.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:27:48.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:50.836Z 2019-09-04T06:27:48.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:18.836+0000 2019-09-04T06:27:48.877+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:48.978+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:49.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:49.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:49.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:49.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:49.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:49.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 39CA4ECD9822B8750EA957C2F8D1BDF1ADA59AE9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:49.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:27:49.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 39CA4ECD9822B8750EA957C2F8D1BDF1ADA59AE9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:49.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 
39CA4ECD9822B8750EA957C2F8D1BDF1ADA59AE9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:49.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), opTime: { ts: Timestamp(1567578467, 2), t: 1 }, wallTime: new Date(1567578467674) } 2019-09-04T06:27:49.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578467, 2), signature: { hash: BinData(0, 39CA4ECD9822B8750EA957C2F8D1BDF1ADA59AE9), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:49.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:49.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:49.078+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:49.130+0000 D2 ASIO [RS] Request 93 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578469, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578469129), o: { $v: 1, $set: { ping: new Date(1567578469126) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 2), t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpVisible: { ts: Timestamp(1567578467, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578467, 2), t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpApplied: { ts: Timestamp(1567578469, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 2), $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } 2019-09-04T06:27:49.130+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578469, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578469129), o: { $v: 1, $set: { ping: new Date(1567578469126) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 2), t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpVisible: { ts: Timestamp(1567578467, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578467, 2), t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpApplied: { ts: Timestamp(1567578469, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 
}, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 2), $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:49.131+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:49.131+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578469, 1) and ending at ts: Timestamp(1567578469, 1) 2019-09-04T06:27:49.131+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:00.160+0000 2019-09-04T06:27:49.131+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:27:59.898+0000 2019-09-04T06:27:49.131+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:49.131+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:18.836+0000 2019-09-04T06:27:49.131+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578469, 1), t: 1 } 2019-09-04T06:27:49.131+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:49.131+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:49.131+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578467, 2) 2019-09-04T06:27:49.131+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1610 2019-09-04T06:27:49.131+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:49.131+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:49.131+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1610 2019-09-04T06:27:49.131+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:49.131+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:27:49.131+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:49.131+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578467, 2) 2019-09-04T06:27:49.131+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578469, 1) } 2019-09-04T06:27:49.131+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1613 2019-09-04T06:27:49.131+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:49.131+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], 
prefix: -1 } 2019-09-04T06:27:49.131+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1613 2019-09-04T06:27:49.131+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1600 2019-09-04T06:27:49.131+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1600 2019-09-04T06:27:49.131+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1616 2019-09-04T06:27:49.131+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1616 2019-09-04T06:27:49.131+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:49.131+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 1618 2019-09-04T06:27:49.131+0000 D4 STORAGE [repl-writer-worker-4] inserting record with timestamp Timestamp(1567578469, 1) 2019-09-04T06:27:49.131+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578469, 1) 2019-09-04T06:27:49.131+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 1618 2019-09-04T06:27:49.131+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:49.131+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:27:49.131+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1617 2019-09-04T06:27:49.131+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1617 2019-09-04T06:27:49.131+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1620 2019-09-04T06:27:49.131+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1620 2019-09-04T06:27:49.131+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578469, 1), t: 1 }({ ts: Timestamp(1567578469, 1), t: 1 }) 2019-09-04T06:27:49.131+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578469, 1) 2019-09-04T06:27:49.131+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1621 2019-09-04T06:27:49.131+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578469, 1) } } ] } sort: {} projection: {} 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578469, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578469, 1) || First: notFirst: full path: ts 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
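[Annotation] The "Running query as sub-queries" entry above shows how the server expresses "optime (t, ts) is older than X": a strictly lower term, or the same term with a lower timestamp; the planner trace for each $or branch continues below. A hedged re-creation of the same read against local.replset.minvalid (a single-document collection, so a COLLSCAN is expected); direct, unauthenticated access to the local database is an assumption of this sketch:

from pymongo import MongoClient
from bson.timestamp import Timestamp

client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
minvalid = client.local["replset.minvalid"]

print(minvalid.find_one())  # the single minvalid document: { ts, t, ... }

# The same $or shape the subplanner logs: strictly older term, or the
# same term and a strictly older timestamp.
older = {"$or": [{"t": {"$lt": 1}},
                 {"t": 1, "ts": {"$lt": Timestamp(1567578469, 1)}}]}
print(minvalid.count_documents(older))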
2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578469, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578469, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578469, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
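[Annotation] Each branch above ends the same way, "Planner: outputted 0 indexed solutions", and the final collection scan for the combined $or is printed just below. That outcome is expected: local.replset.minvalid carries only the _id index, and every predicate is on t or ts, so no indexed plan exists. The same decision can be surfaced with the explain command; a sketch under the same access assumptions as the previous block:

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
plan = client.local.command(
    "explain", {"find": "replset.minvalid", "filter": {"t": {"$lt": 1}}}
)
print(plan["queryPlanner"]["winningPlan"]["stage"])  # expected: COLLSCAN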
2019-09-04T06:27:49.131+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578469, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:49.132+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1621 2019-09-04T06:27:49.132+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:49.132+0000 D3 STORAGE [repl-writer-worker-6] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:49.132+0000 D3 REPL [repl-writer-worker-6] applying op: { ts: Timestamp(1567578469, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578469129), o: { $v: 1, $set: { ping: new Date(1567578469126) } } }, oplog application mode: Secondary 2019-09-04T06:27:49.132+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578469, 1) 2019-09-04T06:27:49.132+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 1623 2019-09-04T06:27:49.132+0000 D2 QUERY [repl-writer-worker-6] Using idhack: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" } 2019-09-04T06:27:49.132+0000 D4 WRITE [repl-writer-worker-6] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:27:49.132+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 1623 2019-09-04T06:27:49.132+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:49.132+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578469, 1), t: 1 }({ ts: Timestamp(1567578469, 1), t: 1 }) 2019-09-04T06:27:49.132+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578469, 1) 2019-09-04T06:27:49.132+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1622 2019-09-04T06:27:49.132+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:27:49.132+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:49.132+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:27:49.132+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:49.132+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:49.132+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:27:49.132+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1622
2019-09-04T06:27:49.132+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578469, 1)
2019-09-04T06:27:49.132+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1626
2019-09-04T06:27:49.132+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1626
2019-09-04T06:27:49.132+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578469, 1), t: 1 }({ ts: Timestamp(1567578469, 1), t: 1 })
2019-09-04T06:27:49.132+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:27:49.132+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), appliedOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, appliedWallTime: new Date(1567578467674), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new Date(1567578469129), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), appliedOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, appliedWallTime: new Date(1567578467674), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 2), t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpVisible: { ts: Timestamp(1567578467, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:49.132+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 96 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:19.132+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), appliedOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, appliedWallTime: new Date(1567578467674), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new Date(1567578469129), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), appliedOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, appliedWallTime: new Date(1567578467674), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 2), t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpVisible: { ts: Timestamp(1567578467, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:49.132+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:19.132+0000
2019-09-04T06:27:49.132+0000 D2 ASIO [RS] Request 96 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 2), t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpVisible: { ts: Timestamp(1567578467, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 2), $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) }
2019-09-04T06:27:49.132+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578467, 2), t: 1 }, lastCommittedWall: new Date(1567578467674), lastOpVisible: { ts: Timestamp(1567578467, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 2), $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:49.132+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:27:49.132+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:19.132+0000
2019-09-04T06:27:49.133+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578469, 1), t: 1 }
2019-09-04T06:27:49.133+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 97 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:59.133+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578467, 2), t: 1 } }
2019-09-04T06:27:49.133+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:19.132+0000
2019-09-04T06:27:49.136+0000 D2 ASIO [RS] Request 97 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpApplied: { ts: Timestamp(1567578469, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) }
2019-09-04T06:27:49.136+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpApplied: { ts: Timestamp(1567578469, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:49.136+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:27:49.136+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:27:49.136+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.136+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.136+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578464, 1)
2019-09-04T06:27:49.136+0000 D3 REPL [conn87] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn87] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.823+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn79] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn79] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.289+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn44] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn44] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:55.060+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn39] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn39] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.133+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn63] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn63] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn67] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn67] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.698+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn84] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn84] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn66] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn66] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.686+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn85] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn85] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn27] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn27] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.829+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn76] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn76] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.092+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn88] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn88] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000
2019-09-04T06:27:49.136+0000 D3 REPL [conn35] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn35] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn53] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn53] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.753+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn54] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn54] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.764+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn56] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn56] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.963+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn43] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn43] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.753+0000
2019-09-04T06:27:49.137+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:27:49.137+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:27:59.898+0000
2019-09-04T06:27:49.137+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:27:59.542+0000
2019-09-04T06:27:49.137+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:27:49.137+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:18.836+0000
2019-09-04T06:27:49.137+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 98 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:27:59.137+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578469, 1), t: 1 } }
2019-09-04T06:27:49.137+0000 D3 REPL [conn64] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn64] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000
2019-09-04T06:27:49.137+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:19.132+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn89] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn89] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:16.417+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn65] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn65] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.679+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn55] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn55] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.926+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn41] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn41] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.661+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn82] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn82] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.998+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn86] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn86] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn68] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn68] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.730+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn24] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn24] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn40] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn40] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:51.645+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn36] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn36] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.835+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn83] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn83] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.431+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn57] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn57] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:03.480+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn62] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn62] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.671+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn32] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn32] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:54.152+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn80] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn80] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.307+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn30] Got notified of new snapshot: { ts: Timestamp(1567578469, 1), t: 1 }, 2019-09-04T06:27:49.129+0000
2019-09-04T06:27:49.137+0000 D3 REPL [conn30] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000
2019-09-04T06:27:49.137+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:27:49.137+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), appliedOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, appliedWallTime: new Date(1567578467674), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new Date(1567578469129), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), appliedOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, appliedWallTime: new Date(1567578467674), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:49.137+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 99 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:19.137+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), appliedOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, appliedWallTime: new Date(1567578467674), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new Date(1567578469129), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, durableWallTime: new Date(1567578467674), appliedOpTime: { ts: Timestamp(1567578467, 2), t: 1 }, appliedWallTime: new Date(1567578467674), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:27:49.137+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:19.132+0000
2019-09-04T06:27:49.137+0000 D2 ASIO [RS] Request 99 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) }
2019-09-04T06:27:49.138+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:27:49.138+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:27:49.138+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:19.132+0000
2019-09-04T06:27:49.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:49.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:49.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:49.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:49.178+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:49.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:49.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:49.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:49.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:49.231+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578469, 1)
2019-09-04T06:27:49.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:27:49.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:27:49.278+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:49.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:49.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:49.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions.
2019-09-04T06:27:49.351+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578469351) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }
2019-09-04T06:27:49.351+0000 D4 - [replSetDistLockPinger] Taking ticket. Available: 1000000000
2019-09-04T06:27:49.351+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178
2019-09-04T06:27:49.351+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:27:49.369+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry
2019-09-04T06:27:49.369+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:27:49.369+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" }
2019-09-04T06:27:49.369+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:49.369+0000 D5 QUERY [shard-registry-reload] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:27:49.369+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:27:49.369+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:27:49.369+0000 D5 QUERY [shard-registry-reload] Rated tree: $and
2019-09-04T06:27:49.369+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:49.369+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:27:49.369+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:27:49.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578469, 1)
2019-09-04T06:27:49.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 1638
2019-09-04T06:27:49.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 1638
2019-09-04T06:27:49.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:646 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:27:49.370+0000 D1 SHARDING [shard-registry-reload] found 3 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578469, 1), t: 1 }
2019-09-04T06:27:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018
2019-09-04T06:27:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018
2019-09-04T06:27:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018
2019-09-04T06:27:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018
2019-09-04T06:27:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018
2019-09-04T06:27:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018
2019-09-04T06:27:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS
2019-09-04T06:27:49.373+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : 
"EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : 
"9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xADB043) [0x561749a63043] mongod(+0x13B2606) [0x56174a33a606] mongod(+0x13B3A55) [0x56174a33ba55] mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894] mongod(+0x10FA899) [0x56174a082899] mongod(+0x10FBF53) [0x56174a083f53] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(+0x1FBD2EE) [0x56174af452ee] mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa] mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2] mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b] mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e] mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc] mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1] mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:27:49.373+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. 
2019-09-04T06:27:49.373+0000 D4 STORAGE [replSetDistLockPinger] flushed journal
2019-09-04T06:27:49.373+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578469351) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:27:49.374+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578469351) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 22ms
2019-09-04T06:27:49.378+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:49.382+0000 D1 EXECUTOR [ConfigServerCatalogCacheLoader-0] Reaping this thread; next thread reaped no earlier than 2019-09-04T06:28:19.382+0000
2019-09-04T06:27:49.382+0000 D1 EXECUTOR [ConfigServerCatalogCacheLoader-0] shutting down thread in pool ConfigServerCatalogCacheLoader
2019-09-04T06:27:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001
2019-09-04T06:27:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 100 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:27:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:27:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 101 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:27:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:27:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000
2019-09-04T06:27:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 102 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:27:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:27:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 103 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:27:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:27:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002
2019-09-04T06:27:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 104 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:27:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:27:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 105 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:27:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:27:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 100 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578467, 1), t: 1 }, lastWriteDate: new Date(1567578467000), majorityOpTime: { ts: Timestamp(1567578467, 1), t: 1 }, majorityWriteDate: new Date(1567578467000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578469386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578467, 1), $configServerState: { opTime: { ts: Timestamp(1567578464, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578467, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 1) }
2019-09-04T06:27:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578467, 1), t: 1 }, lastWriteDate: new Date(1567578467000), majorityOpTime: { ts: Timestamp(1567578467, 1), t: 1 }, majorityWriteDate: new Date(1567578467000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578469386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578467, 1), $configServerState: { opTime: { ts: Timestamp(1567578464, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578467, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 1) } target: cmodb808.togewa.com:27018
2019-09-04T06:27:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 101 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578467, 1), t: 1 }, lastWriteDate: new Date(1567578467000), majorityOpTime: { ts: Timestamp(1567578467, 1), t: 1 }, majorityWriteDate: new Date(1567578467000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578469386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 1), $configServerState: { opTime: { ts: Timestamp(1567578449, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578467, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 1) }
2019-09-04T06:27:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578467, 1), t: 1 }, lastWriteDate: new Date(1567578467000), majorityOpTime: { ts: Timestamp(1567578467, 1), t: 1 }, majorityWriteDate: new Date(1567578467000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578469386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578467, 1), $configServerState: { opTime: { ts: Timestamp(1567578449, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578467, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578467, 1) } target: cmodb809.togewa.com:27018
2019-09-04T06:27:49.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms
2019-09-04T06:27:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 104 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578459, 1), t: 1 }, lastWriteDate: new Date(1567578459000), majorityOpTime: { ts: Timestamp(1567578459, 1), t: 1 }, majorityWriteDate: new Date(1567578459000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578469386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578459, 1), $configServerState: { opTime: { ts: Timestamp(1567578464, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578465, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578459, 1) }
2019-09-04T06:27:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578459, 1), t: 1 }, lastWriteDate: new Date(1567578459000), majorityOpTime: { ts: Timestamp(1567578459, 1), t: 1 }, majorityWriteDate: new Date(1567578459000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578469386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578459, 1), $configServerState: { opTime: { ts: Timestamp(1567578464, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578465, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578459, 1) } target: cmodb810.togewa.com:27018
majorityOpTime: { ts: Timestamp(1567578459, 1), t: 1 }, majorityWriteDate: new Date(1567578459000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578469386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578459, 1), $configServerState: { opTime: { ts: Timestamp(1567578464, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578465, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578459, 1) } target: cmodb810.togewa.com:27018
2019-09-04T06:27:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 102 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578465, 2), t: 1 }, lastWriteDate: new Date(1567578465000), majorityOpTime: { ts: Timestamp(1567578465, 2), t: 1 }, majorityWriteDate: new Date(1567578465000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578469385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578465, 2), $configServerState: { opTime: { ts: Timestamp(1567578464, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578465, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578465, 2) }
2019-09-04T06:27:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578465, 2), t: 1 }, lastWriteDate: new Date(1567578465000), majorityOpTime: { ts: Timestamp(1567578465, 2), t: 1 }, majorityWriteDate: new Date(1567578465000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578469385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578465, 2), $configServerState: { opTime: { ts: Timestamp(1567578464, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578465, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578465, 2) } target: cmodb806.togewa.com:27018
2019-09-04T06:27:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 103 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578465, 2), t: 1 }, lastWriteDate: new Date(1567578465000), majorityOpTime: { ts: Timestamp(1567578465, 2), t: 1 }, majorityWriteDate: new Date(1567578465000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578469386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578465, 2), $configServerState: { opTime: { ts: Timestamp(1567578458, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578465, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578465, 2) }
2019-09-04T06:27:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578465, 2), t: 1 }, lastWriteDate: new Date(1567578465000), majorityOpTime: { ts: Timestamp(1567578465, 2), t: 1 }, majorityWriteDate: new Date(1567578465000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578469386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578465, 2), $configServerState: { opTime: { ts: Timestamp(1567578458, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578465, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578465, 2) } target: cmodb807.togewa.com:27018
2019-09-04T06:27:49.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms
2019-09-04T06:27:49.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 105 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578459, 1), t: 1 }, lastWriteDate: new Date(1567578459000), majorityOpTime: { ts: Timestamp(1567578459, 1), t: 1 }, majorityWriteDate: new Date(1567578459000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578469386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578459, 1), $configServerState: { opTime: { ts: Timestamp(1567578464, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578465, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578459, 1) }
2019-09-04T06:27:49.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578459, 1), t: 1 }, lastWriteDate: new Date(1567578459000), majorityOpTime: { ts: Timestamp(1567578459, 1), t: 1 }, majorityWriteDate: new Date(1567578459000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578469386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578459, 1), $configServerState: { opTime: { ts: Timestamp(1567578464, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578465, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578459, 1) } target: cmodb811.togewa.com:27018
2019-09-04T06:27:49.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms
2019-09-04T06:27:49.478+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:49.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:49.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:49.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:49.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:49.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:49.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:49.579+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:49.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:49.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:49.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:49.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:49.679+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:49.692+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:49.692+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
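
The bursts of isMaster commands on conn31/51/52/58/59/60/75 and the ReplicaSetMonitor-TaskExecutor requests above are routine topology polling: the server's replica-set monitor re-probes each shard member and records the setName, primary, hosts, and arbiters fields from every response. A minimal pymongo sketch of the same probe, assuming only that pymongo is installed and the shard members named in the log are reachable (isMaster is this 4.2 deployment's name for what later servers call hello):

    # Replay the topology probe the ReplicaSetMonitor issues above.
    # Assumes network access to cmodb806; isMaster needs no authentication.
    from pymongo import MongoClient

    client = MongoClient("cmodb806.togewa.com", 27018,
                         directConnection=True,
                         serverSelectionTimeoutMS=5000)
    reply = client.admin.command("isMaster")
    # The same fields recorded from Request 102/103 above.
    print(reply["setName"], reply["ismaster"], reply["primary"])
    print("hosts:", reply["hosts"], "arbiters:", reply.get("arbiters", []))
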
2019-09-04T06:27:49.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:49.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:49.779+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:49.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:49.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:49.879+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:49.979+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:50.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:27:50.004+0000 D2 COMMAND [conn16] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:27:50.004+0000 D1 ACCESS [conn16] Returning user dba_root@admin from cache
2019-09-04T06:27:50.004+0000 I COMMAND [conn16] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms
2019-09-04T06:27:50.013+0000 D2 COMMAND [conn16] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:27:50.013+0000 I COMMAND [conn16] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:27:50.013+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:50.014+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:50.017+0000 D2 COMMAND [conn16] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:27:50.018+0000 D1 ACCESS [conn16] Returning user dba_root@admin from cache
2019-09-04T06:27:50.018+0000 I ACCESS [conn16] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45438
2019-09-04T06:27:50.018+0000 I COMMAND [conn16] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:27:50.036+0000 D2 COMMAND [conn16] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:27:50.036+0000 I COMMAND [conn16] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:50.042+0000 D2 COMMAND [conn16] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:27:50.042+0000 D2 COMMAND [conn16] command: replSetGetStatus
2019-09-04T06:27:50.042+0000 I COMMAND [conn16] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:27:50.042+0000 D2 COMMAND [conn16] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:27:50.042+0000 D3 STORAGE [conn16] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:50.042+0000 D5 QUERY [conn16] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunksTree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:50.042+0000 D5 QUERY [conn16] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:27:50.042+0000 D5 QUERY [conn16] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:27:50.042+0000 D5 QUERY [conn16] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:27:50.042+0000 D5 QUERY [conn16] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:27:50.042+0000 D5 QUERY [conn16] Predicate over field 'jumbo'
2019-09-04T06:27:50.042+0000 D5 QUERY [conn16] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:27:50.042+0000 D5 QUERY [conn16] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:50.042+0000 D5 QUERY [conn16] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:50.042+0000 D2 QUERY [conn16] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:27:50.042+0000 D3 STORAGE [conn16] begin_transaction on local snapshot Timestamp(1567578469, 1)
2019-09-04T06:27:50.042+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1655
2019-09-04T06:27:50.042+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1655
2019-09-04T06:27:50.042+0000 I COMMAND [conn16] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:50.042+0000 D2 COMMAND [conn16] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:27:50.042+0000 I COMMAND [conn16] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:27:50.042+0000 D2 COMMAND [conn16] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:50.043+0000 D5 QUERY [conn16] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:27:50.043+0000 D5 QUERY [conn16] Forcing a table scan due to hinted $natural
2019-09-04T06:27:50.043+0000 D2 QUERY [conn16] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] begin_transaction on local snapshot Timestamp(1567578469, 1)
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1658
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1658
2019-09-04T06:27:50.043+0000 I COMMAND [conn16] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:50.043+0000 D2 COMMAND [conn16] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:27:50.043+0000 D5 QUERY [conn16] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:27:50.043+0000 D5 QUERY [conn16] Forcing a table scan due to hinted $natural
2019-09-04T06:27:50.043+0000 D2 QUERY [conn16] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] begin_transaction on local snapshot Timestamp(1567578469, 1)
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1660
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1660
2019-09-04T06:27:50.043+0000 I COMMAND [conn16] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:27:50.043+0000 D2 COMMAND [conn16] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:27:50.043+0000 D2 QUERY [conn16] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:27:50.043+0000 I COMMAND [conn16] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2019-09-04T06:27:50.043+0000 D2 COMMAND [conn16] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:27:50.043+0000 D2 COMMAND [conn16] command: listDatabases
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1663
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1663
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1664
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.043+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1664
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1665
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1665
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1666
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1666
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1667
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1667
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1668
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
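
Everything on conn16 from the saslStart at 06:27:50.004 onward is one monitoring pass from client 10.108.2.33: SCRAM-SHA-1 authentication as dba_root (payloads redacted as "xxx"), then serverStatus, replSetGetStatus, a count of jumbo chunks in config.chunks (a COLLSCAN, since none of the four indexes rated by the planner covers the jumbo field), shardConnPoolStats, and single-document $natural-sorted finds against both ends of the oplog. A hedged pymongo sketch of that sequence; the user name and commands come from the log, while the host and password are placeholders:

    # Sketch of the monitoring pass conn16 performs above. HOST and the
    # password are placeholders; dba_root and the commands are from the log.
    from pymongo import MongoClient

    HOST, PORT = "configsvr.example.net", 27019
    client = MongoClient(host=HOST, port=PORT,
                         username="dba_root", password="***",
                         authSource="admin", authMechanism="SCRAM-SHA-1",
                         readPreference="secondaryPreferred")

    status = client.admin.command("serverStatus")   # reslen:35129 above
    rs = client.admin.command("replSetGetStatus")
    # No index covers {jumbo: 1}, hence the COLLSCAN the planner chose.
    jumbo = client.config.chunks.count_documents({"jumbo": True})
    # Oldest and newest oplog entries; $natural forces the table scan
    # noted by "Forcing a table scan due to hinted $natural".
    oplog = client.local["oplog.rs"]
    first = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
    last = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])
    print(rs["set"], jumbo, first["ts"], last["ts"])

The listDatabases command that follows is answered entirely from the catalog: the repeated "looking up metadata ... fetched CCE metadata" entries continuing below are the server walking every collection in admin, config, and local to size each database.
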
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1668
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1669
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1669
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1670
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1670
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1671
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1671
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1672
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1672
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1673
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1673
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1674
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1674
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1675
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1675
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1676
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1676
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1677
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:27:50.044+0000 D3 STORAGE [conn16] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1677
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1678
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1678
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1679
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1679
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1680
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1680
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1681
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1681
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1682
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1682
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1683
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1683
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1684
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
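
Each collection visited in the sweep above costs one short WiredTiger read transaction: the paired "WT begin_transaction"/"WT rollback_transaction" lines are expected, since read-only transactions are rolled back rather than committed. From a client the whole sweep, which the listDatabases completion line just below closes out, collapses into one command. A sketch, reusing the authenticated client from the earlier snippet:

    # The entire catalog sweep above serves this one call.
    dbs = client.admin.command("listDatabases")
    for d in dbs["databases"]:
        print(d["name"], d["sizeOnDisk"], d["empty"])
    # Shortcut when only the names are needed:
    print(client.list_database_names())
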
2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1684 2019-09-04T06:27:50.045+0000 I COMMAND [conn16] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:27:50.045+0000 D2 COMMAND [conn16] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1686 2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1686 2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1687 2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1687 2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1688 2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1688 2019-09-04T06:27:50.045+0000 I COMMAND [conn16] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:27:50.045+0000 D2 COMMAND [conn16] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1690 2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1690 2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1691 2019-09-04T06:27:50.045+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1691 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1692 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1692 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1693 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1693 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1694 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1694 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1695 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1695 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1696 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1696 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1697 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT rollback_transaction for 
snapshot id 1697 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1698 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1698 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1699 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1699 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1700 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1700 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1701 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1701 2019-09-04T06:27:50.046+0000 I COMMAND [conn16] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:27:50.046+0000 D2 COMMAND [conn16] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1703 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1703 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1704 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1704 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1705 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1705 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1706 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1706 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1707 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1707 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT begin_transaction for snapshot id 1708 2019-09-04T06:27:50.046+0000 D3 STORAGE [conn16] WT rollback_transaction for snapshot id 1708 2019-09-04T06:27:50.046+0000 I COMMAND [conn16] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:27:50.048+0000 I NETWORK [listener] connection accepted from 10.108.2.33:45456 #90 (78 connections now open) 2019-09-04T06:27:50.048+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:50.048+0000 D2 COMMAND [conn90] run command admin.$cmd { getnonce: 1, $db: "admin" } 2019-09-04T06:27:50.048+0000 I COMMAND [conn90] command admin.$cmd command: getnonce { getnonce: 1, $db: "admin" } numYields:0 reslen:295 locks:{} protocol:op_query 0ms 2019-09-04T06:27:50.048+0000 D2 COMMAND 
[conn90] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:27:50.048+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:27:50.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:50.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:50.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:50.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:50.079+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:50.131+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:50.131+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:50.131+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578469, 1) 2019-09-04T06:27:50.131+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1714 2019-09-04T06:27:50.131+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:50.131+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:50.131+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1714 2019-09-04T06:27:50.132+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1717 2019-09-04T06:27:50.132+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1717 2019-09-04T06:27:50.132+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578469, 1), t: 1 }({ ts: Timestamp(1567578469, 1), t: 1 }) 2019-09-04T06:27:50.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:50.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:50.161+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:50.161+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:50.179+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:50.192+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:50.192+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:50.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:50.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
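
The bursts of isMaster on conn13/22/31/51/52/58/59/60/75, repeating roughly every 500 ms per connection with identical 907-byte replies, are server-discovery polling from the mongos routers and peer members rather than application traffic; the reply advertises this node's current view of the replica set. A minimal pymongo sketch of the same probe (host and port taken from the log; the driver choice is an assumption):

```python
# Issue the same topology probe the monitoring connections above repeat.
# (isMaster was renamed "hello" in later server releases.)
from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019)
reply = client.admin.command("isMaster")
print(reply.get("setName"),                  # "configrs"
      "ismaster:", reply.get("ismaster"),    # False on this node
      "secondary:", reply.get("secondary"),
      "primary:", reply.get("primary"))
```
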
2019-09-04T06:27:50.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 2877DB90CCE0ECC5A54FB29DFF710CC575234AF4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:50.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:27:50.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 2877DB90CCE0ECC5A54FB29DFF710CC575234AF4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:50.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 2877DB90CCE0ECC5A54FB29DFF710CC575234AF4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:50.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), opTime: { ts: Timestamp(1567578469, 1), t: 1 }, wallTime: new Date(1567578469129) } 2019-09-04T06:27:50.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 2877DB90CCE0ECC5A54FB29DFF710CC575234AF4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:50.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:50.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 999999999 Now: 1000000000 2019-09-04T06:27:50.280+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:50.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:50.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:50.380+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:50.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:50.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:50.480+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:50.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:50.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:50.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:50.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:50.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:50.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:50.580+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:50.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:50.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:50.661+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:50.661+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:50.680+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:50.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:50.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:50.780+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:50.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:50.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:50.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:50.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 106) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:50.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 106 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:00.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 
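
Requests 106 and 107 are the outbound half of the replication heartbeat protocol: every 2 seconds (the response handlers below reschedule the next round for 06:27:52.836Z) this member sends replSetHeartbeat to cmodb802 and cmodb804, and a healthy reply from the primary postpones the local election timeout. A pymongo sketch for inspecting the state those heartbeats maintain, via the replSetGetStatus command (connection details as above are assumptions):

```python
# replSetGetStatus summarizes, per member, what the heartbeat exchange above
# tracks: state, last heartbeat times, and round-trip ping.
from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019)
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    print(member["name"], member["stateStr"],
          "lastHeartbeat:", member.get("lastHeartbeat"),  # absent for self
          "pingMs:", member.get("pingMs"))
```
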
2019-09-04T06:27:50.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:50.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:18.836+0000 2019-09-04T06:27:50.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 107) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:50.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 107 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:00.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:50.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:18.836+0000 2019-09-04T06:27:50.836+0000 D2 ASIO [Replication] Request 106 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), opTime: { ts: Timestamp(1567578469, 1), t: 1 }, wallTime: new Date(1567578469129), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } 2019-09-04T06:27:50.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), opTime: { ts: Timestamp(1567578469, 1), t: 1 }, wallTime: new Date(1567578469129), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:27:50.836+0000 D2 ASIO [Replication] Request 107 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), opTime: { ts: Timestamp(1567578469, 1), t: 1 }, wallTime: new Date(1567578469129), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } 2019-09-04T06:27:50.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:50.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), opTime: { ts: Timestamp(1567578469, 1), t: 1 }, wallTime: new Date(1567578469129), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:50.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 106) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), opTime: { ts: Timestamp(1567578469, 1), t: 1 }, wallTime: new Date(1567578469129), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } 2019-09-04T06:27:50.836+0000 D4 ELECTION [replexec-2] Postponing election timeout due to heartbeat from primary 2019-09-04T06:27:50.836+0000 D4 REPL [replexec-2] Canceling election timeout callback at 2019-09-04T06:27:59.542+0000 2019-09-04T06:27:50.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:50.836+0000 D4 ELECTION [replexec-2] Scheduling election timeout callback at 2019-09-04T06:28:02.196+0000 2019-09-04T06:27:50.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:50.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:50.837+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:27:50.837+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:52.836Z 2019-09-04T06:27:50.837+0000 D3 EXECUTOR 
[replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:50.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 107) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), opTime: { ts: Timestamp(1567578469, 1), t: 1 }, wallTime: new Date(1567578469129), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } 2019-09-04T06:27:50.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:27:50.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:52.837Z 2019-09-04T06:27:50.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:50.880+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:50.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:50.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:50.981+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:51.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:51.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:51.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:51.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:51.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:51.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 2877DB90CCE0ECC5A54FB29DFF710CC575234AF4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:51.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:27:51.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 2877DB90CCE0ECC5A54FB29DFF710CC575234AF4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:51.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { 
replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 2877DB90CCE0ECC5A54FB29DFF710CC575234AF4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:51.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), opTime: { ts: Timestamp(1567578469, 1), t: 1 }, wallTime: new Date(1567578469129) } 2019-09-04T06:27:51.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 2877DB90CCE0ECC5A54FB29DFF710CC575234AF4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:51.081+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:51.131+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:51.131+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:51.131+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578469, 1) 2019-09-04T06:27:51.132+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1739 2019-09-04T06:27:51.132+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:51.132+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:51.132+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1739 2019-09-04T06:27:51.132+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1742 2019-09-04T06:27:51.132+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1742 2019-09-04T06:27:51.132+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578469, 1), t: 1 }({ ts: Timestamp(1567578469, 1), t: 1 }) 2019-09-04T06:27:51.135+0000 I COMMAND [conn39] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578431, 1), signature: { hash: BinData(0, D0A0BEB0F4BE06FB58EFD649F48F77452778EFF8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:27:51.135+0000 D1 - [conn39] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:27:51.135+0000 W - [conn39] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:27:51.152+0000 I - [conn39] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:27:51.153+0000 D1 COMMAND [conn39] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: 
{ ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578431, 1), signature: { hash: BinData(0, D0A0BEB0F4BE06FB58EFD649F48F77452778EFF8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:27:51.153+0000 D1 - [conn39] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:27:51.153+0000 W - [conn39] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:27:51.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:51.159+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:51.172+0000 I - [conn39] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMa
chine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : 
"799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:27:51.173+0000 W COMMAND [conn39] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:27:51.173+0000 I COMMAND [conn39] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578431, 1), signature: { hash: BinData(0, D0A0BEB0F4BE06FB58EFD649F48F77452778EFF8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:27:51.173+0000 D2 NETWORK [conn39] Session from 10.108.2.56:35580 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:27:51.173+0000 I NETWORK [conn39] end connection 10.108.2.56:35580 (77 connections now open) 2019-09-04T06:27:51.181+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:51.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:51.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:51.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:51.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000
2019-09-04T06:27:51.281+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:51.327+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:51.327+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:51.381+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:51.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:51.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:51.481+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:51.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:51.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:51.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:51.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:51.581+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:51.634+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:51.634+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:51.635+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38574 #91 (78 connections now open)
2019-09-04T06:27:51.635+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:51.635+0000 D2 COMMAND [conn91] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:51.635+0000 I NETWORK [conn91] received client metadata from 10.108.2.44:38574 conn91: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:51.635+0000 I COMMAND [conn91] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:51.635+0000 D2 COMMAND [conn91] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578468, 1), signature: { hash: BinData(0, 174FC9E8758591059B0F1359F37D9D72B5364FD1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:27:51.635+0000 D1 REPL [conn91] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578469, 1), t: 1 }
2019-09-04T06:27:51.636+0000 D3 REPL [conn91] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.645+0000
2019-09-04T06:27:51.648+0000 I COMMAND [conn40] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578438, 1), signature: { hash: BinData(0, A66536B9583A771E48634DEEE05F33A2CC1B0D56), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:27:51.648+0000 D1 - [conn40] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:27:51.648+0000 W - [conn40] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:27:51.650+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:51.650+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.48:41984 #92 (79 connections now open)
2019-09-04T06:27:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45628 #93 (80 connections now open)
2019-09-04T06:27:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:51.651+0000 D2 COMMAND [conn93] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:51.651+0000 I NETWORK [conn93] received client metadata from 10.108.2.72:45628 conn93: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:51.651+0000 I COMMAND [conn93] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:51.651+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49076 #94 (81 connections now open)
2019-09-04T06:27:51.651+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:51.651+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52032 #95 (82 connections now open)
2019-09-04T06:27:51.651+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:51.651+0000 D2 COMMAND [conn95] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:51.651+0000 I NETWORK [conn95] received client metadata from 10.108.2.58:52032 conn95: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:51.652+0000 I COMMAND [conn95] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:51.652+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52042 #96 (83 connections now open)
2019-09-04T06:27:51.652+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:51.652+0000 D2 COMMAND [conn93] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578462, 1), signature: { hash: BinData(0, B297735DD8383F1BAD6EB580CC3032FDC0BD6BA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:27:51.652+0000 D1 REPL [conn93] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578469, 1), t: 1 }
2019-09-04T06:27:51.652+0000 D3 REPL [conn93] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000
2019-09-04T06:27:51.652+0000 D2 COMMAND [conn95] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578463, 1), signature: { hash: BinData(0, C1B62A1C3071B6FFFBA1E3E4221F429C7BC6B94F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:27:51.652+0000 D1 REPL [conn95] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578469, 1), t: 1 }
2019-09-04T06:27:51.652+0000 D3 REPL [conn95] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000
2019-09-04T06:27:51.653+0000 D2 COMMAND [conn96] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:51.653+0000 I NETWORK [conn96] received client metadata from 10.108.2.73:52042 conn96: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:51.653+0000 I COMMAND [conn96] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:51.653+0000 D2 COMMAND [conn94] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:51.653+0000 I NETWORK [conn94] received client metadata from 10.108.2.54:49076 conn94: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:51.653+0000 I COMMAND [conn94] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:51.653+0000 D2 COMMAND [conn96] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578462, 1), signature: { hash: BinData(0, B297735DD8383F1BAD6EB580CC3032FDC0BD6BA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:27:51.653+0000 D1 REPL [conn96] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578469, 1), t: 1 }
2019-09-04T06:27:51.653+0000 D3 REPL [conn96] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000
2019-09-04T06:27:51.653+0000 D2 COMMAND [conn94] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578468, 1), signature: { hash: BinData(0, 174FC9E8758591059B0F1359F37D9D72B5364FD1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:27:51.653+0000 D1 REPL [conn94] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578469, 1), t: 1 }
2019-09-04T06:27:51.653+0000 D3 REPL [conn94] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000
2019-09-04T06:27:51.654+0000 D2 COMMAND [conn92] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:51.654+0000 I NETWORK [conn92] received client metadata from 10.108.2.48:41984 conn92: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:51.654+0000 I COMMAND [conn92] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:51.654+0000 D2 COMMAND [conn92] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, B6774A8FE7872486BBD7D13128F552CA92255F0B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:27:51.654+0000 D1 REPL [conn92] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578469, 1), t: 1 }
2019-09-04T06:27:51.654+0000 D3 REPL [conn92] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.664+0000
2019-09-04T06:27:51.656+0000 I NETWORK [listener] connection accepted from 10.108.2.47:56452 #97 (84 connections now open)
2019-09-04T06:27:51.657+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:27:51.657+0000 D2 COMMAND [conn97] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:27:51.657+0000 I NETWORK [conn97] received client metadata from 10.108.2.47:56452 conn97: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:27:51.657+0000 I COMMAND [conn97] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:27:51.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:51.659+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:51.660+0000 D2 COMMAND [conn97] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578465, 1), signature: { hash: BinData(0, B4CC25BA8F9AE0B854E22D714AAD0C62F00EA853), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:27:51.660+0000 D1 REPL [conn97] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578469, 1), t: 1 }
2019-09-04T06:27:51.660+0000 D3 REPL [conn97] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.670+0000
2019-09-04T06:27:51.663+0000 I COMMAND [conn41] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578438, 1), signature: { hash: BinData(0, A66536B9583A771E48634DEEE05F33A2CC1B0D56), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:27:51.663+0000 D1 - [conn41] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:27:51.663+0000 W - [conn41] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:27:51.665+0000 I - [conn40] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
 mongod(+0x10FBF24) [0x56174a083f24]
 mongod(+0x10FCE0E) [0x56174a084e0e]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:27:51.665+0000 D1 COMMAND [conn40] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578438, 1), signature: { hash: BinData(0, A66536B9583A771E48634DEEE05F33A2CC1B0D56), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:27:51.665+0000 D1 - [conn40] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:27:51.665+0000 W - [conn40] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:27:51.682+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:27:51.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:27:51.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:27:51.702+0000 I - [conn40] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(+0xC90D34) [0x561749c18d34]
 mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
 mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
 mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
 mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
 mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:27:51.702+0000 W COMMAND [conn40] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:27:51.702+0000 I COMMAND [conn40] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578438, 1), signature: { hash: BinData(0, A66536B9583A771E48634DEEE05F33A2CC1B0D56), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms
2019-09-04T06:27:51.702+0000 D2 NETWORK [conn40] Session from 10.108.2.44:38548 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:27:51.702+0000 I NETWORK [conn40] end connection 10.108.2.44:38548 (83 connections now open)
2019-09-04T06:27:51.702+0000 I - [conn41] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:27:51.702+0000 D1 COMMAND [conn41] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578438, 1), signature: { hash: BinData(0, A66536B9583A771E48634DEEE05F33A2CC1B0D56), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:27:51.702+0000 D1 - [conn41] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:27:51.702+0000 W - [conn41] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:27:51.722+0000 I - [conn41] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:27:51.722+0000 W COMMAND [conn41] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:27:51.723+0000 I COMMAND [conn41] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578438, 1), signature: { hash: BinData(0, A66536B9583A771E48634DEEE05F33A2CC1B0D56), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30051ms 2019-09-04T06:27:51.723+0000 D2 NETWORK [conn41] Session from 10.108.2.54:49058 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:27:51.723+0000 I NETWORK [conn41] end connection 10.108.2.54:49058 (82 connections now open) 2019-09-04T06:27:51.743+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:51.743+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:51.757+0000 I COMMAND [conn43] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578432, 1), signature: { hash: BinData(0, F498DAB403BE8C54293FE1F2B3BB1ADB75100204), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:27:51.757+0000 D1 - [conn43] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:27:51.757+0000 W - [conn43] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:27:51.757+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48238 #98 (83 connections now open) 2019-09-04T06:27:51.757+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:51.758+0000 D2 COMMAND [conn98] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:51.758+0000 I NETWORK [conn98] received client metadata from 10.108.2.59:48238 conn98: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:51.758+0000 I COMMAND [conn98] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:51.758+0000 D2 COMMAND [conn98] run command config.$cmd 
{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578461, 1), signature: { hash: BinData(0, 692CD45BA7CC13DACC90D19B9475E230267CC4C6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:27:51.758+0000 D1 REPL [conn98] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578469, 1), t: 1 } 2019-09-04T06:27:51.758+0000 D3 REPL [conn98] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.768+0000 2019-09-04T06:27:51.773+0000 I - [conn43] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", 
"gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { 
"b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:27:51.774+0000 D1 COMMAND [conn43] assertion while executing command 'find' on database 
'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578432, 1), signature: { hash: BinData(0, F498DAB403BE8C54293FE1F2B3BB1ADB75100204), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:27:51.774+0000 D1 - [conn43] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:27:51.774+0000 W - [conn43] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:27:51.782+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:51.793+0000 I - [conn43] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15
ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : 
"/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:27:51.794+0000 W COMMAND [conn43] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:27:51.794+0000 I COMMAND [conn43] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578432, 1), signature: { hash: BinData(0, F498DAB403BE8C54293FE1F2B3BB1ADB75100204), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:27:51.794+0000 D2 NETWORK [conn43] Session from 10.108.2.52:47050 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:27:51.794+0000 I NETWORK [conn43] end connection 10.108.2.52:47050 (82 connections now open) 2019-09-04T06:27:51.827+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:51.827+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:51.882+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:51.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:51.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:51.982+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:52.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:52.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:52.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:52.043+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50008 #99 (83 connections now open) 2019-09-04T06:27:52.043+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:52.043+0000 D2 COMMAND [conn99] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type:
"Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:52.043+0000 I NETWORK [conn99] received client metadata from 10.108.2.50:50008 conn99: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:52.043+0000 I COMMAND [conn99] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:52.044+0000 D2 COMMAND [conn99] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 48284AC78A7C2DCD17E674B25E8399518D944CFC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:27:52.044+0000 D1 REPL [conn99] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578469, 1), t: 1 } 2019-09-04T06:27:52.044+0000 D3 REPL [conn99] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.054+0000 2019-09-04T06:27:52.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:52.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:52.082+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:52.132+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:52.132+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:52.132+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578469, 1) 2019-09-04T06:27:52.132+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1780 2019-09-04T06:27:52.132+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:52.132+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:52.132+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1780 2019-09-04T06:27:52.133+0000 D3 STORAGE [rsSync-0] WT begin_transaction for 
snapshot id 1783 2019-09-04T06:27:52.133+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1783 2019-09-04T06:27:52.133+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578469, 1), t: 1 }({ ts: Timestamp(1567578469, 1), t: 1 }) 2019-09-04T06:27:52.134+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:52.134+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:52.150+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:52.150+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:52.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:52.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:52.182+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:52.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:52.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:52.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 2877DB90CCE0ECC5A54FB29DFF710CC575234AF4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:52.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:27:52.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 2877DB90CCE0ECC5A54FB29DFF710CC575234AF4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:52.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 2877DB90CCE0ECC5A54FB29DFF710CC575234AF4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:52.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), opTime: { ts: Timestamp(1567578469, 1), t: 1 }, wallTime: new Date(1567578469129) } 2019-09-04T06:27:52.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 2877DB90CCE0ECC5A54FB29DFF710CC575234AF4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:52.235+0000 D4 
STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:52.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:27:52.242+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:52.242+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:52.282+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:52.383+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:52.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:52.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:52.483+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:52.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:52.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:52.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:52.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:52.583+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:52.584+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51674 #100 (84 connections now open) 2019-09-04T06:27:52.584+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:52.585+0000 D2 COMMAND [conn100] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:52.585+0000 I NETWORK [conn100] received client metadata from 10.108.2.74:51674 conn100: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:52.585+0000 I COMMAND [conn100] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:52.585+0000 D2 COMMAND [conn100] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578465, 1), signature: { hash: BinData(0, B4CC25BA8F9AE0B854E22D714AAD0C62F00EA853), keyId: 6690867815131381761 } }, 
$configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:27:52.585+0000 D1 REPL [conn100] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578469, 1), t: 1 } 2019-09-04T06:27:52.585+0000 D3 REPL [conn100] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.595+0000 2019-09-04T06:27:52.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:52.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:52.683+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:52.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:52.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:52.783+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:52.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:52.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 108) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:52.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 108 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:02.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:52.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:52.836+0000 D2 ASIO [Replication] Request 108 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), opTime: { ts: Timestamp(1567578469, 1), t: 1 }, wallTime: new Date(1567578469129), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } 2019-09-04T06:27:52.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), opTime: { ts: Timestamp(1567578469, 1), t: 1 }, wallTime: new Date(1567578469129), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 
-1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:27:52.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:52.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 108) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), opTime: { ts: Timestamp(1567578469, 1), t: 1 }, wallTime: new Date(1567578469129), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } 2019-09-04T06:27:52.836+0000 D4 ELECTION [replexec-2] Postponing election timeout due to heartbeat from primary 2019-09-04T06:27:52.836+0000 D4 REPL [replexec-2] Canceling election timeout callback at 2019-09-04T06:28:02.196+0000 2019-09-04T06:27:52.836+0000 D4 ELECTION [replexec-2] Scheduling election timeout callback at 2019-09-04T06:28:04.102+0000 2019-09-04T06:27:52.836+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:27:52.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:52.836+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:54.836Z 2019-09-04T06:27:52.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:52.837+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:52.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:52.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 109) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:52.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 109 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:02.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:52.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:52.837+0000 D2 ASIO [Replication] Request 109 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), opTime: { ts: 
Timestamp(1567578469, 1), t: 1 }, wallTime: new Date(1567578469129), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } 2019-09-04T06:27:52.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), opTime: { ts: Timestamp(1567578469, 1), t: 1 }, wallTime: new Date(1567578469129), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:52.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:52.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 109) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), opTime: { ts: Timestamp(1567578469, 1), t: 1 }, wallTime: new Date(1567578469129), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } 2019-09-04T06:27:52.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:27:52.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:54.837Z 2019-09-04T06:27:52.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:52.884+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:52.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:52.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms 2019-09-04T06:27:52.984+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:53.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:53.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:53.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:53.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:53.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:53.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 8D8EC963597A94D6578ABF65FB105EAA54EB1B4D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:53.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:27:53.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 8D8EC963597A94D6578ABF65FB105EAA54EB1B4D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:53.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 8D8EC963597A94D6578ABF65FB105EAA54EB1B4D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:53.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), opTime: { ts: Timestamp(1567578469, 1), t: 1 }, wallTime: new Date(1567578469129) } 2019-09-04T06:27:53.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 8D8EC963597A94D6578ABF65FB105EAA54EB1B4D), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:53.084+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:53.132+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:53.132+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:53.132+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578469, 1) 2019-09-04T06:27:53.132+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1804 2019-09-04T06:27:53.132+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", 
options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:53.132+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:53.132+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1804 2019-09-04T06:27:53.133+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1807 2019-09-04T06:27:53.133+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1807 2019-09-04T06:27:53.133+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578469, 1), t: 1 }({ ts: Timestamp(1567578469, 1), t: 1 }) 2019-09-04T06:27:53.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:53.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:53.184+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:53.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:53.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:53.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:53.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:53.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:53.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:27:53.248+0000 I NETWORK [listener] connection accepted from 10.108.2.43:33754 #101 (85 connections now open) 2019-09-04T06:27:53.248+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:53.249+0000 D2 COMMAND [conn101] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:53.249+0000 I NETWORK [conn101] received client metadata from 10.108.2.43:33754 conn101: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:53.249+0000 I COMMAND [conn101] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:53.254+0000 D2 COMMAND [conn101] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578469, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 8D8EC963597A94D6578ABF65FB105EAA54EB1B4D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578469, 1), t: 1 } }, $db: "config" } 2019-09-04T06:27:53.254+0000 D1 COMMAND [conn101] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578469, 1), t: 1 } } } 2019-09-04T06:27:53.254+0000 D3 STORAGE [conn101] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:27:53.254+0000 D1 COMMAND [conn101] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578469, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 8D8EC963597A94D6578ABF65FB105EAA54EB1B4D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578469, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578469, 1) 2019-09-04T06:27:53.254+0000 D5 QUERY [conn101] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:27:53.254+0000 D5 QUERY [conn101] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:27:53.254+0000 D5 QUERY [conn101] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:27:53.254+0000 D5 QUERY [conn101] Rated tree: $and 2019-09-04T06:27:53.254+0000 D5 QUERY [conn101] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:53.254+0000 D5 QUERY [conn101] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:53.254+0000 D2 QUERY [conn101] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:27:53.254+0000 D3 STORAGE [conn101] WT begin_transaction for snapshot id 1814 2019-09-04T06:27:53.254+0000 D3 STORAGE [conn101] WT rollback_transaction for snapshot id 1814 2019-09-04T06:27:53.254+0000 I COMMAND [conn101] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578469, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 8D8EC963597A94D6578ABF65FB105EAA54EB1B4D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578469, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:27:53.284+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:53.384+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:53.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:53.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:53.485+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:53.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:53.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:53.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:53.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:53.585+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:53.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:53.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:53.685+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:53.698+0000 D2 
COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:53.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:53.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:53.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:53.785+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:53.885+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:53.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:53.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:53.985+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:54.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:54.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:54.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:54.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:54.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:54.086+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:54.132+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:54.133+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:54.133+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578469, 1) 2019-09-04T06:27:54.133+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1826 2019-09-04T06:27:54.133+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:54.133+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:54.133+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1826 2019-09-04T06:27:54.133+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1829 2019-09-04T06:27:54.133+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1829 2019-09-04T06:27:54.133+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578469, 1), t: 1 }({ ts: Timestamp(1567578469, 1), t: 1 }) 2019-09-04T06:27:54.136+0000 D2 ASIO [RS] Request 98 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpApplied: { ts: Timestamp(1567578469, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } 2019-09-04T06:27:54.136+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpApplied: { ts: Timestamp(1567578469, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:54.136+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:54.136+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:27:54.136+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:04.102+0000 2019-09-04T06:27:54.137+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:05.305+0000 2019-09-04T06:27:54.137+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:54.137+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 110 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:04.137+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578469, 1), t: 1 } } 2019-09-04T06:27:54.137+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:54.137+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:19.132+0000 2019-09-04T06:27:54.138+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:54.138+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new Date(1567578469129), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new 
Date(1567578469129), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new Date(1567578469129), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:54.138+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 111 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:24.138+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new Date(1567578469129), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new Date(1567578469129), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new Date(1567578469129), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:54.138+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:19.132+0000 2019-09-04T06:27:54.138+0000 D2 ASIO [RS] Request 111 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } 2019-09-04T06:27:54.138+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578469, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:54.138+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:54.138+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement 
date is 2019-09-04T06:28:19.132+0000 2019-09-04T06:27:54.153+0000 I COMMAND [conn32] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578435, 1), signature: { hash: BinData(0, 7FECAD142B7D0C48104AAD6F508D560DEF34F4D8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:27:54.153+0000 D1 - [conn32] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:27:54.153+0000 W - [conn32] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:27:54.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:54.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:54.171+0000 I - [conn32] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceS
tateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : 
"7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:27:54.171+0000 D1 COMMAND [conn32] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578435, 1), signature: { hash: BinData(0, 7FECAD142B7D0C48104AAD6F508D560DEF34F4D8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:27:54.171+0000 D1 - [conn32] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:27:54.171+0000 W - [conn32] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:27:54.186+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:54.192+0000 I - [conn32] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:27:54.192+0000 W COMMAND [conn32] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:27:54.192+0000 I COMMAND [conn32] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578435, 1), signature: { hash: BinData(0, 7FECAD142B7D0C48104AAD6F508D560DEF34F4D8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:27:54.192+0000 D2 NETWORK [conn32] Session from 10.108.2.46:40854 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:27:54.192+0000 I NETWORK [conn32] end connection 10.108.2.46:40854 (84 connections now open) 2019-09-04T06:27:54.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:54.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:54.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:54.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:54.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 8D8EC963597A94D6578ABF65FB105EAA54EB1B4D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:54.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:27:54.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 8D8EC963597A94D6578ABF65FB105EAA54EB1B4D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:54.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 8D8EC963597A94D6578ABF65FB105EAA54EB1B4D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:54.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), opTime: { ts: Timestamp(1567578469, 1), t: 1 }, wallTime: new Date(1567578469129) } 2019-09-04T06:27:54.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, 8D8EC963597A94D6578ABF65FB105EAA54EB1B4D), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:54.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:54.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:27:54.286+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:54.386+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:54.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:54.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:54.486+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:54.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:54.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:54.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:54.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:54.586+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:54.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:54.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:54.686+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:54.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:54.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:54.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:54.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:54.787+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:54.820+0000 D2 ASIO [RS] Request 110 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578474, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578474817), o: { $v: 1, $set: { ping: new Date(1567578474814), up: 2375 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpApplied: { ts: Timestamp(1567578474, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578474, 1) } 2019-09-04T06:27:54.820+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578474, 1), t: 1, h: 0, v: 2, op: "u", ns: 
"config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578474817), o: { $v: 1, $set: { ping: new Date(1567578474814), up: 2375 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpApplied: { ts: Timestamp(1567578474, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578474, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:54.820+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:54.820+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578474, 1) and ending at ts: Timestamp(1567578474, 1) 2019-09-04T06:27:54.820+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:05.305+0000 2019-09-04T06:27:54.820+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:06.105+0000 2019-09-04T06:27:54.820+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:54.820+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578474, 1), t: 1 } 2019-09-04T06:27:54.820+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:54.820+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:54.820+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:54.820+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578469, 1) 2019-09-04T06:27:54.820+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1844 2019-09-04T06:27:54.821+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:54.821+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:54.821+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1844 2019-09-04T06:27:54.821+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:54.821+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:54.821+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578469, 1) 2019-09-04T06:27:54.821+0000 D3 STORAGE [ReplBatcher] WT begin_transaction 
for snapshot id 1847 2019-09-04T06:27:54.821+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:54.821+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:54.821+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1847 2019-09-04T06:27:54.821+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:27:54.821+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578474, 1) } 2019-09-04T06:27:54.821+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1830 2019-09-04T06:27:54.821+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1830 2019-09-04T06:27:54.821+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1850 2019-09-04T06:27:54.821+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1850 2019-09-04T06:27:54.821+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:54.821+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 1852 2019-09-04T06:27:54.821+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578474, 1) 2019-09-04T06:27:54.821+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578474, 1) 2019-09-04T06:27:54.821+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 1852 2019-09-04T06:27:54.821+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:54.821+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:27:54.821+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1851 2019-09-04T06:27:54.821+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1851 2019-09-04T06:27:54.821+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1854 2019-09-04T06:27:54.821+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1854 2019-09-04T06:27:54.821+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578474, 1), t: 1 }({ ts: Timestamp(1567578474, 1), t: 1 }) 2019-09-04T06:27:54.821+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578474, 1) 2019-09-04T06:27:54.821+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1855 2019-09-04T06:27:54.821+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578474, 1) } } ] } sort: {} projection: {} 2019-09-04T06:27:54.821+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:54.821+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:27:54.821+0000 D5 QUERY [rsSync-0] Beginning planning... 
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578474, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:54.821+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:54.821+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:27:54.821+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:54.821+0000 D5 QUERY [rsSync-0] Rated tree:
$and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578474, 1) || First: notFirst: full path: ts
2019-09-04T06:27:54.821+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:54.821+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578474, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:54.821+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Rated tree:
t $lt 1 || First: notFirst: full path: t
2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578474, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Rated tree:
$or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578474, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
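The subplanner walk above (and the collscan fallback that follows) is the expected outcome here: local.replset.minvalid carries only the default _id index, so neither the t nor the ts predicate can be answered from an index and every $or branch degenerates to a collection scan. The same decision can be replayed from a client with explain(); the sketch below is illustrative only, assuming pymongo (3.12+ for directConnection) and that the member is reachable at the hostname shown.

from bson.timestamp import Timestamp
from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
minvalid = client["local"]["replset.minvalid"]

# The applier's predicate from the log: minvalid is older than the optime
# just applied, i.e. (t < 1) OR (t == 1 AND ts < Timestamp(1567578474, 1)).
flt = {
    "$or": [
        {"t": {"$lt": 1}},
        {"t": 1, "ts": {"$lt": Timestamp(1567578474, 1)}},
    ]
}

# Only the _id index exists, so the winning plan is a COLLSCAN, matching
# the planner trace above.
plan = minvalid.find(flt).explain()
print(plan["queryPlanner"]["winningPlan"]["stage"])
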
2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578474, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:54.822+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1855 2019-09-04T06:27:54.822+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:54.822+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:54.822+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578474, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578474817), o: { $v: 1, $set: { ping: new Date(1567578474814), up: 2375 } } }, oplog application mode: Secondary 2019-09-04T06:27:54.822+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578474, 1) 2019-09-04T06:27:54.822+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 1857 2019-09-04T06:27:54.822+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:27:54.822+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:27:54.822+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 1857 2019-09-04T06:27:54.822+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:54.822+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578474, 1), t: 1 }({ ts: Timestamp(1567578474, 1), t: 1 }) 2019-09-04T06:27:54.822+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578474, 1) 2019-09-04T06:27:54.822+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1856 2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:54.822+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:27:54.822+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578474, 1), t: 1 } 2019-09-04T06:27:54.823+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:54.823+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 112 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:04.823+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578469, 1), t: 1 } } 2019-09-04T06:27:54.823+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:54.823+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:27:54.823+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1856 2019-09-04T06:27:54.823+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578474, 1) 2019-09-04T06:27:54.823+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:19.132+0000 2019-09-04T06:27:54.823+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:54.823+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1860 2019-09-04T06:27:54.823+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new Date(1567578469129), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, appliedWallTime: new Date(1567578474817), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new Date(1567578469129), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:54.823+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1860 2019-09-04T06:27:54.823+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578474, 1), t: 1 }({ ts: Timestamp(1567578474, 1), t: 1 }) 2019-09-04T06:27:54.823+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 113 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:24.823+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new Date(1567578469129), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, appliedWallTime: new Date(1567578474817), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new Date(1567578469129), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:54.823+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:19.132+0000 2019-09-04T06:27:54.824+0000 D2 ASIO [RS] Request 113 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: 
Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578474, 1) } 2019-09-04T06:27:54.824+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578474, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:54.824+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:54.824+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:19.132+0000 2019-09-04T06:27:54.826+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:27:54.827+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:54.827+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new Date(1567578469129), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), appliedOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, appliedWallTime: new Date(1567578474817), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new Date(1567578469129), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:54.827+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 114 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:24.827+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new Date(1567578469129), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), appliedOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, appliedWallTime: new Date(1567578474817), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, durableWallTime: new 
Date(1567578469129), appliedOpTime: { ts: Timestamp(1567578469, 1), t: 1 }, appliedWallTime: new Date(1567578469129), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:54.827+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:19.132+0000 2019-09-04T06:27:54.827+0000 D2 ASIO [RS] Request 114 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578474, 1) } 2019-09-04T06:27:54.827+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578469, 1), t: 1 }, lastCommittedWall: new Date(1567578469129), lastOpVisible: { ts: Timestamp(1567578469, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578469, 1), $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578474, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:54.827+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:54.827+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:19.132+0000 2019-09-04T06:27:54.827+0000 D2 ASIO [RS] Request 112 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578474, 1), t: 1 }, lastCommittedWall: new Date(1567578474817), lastOpVisible: { ts: Timestamp(1567578474, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578474, 1), t: 1 }, lastCommittedWall: new Date(1567578474817), lastOpApplied: { ts: Timestamp(1567578474, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578474, 1), $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578474, 1) } 2019-09-04T06:27:54.827+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578474, 1), t: 1 }, lastCommittedWall: new 
Date(1567578474817), lastOpVisible: { ts: Timestamp(1567578474, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578474, 1), t: 1 }, lastCommittedWall: new Date(1567578474817), lastOpApplied: { ts: Timestamp(1567578474, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578474, 1), $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578474, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:54.827+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:54.827+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:27:54.827+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.827+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.827+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578474, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f596a02d1a496712d7187'), operName: "", parentOperId: "5d6f596a02d1a496712d7185" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 925188623748D9122E4E83805786E3C9F5225E4E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578474, 1), t: 1 } }, $db: "config" } 2019-09-04T06:27:54.827+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578469, 1) 2019-09-04T06:27:54.827+0000 D3 REPL [conn54] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.827+0000 D3 REPL [conn54] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.764+0000 2019-09-04T06:27:54.827+0000 D3 REPL [conn66] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn66] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.686+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn27] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn27] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.829+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn80] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn80] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.307+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn65] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn65] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.679+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn98] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 
2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn98] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.768+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn36] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn36] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.835+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn63] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn63] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn96] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn96] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn94] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn94] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn67] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn67] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.698+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn64] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn64] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn89] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn89] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:16.417+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn82] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn82] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.998+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn68] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn68] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.730+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn83] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn83] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.431+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn62] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn62] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.671+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn88] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn88] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn53] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 
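The replSetUpdatePosition payloads above (requests 113 and 114) carry one durable/applied optime pair per member, which is how this secondary reports its progress upstream. The same per-member view is available from the shell via replSetGetStatus (field names as documented; exact layout varies by version):

// Print each member's last applied and last durable (journaled) optime,
// the same values the Reporter sends in replSetUpdatePosition.
var status = db.adminCommand({ replSetGetStatus: 1 });
status.members.forEach(function (m) {
    print(m.name, m.stateStr,
          "applied:", tojson(m.optime),
          "durable:", tojson(m.optimeDurable));
});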
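Requests 112 and 115 above are the oplog fetcher's awaitData getMore calls against the sync source; an empty nextBatch like the one in the response is normal when the source has nothing newer, and maxTimeMS: 5000 simply bounds the await. A minimal sketch of the same command shape (the cursor id is a placeholder copied from the log, a real one comes from a prior tailable find on local.oplog.rs; the term and lastKnownCommittedOpTime fields in the logged request are replication-internal and omitted here):

db.getSiblingDB("local").runCommand({
    getMore: NumberLong("2779728788818727477"), // placeholder cursor id
    collection: "oplog.rs",
    batchSize: 13981010,
    maxTimeMS: 5000 // bounds the await; an empty nextBatch is a normal result
});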
2019-09-04T06:27:54.828+0000 D3 REPL [conn53] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.753+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn56] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn56] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.963+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn30] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn30] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn91] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn91] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.645+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn55] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn55] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.926+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn86] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn86] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn24] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn24] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn99] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn99] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.054+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn100] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn100] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.595+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn85] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn85] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn35] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn35] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn84] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn84] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn57] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn57] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:03.480+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn93] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL 
[conn93] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn95] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn95] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn79] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn79] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.289+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn87] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn87] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.823+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn76] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.828+0000 D3 REPL [conn76] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.092+0000 2019-09-04T06:27:54.829+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f596a02d1a496712d7185|5d6f596a02d1a496712d7187 2019-09-04T06:27:54.829+0000 D3 REPL [conn92] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.829+0000 D3 REPL [conn92] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.664+0000 2019-09-04T06:27:54.829+0000 D3 REPL [conn97] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.829+0000 D3 REPL [conn97] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.670+0000 2019-09-04T06:27:54.829+0000 D3 REPL [conn44] Got notified of new snapshot: { ts: Timestamp(1567578474, 1), t: 1 }, 2019-09-04T06:27:54.817+0000 2019-09-04T06:27:54.829+0000 D3 REPL [conn44] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:27:55.060+0000 2019-09-04T06:27:54.829+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:06.105+0000 2019-09-04T06:27:54.829+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:05.559+0000 2019-09-04T06:27:54.829+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:54.829+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:54.829+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 115 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:04.829+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578474, 1), t: 1 } } 2019-09-04T06:27:54.829+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578474, 1), t: 1 } } } 2019-09-04T06:27:54.829+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:27:54.829+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:19.132+0000 2019-09-04T06:27:54.829+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: 
"majority", afterOpTime: { ts: Timestamp(1567578474, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f596a02d1a496712d7187'), operName: "", parentOperId: "5d6f596a02d1a496712d7185" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 925188623748D9122E4E83805786E3C9F5225E4E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578474, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578474, 1) 2019-09-04T06:27:54.829+0000 D2 QUERY [conn21] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:27:54.829+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578474, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f596a02d1a496712d7187'), operName: "", parentOperId: "5d6f596a02d1a496712d7185" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 925188623748D9122E4E83805786E3C9F5225E4E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578474, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:27:54.830+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578474, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f596a02d1a496712d7189'), operName: "", parentOperId: "5d6f596a02d1a496712d7185" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 925188623748D9122E4E83805786E3C9F5225E4E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578474, 1), t: 1 } }, $db: "config" } 2019-09-04T06:27:54.830+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f596a02d1a496712d7185|5d6f596a02d1a496712d7189 2019-09-04T06:27:54.830+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578474, 1), t: 1 } } } 2019-09-04T06:27:54.830+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:27:54.830+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578474, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f596a02d1a496712d7189'), operName: "", parentOperId: "5d6f596a02d1a496712d7185" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 925188623748D9122E4E83805786E3C9F5225E4E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578474, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578474, 1) 2019-09-04T06:27:54.830+0000 
D2 QUERY [conn21] Collection config.settings does not exist. Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:27:54.830+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578474, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f596a02d1a496712d7189'), operName: "", parentOperId: "5d6f596a02d1a496712d7185" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 925188623748D9122E4E83805786E3C9F5225E4E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578474, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:27:54.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:54.836+0000 D2 REPL_HB [replexec-2] Sending heartbeat (requestId: 116) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:54.836+0000 D3 EXECUTOR [replexec-2] Scheduling remote command request: RemoteCommand 116 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:04.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:54.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:54.836+0000 D2 ASIO [Replication] Request 116 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), opTime: { ts: Timestamp(1567578474, 1), t: 1 }, wallTime: new Date(1567578474817), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578474, 1), t: 1 }, lastCommittedWall: new Date(1567578474817), lastOpVisible: { ts: Timestamp(1567578474, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578474, 1), $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578474, 1) } 2019-09-04T06:27:54.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), opTime: { ts: Timestamp(1567578474, 1), t: 1 }, wallTime: new Date(1567578474817), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578474, 1), t: 1 }, lastCommittedWall: new Date(1567578474817), lastOpVisible: { ts: Timestamp(1567578474, 1), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578474, 1), $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578474, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:27:54.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:54.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 116) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), opTime: { ts: Timestamp(1567578474, 1), t: 1 }, wallTime: new Date(1567578474817), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578474, 1), t: 1 }, lastCommittedWall: new Date(1567578474817), lastOpVisible: { ts: Timestamp(1567578474, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578474, 1), $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578474, 1) } 2019-09-04T06:27:54.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:27:54.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:28:05.559+0000 2019-09-04T06:27:54.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:28:05.249+0000 2019-09-04T06:27:54.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:54.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:54.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:27:54.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:56.836Z 2019-09-04T06:27:54.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:54.837+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:54.837+0000 D2 REPL_HB [replexec-2] Sending heartbeat (requestId: 117) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:54.837+0000 D3 EXECUTOR [replexec-2] Scheduling remote command request: RemoteCommand 117 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:04.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:54.837+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:54.837+0000 D2 ASIO [Replication] Request 117 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578474, 1), 
t: 1 }, durableWallTime: new Date(1567578474817), opTime: { ts: Timestamp(1567578474, 1), t: 1 }, wallTime: new Date(1567578474817), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578474, 1), t: 1 }, lastCommittedWall: new Date(1567578474817), lastOpVisible: { ts: Timestamp(1567578474, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578474, 1), $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578474, 1) } 2019-09-04T06:27:54.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), opTime: { ts: Timestamp(1567578474, 1), t: 1 }, wallTime: new Date(1567578474817), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578474, 1), t: 1 }, lastCommittedWall: new Date(1567578474817), lastOpVisible: { ts: Timestamp(1567578474, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578474, 1), $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578474, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:54.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:54.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 117) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), opTime: { ts: Timestamp(1567578474, 1), t: 1 }, wallTime: new Date(1567578474817), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578474, 1), t: 1 }, lastCommittedWall: new Date(1567578474817), lastOpVisible: { ts: Timestamp(1567578474, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578474, 1), $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578474, 1) } 2019-09-04T06:27:54.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:27:54.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:56.837Z 2019-09-04T06:27:54.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:54.887+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:54.921+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578474, 1) 2019-09-04T06:27:54.924+0000 D2 COMMAND [conn13] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:54.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:54.987+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:55.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:55.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:55.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:55.049+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:55.049+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:55.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:55.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:55.060+0000 I COMMAND [conn44] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578441, 1), signature: { hash: BinData(0, BCD1176E340592B9823D25B02E3C0813C2D0EE74), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:27:55.060+0000 D1 - [conn44] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:27:55.060+0000 W - [conn44] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:27:55.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 925188623748D9122E4E83805786E3C9F5225E4E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:55.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:27:55.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 925188623748D9122E4E83805786E3C9F5225E4E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:55.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 925188623748D9122E4E83805786E3C9F5225E4E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:55.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", 
term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), opTime: { ts: Timestamp(1567578474, 1), t: 1 }, wallTime: new Date(1567578474817) } 2019-09-04T06:27:55.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578474, 1), signature: { hash: BinData(0, 925188623748D9122E4E83805786E3C9F5225E4E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:55.077+0000 I - [conn44] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : 
"3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, 
{ "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:27:55.077+0000 D1 COMMAND [conn44] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578441, 1), signature: { hash: BinData(0, BCD1176E340592B9823D25B02E3C0813C2D0EE74), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:27:55.077+0000 D1 - [conn44] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:27:55.077+0000 W - [conn44] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:27:55.087+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:55.097+0000 I - [conn44] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo
19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:27:55.097+0000 W COMMAND [conn44] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:27:55.097+0000 I COMMAND [conn44] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578441, 1), signature: { hash: BinData(0, BCD1176E340592B9823D25B02E3C0813C2D0EE74), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:27:55.097+0000 D2 NETWORK [conn44] Session from 10.108.2.55:36526 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:27:55.097+0000 I NETWORK [conn44] end connection 10.108.2.55:36526 (83 connections now open) 2019-09-04T06:27:55.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:55.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:55.187+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:55.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:55.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:55.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:55.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:55.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:55.235+0000 D4 - [FlowControlRefresher] Refreshing tickets.
Before: 1000000000 Now: 1000000000 2019-09-04T06:27:55.272+0000 D2 COMMAND [conn49] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578464, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578467, 1), signature: { hash: BinData(0, 39CA4ECD9822B8750EA957C2F8D1BDF1ADA59AE9), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578464, 1), t: 1 } }, $db: "config" } 2019-09-04T06:27:55.272+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578464, 1), t: 1 } } } 2019-09-04T06:27:55.272+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:27:55.272+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578464, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578467, 1), signature: { hash: BinData(0, 39CA4ECD9822B8750EA957C2F8D1BDF1ADA59AE9), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578464, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578474, 1) 2019-09-04T06:27:55.272+0000 D2 QUERY [conn49] Using idhack: query: { _id: "config.system.sessions" } sort: {} projection: {} limit: 1 2019-09-04T06:27:55.272+0000 D3 STORAGE [conn49] WT begin_transaction for snapshot id 1875 2019-09-04T06:27:55.272+0000 D3 STORAGE [conn49] WT rollback_transaction for snapshot id 1875 2019-09-04T06:27:55.272+0000 I COMMAND [conn49] command config.collections command: find { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578464, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578467, 1), signature: { hash: BinData(0, 39CA4ECD9822B8750EA957C2F8D1BDF1ADA59AE9), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578464, 1), t: 1 } }, $db: "config" } planSummary: IDHACK keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:702 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:27:55.288+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:55.388+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:55.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:55.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:55.488+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:55.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:55.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:55.549+0000 D2 COMMAND [conn25] 
run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:55.549+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:55.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:55.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:55.579+0000 D2 ASIO [RS] Request 115 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578475, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578475572), o: { $v: 1, $set: { ping: new Date(1567578475571) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578474, 1), t: 1 }, lastCommittedWall: new Date(1567578474817), lastOpVisible: { ts: Timestamp(1567578474, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578474, 1), t: 1 }, lastCommittedWall: new Date(1567578474817), lastOpApplied: { ts: Timestamp(1567578475, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578474, 1), $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } 2019-09-04T06:27:55.579+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578475, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578475572), o: { $v: 1, $set: { ping: new Date(1567578475571) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578474, 1), t: 1 }, lastCommittedWall: new Date(1567578474817), lastOpVisible: { ts: Timestamp(1567578474, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578474, 1), t: 1 }, lastCommittedWall: new Date(1567578474817), lastOpApplied: { ts: Timestamp(1567578475, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578474, 1), $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:55.579+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:55.579+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578475, 3) and ending at ts: Timestamp(1567578475, 3) 2019-09-04T06:27:55.579+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:05.249+0000 
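
The D2 ASIO / D3 EXECUTOR pair above is the oplog fetcher at work: this secondary tails the primary's oplog via getMore on cursor 2779728788818727477 over local.oplog.rs, and the batch it pulled contains a single entry, an update (op: "u") to the config.lockpings document for cmodb807. The fetched entry can be read back from either member; a minimal mongo-shell sketch, assuming a direct connection to the node (namespace and operator fields are exactly as logged):

    // local.oplog.rs is a capped collection, so $natural order is insertion
    // order; this returns the most recent lockpings update, i.e. the entry
    // the fetcher just shipped. On a 4.2-era secondary, run rs.slaveOk()
    // first to allow reads.
    db.getSiblingDB("local").oplog.rs
        .find({ ns: "config.lockpings", op: "u" })
        .sort({ $natural: -1 })
        .limit(1)
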
2019-09-04T06:27:55.580+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:05.677+0000 2019-09-04T06:27:55.580+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578475, 3), t: 1 } 2019-09-04T06:27:55.580+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:55.580+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:55.580+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:55.580+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:55.580+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578474, 1) 2019-09-04T06:27:55.580+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1883 2019-09-04T06:27:55.580+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:55.580+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:55.580+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1883 2019-09-04T06:27:55.580+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:55.580+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:55.580+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:27:55.580+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578474, 1) 2019-09-04T06:27:55.580+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1886 2019-09-04T06:27:55.580+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578475, 3) } 2019-09-04T06:27:55.580+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:55.580+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:55.580+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1886 2019-09-04T06:27:55.580+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1862 2019-09-04T06:27:55.580+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1862 2019-09-04T06:27:55.580+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1889 2019-09-04T06:27:55.580+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1889 2019-09-04T06:27:55.580+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:55.580+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 1891 2019-09-04T06:27:55.580+0000 D4 STORAGE 
[repl-writer-worker-15] inserting record with timestamp Timestamp(1567578475, 3) 2019-09-04T06:27:55.580+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578475, 3) 2019-09-04T06:27:55.580+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 1891 2019-09-04T06:27:55.580+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:55.580+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:27:55.580+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1890 2019-09-04T06:27:55.580+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1890 2019-09-04T06:27:55.580+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1893 2019-09-04T06:27:55.580+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1893 2019-09-04T06:27:55.580+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578475, 3), t: 1 }({ ts: Timestamp(1567578475, 3), t: 1 }) 2019-09-04T06:27:55.580+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578475, 3) 2019-09-04T06:27:55.580+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1894 2019-09-04T06:27:55.580+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578475, 3) } } ] } sort: {} projection: {} 2019-09-04T06:27:55.580+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:55.580+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:27:55.580+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578475, 3) Sort: {} Proj: {} ============================= 2019-09-04T06:27:55.580+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:55.580+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:27:55.580+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:55.580+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578475, 3) || First: notFirst: full path: ts 2019-09-04T06:27:55.580+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:55.580+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578475, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:55.580+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:27:55.580+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:27:55.580+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578475, 3) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578475, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
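
The D5 QUERY trace running through these lines is the subplanner handling the $or that guards the minvalid update: each branch of the $or is planned separately against the only index on local.replset.minvalid ({ _id: 1 }), neither the t nor the ts predicate can use it, so every branch, and finally the whole $or output just below, falls back to a COLLSCAN. The same decision can be surfaced interactively; a hedged shell sketch reusing the filter from the log:

    // explain() reports the winning plan; expect COLLSCAN here, since
    // neither 't' nor 'ts' is indexed on this single-document collection.
    db.getSiblingDB("local").replset.minvalid
        .find({ $or: [ { t: { $lt: 1 } },
                       { t: 1, ts: { $lt: Timestamp(1567578475, 3) } } ] })
        .explain("queryPlanner")
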
2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578475, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:55.581+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1894 2019-09-04T06:27:55.581+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:55.581+0000 D3 STORAGE [repl-writer-worker-1] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:55.581+0000 D3 REPL [repl-writer-worker-1] applying op: { ts: Timestamp(1567578475, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578475572), o: { $v: 1, $set: { ping: new Date(1567578475571) } } }, oplog application mode: Secondary 2019-09-04T06:27:55.581+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578475, 3) 2019-09-04T06:27:55.581+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 1896 2019-09-04T06:27:55.581+0000 D2 QUERY [repl-writer-worker-1] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" } 2019-09-04T06:27:55.581+0000 D4 WRITE [repl-writer-worker-1] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:27:55.581+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 1896 2019-09-04T06:27:55.581+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:55.581+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578475, 3), t: 1 }({ ts: Timestamp(1567578475, 3), t: 1 }) 2019-09-04T06:27:55.581+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578475, 3) 2019-09-04T06:27:55.581+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1895 2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:27:55.581+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:27:55.581+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:27:55.581+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1895 2019-09-04T06:27:55.581+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578475, 3) 2019-09-04T06:27:55.581+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:55.581+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1899 2019-09-04T06:27:55.581+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1899 2019-09-04T06:27:55.581+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), appliedOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, appliedWallTime: new Date(1567578474817), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), appliedOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, appliedWallTime: new Date(1567578474817), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578474, 1), t: 1 }, lastCommittedWall: new Date(1567578474817), lastOpVisible: { ts: Timestamp(1567578474, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:55.581+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578475, 3), t: 1 }({ ts: Timestamp(1567578475, 3), t: 1 }) 2019-09-04T06:27:55.581+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 118 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:25.581+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), appliedOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, appliedWallTime: new Date(1567578474817), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), appliedOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, appliedWallTime: new Date(1567578474817), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578474, 1), t: 1 }, lastCommittedWall: new Date(1567578474817), lastOpVisible: { ts: Timestamp(1567578474, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:55.581+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:25.581+0000 2019-09-04T06:27:55.582+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578475, 3), t: 1 } 2019-09-04T06:27:55.582+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 119 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:05.582+0000 cmd:{ getMore: 2779728788818727477, 
collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578474, 1), t: 1 } } 2019-09-04T06:27:55.582+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:25.581+0000 2019-09-04T06:27:55.582+0000 D2 ASIO [RS] Request 118 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } 2019-09-04T06:27:55.582+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:55.582+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:55.582+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:25.581+0000 2019-09-04T06:27:55.582+0000 D2 ASIO [RS] Request 119 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpApplied: { ts: Timestamp(1567578475, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } 2019-09-04T06:27:55.582+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new 
Date(1567578475572), lastOpApplied: { ts: Timestamp(1567578475, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:55.582+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:55.582+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:27:55.582+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.582+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.582+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578470, 3) 2019-09-04T06:27:55.582+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:05.677+0000 2019-09-04T06:27:55.582+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:06.662+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn76] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn76] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.092+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn94] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.582+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:55.582+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn94] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn63] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn63] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn64] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn64] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn89] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn89] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:16.417+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn82] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn82] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.998+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn68] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn68] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.730+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn83] Got notified of 
new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn83] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.431+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn62] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn62] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.671+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn30] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn30] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn91] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn91] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.645+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn86] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.582+0000 D3 REPL [conn86] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn99] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn99] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.054+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn100] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn100] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.595+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn35] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn35] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn84] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn84] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn79] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn79] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.289+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn87] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn87] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.823+0000 2019-09-04T06:27:55.583+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 120 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:05.583+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578475, 3), t: 1 } } 2019-09-04T06:27:55.583+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:25.581+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn66] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 
2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn66] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.686+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn27] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn27] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.829+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn65] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn65] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.679+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn36] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn36] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.835+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn96] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn96] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn98] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn98] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.768+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn67] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn67] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.698+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn88] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn88] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn53] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn53] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.753+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn80] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn80] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.307+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn56] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn56] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.963+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn54] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn54] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.764+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn55] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn55] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.926+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn24] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 
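
The run of "Got notified of new snapshot" / "waitUntilOpTime" pairs, which continues below, is the node waking every connection parked on an afterOpTime or afterClusterTime read: each waiter compares the new committed snapshot { ts: Timestamp(1567578475, 3), t: 1 } against its target and either proceeds or keeps waiting until its deadline. A read of this shape can be issued directly; a hedged sketch (the cluster-time value is illustrative, taken from the log):

    // A majority read gated on a cluster time; if this node's committed
    // snapshot is older than the given timestamp, the command parks in
    // waitUntilOpTime, exactly as the conn* lines here, until the snapshot
    // advances or maxTimeMS expires.
    db.getSiblingDB("config").runCommand({
        find: "lockpings",
        limit: 1,
        readConcern: { level: "majority",
                       afterClusterTime: Timestamp(1567578475, 3) },
        maxTimeMS: 30000
    })
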
2019-09-04T06:27:55.583+0000 D3 REPL [conn24] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn85] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn85] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn57] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn57] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:03.480+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn93] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn93] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn95] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn95] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn92] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn92] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.664+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn97] Got notified of new snapshot: { ts: Timestamp(1567578475, 3), t: 1 }, 2019-09-04T06:27:55.572+0000 2019-09-04T06:27:55.583+0000 D3 REPL [conn97] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.670+0000 2019-09-04T06:27:55.599+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:27:55.599+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:55.599+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), appliedOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, appliedWallTime: new Date(1567578474817), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), appliedOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, appliedWallTime: new Date(1567578474817), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:55.600+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 121 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:25.600+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), appliedOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, appliedWallTime: new Date(1567578474817), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: 
Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, durableWallTime: new Date(1567578474817), appliedOpTime: { ts: Timestamp(1567578474, 1), t: 1 }, appliedWallTime: new Date(1567578474817), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:27:55.600+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:25.581+0000 2019-09-04T06:27:55.599+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:55.600+0000 D2 ASIO [RS] Request 121 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } 2019-09-04T06:27:55.600+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:55.600+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:27:55.600+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:25.581+0000 2019-09-04T06:27:55.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:55.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:55.680+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578475, 3) 2019-09-04T06:27:55.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:55.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:55.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:55.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 
0ms 2019-09-04T06:27:55.700+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:55.800+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:55.900+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:55.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:55.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:56.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:56.000+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:56.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:56.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:56.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:56.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:56.100+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:56.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:56.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:56.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:56.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:56.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:56.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:56.200+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:56.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 11855EAE959EB67E67E8C613577F5E91C5FB1D85), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:56.231+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:27:56.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 11855EAE959EB67E67E8C613577F5E91C5FB1D85), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:56.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 11855EAE959EB67E67E8C613577F5E91C5FB1D85), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:56.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { 
ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), opTime: { ts: Timestamp(1567578475, 3), t: 1 }, wallTime: new Date(1567578475572) } 2019-09-04T06:27:56.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 11855EAE959EB67E67E8C613577F5E91C5FB1D85), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:56.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:56.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:27:56.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:56.282+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:56.301+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:56.401+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:56.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:56.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:56.501+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:56.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:56.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:56.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:56.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:56.580+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:56.580+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:56.580+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578475, 3) 2019-09-04T06:27:56.580+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1918 2019-09-04T06:27:56.580+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:56.580+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:56.580+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1918 2019-09-04T06:27:56.581+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1921 2019-09-04T06:27:56.581+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1921 
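
The replSetHeartbeat round-trip just logged, together with the replSetUpdatePosition reports sent earlier through the [replication-0] executor, are the two feeds that keep each member's view of the others' durable and applied optimes current; the merged picture is what replSetGetStatus reports. A minimal shell sketch:

    // Aggregate replication state assembled from these heartbeats and
    // position reports; rs.status() is the shell wrapper for the same command.
    db.adminCommand({ replSetGetStatus: 1 })
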
2019-09-04T06:27:56.581+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578475, 3), t: 1 }({ ts: Timestamp(1567578475, 3), t: 1 }) 2019-09-04T06:27:56.601+0000 D2 COMMAND [conn49] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578474, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578475, 2), signature: { hash: BinData(0, 11855EAE959EB67E67E8C613577F5E91C5FB1D85), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578474, 1), t: 1 } }, $db: "config" } 2019-09-04T06:27:56.601+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578474, 1), t: 1 } } } 2019-09-04T06:27:56.601+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:27:56.601+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578474, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578475, 2), signature: { hash: BinData(0, 11855EAE959EB67E67E8C613577F5E91C5FB1D85), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578474, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578475, 3) 2019-09-04T06:27:56.601+0000 D2 QUERY [conn49] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:27:56.601+0000 I COMMAND [conn49] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578474, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578475, 2), signature: { hash: BinData(0, 11855EAE959EB67E67E8C613577F5E91C5FB1D85), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578474, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:27:56.601+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:56.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:56.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:56.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:56.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:56.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:56.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:56.701+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:27:56.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:56.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:56.801+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:56.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:56.836+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:27:55.061+0000 2019-09-04T06:27:56.836+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:27:56.231+0000 2019-09-04T06:27:56.836+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:27:55.061+0000 2019-09-04T06:27:56.836+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:28:05.061+0000 2019-09-04T06:27:56.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:56.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 122) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:56.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 122 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:06.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:56.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:56.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:56.836+0000 D2 ASIO [Replication] Request 122 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), opTime: { ts: Timestamp(1567578475, 3), t: 1 }, wallTime: new Date(1567578475572), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } 2019-09-04T06:27:56.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), opTime: { ts: Timestamp(1567578475, 3), t: 1 }, wallTime: new Date(1567578475572), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:27:56.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:56.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 122) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), opTime: { ts: Timestamp(1567578475, 3), t: 1 }, wallTime: new Date(1567578475572), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } 2019-09-04T06:27:56.836+0000 D4 ELECTION [replexec-2] Postponing election timeout due to heartbeat from primary 2019-09-04T06:27:56.836+0000 D4 REPL [replexec-2] Canceling election timeout callback at 2019-09-04T06:28:06.662+0000 2019-09-04T06:27:56.836+0000 D4 ELECTION [replexec-2] Scheduling election timeout callback at 2019-09-04T06:28:07.504+0000 2019-09-04T06:27:56.836+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:27:56.836+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:27:58.836Z 2019-09-04T06:27:56.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:56.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:56.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:56.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:56.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 123) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:56.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 123 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:06.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:56.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:56.837+0000 D2 ASIO [Replication] Request 123 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), opTime: { ts: Timestamp(1567578475, 3), t: 1 }, wallTime: new Date(1567578475572), 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } 2019-09-04T06:27:56.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), opTime: { ts: Timestamp(1567578475, 3), t: 1 }, wallTime: new Date(1567578475572), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:56.837+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:56.837+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 123) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), opTime: { ts: Timestamp(1567578475, 3), t: 1 }, wallTime: new Date(1567578475572), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } 2019-09-04T06:27:56.837+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:27:56.837+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:27:58.837Z 2019-09-04T06:27:56.837+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:56.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:56.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:56.901+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:56.911+0000 D2 COMMAND 
[conn48] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:56.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:56.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:56.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:57.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:57.002+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:57.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:57.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:57.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:57.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:57.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 11855EAE959EB67E67E8C613577F5E91C5FB1D85), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:57.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:27:57.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 11855EAE959EB67E67E8C613577F5E91C5FB1D85), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:57.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 11855EAE959EB67E67E8C613577F5E91C5FB1D85), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:57.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), opTime: { ts: Timestamp(1567578475, 3), t: 1 }, wallTime: new Date(1567578475572) } 2019-09-04T06:27:57.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578475, 3), signature: { hash: BinData(0, 11855EAE959EB67E67E8C613577F5E91C5FB1D85), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:57.102+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:57.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:57.160+0000 I COMMAND 
[conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:57.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:57.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:57.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:57.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:57.202+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:57.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:57.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:27:57.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:57.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:57.302+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:57.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:57.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:57.402+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:57.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:57.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:57.502+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:57.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:57.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:57.580+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:57.581+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:57.581+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578475, 3) 2019-09-04T06:27:57.581+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1943 2019-09-04T06:27:57.581+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:57.581+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:57.581+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1943 2019-09-04T06:27:57.582+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1946 2019-09-04T06:27:57.582+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1946 
2019-09-04T06:27:57.582+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578475, 3), t: 1 }({ ts: Timestamp(1567578475, 3), t: 1 }) 2019-09-04T06:27:57.602+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:57.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:57.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:57.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:57.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:57.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:57.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:57.703+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:57.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:57.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:57.803+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:57.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:57.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:57.903+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:57.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:57.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:58.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:58.003+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:58.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:58.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:58.103+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:58.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:58.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:58.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:58.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:58.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:58.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:58.203+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:58.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: 
"configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578477, 1), signature: { hash: BinData(0, 34908BA0F13339304222F9E08926332F9A4131BB), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:58.231+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:27:58.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578477, 1), signature: { hash: BinData(0, 34908BA0F13339304222F9E08926332F9A4131BB), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:58.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578477, 1), signature: { hash: BinData(0, 34908BA0F13339304222F9E08926332F9A4131BB), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:58.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), opTime: { ts: Timestamp(1567578475, 3), t: 1 }, wallTime: new Date(1567578475572) } 2019-09-04T06:27:58.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578477, 1), signature: { hash: BinData(0, 34908BA0F13339304222F9E08926332F9A4131BB), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:58.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:58.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:27:58.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:58.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:58.304+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:58.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:58.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:58.404+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:58.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:58.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:58.504+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:58.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:58.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:58.581+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:58.581+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:58.581+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578475, 3) 2019-09-04T06:27:58.581+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1965 2019-09-04T06:27:58.581+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:58.581+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:58.581+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1965 2019-09-04T06:27:58.582+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1968 2019-09-04T06:27:58.582+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1968 2019-09-04T06:27:58.582+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578475, 3), t: 1 }({ ts: Timestamp(1567578475, 3), t: 1 }) 2019-09-04T06:27:58.604+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:58.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:58.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:58.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:58.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:58.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:58.699+0000 I COMMAND [conn5] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:58.704+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:58.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:58.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:58.804+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:58.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:58.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 124) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:58.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 124 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:08.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:58.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:58.836+0000 D2 ASIO [Replication] Request 124 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), opTime: { ts: Timestamp(1567578475, 3), t: 1 }, wallTime: new Date(1567578475572), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578477, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } 2019-09-04T06:27:58.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), opTime: { ts: Timestamp(1567578475, 3), t: 1 }, wallTime: new Date(1567578475572), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578477, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:27:58.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:58.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 124) from 
cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), opTime: { ts: Timestamp(1567578475, 3), t: 1 }, wallTime: new Date(1567578475572), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578477, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } 2019-09-04T06:27:58.836+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:27:58.836+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:28:07.504+0000 2019-09-04T06:27:58.836+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:28:10.276+0000 2019-09-04T06:27:58.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:58.836+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:27:58.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:58.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:00.836Z 2019-09-04T06:27:58.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:58.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:27:58.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 125) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:58.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 125 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:08.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:27:58.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:58.837+0000 D2 ASIO [Replication] Request 125 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), opTime: { ts: Timestamp(1567578475, 3), t: 1 }, wallTime: new Date(1567578475572), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578477, 1), signature: { hash: 
BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } 2019-09-04T06:27:58.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), opTime: { ts: Timestamp(1567578475, 3), t: 1 }, wallTime: new Date(1567578475572), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578477, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:58.837+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:27:58.837+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 125) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), opTime: { ts: Timestamp(1567578475, 3), t: 1 }, wallTime: new Date(1567578475572), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578477, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578475, 3) } 2019-09-04T06:27:58.837+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:27:58.837+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:00.837Z 2019-09-04T06:27:58.837+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:58.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:58.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:58.904+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:58.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:58.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:59.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:27:59.004+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:59.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 
2019-09-04T06:27:59.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:59.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578477, 1), signature: { hash: BinData(0, 34908BA0F13339304222F9E08926332F9A4131BB), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:59.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:27:59.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578477, 1), signature: { hash: BinData(0, 34908BA0F13339304222F9E08926332F9A4131BB), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:59.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578477, 1), signature: { hash: BinData(0, 34908BA0F13339304222F9E08926332F9A4131BB), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:27:59.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), opTime: { ts: Timestamp(1567578475, 3), t: 1 }, wallTime: new Date(1567578475572) } 2019-09-04T06:27:59.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578477, 1), signature: { hash: BinData(0, 34908BA0F13339304222F9E08926332F9A4131BB), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:59.105+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:59.198+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:59.198+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:59.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:59.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:59.205+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:59.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:27:59.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:27:59.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:59.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:59.305+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:59.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:59.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:59.405+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:59.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:59.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:59.505+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:59.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:59.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:59.581+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:59.581+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:59.581+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578475, 3) 2019-09-04T06:27:59.581+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 1988 2019-09-04T06:27:59.581+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:59.581+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:59.581+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 1988 2019-09-04T06:27:59.582+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1991 2019-09-04T06:27:59.582+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 1991 2019-09-04T06:27:59.582+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578475, 3), t: 1 }({ ts: Timestamp(1567578475, 3), t: 1 }) 2019-09-04T06:27:59.605+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:59.698+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:59.698+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:59.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:59.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:59.705+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:59.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: 
"admin" } 2019-09-04T06:27:59.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:59.806+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:59.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:59.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:59.864+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45636 #102 (84 connections now open) 2019-09-04T06:27:59.864+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:27:59.864+0000 D2 COMMAND [conn102] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:27:59.864+0000 I NETWORK [conn102] received client metadata from 10.108.2.72:45636 conn102: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:27:59.864+0000 I COMMAND [conn102] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:27:59.864+0000 D2 COMMAND [conn102] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578472, 1), signature: { hash: BinData(0, 5F7DD3BA379B8A9729407EBC61070D727C8E5B57), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:27:59.864+0000 D1 REPL [conn102] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578475, 3), t: 1 } 2019-09-04T06:27:59.864+0000 D3 REPL [conn102] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:29.874+0000 2019-09-04T06:27:59.906+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:27:59.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:27:59.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:27:59.998+0000 D2 ASIO [RS] Request 120 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578479, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: 
"cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578479996), o: { $v: 1, $set: { ping: new Date(1567578479996) } } }, { ts: Timestamp(1567578479, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578479996), o: { $v: 1, $set: { ping: new Date(1567578479996) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpApplied: { ts: Timestamp(1567578479, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578479, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578479, 2) } 2019-09-04T06:27:59.998+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578479, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578479996), o: { $v: 1, $set: { ping: new Date(1567578479996) } } }, { ts: Timestamp(1567578479, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578479996), o: { $v: 1, $set: { ping: new Date(1567578479996) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpApplied: { ts: Timestamp(1567578479, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578475, 3), $clusterTime: { clusterTime: Timestamp(1567578479, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578479, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:27:59.998+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:27:59.998+0000 D2 REPL [replication-0] oplog fetcher read 2 operations from remote oplog starting at ts: Timestamp(1567578479, 1) and ending at ts: Timestamp(1567578479, 2) 2019-09-04T06:27:59.999+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:10.276+0000 2019-09-04T06:27:59.999+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:10.369+0000 2019-09-04T06:27:59.999+0000 D3 
EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:27:59.999+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578479, 2), t: 1 } 2019-09-04T06:27:59.999+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:59.999+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:59.999+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578475, 3) 2019-09-04T06:27:59.999+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2001 2019-09-04T06:27:59.999+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:59.999+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:59.999+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2001 2019-09-04T06:27:59.999+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:27:59.999+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:27:59.999+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578475, 3) 2019-09-04T06:27:59.999+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2004 2019-09-04T06:27:59.999+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:27:59.999+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:27:59.999+0000 D2 REPL [rsSync-0] replication batch size is 2 2019-09-04T06:27:59.999+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578479, 1) } 2019-09-04T06:27:59.999+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2004 2019-09-04T06:27:59.999+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:27:59.999+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 1992 2019-09-04T06:27:59.999+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 1992 2019-09-04T06:27:59.999+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2007 2019-09-04T06:27:59.999+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2007 2019-09-04T06:27:59.999+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:27:59.999+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 2009 2019-09-04T06:27:59.999+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578479, 1) 2019-09-04T06:27:59.999+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write 
operations to Timestamp(1567578479, 1) 2019-09-04T06:27:59.999+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578479, 2) 2019-09-04T06:27:59.999+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578479, 2) 2019-09-04T06:27:59.999+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 2009 2019-09-04T06:27:59.999+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:27:59.999+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:27:59.999+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2008 2019-09-04T06:27:59.999+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2008 2019-09-04T06:27:59.999+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2011 2019-09-04T06:27:59.999+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2011 2019-09-04T06:27:59.999+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578479, 2), t: 1 }({ ts: Timestamp(1567578479, 2), t: 1 }) 2019-09-04T06:28:00.000+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578479, 2) 2019-09-04T06:28:00.000+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2012 2019-09-04T06:28:00.000+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578479, 2) } } ] } sort: {} projection: {} 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:00.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578479, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578479, 2) || First: notFirst: full path: ts 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578479, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578479, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578479, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:28:00.000+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578479, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:00.000+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2012 2019-09-04T06:28:00.000+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:00.000+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:00.000+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578479, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578479996), o: { $v: 1, $set: { ping: new Date(1567578479996) } } }, oplog application mode: Secondary 2019-09-04T06:28:00.000+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578479, 2) 2019-09-04T06:28:00.000+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 2015 2019-09-04T06:28:00.000+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" } 2019-09-04T06:28:00.000+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:28:00.000+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 2015 2019-09-04T06:28:00.000+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:00.000+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:00.000+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578479, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578479996), o: { $v: 1, $set: { ping: new Date(1567578479996) } } }, oplog application mode: Secondary 2019-09-04T06:28:00.000+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578479, 1) 2019-09-04T06:28:00.000+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 2017 2019-09-04T06:28:00.000+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" } 2019-09-04T06:28:00.000+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:28:00.000+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 2017 2019-09-04T06:28:00.000+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:00.000+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:00.005+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578479, 2), t: 1 } 2019-09-04T06:28:00.005+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 126 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:10.005+0000 cmd:{ getMore: 
2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578475, 3), t: 1 } } 2019-09-04T06:28:00.005+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:25.581+0000 2019-09-04T06:28:00.005+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578479, 2), t: 1 }({ ts: Timestamp(1567578479, 2), t: 1 }) 2019-09-04T06:28:00.005+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578479, 2) 2019-09-04T06:28:00.005+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2014 2019-09-04T06:28:00.005+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:28:00.005+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:00.005+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:28:00.005+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:00.005+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:00.005+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:00.005+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2014 2019-09-04T06:28:00.005+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578479, 2) 2019-09-04T06:28:00.006+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2020 2019-09-04T06:28:00.006+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2020 2019-09-04T06:28:00.006+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578479, 2), t: 1 }({ ts: Timestamp(1567578479, 2), t: 1 }) 2019-09-04T06:28:00.006+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:00.006+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578479, 2), t: 1 }, appliedWallTime: new Date(1567578479996), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:00.006+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 127 -- target:[cmodb804.togewa.com:27019] db:admin 
expDate:2019-09-04T06:28:30.006+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578479, 2), t: 1 }, appliedWallTime: new Date(1567578479996), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:00.006+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:25.581+0000 2019-09-04T06:28:00.006+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:00.006+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:28:00.007+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:28:00.014+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:00.015+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:28:00.016+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:00.016+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:28:00.016+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:28:00.016+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:28:00.016+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:00.017+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:00.017+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:00.017+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:28:00.017+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:00.017+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, 
ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:00.017+0000 D2 ASIO [RS] Request 127 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578479, 2), t: 1 }, lastCommittedWall: new Date(1567578479996), lastOpVisible: { ts: Timestamp(1567578479, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578479, 2), $clusterTime: { clusterTime: Timestamp(1567578479, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578479, 2) } 2019-09-04T06:28:00.017+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578479, 2), t: 1 }, lastCommittedWall: new Date(1567578479996), lastOpVisible: { ts: Timestamp(1567578479, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578479, 2), $clusterTime: { clusterTime: Timestamp(1567578479, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578479, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:00.017+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:00.017+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578479, 2), t: 1 }, durableWallTime: new Date(1567578479996), appliedOpTime: { ts: Timestamp(1567578479, 2), t: 1 }, appliedWallTime: new Date(1567578479996), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:00.017+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 128 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:30.017+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578479, 2), t: 1 }, durableWallTime: new Date(1567578479996), appliedOpTime: { ts: Timestamp(1567578479, 2), t: 1 }, appliedWallTime: new Date(1567578479996), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 
}, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578475, 3), t: 1 }, lastCommittedWall: new Date(1567578475572), lastOpVisible: { ts: Timestamp(1567578475, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:00.017+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:25.581+0000 2019-09-04T06:28:00.021+0000 D2 ASIO [RS] Request 126 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578479, 2), t: 1 }, lastCommittedWall: new Date(1567578479996), lastOpVisible: { ts: Timestamp(1567578479, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578479, 2), t: 1 }, lastCommittedWall: new Date(1567578479996), lastOpApplied: { ts: Timestamp(1567578479, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578479, 2), $clusterTime: { clusterTime: Timestamp(1567578479, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578479, 2) } 2019-09-04T06:28:00.021+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578479, 2), t: 1 }, lastCommittedWall: new Date(1567578479996), lastOpVisible: { ts: Timestamp(1567578479, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578479, 2), t: 1 }, lastCommittedWall: new Date(1567578479996), lastOpApplied: { ts: Timestamp(1567578479, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578479, 2), $clusterTime: { clusterTime: Timestamp(1567578479, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578479, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:00.021+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:00.021+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:00.021+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.021+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.021+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578474, 2) 2019-09-04T06:28:00.021+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:10.369+0000 2019-09-04T06:28:00.021+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:10.798+0000 
2019-09-04T06:28:00.021+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 129 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:10.021+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578479, 2), t: 1 } } 2019-09-04T06:28:00.021+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:25.581+0000 2019-09-04T06:28:00.021+0000 D3 REPL [conn76] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.021+0000 D3 REPL [conn76] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.092+0000 2019-09-04T06:28:00.021+0000 D3 REPL [conn82] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.021+0000 D3 REPL [conn82] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.998+0000 2019-09-04T06:28:00.021+0000 D3 REPL [conn91] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.021+0000 D3 REPL [conn91] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.645+0000 2019-09-04T06:28:00.021+0000 D3 REPL [conn86] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.021+0000 D3 REPL [conn86] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:28:00.021+0000 D3 REPL [conn99] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.021+0000 D3 REPL [conn99] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.054+0000 2019-09-04T06:28:00.021+0000 D3 REPL [conn84] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.021+0000 D3 REPL [conn84] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:28:00.021+0000 D3 REPL [conn54] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.021+0000 D3 REPL [conn54] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.764+0000 2019-09-04T06:28:00.021+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:00.021+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:00.022+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:00.022+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:28:00.022+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:28:00.023+0000 D2 ASIO [RS] Request 128 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578479, 2), t: 1 }, lastCommittedWall: new Date(1567578479996), lastOpVisible: { ts: Timestamp(1567578479, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578479, 2), $clusterTime: { clusterTime: Timestamp(1567578479, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578479, 2) } 2019-09-04T06:28:00.023+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578479, 2), t: 1 }, lastCommittedWall: new Date(1567578479996), lastOpVisible: { ts: Timestamp(1567578479, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578479, 2), $clusterTime: { clusterTime: Timestamp(1567578479, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578479, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:00.023+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:00.023+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:25.581+0000 2019-09-04T06:28:00.027+0000 D3 REPL [conn94] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn94] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn63] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn63] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn64] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn64] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn89] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn89] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:16.417+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn68] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn68] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.730+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn83] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn83] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.431+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn62] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn62] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.671+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn30] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn30] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn100] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 
2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn100] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.595+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn35] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn35] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn79] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn79] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.289+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn87] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn87] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.823+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn66] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn66] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.686+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn27] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn27] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.829+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn65] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn65] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.679+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn36] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn36] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.835+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn96] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn96] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn98] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn98] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.768+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn67] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn67] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.698+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn88] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn88] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn53] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn53] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.753+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn80] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 
2019-09-04T06:28:00.028+0000 D3 REPL [conn80] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.307+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn56] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn56] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.963+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn55] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn55] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.926+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn24] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn24] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn85] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn85] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn57] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn57] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:03.480+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn93] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn93] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn95] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn95] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn92] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn92] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.664+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn97] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.028+0000 D3 REPL [conn97] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.670+0000 2019-09-04T06:28:00.036+0000 D3 REPL [conn102] Got notified of new snapshot: { ts: Timestamp(1567578479, 2), t: 1 }, 2019-09-04T06:27:59.996+0000 2019-09-04T06:28:00.036+0000 D3 REPL [conn102] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:29.874+0000 2019-09-04T06:28:00.048+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:28:00.048+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:00.048+0000 D5 QUERY [conn90] Beginning planning... 
============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:28:00.048+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:28:00.048+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:28:00.048+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:28:00.048+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:28:00.048+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:28:00.048+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:28:00.048+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:00.048+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:00.048+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:00.048+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578479, 2) 2019-09-04T06:28:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2029 2019-09-04T06:28:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2029 2019-09-04T06:28:00.048+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:00.048+0000 D2 ASIO [RS] Request 129 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578480, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578480018), o: { $v: 1, $set: { ping: new Date(1567578480009) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578479, 2), t: 1 }, lastCommittedWall: new Date(1567578479996), lastOpVisible: { ts: Timestamp(1567578479, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578479, 2), t: 1 }, lastCommittedWall: new Date(1567578479996), lastOpApplied: { ts: Timestamp(1567578480, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578479, 2), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } 2019-09-04T06:28:00.048+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578480, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578480018), o: { $v: 1, $set: { ping: new Date(1567578480009) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578479, 2), t: 1 }, lastCommittedWall: new Date(1567578479996), lastOpVisible: { ts: Timestamp(1567578479, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578479, 2), t: 1 }, lastCommittedWall: new Date(1567578479996), lastOpApplied: { ts: Timestamp(1567578480, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578479, 2), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:00.048+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:00.048+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578480, 1) and ending at ts: Timestamp(1567578480, 1) 2019-09-04T06:28:00.048+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:10.798+0000 2019-09-04T06:28:00.048+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:11.124+0000 2019-09-04T06:28:00.048+0000 D2 REPL [replication-1] oplog buffer has 0 bytes 2019-09-04T06:28:00.048+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578480, 1), t: 1 } 2019-09-04T06:28:00.052+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:28:00.052+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:00.052+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:00.052+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:00.052+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578479, 2) 2019-09-04T06:28:00.052+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2032 2019-09-04T06:28:00.052+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:00.052+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, 
autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:00.052+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2032 2019-09-04T06:28:00.052+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:00.052+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:00.052+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578479, 2) 2019-09-04T06:28:00.052+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2035 2019-09-04T06:28:00.052+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:00.052+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:00.052+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2035 2019-09-04T06:28:00.052+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:28:00.052+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578480, 1) } 2019-09-04T06:28:00.052+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2021 2019-09-04T06:28:00.052+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2021 2019-09-04T06:28:00.052+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2038 2019-09-04T06:28:00.052+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2038 2019-09-04T06:28:00.052+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:00.052+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 2040 2019-09-04T06:28:00.052+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578480, 1) 2019-09-04T06:28:00.052+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578480, 1) 2019-09-04T06:28:00.052+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 2040 2019-09-04T06:28:00.052+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:00.052+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:28:00.052+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2039 2019-09-04T06:28:00.052+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2039 2019-09-04T06:28:00.052+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2042 2019-09-04T06:28:00.052+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2042 2019-09-04T06:28:00.052+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578480, 1), t: 1 }({ ts: Timestamp(1567578480, 1), t: 1 }) 2019-09-04T06:28:00.052+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578480, 1) 2019-09-04T06:28:00.052+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2043 2019-09-04T06:28:00.052+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: 
Timestamp(1567578480, 1) } } ] } sort: {} projection: {} 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578480, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578480, 1) || First: notFirst: full path: ts 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578480, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578480, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578480, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:28:00.052+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578480, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:00.052+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2043 2019-09-04T06:28:00.053+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:00.053+0000 D3 STORAGE [repl-writer-worker-13] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:00.053+0000 D3 REPL [repl-writer-worker-13] applying op: { ts: Timestamp(1567578480, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578480018), o: { $v: 1, $set: { ping: new Date(1567578480009) } } }, oplog application mode: Secondary 2019-09-04T06:28:00.053+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578480, 1) 2019-09-04T06:28:00.053+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 2045 2019-09-04T06:28:00.053+0000 D2 QUERY [repl-writer-worker-13] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" } 2019-09-04T06:28:00.053+0000 D4 WRITE [repl-writer-worker-13] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:28:00.053+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 2045 2019-09-04T06:28:00.053+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:00.053+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578480, 1), t: 1 }({ ts: Timestamp(1567578480, 1), t: 1 }) 2019-09-04T06:28:00.053+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578480, 1) 2019-09-04T06:28:00.053+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2044 2019-09-04T06:28:00.053+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:28:00.053+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:00.053+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:28:00.053+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:00.053+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:00.053+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:00.053+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2044 2019-09-04T06:28:00.053+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578480, 1) 2019-09-04T06:28:00.053+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2048 2019-09-04T06:28:00.053+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2048 2019-09-04T06:28:00.053+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578480, 1), t: 1 }({ ts: Timestamp(1567578480, 1), t: 1 }) 2019-09-04T06:28:00.053+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:00.053+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578479, 2), t: 1 }, durableWallTime: new Date(1567578479996), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578479, 2), t: 1 }, lastCommittedWall: new Date(1567578479996), lastOpVisible: { ts: Timestamp(1567578479, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:00.053+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 130 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:30.053+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578479, 2), t: 1 }, durableWallTime: new Date(1567578479996), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578479, 2), t: 1 }, lastCommittedWall: new Date(1567578479996), lastOpVisible: { ts: Timestamp(1567578479, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:00.053+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:30.053+0000 2019-09-04T06:28:00.056+0000 D2 ASIO [RS] Request 130 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578479, 2), t: 1 }, lastCommittedWall: new Date(1567578479996), lastOpVisible: { ts: Timestamp(1567578479, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578479, 2), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } 2019-09-04T06:28:00.056+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578479, 2), t: 1 }, lastCommittedWall: new Date(1567578479996), lastOpVisible: { ts: Timestamp(1567578479, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578479, 2), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:00.056+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:00.056+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:30.056+0000 2019-09-04T06:28:00.058+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578480, 1), t: 1 } 2019-09-04T06:28:00.058+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 131 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:10.058+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578479, 2), t: 1 } } 2019-09-04T06:28:00.058+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:30.056+0000 2019-09-04T06:28:00.058+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:00.058+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:28:00.058+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:28:00.058+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:00.058+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1 Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:28:00.058+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:28:00.058+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:28:00.058+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578480, 1) 2019-09-04T06:28:00.058+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2052 2019-09-04T06:28:00.058+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2052 2019-09-04T06:28:00.058+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:00.059+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:00.059+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1 Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:28:00.059+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:28:00.059+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578480, 1) 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2054 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2054 2019-09-04T06:28:00.059+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:00.059+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:28:00.059+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. 
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:28:00.059+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:28:00.059+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:00.059+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2057 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2057 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2058 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2058 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2059 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:28:00.059+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2059 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2060 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2060 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2061 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2061 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2062 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
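
The catalog traffic above is all fallout from a single listDatabases on conn90: for every database the server walks each collection's entry in _mdb_catalog ("looking up metadata for ... fetched CCE metadata ... returning metadata"), opening and rolling back a short read transaction per lookup. Together with the shardConnPoolStats call and the head/tail oplog probes a few entries earlier, this looks like the polling signature of a monitoring agent rather than application load. Below is a minimal pymongo sketch that issues the same command sequence; the connection string and the framing as a monitoring probe are assumptions, not facts taken from this log.

    from pymongo import MongoClient

    # Connection string is an assumption; the commands mirror the conn90
    # entries in this log (all sent with secondaryPreferred).
    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         readPreference="secondaryPreferred")
    admin, local = client.admin, client.local

    pool_stats = admin.command("shardConnPoolStats")

    # Oldest/newest oplog entry; the $natural sort hint is what produces the
    # "Forcing a table scan due to hinted $natural" COLLSCAN plans above.
    oldest = local["oplog.rs"].find_one({"ts": {"$exists": True}},
                                        sort=[("$natural", 1)])
    newest = local["oplog.rs"].find_one({"ts": {"$exists": True}},
                                        sort=[("$natural", -1)])

    # listDatabases triggers the per-collection catalog walk logged above;
    # per-database dbStats follows, as seen further down on conn90.
    for db_info in admin.command("listDatabases")["databases"]:
        client[db_info["name"]].command("dbStats")
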
2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2062 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2063 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2063 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2064 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, 
prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2064 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2065 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2065 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2066 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2066 2019-09-04T06:28:00.060+0000 D3 
STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2067 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:00.060+0000 D3 STORAGE 
[conn90] WT rollback_transaction for snapshot id 2067 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2068 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2068 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2069 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:28:00.060+0000 
D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2069 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2070 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2070 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2071 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] returning metadata: 
md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2071 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2072 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.060+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2072 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2073 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:28:00.061+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2073 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2074 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2074 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2075 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2075 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] looking up metadata 
for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2076 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2076 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2077 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2077 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2078 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
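
Each "ident" in the metadata above names the WiredTiger table backing a namespace: with directoryPerDB and directoryForIndexes (both set in this node's config), the ident "config/collection/28--6194257481163143499" for config.lockpings is the file config/collection/28--6194257481163143499.wt under dbPath, with its indexes under config/index/. The same ns-to-file mapping can be recovered at runtime from collStats; a sketch follows, with the output field layout as observed on 4.2 WiredTiger nodes rather than guaranteed by the command contract.

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    db = client.config

    for ns in ("lockpings", "locks", "chunks"):
        stats = db.command("collStats", ns)
        # On WiredTiger, collStats reports the table URI, e.g.
        # "statistics:table:config/collection/28--6194257481163143499".
        uri = stats.get("wiredTiger", {}).get("uri", "")
        ident = uri.replace("statistics:table:", "")
        print(ns, "->", ident + ".wt")
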
2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2078 2019-09-04T06:28:00.061+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:28:00.061+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2080 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2080 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2081 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2081 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2082 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2082 2019-09-04T06:28:00.061+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:00.061+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2084 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2084 2019-09-04T06:28:00.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2085 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2085 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2086 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2086 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2087 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2087 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2088 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2088 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2089 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2089 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2090 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2090 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2091 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT rollback_transaction for 
snapshot id 2091 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2092 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2092 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2093 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2093 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2094 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2094 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2095 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2095 2019-09-04T06:28:00.062+0000 D2 ASIO [RS] Request 131 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpApplied: { ts: Timestamp(1567578480, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } 2019-09-04T06:28:00.062+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:00.062+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpApplied: { ts: Timestamp(1567578480, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:00.062+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:00.062+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:00.062+0000 D2 REPL [replication-0] 
Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.062+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.062+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578475, 1) 2019-09-04T06:28:00.062+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2097 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2097 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2098 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2098 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2099 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2099 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2100 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2100 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2101 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2101 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2102 2019-09-04T06:28:00.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2102 2019-09-04T06:28:00.062+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:11.124+0000 2019-09-04T06:28:00.062+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:11.104+0000 2019-09-04T06:28:00.062+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:00.063+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 132 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:10.063+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578480, 1), t: 1 } } 2019-09-04T06:28:00.063+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:00.063+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:30.056+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn91] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn91] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.645+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn86] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn86] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn99] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn99] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.054+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn84] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 
2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn84] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn54] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn54] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.764+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn82] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn82] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.998+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn94] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn94] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn63] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn63] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn64] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn64] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn89] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn89] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:16.417+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn68] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn68] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.730+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn83] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn83] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.431+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn62] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn62] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.671+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn30] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn30] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn100] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn100] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.595+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn35] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn35] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn79] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 
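
The burst of "Got notified of new snapshot" / "waitUntilOpTime" pairs above is dozens of readers parked on this secondary until its applied optime reaches the one they requested, all woken together as the stable optime advances to Timestamp(1567578480, 1). Reads carrying afterClusterTime/afterOpTime read concern, such as causally consistent session reads, are one common source of these waits; the sketch below shows that pattern in pymongo, with the cluster details assumed rather than taken from the log.

    from pymongo import MongoClient
    from pymongo.read_preferences import SecondaryPreferred

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

    with client.start_session(causal_consistency=True) as session:
        # First read records the session's operationTime.
        client.config.shards.find_one(session=session)
        # A causally consistent follow-up read on a possibly lagging
        # secondary carries afterClusterTime; the server parks it in
        # waitUntilOpTime until the snapshot catches up, then wakes it --
        # the "Got notified of new snapshot" lines above.
        coll = client.config.shards.with_options(
            read_preference=SecondaryPreferred())
        coll.find_one(session=session)
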
2019-09-04T06:28:00.063+0000 D3 REPL [conn79] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.289+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn87] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn87] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.823+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn66] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn66] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.686+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn27] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn27] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.829+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn65] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn65] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.679+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn36] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn36] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.835+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn96] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn96] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn98] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn98] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.768+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn67] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn67] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.698+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn88] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn88] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn53] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn53] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.753+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn80] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn80] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.307+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn56] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn56] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.963+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn55] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL 
[conn55] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:00.926+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn24] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn24] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn85] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn85] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn57] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn57] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:03.480+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn93] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn93] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn95] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn95] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn92] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn92] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.664+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn97] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.063+0000 D3 REPL [conn97] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.670+0000 2019-09-04T06:28:00.064+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:28:00.064+0000 D3 REPL [conn102] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.064+0000 D3 REPL [conn102] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:29.874+0000 2019-09-04T06:28:00.064+0000 D3 REPL [conn76] Got notified of new snapshot: { ts: Timestamp(1567578480, 1), t: 1 }, 2019-09-04T06:28:00.018+0000 2019-09-04T06:28:00.064+0000 D3 REPL [conn76] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.092+0000 2019-09-04T06:28:00.065+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:28:00.065+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:00.065+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 
}, appliedWallTime: new Date(1567578475572), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:00.065+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 133 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:30.065+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, durableWallTime: new Date(1567578475572), appliedOpTime: { ts: Timestamp(1567578475, 3), t: 1 }, appliedWallTime: new Date(1567578475572), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:00.065+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:30.056+0000 2019-09-04T06:28:00.065+0000 D2 ASIO [RS] Request 133 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } 2019-09-04T06:28:00.065+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 
1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:00.065+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:00.065+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:30.056+0000 2019-09-04T06:28:00.106+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578480, 1) 2019-09-04T06:28:00.117+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:00.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:00.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:00.217+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:00.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6BD002E7F10B5ADCF3BEB8E194F196D13128B11C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:00.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:00.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6BD002E7F10B5ADCF3BEB8E194F196D13128B11C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:00.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6BD002E7F10B5ADCF3BEB8E194F196D13128B11C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:00.230+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), opTime: { ts: Timestamp(1567578480, 1), t: 1 }, wallTime: new Date(1567578480018) } 2019-09-04T06:28:00.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6BD002E7F10B5ADCF3BEB8E194F196D13128B11C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:00.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:00.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:00.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:00.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:00.317+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:00.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:00.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:00.417+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:00.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:00.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:00.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:00.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:00.517+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:00.618+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:00.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:00.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:00.718+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:00.754+0000 I COMMAND [conn53] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578442, 1), signature: { hash: BinData(0, 9D53AB1FB4ED3281D2F535162F2E651844E31225), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:00.754+0000 D1 - [conn53] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:00.754+0000 W - [conn53] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:00.764+0000 I COMMAND [conn54] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578449, 1), signature: { hash: BinData(0, 19051D282256DCC551BFFE29F82E237D248A825C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:00.764+0000 D1 - [conn54] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:00.764+0000 W - [conn54] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:00.771+0000 I - [conn53] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:00.771+0000 D1 COMMAND [conn53] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: 
{ ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578442, 1), signature: { hash: BinData(0, 9D53AB1FB4ED3281D2F535162F2E651844E31225), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:00.771+0000 D1 - [conn53] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:00.771+0000 W - [conn53] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:00.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:00.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:00.788+0000 I - [conn54] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":
"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:00.788+0000 D1 COMMAND [conn54] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578449, 1), signature: { hash: BinData(0, 19051D282256DCC551BFFE29F82E237D248A825C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:00.788+0000 D1 - [conn54] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:00.788+0000 W - [conn54] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:00.808+0000 I - [conn53] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceSt
ateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : 
"799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:28:00.808+0000 W COMMAND [conn53] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:28:00.808+0000 I COMMAND [conn53] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578442, 1), signature: { hash: BinData(0, 9D53AB1FB4ED3281D2F535162F2E651844E31225), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms
2019-09-04T06:28:00.808+0000 D2 NETWORK [conn53] Session from 10.108.2.52:47056 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:28:00.808+0000 I NETWORK [conn53] end connection 10.108.2.52:47056 (83 connections now open)
2019-09-04T06:28:00.818+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:00.827+0000 I - [conn54] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:00.827+0000 W COMMAND [conn54] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:00.828+0000 I COMMAND [conn54] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578449, 1), signature: { hash: BinData(0, 19051D282256DCC551BFFE29F82E237D248A825C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30034ms 2019-09-04T06:28:00.828+0000 D2 NETWORK [conn54] Session from 10.108.2.50:49992 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:00.828+0000 I NETWORK [conn54] end connection 10.108.2.50:49992 (82 connections now open) 2019-09-04T06:28:00.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:00.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 134) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:00.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 134 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:10.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:00.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:00.836+0000 D2 ASIO [Replication] Request 134 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), opTime: { ts: Timestamp(1567578480, 1), t: 1 }, wallTime: new Date(1567578480018), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } 2019-09-04T06:28:00.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), opTime: { ts: Timestamp(1567578480, 1), t: 1 }, wallTime: new Date(1567578480018), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:00.836+0000 D3 EXECUTOR [replexec-2] Executing a task on 
behalf of pool replexec 2019-09-04T06:28:00.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 134) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), opTime: { ts: Timestamp(1567578480, 1), t: 1 }, wallTime: new Date(1567578480018), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } 2019-09-04T06:28:00.836+0000 D4 ELECTION [replexec-2] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:00.836+0000 D4 REPL [replexec-2] Canceling election timeout callback at 2019-09-04T06:28:11.104+0000 2019-09-04T06:28:00.836+0000 D4 ELECTION [replexec-2] Scheduling election timeout callback at 2019-09-04T06:28:11.852+0000 2019-09-04T06:28:00.836+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:00.836+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:02.836Z 2019-09-04T06:28:00.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:28:00.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:00.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:00.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:00.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 135) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:00.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 135 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:10.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:00.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:00.837+0000 D2 ASIO [Replication] Request 135 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), opTime: { ts: Timestamp(1567578480, 1), t: 1 }, wallTime: new Date(1567578480018), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') 
}, lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } 2019-09-04T06:28:00.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), opTime: { ts: Timestamp(1567578480, 1), t: 1 }, wallTime: new Date(1567578480018), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:00.837+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:28:00.837+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 135) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), opTime: { ts: Timestamp(1567578480, 1), t: 1 }, wallTime: new Date(1567578480018), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } 2019-09-04T06:28:00.837+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:00.837+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:02.837Z 2019-09-04T06:28:00.837+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:00.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:00.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:00.918+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:00.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:00.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:00.927+0000 I COMMAND [conn55] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578442, 1), signature: { hash: BinData(0, 9D53AB1FB4ED3281D2F535162F2E651844E31225), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:28:00.927+0000 D1 - [conn55] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:00.927+0000 W - [conn55] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:00.945+0000 I - [conn55] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:00.945+0000 D1 COMMAND [conn55] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: 
{ ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578442, 1), signature: { hash: BinData(0, 9D53AB1FB4ED3281D2F535162F2E651844E31225), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:00.945+0000 D1 - [conn55] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:00.945+0000 W - [conn55] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:00.964+0000 I COMMAND [conn56] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578443, 1), signature: { hash: BinData(0, BBDB9A29B5D8765DFC9912618BC8EC281097B96B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:00.964+0000 D1 - [conn56] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:00.964+0000 W - [conn56] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:00.965+0000 I - [conn55] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:00.965+0000 W COMMAND [conn55] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:00.965+0000 I COMMAND [conn55] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578442, 1), signature: { hash: BinData(0, 9D53AB1FB4ED3281D2F535162F2E651844E31225), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:28:00.965+0000 D2 NETWORK [conn55] Session from 10.108.2.73:52028 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:00.965+0000 I NETWORK [conn55] end connection 10.108.2.73:52028 (81 connections now open) 2019-09-04T06:28:00.983+0000 I - [conn56] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : 
"3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, 
{ "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:00.983+0000 D1 COMMAND [conn56] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, 
maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578443, 1), signature: { hash: BinData(0, BBDB9A29B5D8765DFC9912618BC8EC281097B96B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:00.983+0000 D1 - [conn56] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:00.983+0000 W - [conn56] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:01.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:01.004+0000 I - [conn56] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMa
chine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:01.004+0000 W COMMAND [conn56] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:01.004+0000 I COMMAND [conn56] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578443, 1), signature: { hash: BinData(0, BBDB9A29B5D8765DFC9912618BC8EC281097B96B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:28:01.005+0000 D2 NETWORK [conn56] Session from 10.108.2.58:52018 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:01.005+0000 I NETWORK [conn56] end connection 10.108.2.58:52018 (80 connections now open) 2019-09-04T06:28:01.012+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:01.012+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:01.018+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:01.052+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:01.052+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:01.052+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578480, 1) 2019-09-04T06:28:01.052+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2117 2019-09-04T06:28:01.052+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:01.052+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:01.052+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2117 2019-09-04T06:28:01.053+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2120 2019-09-04T06:28:01.053+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for
snapshot id 2120 2019-09-04T06:28:01.053+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578480, 1), t: 1 }({ ts: Timestamp(1567578480, 1), t: 1 }) 2019-09-04T06:28:01.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6BD002E7F10B5ADCF3BEB8E194F196D13128B11C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:01.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:01.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6BD002E7F10B5ADCF3BEB8E194F196D13128B11C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:01.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6BD002E7F10B5ADCF3BEB8E194F196D13128B11C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:01.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), opTime: { ts: Timestamp(1567578480, 1), t: 1 }, wallTime: new Date(1567578480018) } 2019-09-04T06:28:01.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6BD002E7F10B5ADCF3BEB8E194F196D13128B11C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:01.118+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:01.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:01.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:01.218+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:01.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:01.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:01.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:01.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:01.318+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:01.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:01.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:01.419+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:01.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:01.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:01.512+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:01.512+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:01.519+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:01.619+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:01.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:01.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:01.719+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:01.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:01.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:01.819+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:01.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:01.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:01.919+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:01.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:01.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:02.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:02.019+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:02.052+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:02.052+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:02.052+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578480, 1) 2019-09-04T06:28:02.052+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2134 2019-09-04T06:28:02.052+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: 
UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:02.052+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:02.053+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2134 2019-09-04T06:28:02.053+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2137 2019-09-04T06:28:02.053+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2137 2019-09-04T06:28:02.053+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578480, 1), t: 1 }({ ts: Timestamp(1567578480, 1), t: 1 }) 2019-09-04T06:28:02.119+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:02.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:02.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:02.220+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:02.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6BD002E7F10B5ADCF3BEB8E194F196D13128B11C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:02.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:02.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6BD002E7F10B5ADCF3BEB8E194F196D13128B11C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:02.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6BD002E7F10B5ADCF3BEB8E194F196D13128B11C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:02.230+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), opTime: { ts: Timestamp(1567578480, 1), t: 1 }, wallTime: new Date(1567578480018) } 2019-09-04T06:28:02.230+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6BD002E7F10B5ADCF3BEB8E194F196D13128B11C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:02.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:02.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:02.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:02.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:02.320+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:02.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:02.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:02.420+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:02.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:02.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:02.520+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:02.550+0000 D2 COMMAND [conn72] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578479, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578479, 2), signature: { hash: BinData(0, 8316738462027A3D475E8CC625C7F6C5F314FF9B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578479, 2), t: 1 } }, $db: "config" } 2019-09-04T06:28:02.550+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578479, 2), t: 1 } } } 2019-09-04T06:28:02.550+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:28:02.550+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578479, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578479, 2), signature: { hash: BinData(0, 8316738462027A3D475E8CC625C7F6C5F314FF9B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578479, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578480, 1) 2019-09-04T06:28:02.550+0000 D2 QUERY [conn72] Using idhack: query: { _id: "config.system.sessions" } sort: {} projection: {} limit: 1 2019-09-04T06:28:02.550+0000 D3 STORAGE [conn72] WT begin_transaction for snapshot id 2145 2019-09-04T06:28:02.550+0000 D3 STORAGE [conn72] WT rollback_transaction for snapshot id 2145 2019-09-04T06:28:02.550+0000 I COMMAND [conn72] command config.collections command: find { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578479, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578479, 2), signature: { hash: BinData(0, 8316738462027A3D475E8CC625C7F6C5F314FF9B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578479, 2), t: 1 } }, $db: "config" } planSummary: IDHACK keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:702 locks:{ 
ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:28:02.620+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:02.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:02.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:02.720+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:02.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:02.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:02.820+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:02.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:02.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 136) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:02.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 136 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:12.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:02.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:02.836+0000 D2 ASIO [Replication] Request 136 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), opTime: { ts: Timestamp(1567578480, 1), t: 1 }, wallTime: new Date(1567578480018), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } 2019-09-04T06:28:02.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), opTime: { ts: Timestamp(1567578480, 1), t: 1 }, wallTime: new Date(1567578480018), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, 
lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:02.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:02.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 136) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), opTime: { ts: Timestamp(1567578480, 1), t: 1 }, wallTime: new Date(1567578480018), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } 2019-09-04T06:28:02.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:02.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:28:11.852+0000 2019-09-04T06:28:02.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:28:13.013+0000 2019-09-04T06:28:02.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:02.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:04.836Z 2019-09-04T06:28:02.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:28:02.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:02.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:02.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:02.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 137) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:02.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 137 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:12.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:02.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:02.837+0000 D2 ASIO [Replication] Request 137 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), opTime: { ts: Timestamp(1567578480, 1), t: 1 }, wallTime: new Date(1567578480018), $replData: { term: 1, lastOpCommitted: 
{ ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } 2019-09-04T06:28:02.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), opTime: { ts: Timestamp(1567578480, 1), t: 1 }, wallTime: new Date(1567578480018), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:02.837+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:28:02.837+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 137) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), opTime: { ts: Timestamp(1567578480, 1), t: 1 }, wallTime: new Date(1567578480018), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578480, 1) } 2019-09-04T06:28:02.837+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:02.837+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:04.837Z 2019-09-04T06:28:02.837+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:02.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:02.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:02.920+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:02.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, 
$db: "admin" } 2019-09-04T06:28:02.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:03.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:03.021+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:03.053+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:03.053+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:03.053+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578480, 1) 2019-09-04T06:28:03.053+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2152 2019-09-04T06:28:03.053+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:03.053+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:03.053+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2152 2019-09-04T06:28:03.053+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2155 2019-09-04T06:28:03.053+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2155 2019-09-04T06:28:03.054+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578480, 1), t: 1 }({ ts: Timestamp(1567578480, 1), t: 1 }) 2019-09-04T06:28:03.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6BD002E7F10B5ADCF3BEB8E194F196D13128B11C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:03.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:03.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6BD002E7F10B5ADCF3BEB8E194F196D13128B11C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:03.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6BD002E7F10B5ADCF3BEB8E194F196D13128B11C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:03.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), opTime: { ts: Timestamp(1567578480, 1), t: 1 }, wallTime: new Date(1567578480018) } 2019-09-04T06:28:03.061+0000 I COMMAND [conn34] command admin.$cmd command: 
replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6BD002E7F10B5ADCF3BEB8E194F196D13128B11C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:03.121+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:03.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:03.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:03.221+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:03.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:03.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:03.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:03.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:03.321+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:03.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:03.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:03.358+0000 D2 ASIO [RS] Request 132 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578483, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578483355), o: { $v: 1, $set: { ping: new Date(1567578483350) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpApplied: { ts: Timestamp(1567578483, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578483, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578483, 1) } 2019-09-04T06:28:03.359+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578483, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578483355), o: { $v: 1, $set: { ping: new Date(1567578483350) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, 
configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpApplied: { ts: Timestamp(1567578483, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578480, 1), $clusterTime: { clusterTime: Timestamp(1567578483, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578483, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:03.359+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:03.359+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578483, 1) and ending at ts: Timestamp(1567578483, 1) 2019-09-04T06:28:03.359+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:13.013+0000 2019-09-04T06:28:03.359+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:14.284+0000 2019-09-04T06:28:03.359+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:03.359+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578483, 1), t: 1 } 2019-09-04T06:28:03.359+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:03.359+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:03.359+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578480, 1) 2019-09-04T06:28:03.359+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2164 2019-09-04T06:28:03.359+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:03.359+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:03.359+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2164 2019-09-04T06:28:03.359+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:03.359+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:28:03.359+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:03.359+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578480, 1) 2019-09-04T06:28:03.359+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2167 2019-09-04T06:28:03.359+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:03.359+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: 
UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:03.359+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2167 2019-09-04T06:28:03.359+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:03.359+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578483, 1) } 2019-09-04T06:28:03.359+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2156 2019-09-04T06:28:03.359+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2156 2019-09-04T06:28:03.359+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2170 2019-09-04T06:28:03.359+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2170 2019-09-04T06:28:03.359+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:03.359+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 2172 2019-09-04T06:28:03.359+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578483, 1) 2019-09-04T06:28:03.359+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578483, 1) 2019-09-04T06:28:03.359+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 2172 2019-09-04T06:28:03.359+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:03.359+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:28:03.359+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2171 2019-09-04T06:28:03.359+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2171 2019-09-04T06:28:03.359+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2174 2019-09-04T06:28:03.359+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2174 2019-09-04T06:28:03.359+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578483, 1), t: 1 }({ ts: Timestamp(1567578483, 1), t: 1 }) 2019-09-04T06:28:03.360+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578483, 1) 2019-09-04T06:28:03.360+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2175 2019-09-04T06:28:03.360+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578483, 1) } } ] } sort: {} projection: {} 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578483, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578483, 1) || First: notFirst: full path: ts 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578483, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578483, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578483, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
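
The D5 QUERY entries above show the subplanner canonicalizing the minvalid $or filter, rating the lone _id index as unusable for the t/ts predicates, and, in the next entry, falling back to a collection scan. The same decision can be inspected from a driver with explain(); the sketch below is illustrative only, with host and port as assumptions rather than values taken from this deployment (the Timestamp literal mirrors the optime in the log).

    # Minimal sketch (Python/pymongo): reproduce the COLLSCAN decision logged above.
    # Assumes a reachable mongod on localhost:27019; adjust to your deployment.
    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("localhost", 27019)
    minvalid = client.local["replset.minvalid"]

    # Same shape as the logged filter:
    # { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: <optime> } } ] }
    filt = {"$or": [{"t": {"$lt": 1}},
                    {"t": 1, "ts": {"$lt": Timestamp(1567578483, 1)}}]}

    # With only the default _id index, neither $or branch is indexable, so the
    # winning plan should report a COLLSCAN stage, matching the D5 QUERY output.
    print(minvalid.find(filt).explain()["queryPlanner"]["winningPlan"])
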
2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578483, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:03.360+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2175 2019-09-04T06:28:03.360+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:03.360+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:03.360+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578483, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578483355), o: { $v: 1, $set: { ping: new Date(1567578483350) } } }, oplog application mode: Secondary 2019-09-04T06:28:03.360+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578483, 1) 2019-09-04T06:28:03.360+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 2177 2019-09-04T06:28:03.360+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" } 2019-09-04T06:28:03.360+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:28:03.360+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 2177 2019-09-04T06:28:03.360+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:03.360+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578483, 1), t: 1 }({ ts: Timestamp(1567578483, 1), t: 1 }) 2019-09-04T06:28:03.360+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578483, 1) 2019-09-04T06:28:03.360+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2176 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:03.360+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:03.360+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:03.360+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2176 2019-09-04T06:28:03.360+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578483, 1) 2019-09-04T06:28:03.360+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2180 2019-09-04T06:28:03.360+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2180 2019-09-04T06:28:03.360+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578483, 1), t: 1 }({ ts: Timestamp(1567578483, 1), t: 1 }) 2019-09-04T06:28:03.360+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:03.360+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578483, 1), t: 1 }, appliedWallTime: new Date(1567578483355), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:03.360+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 138 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:33.360+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578483, 1), t: 1 }, appliedWallTime: new Date(1567578483355), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578480, 1), t: 1 }, lastCommittedWall: new Date(1567578480018), lastOpVisible: { ts: Timestamp(1567578480, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:03.360+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:33.360+0000 2019-09-04T06:28:03.361+0000 D2 ASIO [RS] Request 138 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578483, 1), $clusterTime: { clusterTime: Timestamp(1567578483, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578483, 1) } 2019-09-04T06:28:03.361+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578483, 1), $clusterTime: { clusterTime: Timestamp(1567578483, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578483, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:03.361+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:03.361+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:33.361+0000 2019-09-04T06:28:03.361+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578483, 1), t: 1 } 2019-09-04T06:28:03.361+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 139 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:13.361+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578480, 1), t: 1 } } 2019-09-04T06:28:03.361+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:33.361+0000 2019-09-04T06:28:03.361+0000 D2 ASIO [RS] Request 139 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpApplied: { ts: Timestamp(1567578483, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578483, 1), $clusterTime: { clusterTime: Timestamp(1567578483, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578483, 1) } 2019-09-04T06:28:03.361+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new 
Date(1567578483355), lastOpApplied: { ts: Timestamp(1567578483, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578483, 1), $clusterTime: { clusterTime: Timestamp(1567578483, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578483, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:03.361+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:03.361+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:03.361+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.361+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.361+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578478, 1) 2019-09-04T06:28:03.361+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:14.284+0000 2019-09-04T06:28:03.361+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:14.672+0000 2019-09-04T06:28:03.361+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 140 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:13.361+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578483, 1), t: 1 } } 2019-09-04T06:28:03.361+0000 D3 REPL [conn82] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.361+0000 D3 REPL [conn82] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.998+0000 2019-09-04T06:28:03.361+0000 D3 REPL [conn99] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.361+0000 D3 REPL [conn99] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.054+0000 2019-09-04T06:28:03.361+0000 D3 REPL [conn84] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.361+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:33.361+0000 2019-09-04T06:28:03.361+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:03.361+0000 D3 REPL [conn84] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn102] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn102] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:29.874+0000 2019-09-04T06:28:03.362+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn97] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn97] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.670+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn92] Got notified of new snapshot: { ts: 
Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn92] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.664+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn36] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn36] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.835+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn96] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn96] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn24] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn24] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn95] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn95] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn91] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn91] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.645+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn86] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn86] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn64] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn64] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn83] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn83] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.431+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn94] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn94] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn63] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn63] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn89] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn89] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:16.417+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn68] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn68] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.730+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn62] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 
2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn62] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.671+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn100] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn100] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.595+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn35] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn35] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn79] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn79] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.289+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn87] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn87] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.823+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn66] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn66] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.686+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn27] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn27] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.829+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn65] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn65] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.679+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn98] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn98] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.768+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn67] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn67] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.698+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn88] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn88] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn80] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn80] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.307+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn85] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn85] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn57] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 
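
The "Got notified of new snapshot" / "waitUntilOpTime" pairs above and below are readers parked on this secondary: each connection is servicing a command whose readConcern names an optime the node has not yet majority-committed, and each "waiting for a new snapshot until ..." timestamp is that operation's own maxTimeMS deadline (conn57's 06:28:03.480 deadline is the one that expires with MaxTimeMSExpired further down). A minimal shell sketch of the kind of read that parks here, reusing the command shape and optime that appear verbatim in the conn57 entries below:

    // Majority read gated on an optime. The server blocks in waitUntilOpTime
    // until its majority-committed snapshot reaches afterOpTime, or fails with
    // MaxTimeMSExpired once maxTimeMS elapses.
    db.getSiblingDB("config").runCommand({
        find: "shards",
        readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } },
        maxTimeMS: 30000
    })
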
2019-09-04T06:28:03.362+0000 D3 REPL [conn57] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:03.480+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn93] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn93] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn76] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn76] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.092+0000 2019-09-04T06:28:03.362+0000 D3 REPL [conn30] Got notified of new snapshot: { ts: Timestamp(1567578483, 1), t: 1 }, 2019-09-04T06:28:03.355+0000 2019-09-04T06:28:03.363+0000 D3 REPL [conn30] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:28:03.367+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:28:03.367+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:03.367+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578483, 1), t: 1 }, durableWallTime: new Date(1567578483355), appliedOpTime: { ts: Timestamp(1567578483, 1), t: 1 }, appliedWallTime: new Date(1567578483355), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:03.367+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 141 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:33.367+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578483, 1), t: 1 }, durableWallTime: new Date(1567578483355), appliedOpTime: { ts: Timestamp(1567578483, 1), t: 1 }, appliedWallTime: new Date(1567578483355), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:03.367+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement 
date is 2019-09-04T06:28:33.361+0000 2019-09-04T06:28:03.367+0000 D2 ASIO [RS] Request 141 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578483, 1), $clusterTime: { clusterTime: Timestamp(1567578483, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578483, 1) } 2019-09-04T06:28:03.367+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578483, 1), $clusterTime: { clusterTime: Timestamp(1567578483, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578483, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:03.367+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:03.367+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:33.361+0000 2019-09-04T06:28:03.421+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:03.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:03.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:03.459+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578483, 1) 2019-09-04T06:28:03.466+0000 I NETWORK [listener] connection accepted from 10.108.2.63:36204 #103 (81 connections now open) 2019-09-04T06:28:03.466+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:03.467+0000 D2 COMMAND [conn103] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:03.467+0000 I NETWORK [conn103] received client metadata from 10.108.2.63:36204 conn103: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:03.467+0000 I COMMAND [conn103] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", 
"zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:03.481+0000 I COMMAND [conn57] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:03.481+0000 D1 - [conn57] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:03.481+0000 W - [conn57] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:03.497+0000 I - [conn57] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0
_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", 
"path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] 
mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:03.498+0000 D1 COMMAND [conn57] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:03.498+0000 D1 - [conn57] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:03.498+0000 W - [conn57] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:03.507+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:28:03.507+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:28:03.507+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:03.507+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:28:03.518+0000 I - [conn57] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:03.518+0000 W COMMAND [conn57] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:03.518+0000 I COMMAND [conn57] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:28:03.518+0000 D2 NETWORK [conn57] Session from 10.108.2.63:36186 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:03.518+0000 I NETWORK [conn57] end connection 10.108.2.63:36186 (80 connections now open) 2019-09-04T06:28:03.521+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:03.621+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:03.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:03.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:03.722+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:03.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:03.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:03.822+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:03.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:03.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:03.922+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:03.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:03.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:04.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:04.022+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:04.122+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:04.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:04.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:04.222+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:04.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578483, 1), signature: { hash: BinData(0, 7124990D58D2F949DFA4D165BCCD61B1E498897F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:04.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:04.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578483, 1), signature: { hash: BinData(0, 7124990D58D2F949DFA4D165BCCD61B1E498897F), keyId: 6727891476899954718 } }, 
$db: "admin" } 2019-09-04T06:28:04.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578483, 1), signature: { hash: BinData(0, 7124990D58D2F949DFA4D165BCCD61B1E498897F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:04.230+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578483, 1), t: 1 }, durableWallTime: new Date(1567578483355), opTime: { ts: Timestamp(1567578483, 1), t: 1 }, wallTime: new Date(1567578483355) } 2019-09-04T06:28:04.230+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578483, 1), signature: { hash: BinData(0, 7124990D58D2F949DFA4D165BCCD61B1E498897F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:04.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:04.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:04.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:04.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:04.323+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:04.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:04.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:04.359+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:04.359+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:04.359+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578483, 1) 2019-09-04T06:28:04.359+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2198 2019-09-04T06:28:04.359+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:04.359+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:04.359+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2198 2019-09-04T06:28:04.360+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2201 2019-09-04T06:28:04.360+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2201 2019-09-04T06:28:04.360+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578483, 1), t: 1 }({ ts: Timestamp(1567578483, 1), t: 1 }) 2019-09-04T06:28:04.423+0000 D4 
STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:04.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:04.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:04.523+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:04.623+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:04.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:04.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:04.723+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:04.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:04.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:04.823+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:04.833+0000 D2 ASIO [RS] Request 140 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578484, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578484832), o: { $v: 1, $set: { ping: new Date(1567578484828), up: 2385 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpApplied: { ts: Timestamp(1567578484, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578483, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) } 2019-09-04T06:28:04.833+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578484, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578484832), o: { $v: 1, $set: { ping: new Date(1567578484828), up: 2385 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpApplied: { ts: Timestamp(1567578484, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578483, 1), $clusterTime: { clusterTime: 
Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:04.833+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:04.833+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578484, 1) and ending at ts: Timestamp(1567578484, 1) 2019-09-04T06:28:04.833+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:14.672+0000 2019-09-04T06:28:04.833+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:15.057+0000 2019-09-04T06:28:04.833+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:28:04.833+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:04.833+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578484, 1), t: 1 } 2019-09-04T06:28:04.834+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:04.834+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:04.834+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578483, 1) 2019-09-04T06:28:04.834+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2207 2019-09-04T06:28:04.834+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:04.834+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:04.834+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2207 2019-09-04T06:28:04.834+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:04.834+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:28:04.834+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578484, 1) } 2019-09-04T06:28:04.834+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:04.834+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578483, 1) 2019-09-04T06:28:04.834+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2210 2019-09-04T06:28:04.834+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:04.834+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:04.834+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2210 2019-09-04T06:28:04.834+0000 D3 STORAGE [rsSync-0] WT 
begin_transaction for snapshot id 2202 2019-09-04T06:28:04.834+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2202 2019-09-04T06:28:04.834+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2213 2019-09-04T06:28:04.834+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2213 2019-09-04T06:28:04.834+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:04.834+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 2215 2019-09-04T06:28:04.834+0000 D4 STORAGE [repl-writer-worker-12] inserting record with timestamp Timestamp(1567578484, 1) 2019-09-04T06:28:04.834+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578484, 1) 2019-09-04T06:28:04.834+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 2215 2019-09-04T06:28:04.834+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:04.834+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:28:04.834+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2214 2019-09-04T06:28:04.834+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2214 2019-09-04T06:28:04.834+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2217 2019-09-04T06:28:04.834+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2217 2019-09-04T06:28:04.834+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578484, 1), t: 1 }({ ts: Timestamp(1567578484, 1), t: 1 }) 2019-09-04T06:28:04.834+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578484, 1) 2019-09-04T06:28:04.834+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2218 2019-09-04T06:28:04.834+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578484, 1) } } ] } sort: {} projection: {} 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578484, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578484, 1) || First: notFirst: full path: ts 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
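
The $or filter the subplanner is working on here is how the server encodes a lexicographic optime comparison against local.replset.minvalid: an optime { t, ts } precedes { t: 1, ts: Timestamp(1567578484, 1) } when its term is strictly lower, or the terms are equal and its timestamp is lower. A minimal shell sketch of the same predicate, with the collection name and values taken from the entries above:

    // (tA, tsA) < (tB, tsB)  <=>  tA < tB, or tA == tB and tsA < tsB
    db.getSiblingDB("local").getCollection("replset.minvalid").find({
        $or: [
            { t: { $lt: 1 } },                               // strictly older term
            { t: 1, ts: { $lt: Timestamp(1567578484, 1) } }  // same term, older timestamp
        ]
    })
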
2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578484, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578484, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578484, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
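
Both $or branches were planned independently ("Subplanner: planning child 0/1 of 2"), and each produced zero indexed solutions because the collection carries only the implicit _id index, so the planner abandons subplanning and plans the whole disjunction at once, which again can only be a collection scan. One would expect an explain of the same find to surface that fallback (a sketch, assuming the winning plan degenerates to a COLLSCAN exactly as the next entry shows):

    // With only { _id: 1 } available, neither branch of the $or is indexable,
    // so the expected winning plan is a plain collection scan.
    db.getSiblingDB("local").getCollection("replset.minvalid")
        .find({ $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578484, 1) } } ] })
        .explain()  // expect queryPlanner.winningPlan.stage == "COLLSCAN"
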
2019-09-04T06:28:04.834+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578484, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:04.834+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2218
2019-09-04T06:28:04.835+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:04.835+0000 D3 STORAGE [repl-writer-worker-10] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:04.835+0000 D3 REPL [repl-writer-worker-10] applying op: { ts: Timestamp(1567578484, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578484832), o: { $v: 1, $set: { ping: new Date(1567578484828), up: 2385 } } }, oplog application mode: Secondary
2019-09-04T06:28:04.835+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578484, 1)
2019-09-04T06:28:04.835+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 2220
2019-09-04T06:28:04.835+0000 D2 QUERY [repl-writer-worker-10] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:28:04.835+0000 D4 WRITE [repl-writer-worker-10] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:28:04.835+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 2220
2019-09-04T06:28:04.835+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:04.835+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578484, 1), t: 1 }({ ts: Timestamp(1567578484, 1), t: 1 })
2019-09-04T06:28:04.835+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578484, 1)
2019-09-04T06:28:04.835+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2219
2019-09-04T06:28:04.835+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:28:04.835+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:04.835+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:28:04.835+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:04.835+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:04.835+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:28:04.835+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2219
2019-09-04T06:28:04.835+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578484, 1)
2019-09-04T06:28:04.835+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:04.835+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2223
2019-09-04T06:28:04.835+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2223
2019-09-04T06:28:04.835+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578483, 1), t: 1 }, durableWallTime: new Date(1567578483355), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:04.835+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 142 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:34.835+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578483, 1), t: 1 }, durableWallTime: new Date(1567578483355), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:04.835+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:34.835+0000
2019-09-04T06:28:04.835+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578484, 1), t: 1 }({ ts: Timestamp(1567578484, 1), t: 1 })
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:04.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 143) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 143 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:14.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:04.836+0000 D2 ASIO [RS] Request 142 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578483, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) }
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578483, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:04.836+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578484, 1), t: 1 }
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 144 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:14.836+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578483, 1), t: 1 } }
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:34.836+0000
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:34.836+0000
2019-09-04T06:28:04.836+0000 D2 ASIO [Replication] Request 143 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), opTime: { ts: Timestamp(1567578484, 1), t: 1 }, wallTime: new Date(1567578484832), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578483, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) }
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), opTime: { ts: Timestamp(1567578484, 1), t: 1 }, wallTime: new Date(1567578484832), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578483, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:04.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 143) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), opTime: { ts: Timestamp(1567578484, 1), t: 1 }, wallTime: new Date(1567578484832), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578483, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) }
2019-09-04T06:28:04.836+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary
2019-09-04T06:28:04.836+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:28:15.057+0000
2019-09-04T06:28:04.836+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:28:15.152+0000
2019-09-04T06:28:04.836+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:28:04.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:06.836Z
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:04.836+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:04.836+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 145 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:34.836+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, durableWallTime: new Date(1567578480018), appliedOpTime: { ts: Timestamp(1567578480, 1), t: 1 }, appliedWallTime: new Date(1567578480018), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:34.836+0000
2019-09-04T06:28:04.836+0000 D2 ASIO [RS] Request 145 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578483, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) }
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578483, 1), t: 1 }, lastCommittedWall: new Date(1567578483355), lastOpVisible: { ts: Timestamp(1567578483, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578483, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:04.836+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:34.836+0000
2019-09-04T06:28:04.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:04.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 146) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:04.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 146 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:14.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:04.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:04.837+0000 D2 ASIO [RS] Request 144 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpVisible: { ts: Timestamp(1567578484, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpApplied: { ts: Timestamp(1567578484, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578484, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) }
2019-09-04T06:28:04.837+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpVisible: { ts: Timestamp(1567578484, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpApplied: { ts: Timestamp(1567578484, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578484, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:04.837+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:04.837+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:28:04.837+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.837+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.837+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578479, 1)
2019-09-04T06:28:04.837+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:15.152+0000
2019-09-04T06:28:04.837+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:15.262+0000
2019-09-04T06:28:04.837+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 147 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:14.837+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578484, 1), t: 1 } }
2019-09-04T06:28:04.837+0000 D2 ASIO [Replication] Request 146 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578483, 1), t: 1 }, durableWallTime: new Date(1567578483355), opTime: { ts: Timestamp(1567578484, 1), t: 1 }, wallTime: new Date(1567578484832), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpVisible: { ts: Timestamp(1567578484, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578484, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) }
2019-09-04T06:28:04.837+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:34.836+0000
2019-09-04T06:28:04.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578483, 1), t: 1 }, durableWallTime: new Date(1567578483355), opTime: { ts: Timestamp(1567578484, 1), t: 1 }, wallTime: new Date(1567578484832), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpVisible: { ts: Timestamp(1567578484, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578484, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:04.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:04.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:04.837+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578484, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f597402d1a496712d718c'), operName: "", parentOperId: "5d6f597402d1a496712d718a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578484, 1), t: 1 } }, $db: "config" }
2019-09-04T06:28:04.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 146) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578483, 1), t: 1 }, durableWallTime: new Date(1567578483355), opTime: { ts: Timestamp(1567578484, 1), t: 1 }, wallTime: new Date(1567578484832), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpVisible: { ts: Timestamp(1567578484, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578484, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) }
2019-09-04T06:28:04.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:28:04.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:06.837Z
2019-09-04T06:28:04.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:04.837+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn92] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn92] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.664+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn100] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn100] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.595+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn79] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn79] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.289+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn66] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn66] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.686+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn65] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn65] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.679+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn67] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn67] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.698+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn83] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn83] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.431+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn89] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn89] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:16.417+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn62] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn62] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.671+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn35] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn35] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn87] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn87] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.823+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn27] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.837+0000 D3 REPL [conn27] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.829+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn98] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn98] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.768+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn88] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn88] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn85] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn85] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn82] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn82] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.998+0000
2019-09-04T06:28:04.838+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f597402d1a496712d718a|5d6f597402d1a496712d718c
2019-09-04T06:28:04.838+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578484, 1), t: 1 } } }
2019-09-04T06:28:04.838+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:28:04.838+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578484, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f597402d1a496712d718c'), operName: "", parentOperId: "5d6f597402d1a496712d718a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578484, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578484, 1)
2019-09-04T06:28:04.838+0000 D2 QUERY [conn21] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1
2019-09-04T06:28:04.838+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578484, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f597402d1a496712d718c'), operName: "", parentOperId: "5d6f597402d1a496712d718a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578484, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:28:04.838+0000 D3 REPL [conn99] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn99] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.054+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn84] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn84] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn95] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn95] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn91] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn91] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.645+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn86] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn86] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn102] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn102] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:29.874+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn97] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn97] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.670+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn94] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn94] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn63] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn63] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn68] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn68] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.730+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn36] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn36] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.835+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn96] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn96] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn24] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn24] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn64] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn64] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn80] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn80] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.307+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn93] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn93] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn76] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn76] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.092+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn30] Got notified of new snapshot: { ts: Timestamp(1567578484, 1), t: 1 }, 2019-09-04T06:28:04.832+0000
2019-09-04T06:28:04.838+0000 D3 REPL [conn30] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000
2019-09-04T06:28:04.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:04.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:04.923+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:04.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:04.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:04.934+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578484, 1)
2019-09-04T06:28:05.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:05.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:05.024+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:05.024+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:05.061+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:05.061+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:28:04.836+0000
2019-09-04T06:28:05.061+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:28:04.837+0000
2019-09-04T06:28:05.061+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:28:04.836+0000
2019-09-04T06:28:05.061+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:28:14.836+0000
2019-09-04T06:28:05.061+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:05.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:05.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:28:05.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:05.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:05.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), opTime: { ts: Timestamp(1567578484, 1), t: 1 }, wallTime: new Date(1567578484832) }
2019-09-04T06:28:05.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:05.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:05.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:05.124+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:05.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:05.133+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:05.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:05.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:05.224+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:05.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:28:05.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:28:05.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:05.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:05.324+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:05.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:05.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:05.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:05.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:05.424+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:05.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:05.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:05.524+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:05.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:05.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:05.624+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:05.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:05.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:05.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:05.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:05.725+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:05.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:05.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:05.825+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:05.834+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:05.834+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:05.834+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578484, 1)
2019-09-04T06:28:05.834+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2244
2019-09-04T06:28:05.834+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:05.834+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:05.834+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2244
2019-09-04T06:28:05.835+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2247
2019-09-04T06:28:05.836+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2247
2019-09-04T06:28:05.836+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578484, 1), t: 1 }({ ts: Timestamp(1567578484, 1), t: 1 })
2019-09-04T06:28:05.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:05.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:05.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:05.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:05.925+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:06.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:06.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:06.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:06.025+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:06.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:06.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:06.125+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:06.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:06.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:06.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:06.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:06.225+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:06.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:06.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:28:06.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:06.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:06.230+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), opTime: { ts: Timestamp(1567578484, 1), t: 1 }, wallTime: new Date(1567578484832) }
2019-09-04T06:28:06.230+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:06.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:28:06.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:28:06.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:06.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:06.325+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:06.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:06.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:06.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:06.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:06.426+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:06.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:06.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:06.526+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:06.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:06.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:06.626+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:06.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:06.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:06.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:06.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:06.726+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:06.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:06.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:06.826+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:06.834+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:06.834+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:06.834+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578484, 1)
2019-09-04T06:28:06.834+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2266
2019-09-04T06:28:06.834+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:06.834+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:06.835+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2266
2019-09-04T06:28:06.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:06.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 148) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:06.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 148 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:16.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:06.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:06.836+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2269
2019-09-04T06:28:06.836+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2269
2019-09-04T06:28:06.836+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578484, 1), t: 1 }({ ts: Timestamp(1567578484, 1), t: 1 })
2019-09-04T06:28:06.836+0000 D2 ASIO [Replication] Request 148 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), opTime: { ts: Timestamp(1567578484, 1), t: 1 }, wallTime: new Date(1567578484832), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpVisible: { ts: Timestamp(1567578484, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578484, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) }
2019-09-04T06:28:06.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), opTime: { ts: Timestamp(1567578484, 1), t: 1 }, wallTime: new Date(1567578484832), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpVisible: { ts: Timestamp(1567578484, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578484, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:28:06.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:28:06.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 148) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), opTime: { ts: Timestamp(1567578484, 1), t: 1 }, wallTime: new Date(1567578484832), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpVisible: { ts: Timestamp(1567578484, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578484, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) }
2019-09-04T06:28:06.836+0000 D4 ELECTION [replexec-2] Postponing election timeout due to heartbeat from primary
2019-09-04T06:28:06.836+0000 D4 REPL [replexec-2] Canceling election timeout callback at 2019-09-04T06:28:15.262+0000
2019-09-04T06:28:06.836+0000 D4 ELECTION [replexec-2] Scheduling election timeout callback at 2019-09-04T06:28:17.690+0000
2019-09-04T06:28:06.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:06.836+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:28:06.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:06.836+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:08.836Z
2019-09-04T06:28:06.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:06.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:06.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 149) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:06.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 149 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:16.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:06.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:06.837+0000 D2 ASIO [Replication] Request 149 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), opTime: { ts: Timestamp(1567578484, 1), t: 1 }, wallTime: new Date(1567578484832), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpVisible: { ts: Timestamp(1567578484, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578484, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) }
2019-09-04T06:28:06.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), opTime: { ts: Timestamp(1567578484, 1), t: 1 }, wallTime: new Date(1567578484832), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpVisible: { ts: Timestamp(1567578484, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578484, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:06.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:06.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 149) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), opTime: { ts: Timestamp(1567578484, 1), t: 1 }, wallTime: new Date(1567578484832), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpVisible: { ts: Timestamp(1567578484, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578484, 1), $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578484, 1) }
2019-09-04T06:28:06.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:28:06.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:08.837Z
2019-09-04T06:28:06.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:06.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:06.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:06.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:06.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:06.926+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:06.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:06.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0
reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:07.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:07.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:07.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:07.026+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:07.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:07.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:07.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:07.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:07.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), opTime: { ts: Timestamp(1567578484, 1), t: 1 }, wallTime: new Date(1567578484832) } 2019-09-04T06:28:07.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:07.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:07.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:07.126+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:07.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:07.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:07.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:07.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:07.226+0000 D4 STORAGE [WTJournalFlusher] flushed 
journal 2019-09-04T06:28:07.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:07.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:07.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:07.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:07.327+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:07.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:07.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:07.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:07.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:07.427+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:07.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:07.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:07.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:07.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:07.527+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:07.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:07.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:07.627+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:07.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:07.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:07.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:07.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:07.727+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:07.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:07.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:07.827+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:07.835+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:07.835+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:07.835+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578484, 1) 2019-09-04T06:28:07.835+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2291 2019-09-04T06:28:07.835+0000 D3 STORAGE 
[ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:07.835+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:07.835+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2291 2019-09-04T06:28:07.836+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2294 2019-09-04T06:28:07.836+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2294 2019-09-04T06:28:07.836+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578484, 1), t: 1 }({ ts: Timestamp(1567578484, 1), t: 1 }) 2019-09-04T06:28:07.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:07.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:07.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:07.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:07.927+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:07.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:07.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:08.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:08.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:08.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:08.027+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:08.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:08.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:08.128+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:08.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:08.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:08.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:08.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:08.228+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:08.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 
6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:08.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:08.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:08.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:08.230+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), opTime: { ts: Timestamp(1567578484, 1), t: 1 }, wallTime: new Date(1567578484832) } 2019-09-04T06:28:08.230+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578484, 1), signature: { hash: BinData(0, 6980FB0C752AAB21D1004B84F6036EBB0020B7F6), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:08.232+0000 D2 ASIO [RS] Request 147 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578488, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578488229), o: { $v: 1, $set: { ping: new Date(1567578488229) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpVisible: { ts: Timestamp(1567578484, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpApplied: { ts: Timestamp(1567578488, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578484, 1), $clusterTime: { clusterTime: Timestamp(1567578488, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 1) } 2019-09-04T06:28:08.232+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578488, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578488229), o: { $v: 1, $set: { ping: new Date(1567578488229) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { 
ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpVisible: { ts: Timestamp(1567578484, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpApplied: { ts: Timestamp(1567578488, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578484, 1), $clusterTime: { clusterTime: Timestamp(1567578488, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:08.232+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:08.232+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578488, 1) and ending at ts: Timestamp(1567578488, 1) 2019-09-04T06:28:08.232+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:17.690+0000 2019-09-04T06:28:08.232+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:18.351+0000 2019-09-04T06:28:08.232+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:28:08.232+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:08.232+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:08.232+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578488, 1), t: 1 } 2019-09-04T06:28:08.232+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:08.232+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578484, 1) 2019-09-04T06:28:08.232+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2306 2019-09-04T06:28:08.232+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:08.232+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:08.232+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2306 2019-09-04T06:28:08.232+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:08.232+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:08.232+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:28:08.232+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578484, 1) 2019-09-04T06:28:08.232+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578488, 1) } 2019-09-04T06:28:08.232+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2309 2019-09-04T06:28:08.232+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2295 
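
The entries above trace one complete replication pull on this secondary: the oplog fetcher's getMore against the sync source's local.oplog.rs returns a batch of one op, the ReplBatcher snapshots the capped collection's metadata, and rsSync records the truncate-after point before handing the op to the writer pool. A minimal sketch of the same tailing pattern from a client, assuming pymongo and direct access to the sync source named in the trace (cmodb804.togewa.com:27019); the starting timestamp is the last fetched optime from the log:

    # Tail local.oplog.rs with a tailable awaitData cursor - the same access
    # pattern as the oplog fetcher's getMore calls in the trace above.
    from pymongo import MongoClient, CursorType
    from bson.timestamp import Timestamp

    client = MongoClient("cmodb804.togewa.com", 27019)
    oplog = client.local["oplog.rs"]
    last_ts = Timestamp(1567578484, 1)  # last fetched optime, per the trace
    cursor = oplog.find(
        {"ts": {"$gt": last_ts}},
        cursor_type=CursorType.TAILABLE_AWAIT,
        oplog_replay=True,  # lets a 4.2 server seek efficiently in the oplog
    )
    for op in cursor:
        # Documents mirror the nextBatch entries above: ts, t, op, ns, o, ...
        print(op["ts"], op["op"], op["ns"])
        last_ts = op["ts"]

The awaitData semantics are also why the later getMore (Request 151, maxTimeMS: 5000) blocks server-side and comes back with an empty nextBatch when nothing new has been written.
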
2019-09-04T06:28:08.232+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:08.232+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2295 2019-09-04T06:28:08.232+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:08.233+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2310 2019-09-04T06:28:08.233+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2309 2019-09-04T06:28:08.233+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2310 2019-09-04T06:28:08.233+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:08.233+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 2314 2019-09-04T06:28:08.233+0000 D4 STORAGE [repl-writer-worker-8] inserting record with timestamp Timestamp(1567578488, 1) 2019-09-04T06:28:08.233+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578488, 1) 2019-09-04T06:28:08.233+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 2314 2019-09-04T06:28:08.233+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:08.233+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:28:08.233+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2313 2019-09-04T06:28:08.233+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2313 2019-09-04T06:28:08.233+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2316 2019-09-04T06:28:08.233+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2316 2019-09-04T06:28:08.233+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578488, 1), t: 1 }({ ts: Timestamp(1567578488, 1), t: 1 }) 2019-09-04T06:28:08.233+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578488, 1) 2019-09-04T06:28:08.233+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2317 2019-09-04T06:28:08.233+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578488, 1) } } ] } sort: {} projection: {} 2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Beginning planning... 
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578488, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Rated tree:
$and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578488, 1) || First: notFirst: full path: ts
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578488, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Rated tree:
t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578488, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Rated tree:
$or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578488, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
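
The subplanner trace above plans each $or branch of the minvalid bookkeeping query separately; with only the _id index present, neither the { t, ts } branch nor the { t } branch rates an indexed solution, so every branch falls back to a collection scan (harmless here, since local.replset.minvalid is a single-document collection). A sketch of how the same plan can be inspected with the explain command, assuming pymongo, with the filter copied from the trace and an illustrative host:

    # Ask the server to explain the minvalid query; with only _id available
    # the winning plan bottoms out in COLLSCAN, matching the trace above.
    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("cmodb803.togewa.com", 27019)
    filt = {"$or": [{"t": {"$lt": 1}},
                    {"t": 1, "ts": {"$lt": Timestamp(1567578488, 1)}}]}
    plan = client.local.command({
        "explain": {"find": "replset.minvalid", "filter": filt},
        "verbosity": "queryPlanner",
    })
    print(plan["queryPlanner"]["winningPlan"])
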
2019-09-04T06:28:08.233+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578488, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:08.233+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2317 2019-09-04T06:28:08.233+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:08.233+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:08.233+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578488, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578488229), o: { $v: 1, $set: { ping: new Date(1567578488229) } } }, oplog application mode: Secondary 2019-09-04T06:28:08.233+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578488, 1) 2019-09-04T06:28:08.233+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 2319 2019-09-04T06:28:08.234+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "ConfigServer" } 2019-09-04T06:28:08.234+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:28:08.234+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 2319 2019-09-04T06:28:08.234+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:08.234+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578488, 1), t: 1 }({ ts: Timestamp(1567578488, 1), t: 1 }) 2019-09-04T06:28:08.234+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578488, 1) 2019-09-04T06:28:08.234+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2318 2019-09-04T06:28:08.234+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:28:08.234+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:08.234+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:28:08.234+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:08.234+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:08.234+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:08.234+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2318 2019-09-04T06:28:08.234+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578488, 1) 2019-09-04T06:28:08.234+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:08.234+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2322 2019-09-04T06:28:08.234+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2322 2019-09-04T06:28:08.234+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578488, 1), t: 1 }, appliedWallTime: new Date(1567578488229), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpVisible: { ts: Timestamp(1567578484, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:08.234+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 150 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:38.234+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578488, 1), t: 1 }, appliedWallTime: new Date(1567578488229), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578484, 1), t: 1 }, lastCommittedWall: new Date(1567578484832), lastOpVisible: { ts: Timestamp(1567578484, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:08.234+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578488, 1), t: 1 }({ ts: Timestamp(1567578488, 1), t: 1 }) 2019-09-04T06:28:08.234+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:08.234+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578488, 1), t: 1 } 2019-09-04T06:28:08.234+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 151 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:18.234+0000 cmd:{ getMore: 2779728788818727477, 
collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578484, 1), t: 1 } } 2019-09-04T06:28:08.234+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:08.234+0000 D2 ASIO [RS] Request 150 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 1), t: 1 }, lastCommittedWall: new Date(1567578488229), lastOpVisible: { ts: Timestamp(1567578488, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 1), $clusterTime: { clusterTime: Timestamp(1567578488, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 1) } 2019-09-04T06:28:08.234+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 1), t: 1 }, lastCommittedWall: new Date(1567578488229), lastOpVisible: { ts: Timestamp(1567578488, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 1), $clusterTime: { clusterTime: Timestamp(1567578488, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:08.234+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:08.234+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:08.235+0000 D2 ASIO [RS] Request 151 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 1), t: 1 }, lastCommittedWall: new Date(1567578488229), lastOpVisible: { ts: Timestamp(1567578488, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578488, 1), t: 1 }, lastCommittedWall: new Date(1567578488229), lastOpApplied: { ts: Timestamp(1567578488, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 1), $clusterTime: { clusterTime: Timestamp(1567578488, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 1) } 2019-09-04T06:28:08.235+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 1), t: 1 }, lastCommittedWall: new Date(1567578488229), lastOpVisible: { ts: Timestamp(1567578488, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578488, 1), t: 1 }, lastCommittedWall: new 
Date(1567578488229), lastOpApplied: { ts: Timestamp(1567578488, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 1), $clusterTime: { clusterTime: Timestamp(1567578488, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:08.235+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:08.235+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:08.235+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578483, 1) 2019-09-04T06:28:08.235+0000 D3 REPL [conn80] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn80] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.307+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn62] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn62] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.671+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn92] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn92] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.664+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn88] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn88] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn65] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn65] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.679+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn36] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn36] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.835+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn98] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn98] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.768+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn82] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn82] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.998+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn99] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:28:08.235+0000 D3 REPL 
[conn99] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.054+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn84] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn84] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn95] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn95] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn91] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn91] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.645+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn86] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn86] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn94] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn94] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn63] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn63] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn68] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn68] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.730+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn24] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn24] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000 2019-09-04T06:28:08.235+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:18.351+0000 2019-09-04T06:28:08.235+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:18.337+0000 2019-09-04T06:28:08.235+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:08.235+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 152 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:18.235+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578488, 1), t: 1 } } 2019-09-04T06:28:08.235+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:08.235+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:08.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:08.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:08.235+0000 D3 REPL [conn67] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn67] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.698+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn83] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn83] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.431+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn89] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn89] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:16.417+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn93] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn93] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn76] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn76] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.092+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn30] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.235+0000 D3 REPL [conn30] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:28:08.236+0000 D3 REPL [conn100] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.236+0000 D3 REPL [conn100] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.595+0000 2019-09-04T06:28:08.236+0000 D3 REPL [conn35] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.236+0000 D3 REPL [conn35] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000 2019-09-04T06:28:08.236+0000 D3 REPL [conn27] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.236+0000 D3 REPL [conn27] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.829+0000 2019-09-04T06:28:08.236+0000 D3 REPL [conn79] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.236+0000 D3 REPL [conn79] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.289+0000 2019-09-04T06:28:08.236+0000 D3 REPL [conn66] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.236+0000 D3 REPL [conn66] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.686+0000 2019-09-04T06:28:08.236+0000 D3 REPL [conn85] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.236+0000 D3 REPL [conn85] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000 2019-09-04T06:28:08.236+0000 D3 REPL [conn102] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000 2019-09-04T06:28:08.236+0000 D3 REPL [conn102] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:29.874+0000 
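
The burst of waitUntilOpTime lines above shows dozens of parked connections (conn24 through conn102) being woken as the stable optime advances to Timestamp(1567578488, 1): each is a read waiting for this secondary's committed snapshot to reach a requested cluster time. A sketch of one kind of client pattern that can produce such waits - a causally consistent read routed to a secondary - assuming pymongo; the hosts are the replica-set members from the log, and the collection is one that actually appears in this oplog stream:

    from pymongo import MongoClient, ReadPreference

    client = MongoClient(
        "mongodb://cmodb802.togewa.com:27019,cmodb803.togewa.com:27019,"
        "cmodb804.togewa.com:27019/?replicaSet=configrs")
    lockpings = client.config.lockpings
    on_secondary = lockpings.with_options(
        read_preference=ReadPreference.SECONDARY)
    with client.start_session(causal_consistency=True) as s:
        # First read records the session's operationTime.
        lockpings.find_one({"_id": "ConfigServer"}, session=s)
        # This read carries afterClusterTime; the chosen secondary parks the
        # connection (a waitUntilOpTime line) until its snapshot catches up.
        on_secondary.find_one({"_id": "ConfigServer"}, session=s)
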
2019-09-04T06:28:08.236+0000 D3 REPL [conn97] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000
2019-09-04T06:28:08.236+0000 D3 REPL [conn97] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.670+0000
2019-09-04T06:28:08.236+0000 D3 REPL [conn87] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000
2019-09-04T06:28:08.236+0000 D3 REPL [conn87] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.823+0000
2019-09-04T06:28:08.236+0000 D3 REPL [conn96] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000
2019-09-04T06:28:08.236+0000 D3 REPL [conn96] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000
2019-09-04T06:28:08.236+0000 D3 REPL [conn64] Got notified of new snapshot: { ts: Timestamp(1567578488, 1), t: 1 }, 2019-09-04T06:28:08.229+0000
2019-09-04T06:28:08.236+0000 D3 REPL [conn64] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000
2019-09-04T06:28:08.236+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:08.236+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578488, 1), t: 1 }, durableWallTime: new Date(1567578488229), appliedOpTime: { ts: Timestamp(1567578488, 1), t: 1 }, appliedWallTime: new Date(1567578488229), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 1), t: 1 }, lastCommittedWall: new Date(1567578488229), lastOpVisible: { ts: Timestamp(1567578488, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:08.236+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 153 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:38.236+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578488, 1), t: 1 }, durableWallTime: new Date(1567578488229), appliedOpTime: { ts: Timestamp(1567578488, 1), t: 1 }, appliedWallTime: new Date(1567578488229), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 1), t: 1 }, lastCommittedWall: new Date(1567578488229), lastOpVisible: { ts: Timestamp(1567578488, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:08.236+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:38.234+0000
2019-09-04T06:28:08.236+0000 D2 ASIO [RS] Request 153 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 1), t: 1 }, lastCommittedWall: new Date(1567578488229), lastOpVisible: { ts: Timestamp(1567578488, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 1), $clusterTime: { clusterTime: Timestamp(1567578488, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 1) }
2019-09-04T06:28:08.236+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 1), t: 1 }, lastCommittedWall: new Date(1567578488229), lastOpVisible: { ts: Timestamp(1567578488, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 1), $clusterTime: { clusterTime: Timestamp(1567578488, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:08.236+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:08.236+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:38.234+0000
2019-09-04T06:28:08.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:08.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:08.328+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:08.333+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578488, 1)
2019-09-04T06:28:08.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:08.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:08.386+0000 D2 ASIO [RS] Request 152 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578488, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578488376), o: { $v: 1, $set: { ping: new Date(1567578488376) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 1), t: 1 }, lastCommittedWall: new Date(1567578488229), lastOpVisible: { ts: Timestamp(1567578488, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578488, 1), t: 1 }, lastCommittedWall: new Date(1567578488229), lastOpApplied: { ts: Timestamp(1567578488, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 1), $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) }
2019-09-04T06:28:08.387+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578488, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578488376), o: { $v: 1, $set: { ping: new Date(1567578488376) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 1), t: 1 }, lastCommittedWall: new Date(1567578488229), lastOpVisible: { ts: Timestamp(1567578488, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578488, 1), t: 1 }, lastCommittedWall: new Date(1567578488229), lastOpApplied: { ts: Timestamp(1567578488, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 1), $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:08.387+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:08.387+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578488, 2) and ending at ts: Timestamp(1567578488, 2)
2019-09-04T06:28:08.387+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:18.337+0000
2019-09-04T06:28:08.387+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:18.935+0000
2019-09-04T06:28:08.387+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578488, 2), t: 1 }
2019-09-04T06:28:08.387+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:08.387+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:08.387+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:08.387+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:08.387+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578488, 1)
2019-09-04T06:28:08.387+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2329
2019-09-04T06:28:08.387+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:08.387+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:08.387+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2329
2019-09-04T06:28:08.387+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:08.387+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:28:08.387+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578488, 2) }
2019-09-04T06:28:08.387+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2323
2019-09-04T06:28:08.387+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2323
2019-09-04T06:28:08.387+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2333
2019-09-04T06:28:08.387+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2333
2019-09-04T06:28:08.387+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:08.387+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578488, 1)
2019-09-04T06:28:08.387+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:08.387+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 2335
2019-09-04T06:28:08.387+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578488, 2)
2019-09-04T06:28:08.387+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578488, 2)
2019-09-04T06:28:08.387+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2332
2019-09-04T06:28:08.387+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:08.387+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:08.387+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 2335
2019-09-04T06:28:08.387+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2332
2019-09-04T06:28:08.387+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:08.387+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:28:08.387+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2334
2019-09-04T06:28:08.387+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2334
2019-09-04T06:28:08.387+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2339
2019-09-04T06:28:08.387+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2339
2019-09-04T06:28:08.387+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578488, 2), t: 1 }({ ts: Timestamp(1567578488, 2), t: 1 })
2019-09-04T06:28:08.387+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578488, 2)
2019-09-04T06:28:08.387+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2340
2019-09-04T06:28:08.387+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578488, 2) } } ] } sort: {} projection: {}
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578488, 2)
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578488, 2) || First: notFirst: full path: ts
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578488, 2)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:08.387+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578488, 2)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:08.388+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:08.388+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:08.388+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:08.388+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578488, 2) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:08.388+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:08.388+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578488, 2)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:08.388+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2340
2019-09-04T06:28:08.388+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:08.388+0000 D3 STORAGE [repl-writer-worker-2] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:08.388+0000 D3 REPL [repl-writer-worker-2] applying op: { ts: Timestamp(1567578488, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578488376), o: { $v: 1, $set: { ping: new Date(1567578488376) } } }, oplog application mode: Secondary
2019-09-04T06:28:08.388+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578488, 2)
2019-09-04T06:28:08.388+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 2342
2019-09-04T06:28:08.388+0000 D2 QUERY [repl-writer-worker-2] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }
2019-09-04T06:28:08.388+0000 D4 WRITE [repl-writer-worker-2] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:28:08.388+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 2342
2019-09-04T06:28:08.388+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:08.388+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578488, 2), t: 1 }({ ts: Timestamp(1567578488, 2), t: 1 })
2019-09-04T06:28:08.388+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578488, 2)
2019-09-04T06:28:08.388+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2341
2019-09-04T06:28:08.388+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:08.388+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:08.388+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:28:08.388+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:08.388+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:08.388+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:28:08.388+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2341
2019-09-04T06:28:08.388+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578488, 2)
2019-09-04T06:28:08.388+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2345
2019-09-04T06:28:08.388+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2345
2019-09-04T06:28:08.388+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578488, 2), t: 1 }({ ts: Timestamp(1567578488, 2), t: 1 })
2019-09-04T06:28:08.388+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:08.388+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578488, 1), t: 1 }, durableWallTime: new Date(1567578488229), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 1), t: 1 }, lastCommittedWall: new Date(1567578488229), lastOpVisible: { ts: Timestamp(1567578488, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:08.388+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 154 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:38.388+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578488, 1), t: 1 }, durableWallTime: new Date(1567578488229), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 1), t: 1 }, lastCommittedWall: new Date(1567578488229), lastOpVisible: { ts: Timestamp(1567578488, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:08.388+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:38.388+0000
2019-09-04T06:28:08.389+0000 D2 ASIO [RS] Request 154 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) }
2019-09-04T06:28:08.389+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:08.389+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:08.389+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:08.389+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578488, 2), t: 1 }
2019-09-04T06:28:08.389+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 155 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:18.389+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578488, 1), t: 1 } }
2019-09-04T06:28:08.389+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:08.389+0000 D2 ASIO [RS] Request 155 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpApplied: { ts: Timestamp(1567578488, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) }
2019-09-04T06:28:08.389+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpApplied: { ts: Timestamp(1567578488, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:08.389+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:08.389+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:28:08.389+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.389+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.389+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578483, 2)
2019-09-04T06:28:08.389+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:18.935+0000
2019-09-04T06:28:08.389+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:18.588+0000
2019-09-04T06:28:08.389+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:28:08.389+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:08.389+0000 D3 REPL [conn65] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.389+0000 D3 REPL [conn65] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.679+0000
2019-09-04T06:28:08.389+0000 D3 REPL [conn98] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.389+0000 D3 REPL [conn98] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.768+0000
2019-09-04T06:28:08.389+0000 D3 REPL [conn82] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.389+0000 D3 REPL [conn82] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.998+0000
2019-09-04T06:28:08.389+0000 D3 REPL [conn99] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.389+0000 D3 REPL [conn99] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.054+0000
2019-09-04T06:28:08.389+0000 D3 REPL [conn84] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.389+0000 D3 REPL [conn84] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000
2019-09-04T06:28:08.389+0000 D3 REPL [conn95] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.389+0000 D3 REPL [conn95] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn86] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn86] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn94] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn94] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn63] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn63] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn68] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn68] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.730+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn24] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn24] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn93] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn93] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn76] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn76] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.092+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn30] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn30] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn79] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn79] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.289+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn66] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn66] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.686+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn87] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn87] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.823+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn92] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn92] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.664+0000
2019-09-04T06:28:08.389+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 156 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:18.389+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578488, 2), t: 1 } }
2019-09-04T06:28:08.390+0000 D3 REPL [conn80] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn80] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:11.307+0000
2019-09-04T06:28:08.390+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn62] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn62] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.671+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn88] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn88] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.833+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn36] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn36] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.835+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn91] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn91] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.645+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn67] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn67] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.698+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn83] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn83] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.431+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn89] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn89] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:16.417+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn100] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn100] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.595+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn35] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn35] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.836+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn27] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn27] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.829+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn85] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn85] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:14.822+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn102] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn102] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:29.874+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn97] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn97] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.670+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn96] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn96] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn64] Got notified of new snapshot: { ts: Timestamp(1567578488, 2), t: 1 }, 2019-09-04T06:28:08.376+0000
2019-09-04T06:28:08.390+0000 D3 REPL [conn64] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:09.676+0000
2019-09-04T06:28:08.401+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:28:08.401+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:08.401+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:08.401+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 157 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:38.401+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, durableWallTime: new Date(1567578484832), appliedOpTime: { ts: Timestamp(1567578484, 1), t: 1 }, appliedWallTime: new Date(1567578484832), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:08.401+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:08.402+0000 D2 ASIO [RS] Request 157 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) }
2019-09-04T06:28:08.402+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:08.402+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:08.402+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:08.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:08.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:08.428+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:08.487+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578488, 2)
2019-09-04T06:28:08.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:08.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:08.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:08.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:08.528+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:08.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:08.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:08.628+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:08.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:08.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:08.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:08.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:08.728+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:08.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:08.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:08.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:08.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:08.828+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:08.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:08.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 158) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:08.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 158 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:18.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:08.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:08.836+0000 D2 ASIO [Replication] Request 158 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) }
2019-09-04T06:28:08.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:28:08.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:08.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 158) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) }
2019-09-04T06:28:08.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:28:08.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:28:18.588+0000
2019-09-04T06:28:08.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:28:19.573+0000
2019-09-04T06:28:08.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:28:08.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:08.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:28:08.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:10.836Z
2019-09-04T06:28:08.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:08.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:08.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 159) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:08.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 159 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:18.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:08.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:08.837+0000 D2 ASIO [Replication] Request 159 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) }
2019-09-04T06:28:08.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:08.837+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:28:08.837+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 159) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) }
2019-09-04T06:28:08.837+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:28:08.837+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:10.837Z
2019-09-04T06:28:08.837+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:08.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:08.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:08.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:08.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:08.928+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:08.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:08.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:09.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:09.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.029+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:09.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 115EB4D30785D73F3CC9671DA2371676F9B8886D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:09.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:28:09.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 115EB4D30785D73F3CC9671DA2371676F9B8886D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:09.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 115EB4D30785D73F3CC9671DA2371676F9B8886D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:09.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376) }
2019-09-04T06:28:09.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 2), signature: { hash: BinData(0, 115EB4D30785D73F3CC9671DA2371676F9B8886D), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.122+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:09.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.129+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:09.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:09.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:09.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:09.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.229+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:09.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:28:09.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:28:09.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:09.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:09.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.329+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:09.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:09.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.387+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:09.387+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:09.387+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578488, 2)
2019-09-04T06:28:09.387+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2371
2019-09-04T06:28:09.387+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:09.387+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:09.387+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2371
2019-09-04T06:28:09.388+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2374
2019-09-04T06:28:09.388+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2374
2019-09-04T06:28:09.388+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578488, 2), t: 1 }({ ts: Timestamp(1567578488, 2), t: 1 })
2019-09-04T06:28:09.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:09.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.429+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:09.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:09.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:09.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:09.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.529+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:09.537+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578488, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 4), signature: { hash: BinData(0, 115EB4D30785D73F3CC9671DA2371676F9B8886D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578488, 2), t: 1 } }, $db: "config" }
2019-09-04T06:28:09.537+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578488, 2), t: 1 } } }
2019-09-04T06:28:09.537+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:28:09.537+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578488, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 4), signature: { hash: BinData(0, 115EB4D30785D73F3CC9671DA2371676F9B8886D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578488, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578488, 2)
2019-09-04T06:28:09.537+0000 D2 QUERY [conn61] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1
2019-09-04T06:28:09.537+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578488, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 4), signature: { hash: BinData(0, 115EB4D30785D73F3CC9671DA2371676F9B8886D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578488, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:28:09.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:09.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.629+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:09.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:09.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:09.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.665+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36558 #104 (81 connections now open)
2019-09-04T06:28:09.665+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:28:09.665+0000 D2 COMMAND [conn104] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:28:09.665+0000 I NETWORK [conn104] received client metadata from 10.108.2.55:36558 conn104: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:28:09.665+0000 I COMMAND [conn104] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:28:09.672+0000 I COMMAND [conn62] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:28:09.672+0000 D1 - [conn62] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:28:09.672+0000 W - [conn62] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:09.674+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35608 #105 (82 connections now open)
2019-09-04T06:28:09.674+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:28:09.674+0000 D2 COMMAND [conn105] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:28:09.674+0000 I NETWORK [conn105] received client metadata from 10.108.2.56:35608 conn105: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:28:09.674+0000 I COMMAND [conn105] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:28:09.676+0000 I COMMAND [conn63] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578451, 1), signature: { hash: BinData(0, DEC934211A5E877D872CD481B087C488B1F8FB5C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:28:09.677+0000 D1 - [conn63] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:28:09.677+0000 W - [conn63] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:09.677+0000 I COMMAND [conn64] Command on database admin timed out waiting for read concern to be satisfied.
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578455, 1), signature: { hash: BinData(0, B608BC040ACD32DAE9997A8CBB1A717399CC35A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:28:09.678+0000 D1 - [conn64] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:09.678+0000 W - [conn64] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:09.681+0000 I COMMAND [conn65] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:09.681+0000 D1 - [conn65] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:09.681+0000 W - [conn65] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:09.687+0000 I COMMAND [conn66] Command on database admin timed out waiting for read concern to be satisfied. 
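[editor's note] Connections 62 through 66 above (and 67/68 below) all fail the same way: a find on admin.system.keys asks for readConcern level "majority" with an afterOpTime from term 92 (timestamps around 1566459161, weeks earlier), while every opTime this node itself reports (the ReplBatcher snapshot, minvalid, the 'committed' snapshot used by conn61) is from term 1 at Timestamp(1567578488, ...). That suggests the requesters are still waiting on an earlier incarnation of this config server replica set, so the majority wait can never be satisfied and the 30000 ms maxTimeMS fires instead. A minimal mongo-shell sketch of the failing read, assuming a direct connection to this node; the internal fields ($replData, $configServerState, the signed $clusterTime) are omitted, and the opTime values are copied from the entries above:

    // Re-issue the read that conn62 is shown sending.
    const admin = db.getSiblingDB("admin");
    const res = admin.runCommand({
      find: "system.keys",
      filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } },
      sort: { expiresAt: 1 },
      readConcern: {
        level: "majority",
        // opTime from term 92, while this node's entries show term 1:
        // the majority wait cannot complete.
        afterOpTime: { ts: Timestamp(1566459161, 3), t: NumberLong(92) }
      },
      maxTimeMS: 30000
    });
    printjson(res); // expect ok: 0 with code 50 (MaxTimeMSExpired) after ~30 s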
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578451, 1), signature: { hash: BinData(0, DEC934211A5E877D872CD481B087C488B1F8FB5C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:09.687+0000 D1 - [conn66] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:09.687+0000 W - [conn66] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:09.689+0000 I - [conn62] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:09.689+0000 D1 COMMAND [conn62] assertion 
while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:09.689+0000 D1 - [conn62] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:09.689+0000 W - [conn62] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:09.698+0000 I COMMAND [conn67] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:09.699+0000 D1 - [conn67] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:09.699+0000 W - [conn67] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:09.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:09.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:09.716+0000 I - [conn64] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
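[editor's note] The backtrace above is informational rather than a crash: the mangled frame ExceptionForImpl...ErrorCodes5ErrorE50E encodes ErrorCodes::Error 50 (MaxTimeMSExpired), and the throw site is ServiceEntryPointMongod::Hooks::waitForReadConcern, so each find times out before query execution ever starts. One way to see how far majority commit has actually advanced on this node is to compare replSetGetStatus output against the requested afterOpTime; a sketch, with field paths as they appear in 4.2 output:

    // On this config server: where is the majority-committed opTime?
    const s = db.adminCommand({ replSetGetStatus: 1 });
    printjson(s.optimes.readConcernMajorityOpTime); // e.g. { ts: Timestamp(1567578488, 2), t: NumberLong(1) }
    // The blocked reads require { ts: Timestamp(1566459161, 3), t: 92 } to be
    // majority-committed first; no term-92 entry appears anywhere in this
    // node's own log, so waitForReadConcern (top of the backtrace above)
    // blocks until maxTimeMS expires.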
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:09.716+0000 D1 COMMAND [conn64] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578455, 1), signature: { hash: BinData(0, B608BC040ACD32DAE9997A8CBB1A717399CC35A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:09.716+0000 D1 - [conn64] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:09.716+0000 W - [conn64] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:09.717+0000 I NETWORK [listener] connection accepted from 10.108.2.61:37838 #106 (83 connections now open) 2019-09-04T06:28:09.717+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:09.717+0000 D2 COMMAND [conn106] run command admin.$cmd { isMaster: 1, client: { driver: { 
name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:09.717+0000 I NETWORK [conn106] received client metadata from 10.108.2.61:37838 conn106: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:09.717+0000 I COMMAND [conn106] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:09.729+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:09.730+0000 I COMMAND [conn68] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578451, 1), signature: { hash: BinData(0, DEC934211A5E877D872CD481B087C488B1F8FB5C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:09.731+0000 D1 - [conn68] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:09.731+0000 W - [conn68] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:09.733+0000 I - [conn67] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:09.733+0000 D1 COMMAND [conn67] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:09.733+0000 D1 - [conn67] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:09.733+0000 W - [conn67] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:09.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:09.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:09.781+0000 D2 COMMAND [conn26] run command 
2019-09-04T06:28:09.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:09.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:09.782+0000 D1 COMMAND [conn65] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:09.782+0000 D1 - [conn65] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:28:09.782+0000 W - [conn65] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : 
"94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:09.787+0000 D1 COMMAND [conn66] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578451, 1), signature: { hash: BinData(0, DEC934211A5E877D872CD481B087C488B1F8FB5C), keyId: 6690867815131381761 } }, 
$configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:09.787+0000 D1 - [conn66] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:09.787+0000 W - [conn66] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:09.807+0000 I - [conn67] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88
000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) 
[0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:28:09.807+0000 W COMMAND [conn67] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:28:09.807+0000 I COMMAND [conn67] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30044ms
2019-09-04T06:28:09.807+0000 D2 NETWORK [conn67] Session from 10.108.2.57:34142 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:28:09.807+0000 I NETWORK [conn67] end connection 10.108.2.57:34142 (82 connections now open)
2019-09-04T06:28:09.814+0000 I - [conn63] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:09.814+0000 D1 COMMAND [conn63] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578451, 1), signature: { hash: BinData(0, DEC934211A5E877D872CD481B087C488B1F8FB5C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:09.814+0000 D1 - [conn63] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:09.814+0000 W - [conn63] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:09.829+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:09.851+0000 I - [conn65] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 
0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : 
"9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" 
}, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:09.851+0000 W COMMAND [conn65] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:09.851+0000 I COMMAND [conn65] 
command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30113ms 2019-09-04T06:28:09.852+0000 D2 NETWORK [conn65] Session from 10.108.2.72:45620 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:09.852+0000 I NETWORK [conn65] end connection 10.108.2.72:45620 (81 connections now open) 2019-09-04T06:28:09.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:09.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:09.860+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:09.860+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:09.861+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52050 #107 (82 connections now open) 2019-09-04T06:28:09.861+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:09.861+0000 D2 COMMAND [conn107] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:09.861+0000 I NETWORK [conn107] received client metadata from 10.108.2.73:52050 conn107: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:09.861+0000 I COMMAND [conn107] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:09.861+0000 D2 COMMAND [conn107] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578482, 1), signature: { hash: BinData(0, 3ACCBE001A4DF149C45D49591F00BE6F4AE6BCE4), 
keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:28:09.861+0000 D1 REPL [conn107] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578488, 2), t: 1 } 2019-09-04T06:28:09.861+0000 D3 REPL [conn107] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.871+0000 2019-09-04T06:28:09.863+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42000 #108 (83 connections now open) 2019-09-04T06:28:09.863+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:09.865+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:09.865+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:09.866+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51684 #109 (84 connections now open) 2019-09-04T06:28:09.866+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:09.866+0000 D2 COMMAND [conn109] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:09.866+0000 I NETWORK [conn109] received client metadata from 10.108.2.74:51684 conn109: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:09.866+0000 I COMMAND [conn109] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:09.867+0000 D2 COMMAND [conn109] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578485, 1), signature: { hash: BinData(0, 52C21B0434F0E97A4117AE1AE4E5E5F2B2245704), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:28:09.867+0000 D1 REPL [conn109] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578488, 2), t: 1 } 2019-09-04T06:28:09.867+0000 D3 REPL [conn109] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.877+0000 2019-09-04T06:28:09.868+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 
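The failures above all follow one pattern: every incoming find on admin.system.keys asks for readConcern { level: "majority", afterOpTime: { ..., t: 92 } }, while this node's newest majority snapshot is { ts: Timestamp(1567578488, 2), t: 1 }. OpTime ordering compares the term before the timestamp, so a wait for term 92 can never be satisfied while the replica set is in term 1, which is consistent with a config server replica set that was re-initialized while other cluster members still cached the old opTime in $configServerState. Each such command therefore sits in waitForReadConcern (the frames visible in the backtraces) until maxTimeMS (30000 ms) and fails with MaxTimeMSExpired (errCode 50). A minimal PyMongo sketch of how a client observes this class of timeout; the host and port are taken from this log, afterOpTime is normally sent only by internal cluster clients, and the command below is illustrative rather than the exact internal request:

from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout
from bson.timestamp import Timestamp

# Host and port as logged at startup for this config server
# (assumes it is reachable from wherever this snippet runs).
client = MongoClient("cmodb803.togewa.com", 27019)

# Simplified shape of the failing command; internal fields such as
# $replData, $clusterTime and $configServerState are omitted.
find_cmd = {
    "find": "system.keys",
    "filter": {"purpose": "HMAC",
               "expiresAt": {"$gt": Timestamp(1579858365, 0)}},
    "sort": {"expiresAt": 1},
    # afterOpTime demands term 92; this set is in term 1, so the
    # majority wait cannot complete before maxTimeMS expires.
    "readConcern": {"level": "majority",
                    "afterOpTime": {"ts": Timestamp(1566459168, 1), "t": 92}},
    "maxTimeMS": 30000,
}

try:
    client.admin.command(find_cmd)
except ExecutionTimeout:
    # PyMongo raises ExecutionTimeout for server error code 50; it
    # surfaces in the server log as errName:MaxTimeMSExpired errCode:50.
    print("read concern wait timed out after maxTimeMS")

While such a command is blocked, the server keeps emitting waitUntilOpTime lines like the ones above until the 30-second deadline passes.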
2019-09-04T06:28:09.868+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:09.869+0000 D2 COMMAND [conn108] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:09.869+0000 I NETWORK [conn108] received client metadata from 10.108.2.48:42000 conn108: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:09.869+0000 I COMMAND [conn108] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:09.869+0000 D2 COMMAND [conn108] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6D93D72CE5EBCF8DB8B943DC77DF5B5E8E3E9809), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:09.869+0000 D1 REPL [conn108] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578488, 2), t: 1 } 2019-09-04T06:28:09.869+0000 D3 REPL [conn108] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.879+0000 2019-09-04T06:28:09.869+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45646 #110 (85 connections now open) 2019-09-04T06:28:09.869+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:09.871+0000 D2 COMMAND [conn110] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:09.871+0000 I NETWORK [conn110] received client metadata from 10.108.2.72:45646 conn110: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:09.871+0000 I COMMAND [conn110] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: 
{ name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:09.871+0000 I - [conn63] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b"
:"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) 
[0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:09.871+0000 W COMMAND [conn63] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:09.871+0000 I COMMAND [conn63] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578451, 1), signature: { hash: BinData(0, DEC934211A5E877D872CD481B087C488B1F8FB5C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30148ms 2019-09-04T06:28:09.871+0000 D2 NETWORK [conn63] Session from 10.108.2.55:36534 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:09.871+0000 I NETWORK [conn63] end connection 10.108.2.55:36534 (84 connections now open) 2019-09-04T06:28:09.871+0000 D2 COMMAND [conn110] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578482, 1), signature: { hash: BinData(0, 3ACCBE001A4DF149C45D49591F00BE6F4AE6BCE4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:09.872+0000 D1 REPL [conn110] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578488, 2), t: 1 } 2019-09-04T06:28:09.872+0000 D3 REPL [conn110] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.881+0000 2019-09-04T06:28:09.882+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34158 #111 (85 connections now open) 2019-09-04T06:28:09.882+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:09.882+0000 D2 COMMAND [conn111] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:09.882+0000 I NETWORK [conn111] received client metadata from 10.108.2.57:34158 conn111: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", 
architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:09.882+0000 I COMMAND [conn111] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:09.886+0000 D2 COMMAND [conn111] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 1), signature: { hash: BinData(0, F4E758946912814F3BEF328731BD00334916A2A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:09.886+0000 D1 REPL [conn111] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578488, 2), t: 1 } 2019-09-04T06:28:09.886+0000 D3 REPL [conn111] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.896+0000 2019-09-04T06:28:09.891+0000 I - [conn66] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:09.891+0000 I - [conn62] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:09.891+0000 W COMMAND [conn66] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:09.891+0000 W COMMAND [conn62] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:09.891+0000 I COMMAND [conn66] command 
2019-09-04T06:28:09.891+0000 I COMMAND [conn66] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578451, 1), signature: { hash: BinData(0, DEC934211A5E877D872CD481B087C488B1F8FB5C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30110ms 2019-09-04T06:28:09.891+0000 I COMMAND [conn62] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:28:09.891+0000 D2 NETWORK [conn66] Session from 10.108.2.56:35590 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:09.891+0000 I NETWORK [conn66] end connection 10.108.2.56:35590 (84 connections now open) 2019-09-04T06:28:09.894+0000 D2 COMMAND [conn72] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578488, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 4), signature: { hash: BinData(0, 115EB4D30785D73F3CC9671DA2371676F9B8886D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578488, 2), t: 1 } }, $db: "config" } 2019-09-04T06:28:09.894+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578488, 2), t: 1 } } } 2019-09-04T06:28:09.894+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:28:09.894+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578488, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 4), signature: { hash: BinData(0, 115EB4D30785D73F3CC9671DA2371676F9B8886D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578488, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578488, 2) 2019-09-04T06:28:09.894+0000 D2 QUERY [conn72] Using idhack: query: { _id: "config.system.sessions" } sort: {} projection: {} limit: 1 2019-09-04T06:28:09.894+0000 D3 STORAGE [conn72] WT begin_transaction for snapshot id 2404 2019-09-04T06:28:09.894+0000 D3 STORAGE [conn72] WT rollback_transaction for snapshot id 2404 2019-09-04T06:28:09.894+0000 I COMMAND [conn72] 
command config.collections command: find { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578488, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 4), signature: { hash: BinData(0, 115EB4D30785D73F3CC9671DA2371676F9B8886D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578488, 2), t: 1 } }, $db: "config" } planSummary: IDHACK keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:702 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:28:09.898+0000 D2 NETWORK [conn62] Session from 10.108.2.73:52032 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:09.899+0000 I NETWORK [conn62] end connection 10.108.2.73:52032 (83 connections now open) 2019-09-04T06:28:09.903+0000 I NETWORK [listener] connection accepted from 10.108.2.60:44768 #112 (84 connections now open) 2019-09-04T06:28:09.903+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:09.905+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:09.905+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:09.907+0000 D2 COMMAND [conn112] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:09.907+0000 I NETWORK [conn112] received client metadata from 10.108.2.60:44768 conn112: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:09.907+0000 I COMMAND [conn112] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:09.909+0000 D2 COMMAND [conn103] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578482, 1), signature: { hash: BinData(0, 3ACCBE001A4DF149C45D49591F00BE6F4AE6BCE4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 
92 } }, $db: "admin" } 2019-09-04T06:28:09.909+0000 D1 REPL [conn103] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578488, 2), t: 1 } 2019-09-04T06:28:09.909+0000 D3 REPL [conn103] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.919+0000 2019-09-04T06:28:09.911+0000 I - [conn64] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0E
D8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : 
"/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] 
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:09.911+0000 I - [conn68] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : 
"E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : 
"/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:09.911+0000 D2 COMMAND [conn112] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: 
Timestamp(1567578483, 1), signature: { hash: BinData(0, 621D9BEDA95DF18ED95436DD15D0B64BF8D938E4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:09.911+0000 D1 COMMAND [conn68] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578451, 1), signature: { hash: BinData(0, DEC934211A5E877D872CD481B087C488B1F8FB5C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:09.912+0000 D1 REPL [conn112] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578488, 2), t: 1 } 2019-09-04T06:28:09.912+0000 D3 REPL [conn112] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.921+0000 2019-09-04T06:28:09.912+0000 W COMMAND [conn64] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:09.912+0000 I COMMAND [conn64] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578455, 1), signature: { hash: BinData(0, B608BC040ACD32DAE9997A8CBB1A717399CC35A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30049ms 2019-09-04T06:28:09.912+0000 D2 NETWORK [conn64] Session from 10.108.2.74:51666 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:09.912+0000 I NETWORK [conn64] end connection 10.108.2.74:51666 (83 connections now open) 2019-09-04T06:28:09.912+0000 D1 - [conn68] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:09.912+0000 W - [conn68] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:09.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:09.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:09.930+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:09.933+0000 I - [conn68] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:09.933+0000 W COMMAND [conn68] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:09.933+0000 I COMMAND [conn68] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578451, 1), signature: { hash: BinData(0, DEC934211A5E877D872CD481B087C488B1F8FB5C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30191ms 2019-09-04T06:28:09.933+0000 D2 NETWORK [conn68] Session from 10.108.2.61:37822 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:09.933+0000 I NETWORK [conn68] end connection 10.108.2.61:37822 (82 connections now open) 2019-09-04T06:28:09.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:09.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:09.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:09.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:09.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:09.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:10.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:10.004+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:28:10.004+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:28:10.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:10.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:28:10.012+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:10.012+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:28:10.012+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:28:10.012+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:28:10.012+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:10.013+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 
locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:10.015+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:10.015+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:28:10.015+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:28:10.015+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:28:10.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:10.015+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunks Tree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:28:10.015+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:28:10.015+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:28:10.015+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:28:10.015+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:28:10.015+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:28:10.015+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:28:10.015+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:10.015+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:10.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:10.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578488, 2) 2019-09-04T06:28:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2420 2019-09-04T06:28:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2420 2019-09-04T06:28:10.015+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:10.016+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:10.016+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:28:10.016+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:28:10.016+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:10.016+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1 Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:28:10.016+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:28:10.016+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:28:10.016+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578488, 2) 2019-09-04T06:28:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2423 2019-09-04T06:28:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2423 2019-09-04T06:28:10.016+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:10.016+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:28:10.016+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:10.017+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1 Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:28:10.017+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:28:10.017+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:28:10.017+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578488, 2) 2019-09-04T06:28:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2425 2019-09-04T06:28:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2425 2019-09-04T06:28:10.017+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:10.017+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:28:10.017+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist.
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:28:10.017+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:28:10.018+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:10.018+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2428 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2428 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2429 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2429 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2430 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2430 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2431 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2431 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2432 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2432 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2433 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2433 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2434 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2434 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2435 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, 
prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2435 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2436 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2436 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2437 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2437 2019-09-04T06:28:10.019+0000 D3 
STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2438 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:10.019+0000 D3 STORAGE 
[conn90] WT rollback_transaction for snapshot id 2438 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2439 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2439 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2440 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:28:10.019+0000 
D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2440 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2441 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2441 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2442 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] returning metadata: 
md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2442 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2443 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2443 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2444 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:28:10.019+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2444 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2445 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2445 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2446 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2446 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata 
for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2447 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2447 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2448 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2448 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2449 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
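[Note] The repeated lookup / begin_transaction / fetched CCE metadata / returning metadata / rollback_transaction pattern above is the durable catalog being read under short-lived WiredTiger snapshots: each namespace resolves to a catalog entry whose ident (e.g. "local/collection/6--6194257481163143499") names a .wt file under dbPath, and the snapshot is rolled back because the read writes nothing. A minimal sketch, assuming a shell connected to this node; collStats surfaces the same ident the catalog lookup resolved:
  // The uri field echoes the ident seen in the log for local.replset.election.
  var stats = db.getSiblingDB("local").runCommand({ collStats: "replset.election" });
  print(stats.wiredTiger.uri); // "statistics:table:local/collection/6--6194257481163143499"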
2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:28:10.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2449
2019-09-04T06:28:10.020+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:28:10.027+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.027+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.032+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:10.044+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.044+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.049+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:28:10.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2453
2019-09-04T06:28:10.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2453
2019-09-04T06:28:10.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2454
2019-09-04T06:28:10.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2454
2019-09-04T06:28:10.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2455
2019-09-04T06:28:10.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2455
2019-09-04T06:28:10.049+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:28:10.050+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2457
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2457
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2458
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2458
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2459
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2459
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2460
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2460
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2461
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2461
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2462
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2462
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2463
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2463
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2464
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2464
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2465
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2465
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2466
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2466
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2467
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2467
2019-09-04T06:28:10.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2468
2019-09-04T06:28:10.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2468
2019-09-04T06:28:10.051+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:28:10.051+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:28:10.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2470
2019-09-04T06:28:10.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2470
2019-09-04T06:28:10.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2471
2019-09-04T06:28:10.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2471
2019-09-04T06:28:10.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2472
2019-09-04T06:28:10.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2472
2019-09-04T06:28:10.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2473
2019-09-04T06:28:10.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2473
2019-09-04T06:28:10.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2474
2019-09-04T06:28:10.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2474
2019-09-04T06:28:10.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2475
2019-09-04T06:28:10.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2475
2019-09-04T06:28:10.051+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:28:10.055+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.056+0000 I COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.133+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:10.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
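[Note] conn90's burst of listDatabases followed by dbStats against admin, config, and local, all with secondaryPreferred, reads like a monitoring client sampling database sizes (an assumption; the log does not identify the client). Each dbStats opens and rolls back one throwaway WT snapshot per collection it sizes, which is all the begin/rollback churn above. A hedged reproduction of the sampling loop from the shell:
  // Sketch of the apparent sampling loop; field names follow the listDatabases reply.
  db.adminCommand({ listDatabases: 1, $readPreference: { mode: "secondaryPreferred" } })
    .databases.forEach(function (d) {
      printjson(db.getSiblingDB(d.name).runCommand({ dbStats: 1 }));
    });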
} }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.233+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:10.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:10.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:10.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:10.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:10.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.333+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:10.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:10.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.359+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:10.359+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.365+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:10.365+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.368+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:10.368+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.388+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:10.388+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:10.388+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578488, 2) 2019-09-04T06:28:10.388+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2490 2019-09-04T06:28:10.388+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:10.388+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:10.388+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2490 2019-09-04T06:28:10.388+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2493 2019-09-04T06:28:10.388+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2493 2019-09-04T06:28:10.388+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578488, 2), t: 1 }({ ts: Timestamp(1567578488, 2), t: 1 }) 2019-09-04T06:28:10.404+0000 D2 COMMAND [conn45] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:28:10.404+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.424+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:10.424+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.433+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:10.465+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578483, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578486, 1), signature: { hash: BinData(0, D3D9EBDAF54D4A496D2A6EDA13F329A5D29E1779), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578483, 1), t: 1 } }, $db: "config" } 2019-09-04T06:28:10.465+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578483, 1), t: 1 } } } 2019-09-04T06:28:10.465+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:28:10.465+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578483, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578486, 1), signature: { hash: BinData(0, D3D9EBDAF54D4A496D2A6EDA13F329A5D29E1779), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578483, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578488, 2) 2019-09-04T06:28:10.465+0000 D2 QUERY [conn71] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:28:10.465+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578483, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578486, 1), signature: { hash: BinData(0, D3D9EBDAF54D4A496D2A6EDA13F329A5D29E1779), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578483, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:28:10.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:10.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:10.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:10.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:10.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.533+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:10.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:10.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:10.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.633+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:10.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:10.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:10.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:10.733+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:10.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:10.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
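[Note] conn71's find on config.settings with { _id: "balancer" } appears to be the periodic balancer-settings poll from a mongos or shard (the gossiped $configServerState suggests as much). "Collection config.settings does not exist" plus the EOF plan just means nothing has overridden the balancer defaults, so zero documents is the correct answer; the majority read with afterOpTime is satisfied here because the requested optime is term 1, matching this node. A hedged equivalent of the read, minus the gossiped fields:
  db.getSiblingDB("config").runCommand({
    find: "settings",
    filter: { _id: "balancer" },
    limit: 1,
    readConcern: { level: "majority" },
    maxTimeMS: 30000
  });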
2019-09-04T06:28:10.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.533+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:10.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.633+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:10.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.733+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:10.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.833+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:10.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:10.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 160) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:10.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 160 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:20.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:10.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:10.836+0000 D2 ASIO [Replication] Request 160 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) }
2019-09-04T06:28:10.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:28:10.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:10.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 160) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) }
2019-09-04T06:28:10.836+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary
2019-09-04T06:28:10.836+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:28:19.573+0000
2019-09-04T06:28:10.836+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:28:20.912+0000
2019-09-04T06:28:10.836+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:28:10.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:12.836Z
2019-09-04T06:28:10.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:10.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:10.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:10.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:10.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 161) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:10.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 161 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:20.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:10.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:10.837+0000 D2 ASIO [Replication] Request 161 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) }
2019-09-04T06:28:10.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:10.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:10.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 161) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) }
2019-09-04T06:28:10.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:28:10.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:12.837Z
2019-09-04T06:28:10.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:10.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.924+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.924+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.934+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:10.945+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.945+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
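[Note] The heartbeat round above is healthy: both members answer within a millisecond, the primary's heartbeat postpones the election timeout (rescheduled to 06:28:20.912, ~10s out, matching the default electionTimeoutMillis of 10000), and the next heartbeats are scheduled 2s later (default heartbeatIntervalMillis of 2000). To read those knobs off a live set:
  // Sketch: inspect the timing parameters the scheduler above is obeying.
  var cfg = rs.conf();
  printjson({
    heartbeatIntervalMillis: cfg.settings.heartbeatIntervalMillis, // 2000 by default
    electionTimeoutMillis: cfg.settings.electionTimeoutMillis      // 10000 by default
  });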
2019-09-04T06:28:10.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:10.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:10.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:11.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.034+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:11.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:11.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:28:11.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:11.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:11.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376) }
2019-09-04T06:28:11.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.094+0000 I COMMAND [conn76] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578451, 1), signature: { hash: BinData(0, DEC934211A5E877D872CD481B087C488B1F8FB5C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:28:11.095+0000 D1 - [conn76] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:28:11.095+0000 W - [conn76] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:11.111+0000 I - [conn76] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
 mongod(+0x10FBF24) [0x56174a083f24]
 mongod(+0x10FCE0E) [0x56174a084e0e]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:28:11.111+0000 D1 COMMAND [conn76] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578451, 1), signature: { hash: BinData(0, DEC934211A5E877D872CD481B087C488B1F8FB5C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:11.111+0000 D1 - [conn76] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:28:11.111+0000 W - [conn76] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:11.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.131+0000 I - [conn76] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(+0xC90D34) [0x561749c18d34]
 mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
 mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
 mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
 mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
 mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:28:11.131+0000 W COMMAND [conn76] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:28:11.131+0000 I COMMAND [conn76] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578451, 1), signature: { hash: BinData(0, DEC934211A5E877D872CD481B087C488B1F8FB5C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms
2019-09-04T06:28:11.132+0000 D2 NETWORK [conn76] Session from 10.108.2.59:48232 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:28:11.132+0000 I NETWORK [conn76] end connection 10.108.2.59:48232 (81 connections now open)
2019-09-04T06:28:11.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.134+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:11.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.220+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.221+0000 I COMMAND [conn77] command
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:11.234+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:11.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:11.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:11.241+0000 I NETWORK [listener] connection accepted from 10.108.2.41:53496 #113 (82 connections now open) 2019-09-04T06:28:11.241+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:11.242+0000 D2 COMMAND [conn113] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:11.242+0000 I NETWORK [conn113] received client metadata from 10.108.2.41:53496 conn113: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:11.242+0000 I COMMAND [conn113] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:11.246+0000 D2 COMMAND [conn113] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578483, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 1), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578483, 1), t: 1 } }, $db: "config" } 2019-09-04T06:28:11.246+0000 D1 COMMAND [conn113] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578483, 1), t: 1 } } } 2019-09-04T06:28:11.246+0000 D3 STORAGE [conn113] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:28:11.246+0000 D1 COMMAND [conn113] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578483, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 1), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578483, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578488, 2) 2019-09-04T06:28:11.246+0000 D5 QUERY [conn113] Beginning planning... 
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
============================= 2019-09-04T06:28:11.246+0000 D5 QUERY [conn113] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:28:11.246+0000 D5 QUERY [conn113] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:28:11.246+0000 D5 QUERY [conn113] Rated tree: $and 2019-09-04T06:28:11.246+0000 D5 QUERY [conn113] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:11.246+0000 D5 QUERY [conn113] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = [] 2019-09-04T06:28:11.246+0000 D2 QUERY [conn113] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:11.246+0000 D3 STORAGE [conn113] WT begin_transaction for snapshot id 2524 2019-09-04T06:28:11.246+0000 D3 STORAGE [conn113] WT rollback_transaction for snapshot id 2524 2019-09-04T06:28:11.246+0000 I COMMAND [conn113] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578483, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 1), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578483, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:28:11.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:11.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:11.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:11.273+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:11.278+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47078 #114 (83 connections now open) 2019-09-04T06:28:11.278+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:11.278+0000 D2 COMMAND [conn114] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:11.278+0000 I NETWORK [conn114] received client metadata from 10.108.2.52:47078 conn114: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ",
architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:11.278+0000 I COMMAND [conn114] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:11.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:11.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:11.281+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:11.281+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:11.282+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48252 #115 (84 connections now open) 2019-09-04T06:28:11.282+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:11.282+0000 D2 COMMAND [conn115] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:11.282+0000 I NETWORK [conn115] received client metadata from 10.108.2.59:48252 conn115: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:11.282+0000 I COMMAND [conn115] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:11.282+0000 D2 COMMAND [conn115] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578481, 1), signature: { hash: BinData(0, E9AAF9B781D488C5E3ACF0A29B982E7119C1C4F4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:28:11.282+0000 D1 REPL [conn115] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578488, 2), t: 1 } 2019-09-04T06:28:11.282+0000 D3 REPL 
[conn115] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.292+0000 2019-09-04T06:28:11.292+0000 I COMMAND [conn79] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:11.292+0000 D1 - [conn79] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:11.292+0000 W - [conn79] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:11.308+0000 I - [conn79] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"}
,{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : 
"/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] 
mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:11.308+0000 D1 COMMAND [conn79] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:11.308+0000 D1 - [conn79] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:11.308+0000 W - [conn79] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:11.310+0000 I COMMAND [conn80] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:11.310+0000 D1 - [conn80] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:11.310+0000 W - [conn80] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:11.328+0000 I - [conn79] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:11.329+0000 W COMMAND [conn79] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:11.329+0000 I COMMAND [conn79] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:28:11.329+0000 D2 NETWORK [conn79] Session from 10.108.2.52:47064 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:11.329+0000 I NETWORK [conn79] end connection 10.108.2.52:47064 (83 connections now open) 2019-09-04T06:28:11.334+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:11.345+0000 I - [conn80] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"pr
ocessInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:11.345+0000 D1 COMMAND [conn80] assertion 
while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:11.345+0000 D1 - [conn80] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:11.345+0000 W - [conn80] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:11.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:11.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:11.365+0000 I - [conn80] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F8
8000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, 
{ "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:28:11.365+0000 W COMMAND [conn80] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:28:11.365+0000 I COMMAND [conn80] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30048ms
2019-09-04T06:28:11.365+0000 D2 NETWORK [conn80] Session from 10.108.2.54:49070 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:28:11.365+0000 I NETWORK [conn80] end connection 10.108.2.54:49070 (82 connections now open)
2019-09-04T06:28:11.388+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:11.388+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:11.388+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578488, 2)
2019-09-04T06:28:11.388+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2536
2019-09-04T06:28:11.388+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:11.388+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:11.388+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2536
2019-09-04T06:28:11.389+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2539
2019-09-04T06:28:11.389+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2539
2019-09-04T06:28:11.389+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578488, 2), t: 1 }({ ts: Timestamp(1567578488, 2), t: 1 })
2019-09-04T06:28:11.434+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:11.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.478+0000 D2 COMMAND [conn114] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578482, 1), signature: { hash: BinData(0, 3ACCBE001A4DF149C45D49591F00BE6F4AE6BCE4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:28:11.478+0000 D1 REPL [conn114] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578488, 2), t: 1 }
2019-09-04T06:28:11.478+0000 D3 REPL [conn114] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.488+0000
2019-09-04T06:28:11.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.534+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:11.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.634+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:11.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
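
The failed reads in this stretch ([conn80] above, [conn114] here, [conn82] below) all follow one pattern: a router asks this config server for admin.system.keys with readConcern { level: "majority", afterOpTime: { ..., t: 92 } } and maxTimeMS: 30000; the waitUntilOpTime lines show the server waiting for a term-92 optime while the current majority snapshot is from term 1, and after 30 seconds the command fails with MaxTimeMSExpired (errCode:50). A minimal pymongo sketch of how such a query and timeout look from a driver follows; the host and port are taken from this log, and afterOpTime is a server-internal field that drivers do not expose, so only the readConcern level and the time limit are reproduced:

    # Sketch only: majority read of admin.system.keys with a 30 s server-side
    # time limit, mirroring the logged command. MaxTimeMSExpired (code 50) is
    # surfaced by pymongo as errors.ExecutionTimeout.
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    keys = client.admin.get_collection("system.keys",
                                       read_concern=ReadConcern("majority"))
    try:
        docs = list(keys.find({"purpose": "HMAC",
                               "expiresAt": {"$gt": Timestamp(1579858365, 0)}})
                        .sort("expiresAt", 1)
                        .max_time_ms(30000))  # maxTimeMS: 30000 in the log
    except ExecutionTimeout:
        print("operation exceeded time limit")  # errMsg in the slow-op line
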
2019-09-04T06:28:11.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.735+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:11.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.781+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.781+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.794+0000 D2 COMMAND [conn81] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578488, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578488, 2), t: 1 } }, $db: "config" }
2019-09-04T06:28:11.794+0000 D1 COMMAND [conn81] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578488, 2), t: 1 } } }
2019-09-04T06:28:11.794+0000 D3 STORAGE [conn81] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:28:11.794+0000 D1 COMMAND [conn81] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578488, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578488, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578488, 2)
2019-09-04T06:28:11.795+0000 D5 QUERY [conn81] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:11.795+0000 D5 QUERY [conn81] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:28:11.795+0000 D5 QUERY [conn81] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:28:11.795+0000 D5 QUERY [conn81] Rated tree: $and
2019-09-04T06:28:11.795+0000 D5 QUERY [conn81] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:11.795+0000 D5 QUERY [conn81] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:11.795+0000 D2 QUERY [conn81] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:28:11.795+0000 D3 STORAGE [conn81] WT begin_transaction for snapshot id 2553
2019-09-04T06:28:11.795+0000 D3 STORAGE [conn81] WT rollback_transaction for snapshot id 2553
2019-09-04T06:28:11.795+0000 I COMMAND [conn81] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578488, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578488, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:28:11.835+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:11.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.935+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:11.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:11.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:11.999+0000 I COMMAND [conn82] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:28:11.999+0000 D1 - [conn82] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:28:11.999+0000 W - [conn82] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:12.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:12.015+0000 I - [conn82] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b
":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : 
"7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- 
END BACKTRACE -----
2019-09-04T06:28:12.016+0000 D1 COMMAND [conn82] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:12.016+0000 D1 - [conn82] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:28:12.016+0000 W - [conn82] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:12.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:12.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:12.035+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:12.036+0000 I - [conn82] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:12.036+0000 W COMMAND [conn82] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:12.036+0000 I COMMAND [conn82] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578452, 1), signature: { hash: BinData(0, CB2EE64588C44BE37DA7454B6059CFFFB3ABC1AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:28:12.036+0000 D2 NETWORK [conn82] Session from 10.108.2.62:53342 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:12.036+0000 I NETWORK [conn82] end connection 10.108.2.62:53342 (81 connections now open) 2019-09-04T06:28:12.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.135+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:12.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:12.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:12.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:12.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:12.230+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, 
wallTime: new Date(1567578488376) } 2019-09-04T06:28:12.230+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:12.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:12.235+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:12.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.335+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:12.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.388+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:12.388+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:12.388+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578488, 2) 2019-09-04T06:28:12.388+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2570 2019-09-04T06:28:12.388+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:12.388+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:12.388+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2570 2019-09-04T06:28:12.389+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2573 2019-09-04T06:28:12.389+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2573 2019-09-04T06:28:12.389+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578488, 2), t: 1 }({ ts: Timestamp(1567578488, 2), t: 1 }) 2019-09-04T06:28:12.435+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:12.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.490+0000 I 
COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.536+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:12.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.636+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:12.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.736+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:12.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:28:12.836+0000 D2 REPL_HB [replexec-2] Sending heartbeat (requestId: 162) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:12.836+0000 D3 EXECUTOR [replexec-2] Scheduling remote command request: RemoteCommand 162 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:22.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:12.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:12.836+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:12.836+0000 D2 ASIO [Replication] Request 162 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, 
primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } 2019-09-04T06:28:12.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:12.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:12.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 162) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } 2019-09-04T06:28:12.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:12.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:28:20.912+0000 2019-09-04T06:28:12.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:28:24.168+0000 2019-09-04T06:28:12.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:12.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 
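
The [replexec-*] traffic above is the steady-state heartbeat loop: requests 162 and 163 go out to the other two configrs members roughly every 2 seconds, and each response from the primary postpones the election timeout. The member states, optimes, and sync sources these heartbeats carry can be read from outside with replSetGetStatus; a sketch follows, assuming direct access to the node this log came from (the directConnection flag needs a reasonably recent pymongo):

    # Sketch only: view member state/optimes as carried by the heartbeats above.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        # stateStr is PRIMARY/SECONDARY/...; optime.ts mirrors the
        # opTime/durableOpTime fields in the heartbeat responses logged here.
        print(m["name"], m["stateStr"], m.get("optime", {}).get("ts"))
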
2019-09-04T06:28:12.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:12.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:14.836Z 2019-09-04T06:28:12.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:12.837+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:28:12.837+0000 D2 REPL_HB [replexec-2] Sending heartbeat (requestId: 163) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:12.837+0000 D3 EXECUTOR [replexec-2] Scheduling remote command request: RemoteCommand 163 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:22.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:12.837+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:12.837+0000 D2 ASIO [Replication] Request 163 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } 2019-09-04T06:28:12.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:12.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:12.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 163) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: 
{ ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } 2019-09-04T06:28:12.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:12.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:14.837Z 2019-09-04T06:28:12.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:12.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.936+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:12.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:12.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:12.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:13.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.036+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:13.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:13.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:13.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 
8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:13.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:13.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376) } 2019-09-04T06:28:13.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.136+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:13.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.199+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.199+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:13.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:13.236+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:13.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.336+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:13.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.388+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:13.388+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:13.388+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578488, 2) 2019-09-04T06:28:13.388+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2600 2019-09-04T06:28:13.388+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:13.388+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:13.389+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2600 2019-09-04T06:28:13.389+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2603 2019-09-04T06:28:13.389+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2603 2019-09-04T06:28:13.389+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578488, 2), t: 1 }({ ts: Timestamp(1567578488, 2), t: 1 }) 2019-09-04T06:28:13.389+0000 D2 ASIO [RS] Request 156 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpApplied: { ts: Timestamp(1567578488, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } 2019-09-04T06:28:13.389+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], 
id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpApplied: { ts: Timestamp(1567578488, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:13.389+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:13.389+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:13.389+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:24.168+0000 2019-09-04T06:28:13.389+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:23.542+0000 2019-09-04T06:28:13.389+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:13.389+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:13.389+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 164 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:23.389+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578488, 2), t: 1 } } 2019-09-04T06:28:13.390+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:38.389+0000 2019-09-04T06:28:13.402+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:13.402+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:13.402+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 165 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:43.402+0000 
cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:13.402+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:38.389+0000 2019-09-04T06:28:13.402+0000 D2 ASIO [RS] Request 165 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } 2019-09-04T06:28:13.402+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:13.402+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:13.402+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:38.389+0000 2019-09-04T06:28:13.437+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:13.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 
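
Up to this point the section shows the set's idle steady state: the oplog fetcher on this node tails cmodb804's oplog (cursor 2779728788818727477, request 164, empty getMore batches), the reporter pushes replSetUpdatePosition upstream with identical durable and applied opTimes ({ ts: Timestamp(1567578488, 2), t: 1 }) for all three members, and each empty batch cancels and reschedules the election timeout. A minimal sketch of how to read the same state interactively, assuming mongo shell access to any configrs member; the helpers are stock shell, and only the namespace local.oplog.rs comes from the records above:

    // Per-member replication state and applied opTime, the same numbers the
    // replSetHeartbeat/replSetUpdatePosition records above carry.
    rs.status().members.forEach(function(m) {
        print(m.name, m.stateStr, tojson(m.optime));
    });
    // Newest entry of the capped oplog that the fetcher is tailing.
    db.getSiblingDB("local").oplog.rs.find().sort({ $natural: -1 }).limit(1);
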
2019-09-04T06:28:13.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.537+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:13.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.637+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:13.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.699+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.699+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.737+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:13.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.837+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:13.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.937+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:13.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:13.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:13.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:14.000+0000 D3 STORAGE [ftdc] setting 
timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:14.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:14.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:14.037+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:14.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:14.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:14.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:14.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:14.138+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:14.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:14.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:14.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:14.231+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:14.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:14.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:14.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376) } 2019-09-04T06:28:14.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 8E61A86A03C977C94C10D7297A219471A75130CF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:14.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:14.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:14.238+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:14.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:14.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:14.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:14.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:14.338+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:14.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:14.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:14.389+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:14.389+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:14.389+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578488, 2) 2019-09-04T06:28:14.389+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2630 2019-09-04T06:28:14.389+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:14.389+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:14.389+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2630 2019-09-04T06:28:14.389+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2633 2019-09-04T06:28:14.389+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2633 2019-09-04T06:28:14.389+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578488, 2), t: 1 }({ ts: Timestamp(1567578488, 2), t: 1 }) 2019-09-04T06:28:14.418+0000 I NETWORK [listener] connection accepted from 10.108.2.62:53360 #116 (82 connections now open) 2019-09-04T06:28:14.418+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:14.418+0000 D2 COMMAND [conn116] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:14.418+0000 I NETWORK [conn116] received client metadata from 10.108.2.62:53360 conn116: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:14.418+0000 I COMMAND [conn116] command admin.$cmd command: isMaster { isMaster: 1, client: { 
driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:14.436+0000 I COMMAND [conn83] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578462, 1), signature: { hash: BinData(0, B297735DD8383F1BAD6EB580CC3032FDC0BD6BA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:14.436+0000 D1 - [conn83] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:14.436+0000 W - [conn83] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:14.438+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:14.453+0000 I - [conn83] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:14.453+0000 D1 COMMAND [conn83] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578462, 1), signature: { hash: BinData(0, B297735DD8383F1BAD6EB580CC3032FDC0BD6BA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:14.453+0000 D1 - [conn83] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:14.453+0000 W - [conn83] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:14.473+0000 I - [conn83] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 
0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { 
"b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : 
"/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:14.473+0000 W COMMAND [conn83] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:14.473+0000 I COMMAND [conn83] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, 
2019-09-04T06:28:14.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:14.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:14.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:14.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:14.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:14.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:14.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:14.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:14.538+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:14.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:14.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:14.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:14.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:14.638+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:14.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:14.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:14.738+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:14.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:14.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:14.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:14.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:14.813+0000 I NETWORK [listener] connection accepted from 10.108.2.46:40886 #117 (82 connections now open)
2019-09-04T06:28:14.813+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:28:14.813+0000 D2 COMMAND [conn117] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:28:14.813+0000 I NETWORK [conn117] received client metadata from 10.108.2.46:40886 conn117: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:28:14.813+0000 I COMMAND [conn117] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:28:14.819+0000 I NETWORK [listener] connection accepted from 10.108.2.51:59068 #118 (83 connections now open)
2019-09-04T06:28:14.819+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:28:14.819+0000 D2 COMMAND [conn118] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:28:14.819+0000 I NETWORK [conn118] received client metadata from 10.108.2.51:59068 conn118: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:28:14.819+0000 I COMMAND [conn118] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:28:14.822+0000 I NETWORK [listener] connection accepted from 10.108.2.49:53292 #119 (84 connections now open)
2019-09-04T06:28:14.822+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:28:14.822+0000 D2 COMMAND [conn119] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:14.822+0000 I NETWORK [conn119] received client metadata from 10.108.2.49:53292 conn119: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:14.822+0000 I COMMAND [conn119] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:14.828+0000 I COMMAND [conn86] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 59D37A2B4CECE73D530761BA1DED9CE509A0BE65), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:14.828+0000 I COMMAND [conn84] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:14.828+0000 D1 - [conn84] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:14.828+0000 W - [conn84] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:14.828+0000 D1 - [conn86] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:14.831+0000 W - [conn86] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:14.828+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36458 #120 (85 connections now open) 2019-09-04T06:28:14.831+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:14.831+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46544 #121 (86 connections now open) 2019-09-04T06:28:14.831+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:14.831+0000 I NETWORK [listener] connection accepted from 10.108.2.53:50624 #122 (87 connections now open) 2019-09-04T06:28:14.831+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:14.828+0000 I COMMAND [conn87] Command on 
2019-09-04T06:28:14.831+0000 D1 - [conn87] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:28:14.831+0000 W - [conn87] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:14.828+0000 I COMMAND [conn85] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578463, 1), signature: { hash: BinData(0, C1B62A1C3071B6FFFBA1E3E4221F429C7BC6B94F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:28:14.831+0000 D1 - [conn85] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:28:14.831+0000 W - [conn85] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:14.831+0000 D2 COMMAND [conn122] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:28:14.831+0000 I NETWORK [conn122] received client metadata from 10.108.2.53:50624 conn122: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:28:14.831+0000 I COMMAND [conn122] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:28:14.831+0000 D2 COMMAND [conn121] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:28:14.831+0000 I NETWORK [conn121] received client metadata from 10.108.2.64:46544 conn121: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:28:14.831+0000 I COMMAND [conn121] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:28:14.832+0000 D2 COMMAND [conn120] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:28:14.832+0000 I NETWORK [conn120] received client metadata from 10.108.2.45:36458 conn120: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:28:14.832+0000 I COMMAND [conn120] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:28:14.835+0000 I COMMAND [conn27] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578460, 1), signature: { hash: BinData(0, 6FADD4E1F9FE163B4F89453E13CBEE9116958205), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:28:14.836+0000 D1 - [conn27] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:28:14.836+0000 W - [conn27] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:14.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:28:14.836+0000 D3 REPL [replexec-2] memberData lastupdate is: 2019-09-04T06:28:13.061+0000
2019-09-04T06:28:14.836+0000 D3 REPL [replexec-2] memberData lastupdate is: 2019-09-04T06:28:14.231+0000
2019-09-04T06:28:14.836+0000 D3 REPL [replexec-2] stalest member MemberId(0) date: 2019-09-04T06:28:13.061+0000
2019-09-04T06:28:14.836+0000 D3 REPL [replexec-2] scheduling next check at 2019-09-04T06:28:23.061+0000
2019-09-04T06:28:14.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:28:14.836+0000 D2 REPL_HB [replexec-2] Sending heartbeat (requestId: 166) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:14.836+0000 D3 EXECUTOR [replexec-2] Scheduling remote command request: RemoteCommand 166 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:24.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:14.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:14.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:14.836+0000 D2 ASIO [Replication] Request 166 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) }
2019-09-04T06:28:14.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:28:14.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:14.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 166) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) }
2019-09-04T06:28:14.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:28:14.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:28:23.542+0000
2019-09-04T06:28:14.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:28:24.894+0000
2019-09-04T06:28:14.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:28:14.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:16.836Z
2019-09-04T06:28:14.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:14.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:14.837+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:14.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:14.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 167) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:14.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 167 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:24.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:14.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:14.837+0000 D2 ASIO [Replication] Request 167 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) }
2019-09-04T06:28:14.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:14.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:14.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 167) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), opTime: { ts: Timestamp(1567578488, 2), t: 1 }, wallTime: new Date(1567578488376), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578489, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578488, 2) }
2019-09-04T06:28:14.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:28:14.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:16.837Z
2019-09-04T06:28:14.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:14.838+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:14.839+0000 I COMMAND [conn24] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578455, 1), signature: { hash: BinData(0, B608BC040ACD32DAE9997A8CBB1A717399CC35A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:28:14.839+0000 D1 - [conn24] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:28:14.839+0000 W - [conn24] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:14.839+0000 I COMMAND [conn88] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 59D37A2B4CECE73D530761BA1DED9CE509A0BE65), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:28:14.839+0000 D1 - [conn88] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:28:14.839+0000 W - [conn88] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:14.839+0000 I COMMAND [conn30] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578455, 1), signature: { hash: BinData(0, B608BC040ACD32DAE9997A8CBB1A717399CC35A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:28:14.839+0000 D1 - [conn30] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:28:14.839+0000 W - [conn30] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:14.840+0000 I COMMAND [conn36] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578462, 1), signature: { hash: BinData(0, B297735DD8383F1BAD6EB580CC3032FDC0BD6BA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:28:14.840+0000 D1 - [conn36] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:28:14.841+0000 W - [conn36] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:14.842+0000 D2 ASIO [RS] Request 164 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578494, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578494840), o: { $v: 1, $set: { ping: new Date(1567578494837), up: 2395 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpApplied: { ts: Timestamp(1567578494, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578494, 1) }
2019-09-04T06:28:14.842+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578494, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578494840), o: { $v: 1, $set: { ping: new Date(1567578494837), up: 2395 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpApplied: { ts: Timestamp(1567578494, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578488, 2), $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578494, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:14.842+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:14.842+0000 I COMMAND [conn35] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:28:14.843+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578494, 1) and ending at ts: Timestamp(1567578494, 1)
2019-09-04T06:28:14.843+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:24.894+0000
2019-09-04T06:28:14.843+0000 D1 - [conn35] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:28:14.843+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:25.196+0000
2019-09-04T06:28:14.843+0000 W - [conn35] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:14.843+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578494, 1), t: 1 }
2019-09-04T06:28:14.843+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:28:14.843+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:14.843+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:14.843+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:14.843+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578488, 2)
2019-09-04T06:28:14.843+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2652
2019-09-04T06:28:14.843+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:14.843+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:14.843+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2652
2019-09-04T06:28:14.843+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:14.843+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:14.843+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578488, 2)
2019-09-04T06:28:14.843+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2655
2019-09-04T06:28:14.843+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:14.843+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:14.843+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2655
2019-09-04T06:28:14.843+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:28:14.843+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578494, 1) }
2019-09-04T06:28:14.843+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2634
2019-09-04T06:28:14.843+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2634
2019-09-04T06:28:14.843+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2658
2019-09-04T06:28:14.843+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2658
2019-09-04T06:28:14.843+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:14.843+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 2660
2019-09-04T06:28:14.843+0000 D4 STORAGE [repl-writer-worker-0] inserting record with timestamp Timestamp(1567578494, 1)
2019-09-04T06:28:14.843+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578494, 1)
2019-09-04T06:28:14.843+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 2660
2019-09-04T06:28:14.843+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:14.843+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:28:14.843+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2659
2019-09-04T06:28:14.843+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2659
2019-09-04T06:28:14.843+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2662
2019-09-04T06:28:14.843+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2662
2019-09-04T06:28:14.843+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578494, 1), t: 1 }({ ts: Timestamp(1567578494, 1), t: 1 })
2019-09-04T06:28:14.843+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578494, 1)
2019-09-04T06:28:14.843+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2663
2019-09-04T06:28:14.843+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578494, 1) } } ] } sort: {} projection: {}
2019-09-04T06:28:14.843+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:14.843+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:28:14.845+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578494, 1) Sort: {} Proj: {} =============================
2019-09-04T06:28:14.846+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:14.846+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:14.846+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:14.846+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578494, 1) || First: notFirst: full path: ts
2019-09-04T06:28:14.846+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:14.846+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578494, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:14.846+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:14.846+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:28:14.847+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:28:14.847+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:14.847+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:14.847+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:14.847+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:14.847+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:14.847+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:14.847+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578494, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:28:14.847+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:14.847+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:14.847+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:14.847+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578494, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:14.847+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:14.847+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578494, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:14.847+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2663
2019-09-04T06:28:14.847+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:14.847+0000 D3 STORAGE [repl-writer-worker-15] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:14.847+0000 D3 REPL [repl-writer-worker-15] applying op: { ts: Timestamp(1567578494, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578494840), o: { $v: 1, $set: { ping: new Date(1567578494837), up: 2395 } } }, oplog application mode: Secondary
2019-09-04T06:28:14.847+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578494, 1)
2019-09-04T06:28:14.847+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 2666
2019-09-04T06:28:14.847+0000 D2 QUERY [repl-writer-worker-15] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:28:14.847+0000 D4 WRITE [repl-writer-worker-15] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:28:14.848+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 2666
2019-09-04T06:28:14.848+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:14.848+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578494, 1), t: 1 }({ ts: Timestamp(1567578494, 1), t: 1 })
2019-09-04T06:28:14.848+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578494, 1)
2019-09-04T06:28:14.848+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2665
2019-09-04T06:28:14.848+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:28:14.848+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:14.848+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:28:14.848+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:14.848+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:14.848+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:28:14.848+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2665
2019-09-04T06:28:14.848+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578494, 1)
2019-09-04T06:28:14.848+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2669
2019-09-04T06:28:14.848+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2669
2019-09-04T06:28:14.848+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578494, 1), t: 1 }({ ts: Timestamp(1567578494, 1), t: 1 })
2019-09-04T06:28:14.846+0000 I - [conn84] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
mongod(+0x10FBF24) [0x56174a083f24]
mongod(+0x10FCE0E) [0x56174a084e0e]
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:28:14.849+0000 D1 COMMAND [conn84] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:14.849+0000 D1 - [conn84] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:28:14.849+0000 W - [conn84] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:14.848+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:14.849+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:14.849+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 168 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:44.849+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578488, 2), t: 1 }, lastCommittedWall: new Date(1567578488376), lastOpVisible: { ts: Timestamp(1567578488, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:14.849+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:44.848+0000
2019-09-04T06:28:14.843+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578494, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f597e02d1a496712d7192'), operName: "", parentOperId: "5d6f597e02d1a496712d718f" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 59D2E66C9D2BDEE5314F4BED0A1B1AF23AE759E0), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578494, 1), t: 1 } }, $db: "config" }
2019-09-04T06:28:14.849+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f597e02d1a496712d718f|5d6f597e02d1a496712d7192
2019-09-04T06:28:14.849+0000 D1 REPL [conn21] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1567578494, 1), t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578488, 2), t: 1 }
2019-09-04T06:28:14.849+0000 D3 REPL [conn21] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:44.859+0000
2019-09-04T06:28:14.850+0000 D2 ASIO [RS] Request 168 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578494, 1), $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578494, 1) }
2019-09-04T06:28:14.850+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578494, 1), $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578494, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:14.850+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:14.850+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:44.850+0000
2019-09-04T06:28:14.845+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578494, 1), t: 1 }
2019-09-04T06:28:14.852+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 169 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:24.852+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578488, 2), t: 1 } }
2019-09-04T06:28:14.853+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:44.850+0000
2019-09-04T06:28:14.853+0000 D2 ASIO [RS] Request 169 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpApplied: { ts: Timestamp(1567578494, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578494, 1), $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578494, 1) }
2019-09-04T06:28:14.853+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:14.853+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:28:14.853+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.853+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.853+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578489, 1)
2019-09-04T06:28:14.853+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:25.196+0000
2019-09-04T06:28:14.853+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:25.647+0000
2019-09-04T06:28:14.853+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 170 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:24.853+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578494, 1), t: 1 } }
2019-09-04T06:28:14.853+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:44.850+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn98] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn98] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.768+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn91] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn91] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.645+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn89] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn89] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:16.417+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn100] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn100] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.595+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn102] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn102] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:29.874+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn97] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn97] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.670+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn96] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn96] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn110] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn110] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.881+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn111] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn111] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.896+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn112] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn112] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.921+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn115] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn115] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.292+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn114] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn114] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.488+0000
2019-09-04T06:28:14.853+0000 D3 REPL [conn21] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.853+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578494, 1), t: 1 } } }
2019-09-04T06:28:14.853+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:28:14.853+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578494, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f597e02d1a496712d7192'), operName: "", parentOperId: "5d6f597e02d1a496712d718f" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 59D2E66C9D2BDEE5314F4BED0A1B1AF23AE759E0), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578494, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578494, 1)
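[Annotation] conn21's find above only proceeds once the 'committed' snapshot reaches its requested afterOpTime; the snapshot notification at 06:28:14.853 unblocks it well within its 30-second budget. From a driver the same read looks roughly like this (pymongo sketch; the hostname is an assumption, and the afterOpTime field is attached internally by mongos rather than exposed by the driver):

    # Sketch only: the majority-read-concern lookup conn21 executes above.
    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    settings = client.config.get_collection(
        "settings", read_concern=ReadConcern("majority"))

    # maxTimeMS bounds the server-side wait for a committed snapshot.
    # Returns None here, since the next entry shows the collection is absent.
    doc = settings.find_one({"_id": "chunksize"}, max_time_ms=30000)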
2019-09-04T06:28:14.853+0000 D2 QUERY [conn21] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1
2019-09-04T06:28:14.853+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578494, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f597e02d1a496712d7192'), operName: "", parentOperId: "5d6f597e02d1a496712d718f" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 59D2E66C9D2BDEE5314F4BED0A1B1AF23AE759E0), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578494, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 4ms
2019-09-04T06:28:14.854+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:14.854+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:14.855+0000 D3 REPL [conn103] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.855+0000 D3 REPL [conn103] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.919+0000
2019-09-04T06:28:14.855+0000 D3 REPL [conn108] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.855+0000 D3 REPL [conn108] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.879+0000
2019-09-04T06:28:14.855+0000 D3 REPL [conn109] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.855+0000 D3 REPL [conn109] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.877+0000
2019-09-04T06:28:14.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:14.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:14.856+0000 D3 REPL [conn107] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.856+0000 D3 REPL [conn107] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.871+0000
2019-09-04T06:28:14.857+0000 D3 REPL [conn92] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.857+0000 D3 REPL [conn92] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.664+0000
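[Annotation] The EOF plan above means config.settings holds no "chunksize" document, so the balancer falls back to the default chunk size of 64 MB. If a non-default size were wanted, the documented approach is to upsert that settings document through mongos, e.g. (sketch; the mongos address and the 32 MB value are placeholders):

    # Sketch only: create the config.settings document whose absence
    # produced the EOF plan above. Run via mongos; values are examples.
    from pymongo import MongoClient

    mongos = MongoClient("mongodb://mongos.togewa.com:27017/")  # assumed host
    mongos.config.settings.update_one(
        {"_id": "chunksize"},
        {"$set": {"value": 32}},  # chunk size in MB
        upsert=True)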
2019-09-04T06:28:14.857+0000 D3 REPL [conn93] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.857+0000 D3 REPL [conn93] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000
2019-09-04T06:28:14.857+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:28:14.857+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:14.857+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:14.857+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 171 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:44.857+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, durableWallTime: new Date(1567578488376), appliedOpTime: { ts: Timestamp(1567578488, 2), t: 1 }, appliedWallTime: new Date(1567578488376), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:14.857+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:44.850+0000
2019-09-04T06:28:14.857+0000 D2 ASIO [RS] Request 171 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578494, 1), $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578494, 1) }
2019-09-04T06:28:14.857+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578494, 1), $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578494, 1) } target: cmodb804.togewa.com:27019
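[Annotation] The replSetUpdatePosition payloads in Requests 168 and 171 carry each member's durable and applied optimes upstream: member 1's optimes have just advanced to Timestamp(1567578494, 1) while members 0 and 2 still sit at Timestamp(1567578488, 2). The same per-member optimes can be read off replSetGetStatus (pymongo sketch; host assumed):

    # Sketch only: inspect the per-member optimes that the reporter
    # sends upstream in the replSetUpdatePosition commands above.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)  # talk to this member only
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        print(m["name"], m["stateStr"], m.get("optime"), m.get("optimeDurable"))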
2019-09-04T06:28:14.857+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:14.857+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:44.850+0000
2019-09-04T06:28:14.858+0000 D3 REPL [conn94] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.858+0000 D3 REPL [conn94] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000
2019-09-04T06:28:14.858+0000 D3 REPL [conn95] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.859+0000 D3 REPL [conn95] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000
2019-09-04T06:28:14.860+0000 D3 REPL [conn99] Got notified of new snapshot: { ts: Timestamp(1567578494, 1), t: 1 }, 2019-09-04T06:28:14.840+0000
2019-09-04T06:28:14.860+0000 D3 REPL [conn99] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.054+0000
2019-09-04T06:28:14.863+0000 I - [conn36] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:14.863+0000 D1 COMMAND [conn36] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578462, 1), signature: { hash: BinData(0, B297735DD8383F1BAD6EB580CC3032FDC0BD6BA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:14.863+0000 D1 - [conn36] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:14.863+0000 W - [conn36] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:14.880+0000 I - [conn27] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 
2019-09-04T06:28:14.880+0000 D1 COMMAND [conn27] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578460, 1), signature: { hash: BinData(0, 6FADD4E1F9FE163B4F89453E13CBEE9116958205), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:14.880+0000 D1 - [conn27] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:28:14.880+0000 W - [conn27] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:14.896+0000 D1 COMMAND [conn88] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 59D37A2B4CECE73D530761BA1DED9CE509A0BE65), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:14.896+0000 D1 - [conn88] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:28:14.896+0000 W - [conn88] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:14.913+0000 D1 COMMAND [conn30] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578455, 1), signature: { hash: BinData(0, B608BC040ACD32DAE9997A8CBB1A717399CC35A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:14.913+0000 D1 - [conn30] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:28:14.913+0000 W - [conn30] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:14.930+0000 I - [conn24] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
}, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:14.930+0000 D1 COMMAND [conn24] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578455, 1), signature: { hash: BinData(0, B608BC040ACD32DAE9997A8CBB1A717399CC35A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:14.930+0000 D1 - [conn24] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:14.930+0000 W - 
[conn24] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:14.938+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:14.949+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578494, 1) 2019-09-04T06:28:14.950+0000 I - [conn88] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:14.950+0000 W COMMAND [conn88] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:14.950+0000 I COMMAND [conn88] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 59D37A2B4CECE73D530761BA1DED9CE509A0BE65), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30073ms 2019-09-04T06:28:14.950+0000 D2 NETWORK [conn88] Session from 10.108.2.51:59050 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:14.950+0000 I NETWORK [conn88] end connection 10.108.2.51:59050 (86 connections now open) 2019-09-04T06:28:14.966+0000 I - [conn35] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:14.967+0000 D1 COMMAND [conn35] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:14.967+0000 D1 - [conn35] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:14.967+0000 W - [conn35] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:14.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:14.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:14.986+0000 I - [conn30] 0x56174b707c81 
0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : 
"E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : 
"/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) 
[0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:14.986+0000 W COMMAND [conn30] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:14.986+0000 I COMMAND [conn30] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578455, 1), signature: { hash: BinData(0, B608BC040ACD32DAE9997A8CBB1A717399CC35A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30087ms 2019-09-04T06:28:14.986+0000 D2 NETWORK [conn30] Session from 10.108.2.47:56426 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:14.986+0000 I NETWORK [conn30] end connection 10.108.2.47:56426 (85 connections now open) 2019-09-04T06:28:14.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:14.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:14.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:14.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:15.003+0000 I - [conn85] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:15.003+0000 D1 COMMAND [conn85] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578463, 1), signature: { hash: BinData(0, C1B62A1C3071B6FFFBA1E3E4221F429C7BC6B94F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:15.003+0000 D1 - [conn85] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:15.003+0000 W - [conn85] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:15.011+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52044 #123 (86 connections now open) 2019-09-04T06:28:15.011+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:15.011+0000 D2 COMMAND [conn123] run command admin.$cmd { isMaster: 1, client: { driver: { 
name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:15.011+0000 I NETWORK [conn123] received client metadata from 10.108.2.58:52044 conn123: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:15.011+0000 I COMMAND [conn123] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:15.011+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50020 #124 (87 connections now open) 2019-09-04T06:28:15.012+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:15.012+0000 D2 COMMAND [conn123] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578493, 1), signature: { hash: BinData(0, C73D3AB40BC5B730663FB05640F6CBA1033C72E5), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:15.012+0000 D1 REPL [conn123] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578494, 1), t: 1 } 2019-09-04T06:28:15.012+0000 D3 REPL [conn123] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:15.012+0000 D2 COMMAND [conn124] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:15.012+0000 I NETWORK [conn124] received client metadata from 10.108.2.50:50020 conn124: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:15.012+0000 I COMMAND [conn124] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, 
maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:15.012+0000 D2 COMMAND [conn124] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 1), signature: { hash: BinData(0, 5E46B3B42F624DF9AB2FBC0649BD9C499C9A1173), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:15.012+0000 D1 REPL [conn124] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578494, 1), t: 1 } 2019-09-04T06:28:15.012+0000 D3 REPL [conn124] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:15.013+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.013+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.019+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.019+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.023+0000 I - [conn27] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:15.023+0000 W COMMAND [conn27] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:15.023+0000 I COMMAND [conn27] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578460, 1), signature: { hash: BinData(0, 6FADD4E1F9FE163B4F89453E13CBEE9116958205), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30060ms 2019-09-04T06:28:15.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.023+0000 D2 NETWORK [conn27] Session from 10.108.2.49:53258 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:15.023+0000 I NETWORK [conn27] end connection 10.108.2.49:53258 (86 connections now open) 2019-09-04T06:28:15.023+0000 D2 COMMAND [conn119] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578490, 1), signature: { hash: BinData(0, 2EEDB79AA71EFDCD4F74F12BA8B91BCD928A35AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:15.023+0000 D1 REPL [conn119] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578494, 1), t: 1 } 2019-09-04T06:28:15.023+0000 D3 REPL [conn119] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.033+0000 2019-09-04T06:28:15.025+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.025+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.025+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.025+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.030+0000 D2 COMMAND [conn120] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 1), signature: { hash: BinData(0, F4E758946912814F3BEF328731BD00334916A2A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:15.031+0000 D1 REPL [conn120] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578494, 1), t: 1 } 2019-09-04T06:28:15.031+0000 D3 REPL [conn120] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.040+0000 2019-09-04T06:28:15.038+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:15.040+0000 I - 
[conn87] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" 
: "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : 
"/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:15.040+0000 D1 COMMAND [conn87] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578455, 1), signature: { hash: BinData(0, B608BC040ACD32DAE9997A8CBB1A717399CC35A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:15.040+0000 D1 - [conn87] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:15.040+0000 W - [conn87] DBException thrown :: caused 
by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:15.060+0000 I - [conn84] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", 
"machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, 
"buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] 
mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:15.060+0000 W COMMAND [conn84] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:15.060+0000 I COMMAND [conn84] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30036ms 2019-09-04T06:28:15.060+0000 D2 NETWORK [conn84] Session from 10.108.2.44:38568 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:15.060+0000 I NETWORK [conn84] end connection 10.108.2.44:38568 (85 connections now open) 2019-09-04T06:28:15.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 59D2E66C9D2BDEE5314F4BED0A1B1AF23AE759E0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:15.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:15.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 59D2E66C9D2BDEE5314F4BED0A1B1AF23AE759E0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:15.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 59D2E66C9D2BDEE5314F4BED0A1B1AF23AE759E0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:15.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), opTime: { ts: Timestamp(1567578494, 1), t: 1 }, wallTime: new Date(1567578494840) } 2019-09-04T06:28:15.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 59D2E66C9D2BDEE5314F4BED0A1B1AF23AE759E0), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.080+0000 I - [conn36] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521
0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : 
"88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" 
: "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:15.080+0000 W COMMAND [conn36] Unable to 
gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:15.080+0000 I COMMAND [conn36] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578462, 1), signature: { hash: BinData(0, B297735DD8383F1BAD6EB580CC3032FDC0BD6BA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30038ms 2019-09-04T06:28:15.080+0000 D2 NETWORK [conn36] Session from 10.108.2.53:50592 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:15.080+0000 I NETWORK [conn36] end connection 10.108.2.53:50592 (84 connections now open) 2019-09-04T06:28:15.100+0000 I - [conn24] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8func
tionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : 
"084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:15.100+0000 W COMMAND [conn24] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:15.100+0000 I COMMAND [conn24] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578455, 1), signature: { hash: BinData(0, B608BC040ACD32DAE9997A8CBB1A717399CC35A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30106ms 2019-09-04T06:28:15.100+0000 D2 NETWORK [conn24] Session from 10.108.2.64:46504 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:15.100+0000 I NETWORK [conn24] end connection 10.108.2.64:46504 (83 connections now open) 2019-09-04T06:28:15.116+0000 I - [conn86] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:15.117+0000 D1 COMMAND [conn86] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 59D37A2B4CECE73D530761BA1DED9CE509A0BE65), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:15.117+0000 D1 - [conn86] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:15.117+0000 W - [conn86] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:15.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.132+0000 D2 COMMAND [conn18] run command 
admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:15.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:15.136+0000 I - [conn85] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
[duplicate backtrace omitted: stack frames, processInfo/somap, and symbolized frame list are byte-identical to the conn24 backtrace logged at 06:28:15.100 above]
----- END BACKTRACE -----
2019-09-04T06:28:15.136+0000 W COMMAND [conn85] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:28:15.136+0000 I COMMAND [conn85] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578463, 1), signature: { hash: BinData(0, C1B62A1C3071B6FFFBA1E3E4221F429C7BC6B94F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30191ms
2019-09-04T06:28:15.136+0000 D2 NETWORK [conn85] Session from 10.108.2.58:52024 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:28:15.136+0000 I NETWORK [conn85] end connection 10.108.2.58:52024 (82 connections now open)
2019-09-04T06:28:15.139+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:15.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:15.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:15.156+0000 I - [conn86] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
[duplicate backtrace omitted: stack frames, processInfo/somap, and symbolized frame list are byte-identical to the conn24 backtrace logged at 06:28:15.100 above]
----- END BACKTRACE -----
2019-09-04T06:28:15.156+0000 W COMMAND [conn86] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:28:15.156+0000 I COMMAND [conn86] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578459, 1), signature: { hash: BinData(0, 59D37A2B4CECE73D530761BA1DED9CE509A0BE65), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30304ms
2019-09-04T06:28:15.156+0000 D2 NETWORK [conn86] Session from 10.108.2.50:50000 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:28:15.156+0000 I NETWORK [conn86] end connection 10.108.2.50:50000 (81 connections now open)
2019-09-04T06:28:15.176+0000 I - [conn87] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
[duplicate backtrace omitted: stack frames, processInfo/somap, and symbolized frame list are byte-identical to the conn24 backtrace logged at 06:28:15.100 above]
----- END BACKTRACE -----
2019-09-04T06:28:15.176+0000 W COMMAND [conn87] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:28:15.176+0000 I COMMAND [conn87] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578455, 1), signature: { hash: BinData(0, B608BC040ACD32DAE9997A8CBB1A717399CC35A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30226ms
2019-09-04T06:28:15.176+0000 D2 NETWORK [conn87] Session from 10.108.2.46:40868 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:28:15.176+0000 I NETWORK [conn87] end connection 10.108.2.46:40868 (80 connections now open)
2019-09-04T06:28:15.196+0000 I - [conn35] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:15.196+0000 W COMMAND [conn35] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:15.196+0000 I COMMAND [conn35] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30140ms 2019-09-04T06:28:15.196+0000 D2 NETWORK [conn35] Session from 10.108.2.45:36430 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:15.196+0000 I NETWORK [conn35] end connection 10.108.2.45:36430 (79 connections now open) 2019-09-04T06:28:15.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:15.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:15.239+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:15.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.339+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:15.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.439+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:15.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.512+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.512+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.519+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.519+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: 
"admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.524+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.524+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.525+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.525+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.539+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:15.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.639+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:15.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.739+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:15.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.839+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:15.843+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:15.843+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:15.843+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578494, 1) 2019-09-04T06:28:15.843+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2710 2019-09-04T06:28:15.843+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:15.843+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:15.843+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2710 2019-09-04T06:28:15.849+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2713 2019-09-04T06:28:15.849+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2713 
2019-09-04T06:28:15.849+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578494, 1), t: 1 }({ ts: Timestamp(1567578494, 1), t: 1 }) 2019-09-04T06:28:15.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.940+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:15.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:15.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:15.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:16.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.040+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:16.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.069+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.140+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:16.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 59D2E66C9D2BDEE5314F4BED0A1B1AF23AE759E0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:16.231+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:16.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 
2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 59D2E66C9D2BDEE5314F4BED0A1B1AF23AE759E0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:16.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 59D2E66C9D2BDEE5314F4BED0A1B1AF23AE759E0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:16.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), opTime: { ts: Timestamp(1567578494, 1), t: 1 }, wallTime: new Date(1567578494840) } 2019-09-04T06:28:16.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 59D2E66C9D2BDEE5314F4BED0A1B1AF23AE759E0), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:16.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:16.240+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:16.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.281+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.281+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.312+0000 I NETWORK [listener] connection accepted from 10.20.102.80:61016 #125 (80 connections now open) 2019-09-04T06:28:16.312+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:16.313+0000 D2 COMMAND [conn125] run command admin.$cmd { isMaster: 1, client: { application: { name: "robo3t" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } 2019-09-04T06:28:16.313+0000 I NETWORK [conn125] received client metadata from 10.20.102.80:61016 conn125: { application: { name: "robo3t" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } } 2019-09-04T06:28:16.313+0000 I COMMAND [conn125] command admin.$cmd appName: "robo3t" command: isMaster { isMaster: 1, client: { application: { name: "robo3t" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } numYields:0 reslen:866 
locks:{} protocol:op_query 0ms 2019-09-04T06:28:16.322+0000 D2 COMMAND [conn125] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:16.322+0000 D1 ACCESS [conn125] Returning user dba_root@admin from cache 2019-09-04T06:28:16.322+0000 I COMMAND [conn125] command admin.$cmd appName: "robo3t" command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:394 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.331+0000 D2 COMMAND [conn125] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:28:16.331+0000 I COMMAND [conn125] command admin.$cmd appName: "robo3t" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:323 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.340+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:16.341+0000 D2 COMMAND [conn125] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:28:16.341+0000 D1 ACCESS [conn125] Returning user dba_root@admin from cache 2019-09-04T06:28:16.341+0000 I ACCESS [conn125] Successfully authenticated as principal dba_root on admin from client 10.20.102.80:61016 2019-09-04T06:28:16.341+0000 I COMMAND [conn125] command admin.$cmd appName: "robo3t" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:293 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.350+0000 D2 COMMAND [conn125] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:28:16.350+0000 I COMMAND [conn125] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.419+0000 I COMMAND [conn89] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:16.419+0000 D1 - [conn89] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:16.419+0000 W - [conn89] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:16.437+0000 I - [conn89] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:16.437+0000 D1 COMMAND [conn89] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: 
{ ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:16.437+0000 D1 - [conn89] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:16.437+0000 W - [conn89] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:16.440+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:16.458+0000 I - [conn89] ----- BEGIN BACKTRACE ----- [frame addresses, backtrace JSON, shared-library map, and symbolized frames identical to the conn87/conn35 traces above; elided] ----- END BACKTRACE ----- 2019-09-04T06:28:16.458+0000 W COMMAND [conn89] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:16.458+0000 I COMMAND [conn89] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578458, 1), signature: { hash: BinData(0, 1DEE64994A51BF344C0BF17F13DA1F5E64B3EBCD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:28:16.458+0000 D2 NETWORK [conn89] Session from 10.108.2.57:34146 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:16.458+0000 I NETWORK [conn89] end connection 10.108.2.57:34146 (79 connections now open) 2019-09-04T06:28:16.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.540+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:16.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: 
"admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.640+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:16.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.740+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:16.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.781+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.781+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:16.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 172) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:16.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 172 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:26.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:16.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:16.836+0000 D2 ASIO [Replication] Request 172 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), opTime: { ts: Timestamp(1567578494, 1), t: 1 }, wallTime: new Date(1567578494840), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578494, 1), $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578494, 1) } 2019-09-04T06:28:16.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, 
durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), opTime: { ts: Timestamp(1567578494, 1), t: 1 }, wallTime: new Date(1567578494840), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578494, 1), $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578494, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:16.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:28:16.836+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 172) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), opTime: { ts: Timestamp(1567578494, 1), t: 1 }, wallTime: new Date(1567578494840), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578494, 1), $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578494, 1) } 2019-09-04T06:28:16.836+0000 D4 ELECTION [replexec-2] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:16.836+0000 D4 REPL [replexec-2] Canceling election timeout callback at 2019-09-04T06:28:25.647+0000 2019-09-04T06:28:16.836+0000 D4 ELECTION [replexec-2] Scheduling election timeout callback at 2019-09-04T06:28:27.025+0000 2019-09-04T06:28:16.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:16.836+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:16.836+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:18.836Z 2019-09-04T06:28:16.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:16.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:16.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:16.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 173) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:16.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 173 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:26.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 
1, term: 1 } 2019-09-04T06:28:16.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:16.837+0000 D2 ASIO [Replication] Request 173 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), opTime: { ts: Timestamp(1567578494, 1), t: 1 }, wallTime: new Date(1567578494840), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578494, 1), $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578494, 1) } 2019-09-04T06:28:16.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), opTime: { ts: Timestamp(1567578494, 1), t: 1 }, wallTime: new Date(1567578494840), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578494, 1), $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578494, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:16.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:16.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 173) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), opTime: { ts: Timestamp(1567578494, 1), t: 1 }, wallTime: new Date(1567578494840), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578494, 1), $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578494, 1) } 2019-09-04T06:28:16.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:16.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:18.837Z 
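
The entries above capture one full heartbeat round trip: this member sends replSetHeartbeat to each peer roughly every two seconds, and a healthy response from the primary (state: 1) postpones the election timeout before the next heartbeat is scheduled. The same member states and optimes that these heartbeat payloads carry can be read with the replSetGetStatus admin command; a minimal mongo-shell sketch, assuming a connection to any member of configrs:

    // Hypothetical inspection snippet: replSetGetStatus returns the same
    // state/optime fields seen in the heartbeat responses logged above.
    var st = db.adminCommand({ replSetGetStatus: 1 });
    st.members.forEach(function (m) {
        print(m.name + "  " + m.stateStr + "  optime=" + tojson(m.optime));
    });
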
2019-09-04T06:28:16.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:16.841+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:16.843+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:16.843+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:16.843+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578494, 1) 2019-09-04T06:28:16.843+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2748 2019-09-04T06:28:16.843+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:16.843+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:16.843+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2748 2019-09-04T06:28:16.849+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2751 2019-09-04T06:28:16.849+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2751 2019-09-04T06:28:16.849+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578494, 1), t: 1 }({ ts: Timestamp(1567578494, 1), t: 1 }) 2019-09-04T06:28:16.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.941+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:16.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:16.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:16.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:17.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:17.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.041+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:17.061+0000 D2 
COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 59D2E66C9D2BDEE5314F4BED0A1B1AF23AE759E0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:17.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:17.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 59D2E66C9D2BDEE5314F4BED0A1B1AF23AE759E0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:17.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 59D2E66C9D2BDEE5314F4BED0A1B1AF23AE759E0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:17.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), opTime: { ts: Timestamp(1567578494, 1), t: 1 }, wallTime: new Date(1567578494840) } 2019-09-04T06:28:17.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 59D2E66C9D2BDEE5314F4BED0A1B1AF23AE759E0), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:17.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:17.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:17.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.141+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:17.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:17.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:17.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:17.241+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:17.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:17.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:17.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.341+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:17.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:17.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:17.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.440+0000 D2 ASIO [RS] Request 170 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578497, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578497438), o: { $v: 1, $set: { ping: new Date(1567578497437) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpApplied: { ts: Timestamp(1567578497, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578494, 1), $clusterTime: { clusterTime: Timestamp(1567578497, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 1) } 2019-09-04T06:28:17.440+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578497, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578497438), o: { $v: 1, $set: { ping: new Date(1567578497437) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpApplied: { ts: Timestamp(1567578497, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578494, 1), $clusterTime: { clusterTime: Timestamp(1567578497, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:17.440+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:17.440+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578497, 1) and ending at ts: Timestamp(1567578497, 1) 2019-09-04T06:28:17.440+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:27.025+0000 2019-09-04T06:28:17.440+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:27.944+0000 2019-09-04T06:28:17.440+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec 2019-09-04T06:28:17.440+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:17.440+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:17.440+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:17.440+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578494, 1) 2019-09-04T06:28:17.440+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2771 2019-09-04T06:28:17.440+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:17.440+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:17.440+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2771 2019-09-04T06:28:17.440+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:17.440+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:17.440+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578494, 1) 2019-09-04T06:28:17.440+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2774 2019-09-04T06:28:17.440+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:17.440+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:17.440+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2774 2019-09-04T06:28:17.440+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:28:17.440+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578497, 1) } 2019-09-04T06:28:17.440+0000 D3 
STORAGE [rsSync-0] WT begin_transaction for snapshot id 2752 2019-09-04T06:28:17.441+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2752 2019-09-04T06:28:17.441+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2777 2019-09-04T06:28:17.441+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2777 2019-09-04T06:28:17.441+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:17.441+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 2779 2019-09-04T06:28:17.440+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578497, 1), t: 1 } 2019-09-04T06:28:17.441+0000 D4 STORAGE [repl-writer-worker-1] inserting record with timestamp Timestamp(1567578497, 1) 2019-09-04T06:28:17.441+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578497, 1) 2019-09-04T06:28:17.441+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 2779 2019-09-04T06:28:17.441+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:17.441+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:28:17.441+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2778 2019-09-04T06:28:17.441+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2778 2019-09-04T06:28:17.441+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2781 2019-09-04T06:28:17.441+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2781 2019-09-04T06:28:17.441+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578497, 1), t: 1 }({ ts: Timestamp(1567578497, 1), t: 1 }) 2019-09-04T06:28:17.441+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578497, 1) 2019-09-04T06:28:17.441+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2782 2019-09-04T06:28:17.441+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578497, 1) } } ] } sort: {} projection: {} 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578497, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578497, 1) || First: notFirst: full path: ts 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
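
Request 170's nextBatch above carried a single op: "u" entry updating the ping field in config.lockpings, which the batcher hands off to a repl-writer worker. Such entries can be inspected directly on any member; a short mongo-shell sketch (collection and field names exactly as they appear in the fetched batch above):

    // Show the latest config.lockpings updates as they sit in the oplog.
    var oplog = db.getSiblingDB("local").oplog.rs;
    oplog.find({ ns: "config.lockpings", op: "u" })
         .sort({ $natural: -1 })   // newest first
         .limit(3)
         .forEach(printjson);
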
2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578497, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578497, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578497, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
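
The planner trace above also shows why every pass ends in a collection scan: local.replset.minvalid carries only the _id index, so neither $or branch (over t and ts) rates an indexed solution. That is harmless here, since minvalid typically holds a single document. One hedged way to reproduce the plan choice from the shell, with the Timestamp literal copied from the trace:

    // Expect winningPlan.stage == "COLLSCAN": only _id_ exists on minvalid.
    var minvalid = db.getSiblingDB("local").replset.minvalid;
    printjson(minvalid.find({
        $or: [ { t: { $lt: 1 } },
               { t: 1, ts: { $lt: Timestamp(1567578497, 1) } } ]
    }).explain().queryPlanner.winningPlan);
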
2019-09-04T06:28:17.441+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578497, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:17.441+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2782 2019-09-04T06:28:17.441+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:17.441+0000 D3 STORAGE [repl-writer-worker-5] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:17.441+0000 D3 REPL [repl-writer-worker-5] applying op: { ts: Timestamp(1567578497, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578497438), o: { $v: 1, $set: { ping: new Date(1567578497437) } } }, oplog application mode: Secondary 2019-09-04T06:28:17.441+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578497, 1) 2019-09-04T06:28:17.441+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 2784 2019-09-04T06:28:17.441+0000 D2 QUERY [repl-writer-worker-5] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" } 2019-09-04T06:28:17.441+0000 D4 WRITE [repl-writer-worker-5] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:28:17.441+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 2784 2019-09-04T06:28:17.441+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:17.441+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578497, 1), t: 1 }({ ts: Timestamp(1567578497, 1), t: 1 }) 2019-09-04T06:28:17.442+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578497, 1) 2019-09-04T06:28:17.442+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2783 2019-09-04T06:28:17.442+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:28:17.442+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:17.442+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:28:17.442+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:17.442+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:17.442+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:17.442+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2783 2019-09-04T06:28:17.442+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578497, 1) 2019-09-04T06:28:17.442+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2787 2019-09-04T06:28:17.442+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2787 2019-09-04T06:28:17.442+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578497, 1), t: 1 }({ ts: Timestamp(1567578497, 1), t: 1 }) 2019-09-04T06:28:17.442+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:17.442+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578497, 1), t: 1 }, appliedWallTime: new Date(1567578497438), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:17.442+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 174 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:47.442+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578497, 1), t: 1 }, appliedWallTime: new Date(1567578497438), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:17.442+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:47.442+0000 2019-09-04T06:28:17.442+0000 D2 ASIO [RS] Request 174 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 1), t: 1 }, lastCommittedWall: new Date(1567578497438), lastOpVisible: { ts: Timestamp(1567578497, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 1), $clusterTime: { clusterTime: Timestamp(1567578497, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 1) } 2019-09-04T06:28:17.442+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 1), t: 1 }, lastCommittedWall: new Date(1567578497438), lastOpVisible: { ts: Timestamp(1567578497, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 1), $clusterTime: { clusterTime: Timestamp(1567578497, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:17.442+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:17.442+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:47.442+0000 2019-09-04T06:28:17.442+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:17.443+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:28:17.443+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:17.443+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578497, 1), t: 1 }, durableWallTime: new Date(1567578497438), appliedOpTime: { ts: Timestamp(1567578497, 1), t: 1 }, appliedWallTime: new Date(1567578497438), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:17.443+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 175 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:47.443+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578497, 1), t: 1 }, durableWallTime: new Date(1567578497438), appliedOpTime: { ts: Timestamp(1567578497, 1), t: 1 }, appliedWallTime: new Date(1567578497438), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, 
appliedWallTime: new Date(1567578494840), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578494, 1), t: 1 }, lastCommittedWall: new Date(1567578494840), lastOpVisible: { ts: Timestamp(1567578494, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:17.443+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:47.443+0000 2019-09-04T06:28:17.443+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578497, 1), t: 1 } 2019-09-04T06:28:17.443+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 176 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:27.443+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578494, 1), t: 1 } } 2019-09-04T06:28:17.443+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:47.443+0000 2019-09-04T06:28:17.443+0000 D2 ASIO [RS] Request 175 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 1), t: 1 }, lastCommittedWall: new Date(1567578497438), lastOpVisible: { ts: Timestamp(1567578497, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 1), $clusterTime: { clusterTime: Timestamp(1567578497, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 1) } 2019-09-04T06:28:17.443+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 1), t: 1 }, lastCommittedWall: new Date(1567578497438), lastOpVisible: { ts: Timestamp(1567578497, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 1), $clusterTime: { clusterTime: Timestamp(1567578497, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:17.443+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:17.443+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:47.443+0000 2019-09-04T06:28:17.443+0000 D2 ASIO [RS] Request 176 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 1), t: 1 }, lastCommittedWall: new Date(1567578497438), lastOpVisible: { ts: Timestamp(1567578497, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578497, 1), t: 1 }, lastCommittedWall: new Date(1567578497438), lastOpApplied: { ts: Timestamp(1567578497, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: 
Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 1), $clusterTime: { clusterTime: Timestamp(1567578497, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 1) } 2019-09-04T06:28:17.443+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 1), t: 1 }, lastCommittedWall: new Date(1567578497438), lastOpVisible: { ts: Timestamp(1567578497, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578497, 1), t: 1 }, lastCommittedWall: new Date(1567578497438), lastOpApplied: { ts: Timestamp(1567578497, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 1), $clusterTime: { clusterTime: Timestamp(1567578497, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:17.443+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:17.443+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:17.443+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.443+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.443+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578492, 1) 2019-09-04T06:28:17.443+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:27.944+0000 2019-09-04T06:28:17.443+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:27.636+0000 2019-09-04T06:28:17.443+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 177 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:27.443+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578497, 1), t: 1 } } 2019-09-04T06:28:17.443+0000 D3 REPL [conn91] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.443+0000 D3 REPL [conn91] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.645+0000 2019-09-04T06:28:17.443+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:47.443+0000 2019-09-04T06:28:17.443+0000 D3 REPL [conn100] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.443+0000 D3 REPL [conn100] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.595+0000 2019-09-04T06:28:17.443+0000 D3 REPL [conn111] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.443+0000 D3 REPL [conn111] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:28:39.896+0000 2019-09-04T06:28:17.443+0000 D3 REPL [conn97] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.443+0000 D3 REPL [conn97] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.670+0000 2019-09-04T06:28:17.443+0000 D3 REPL [conn123] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.443+0000 D3 REPL [conn123] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:17.443+0000 D3 REPL [conn112] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.443+0000 D3 REPL [conn112] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.921+0000 2019-09-04T06:28:17.443+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:17.443+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:17.443+0000 D3 REPL [conn115] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.443+0000 D3 REPL [conn115] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.292+0000 2019-09-04T06:28:17.443+0000 D3 REPL [conn114] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.443+0000 D3 REPL [conn114] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.488+0000 2019-09-04T06:28:17.443+0000 D3 REPL [conn108] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn108] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.879+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn109] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn109] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.877+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn92] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn92] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.664+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn93] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn93] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn94] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn94] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn95] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn95] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn119] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn119] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.033+0000 2019-09-04T06:28:17.444+0000 D3 REPL 
[conn120] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn120] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.040+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn98] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn98] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.768+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn96] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn96] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn102] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn102] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:29.874+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn103] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn103] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.919+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn107] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn107] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.871+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn99] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn99] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.054+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn110] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn110] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.881+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn124] Got notified of new snapshot: { ts: Timestamp(1567578497, 1), t: 1 }, 2019-09-04T06:28:17.438+0000 2019-09-04T06:28:17.444+0000 D3 REPL [conn124] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:17.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:17.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:17.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:17.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:17.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.540+0000 D2 STORAGE [WTOplogJournalThread] No new 
oplog entries were made visible: Timestamp(1567578497, 1) 2019-09-04T06:28:17.543+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:17.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:17.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:17.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:17.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.643+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:17.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:17.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:17.697+0000 D2 ASIO [RS] Request 177 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578497, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578497695), o: { $v: 1, $set: { ping: new Date(1567578497688) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 1), t: 1 }, lastCommittedWall: new Date(1567578497438), lastOpVisible: { ts: Timestamp(1567578497, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578497, 1), t: 1 }, lastCommittedWall: new Date(1567578497438), lastOpApplied: { ts: Timestamp(1567578497, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 1), $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 2) } 2019-09-04T06:28:17.697+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578497, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578497695), o: { $v: 1, $set: { ping: new Date(1567578497688) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 1), t: 1 }, lastCommittedWall: new Date(1567578497438), lastOpVisible: { ts: Timestamp(1567578497, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578497, 1), t: 1 }, lastCommittedWall: new Date(1567578497438), lastOpApplied: { ts: Timestamp(1567578497, 2), t: 1 }, rbid: 1, primaryIndex: 0, 
syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 1), $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:17.697+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:17.697+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578497, 2) and ending at ts: Timestamp(1567578497, 2) 2019-09-04T06:28:17.697+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:27.636+0000 2019-09-04T06:28:17.697+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:29.081+0000 2019-09-04T06:28:17.697+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578497, 2), t: 1 } 2019-09-04T06:28:17.697+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:17.697+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000 2019-09-04T06:28:17.697+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:17.697+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:17.697+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578497, 1) 2019-09-04T06:28:17.697+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2799 2019-09-04T06:28:17.697+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:17.697+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:17.697+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2799 2019-09-04T06:28:17.697+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:17.697+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:28:17.697+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:17.697+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578497, 1) 2019-09-04T06:28:17.697+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2802 2019-09-04T06:28:17.697+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:17.697+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:17.697+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for 
snapshot id 2802 2019-09-04T06:28:17.697+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578497, 2) } 2019-09-04T06:28:17.697+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2788 2019-09-04T06:28:17.697+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2788 2019-09-04T06:28:17.697+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2805 2019-09-04T06:28:17.697+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2805 2019-09-04T06:28:17.698+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:17.698+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 2807 2019-09-04T06:28:17.698+0000 D4 STORAGE [repl-writer-worker-9] inserting record with timestamp Timestamp(1567578497, 2) 2019-09-04T06:28:17.698+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578497, 2) 2019-09-04T06:28:17.698+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 2807 2019-09-04T06:28:17.698+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:17.698+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:28:17.698+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2806 2019-09-04T06:28:17.698+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2806 2019-09-04T06:28:17.698+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2809 2019-09-04T06:28:17.698+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2809 2019-09-04T06:28:17.698+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578497, 2), t: 1 }({ ts: Timestamp(1567578497, 2), t: 1 }) 2019-09-04T06:28:17.698+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578497, 2) 2019-09-04T06:28:17.698+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2810 2019-09-04T06:28:17.698+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578497, 2) } } ] } sort: {} projection: {} 2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578497, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578497, 2) || First: notFirst: full path: ts 2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
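
After each applied batch the Reporter pushes replSetUpdatePosition to the sync source (requests 174/175 above), while the oplog fetcher keeps its getMore loop alive (requests 176/177, maxTimeMS: 5000). The fetch side can be loosely imitated from a legacy mongo shell with a tailable, awaitData cursor; a sketch, taking the last fetched optime from the log:

    // Roughly what the fetcher's getMore loop does: tail the oplog past
    // a known timestamp (legacy-shell cursor options).
    var last = Timestamp(1567578497, 2);
    var cur = db.getSiblingDB("local").oplog.rs
                .find({ ts: { $gt: last } })
                .addOption(DBQuery.Option.tailable)
                .addOption(DBQuery.Option.awaitData);
    while (cur.hasNext()) { printjson(cur.next()); }   // waits for new ops
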
2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578497, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578497, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578497, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
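The D5 QUERY entries above show the planner sub-planning the minvalid read ({ $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: ... } } ] } on local.replset.minvalid) and falling back to a collection scan for every branch, since that collection carries only the _id index. A minimal PyMongo sketch, illustrative only and not part of the log, that reproduces the same plan selection via explain(); the hostname and optime are taken from this deployment's log, and it assumes direct access to the member:

    # Illustrative sketch: same query shape as the sub-planned minvalid read above.
    from pymongo import MongoClient, ReadPreference
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
    local = client.get_database("local", read_preference=ReadPreference.SECONDARY_PREFERRED)

    query = {"$or": [{"t": {"$lt": 1}},
                     {"t": 1, "ts": {"$lt": Timestamp(1567578497, 2)}}]}

    plan = local["replset.minvalid"].find(query).explain()
    # With only the _id index available, the winning plan is a COLLSCAN,
    # matching the "Planner: outputting a collscan" entries above.
    print(plan["queryPlanner"]["winningPlan"])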
2019-09-04T06:28:17.698+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578497, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:17.698+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2810
2019-09-04T06:28:17.698+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:17.698+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:17.698+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578497, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578497695), o: { $v: 1, $set: { ping: new Date(1567578497688) } } }, oplog application mode: Secondary
2019-09-04T06:28:17.698+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578497, 2)
2019-09-04T06:28:17.698+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 2812
2019-09-04T06:28:17.698+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }
2019-09-04T06:28:17.698+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:28:17.698+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 2812
2019-09-04T06:28:17.699+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:17.699+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578497, 2), t: 1 }({ ts: Timestamp(1567578497, 2), t: 1 })
2019-09-04T06:28:17.699+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578497, 2)
2019-09-04T06:28:17.699+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2811
2019-09-04T06:28:17.699+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:28:17.699+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:17.699+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:28:17.699+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:17.699+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:17.699+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:28:17.699+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2811
2019-09-04T06:28:17.699+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578497, 2)
2019-09-04T06:28:17.699+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:17.699+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578497, 1), t: 1 }, durableWallTime: new Date(1567578497438), appliedOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, appliedWallTime: new Date(1567578497695), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 1), t: 1 }, lastCommittedWall: new Date(1567578497438), lastOpVisible: { ts: Timestamp(1567578497, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:17.699+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 178 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:47.699+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578497, 1), t: 1 }, durableWallTime: new Date(1567578497438), appliedOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, appliedWallTime: new Date(1567578497695), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 1), t: 1 }, lastCommittedWall: new Date(1567578497438), lastOpVisible: { ts: Timestamp(1567578497, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:17.699+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2815
2019-09-04T06:28:17.699+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2815
2019-09-04T06:28:17.699+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578497, 2), t: 1 }({ ts: Timestamp(1567578497, 2), t: 1 })
2019-09-04T06:28:17.699+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578497, 2), t: 1 }
2019-09-04T06:28:17.699+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 179 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:27.699+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578497, 1), t: 1 } }
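RemoteCommand 179 above is the oplog fetcher's getMore against the sync source's local.oplog.rs cursor, and the entry just applied (op: "u" on config.lockpings) shows the shape of the documents it returns. For reference, a minimal PyMongo sketch, illustrative only, that reads the newest entry of the same collection; the hostname comes from this log and direct access to the member is assumed:

    # Illustrative sketch: inspect the tail of the oplog the fetcher above is pulling.
    from pymongo import MongoClient, ReadPreference

    client = MongoClient("mongodb://cmodb804.togewa.com:27019", directConnection=True)
    oplog = client.get_database(
        "local", read_preference=ReadPreference.SECONDARY_PREFERRED)["oplog.rs"]

    # $natural descending returns the most recently written entry,
    # e.g. the config.lockpings update applied in the batch above.
    last = oplog.find().sort("$natural", -1).limit(1).next()
    print(last["ts"], last["op"], last.get("ns"))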
2019-09-04T06:28:17.699+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:47.699+0000
2019-09-04T06:28:17.699+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:47.699+0000
2019-09-04T06:28:17.699+0000 D2 ASIO [RS] Request 178 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 2), $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 2) }
2019-09-04T06:28:17.699+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 2), $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:17.700+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:17.700+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:47.699+0000
2019-09-04T06:28:17.700+0000 D2 ASIO [RS] Request 179 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpApplied: { ts: Timestamp(1567578497, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 2), $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 2) }
2019-09-04T06:28:17.700+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpApplied: { ts: Timestamp(1567578497, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 2), $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:17.700+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:17.700+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:28:17.700+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578492, 2)
2019-09-04T06:28:17.700+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:29.081+0000
2019-09-04T06:28:17.700+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:28.719+0000
2019-09-04T06:28:17.700+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 180 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:27.700+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578497, 2), t: 1 } }
2019-09-04T06:28:17.700+0000 D3 REPL [conn97] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:47.699+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn97] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.670+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn96] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn96] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn115] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn115] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.292+0000
2019-09-04T06:28:17.700+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:28:17.700+0000 D3 REPL [conn114] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn114] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.488+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn108] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn108] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.879+0000
2019-09-04T06:28:17.700+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:17.700+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
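The entries above show the commit point and stable optime advancing to { ts: Timestamp(1567578497, 2), t: 1 } and the waiting readers (conn91-conn124) being notified. The same per-member appliedOpTime/durableOpTime that the replSetUpdatePosition traffic carries is visible through replSetGetStatus; a minimal PyMongo sketch, illustrative only, with the hostname taken from this log:

    # Illustrative sketch: replSetGetStatus reports the optimes the
    # replSetUpdatePosition messages above are propagating.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
    status = client.admin.command("replSetGetStatus")

    for m in status["members"]:
        # e.g. cmodb802 PRIMARY / cmodb803 SECONDARY / cmodb804 SECONDARY
        print(m["name"], m["stateStr"], m["optime"]["ts"])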
2019-09-04T06:28:17.700+0000 D3 REPL [conn109] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn109] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.877+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn92] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn92] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.664+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn93] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn93] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn95] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn95] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn119] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn119] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.033+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn98] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn98] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.768+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn102] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn102] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:29.874+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn110] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn110] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.881+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn91] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn91] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.645+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn100] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn100] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.595+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn111] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn111] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.896+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn123] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn123] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn94] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn94] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn120] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn120] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.040+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn112] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn112] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.921+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn103] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.700+0000 D3 REPL [conn103] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.919+0000
2019-09-04T06:28:17.701+0000 D3 REPL [conn107] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.701+0000 D3 REPL [conn107] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.871+0000
2019-09-04T06:28:17.701+0000 D3 REPL [conn99] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.701+0000 D3 REPL [conn99] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.054+0000
2019-09-04T06:28:17.701+0000 D3 REPL [conn124] Got notified of new snapshot: { ts: Timestamp(1567578497, 2), t: 1 }, 2019-09-04T06:28:17.695+0000
2019-09-04T06:28:17.701+0000 D3 REPL [conn124] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000
2019-09-04T06:28:17.701+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:17.701+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), appliedOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, appliedWallTime: new Date(1567578497695), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:17.701+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 181 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:47.701+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), appliedOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, appliedWallTime: new Date(1567578497695), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, durableWallTime: new Date(1567578494840), appliedOpTime: { ts: Timestamp(1567578494, 1), t: 1 }, appliedWallTime: new Date(1567578494840), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:17.701+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:47.699+0000
2019-09-04T06:28:17.701+0000 D2 ASIO [RS] Request 181 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 2), $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 2) }
2019-09-04T06:28:17.701+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 2), $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:17.701+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:17.701+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:47.699+0000
2019-09-04T06:28:17.743+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:17.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:17.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:17.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:17.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:17.798+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578497, 2)
2019-09-04T06:28:17.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:17.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:17.843+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:17.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:17.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:17.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:17.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:17.943+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:17.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:17.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:17.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:17.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:17.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:17.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:18.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.043+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:18.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.130+0000 D2 COMMAND [conn20] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.130+0000 I COMMAND [conn20] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.143+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:18.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 94CE03391C7A75618FA90A787AE6914E32684345), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:18.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:28:18.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 94CE03391C7A75618FA90A787AE6914E32684345), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:18.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 94CE03391C7A75618FA90A787AE6914E32684345), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:18.230+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), opTime: { ts: Timestamp(1567578497, 2), t: 1 }, wallTime: new Date(1567578497695) }
2019-09-04T06:28:18.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 94CE03391C7A75618FA90A787AE6914E32684345), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:28:18.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:28:18.243+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:18.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.344+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:18.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.444+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:18.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.508+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:28:18.508+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:28:18.508+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:28:18.508+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:28:18.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.544+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:18.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
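The steady stream of isMaster commands above (one per connection, roughly every 500 ms) is standard driver and mongos topology monitoring of this secondary. A minimal PyMongo sketch, illustrative only, issuing the same command; the hostname is taken from this log:

    # Illustrative sketch: the same isMaster handshake the monitors above run.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
    hello = client.admin.command("isMaster")

    # On this node the response (reslen:907 above) identifies a configrs secondary.
    print(hello["setName"], hello["ismaster"], hello["secondary"])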
2019-09-04T06:28:18.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.644+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:18.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.697+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:18.697+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:18.697+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578497, 2)
2019-09-04T06:28:18.697+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2849
2019-09-04T06:28:18.697+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:18.697+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:18.697+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2849
2019-09-04T06:28:18.697+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/58--6194257481163143499 -> { numRecords: 1, dataSize: 236 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/71--6194257481163143499 -> { numRecords: 0, dataSize: 0 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/89--6194257481163143499 -> { numRecords: 0, dataSize: 0 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/82--6194257481163143499 -> { numRecords: 3, dataSize: 321 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/34--6194257481163143499 -> { numRecords: 0, dataSize: 0 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/50--6194257481163143499 -> { numRecords: 1, dataSize: 83 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:admin/collection/17--6194257481163143499 -> { numRecords: 1, dataSize: 677 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:local/collection/16--6194257481163143499 -> { numRecords: 1373, dataSize: 309568 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:admin/collection/22--6194257481163143499 -> { numRecords: 2, dataSize: 170 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/26--6194257481163143499 -> { numRecords: 8, dataSize: 2699 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/38--6194257481163143499 -> { numRecords: 1, dataSize: 124 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:local/collection/10--6194257481163143499 -> { numRecords: 1, dataSize: 848 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:local/collection/8--6194257481163143499 -> { numRecords: 1, dataSize: 41 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/28--6194257481163143499 -> { numRecords: 22, dataSize: 1828 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/42--6194257481163143499 -> { numRecords: 2, dataSize: 308 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/54--6194257481163143499 -> { numRecords: 1, dataSize: 145 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:local/collection/6--6194257481163143499 -> { numRecords: 1, dataSize: 60 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:local/collection/4--6194257481163143499 -> { numRecords: 1, dataSize: 80 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:local/collection/2--6194257481163143499 -> { numRecords: 1, dataSize: 71 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/75--6194257481163143499 -> { numRecords: 0, dataSize: 0 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:local/collection/0--6194257481163143499 -> { numRecords: 2, dataSize: 4163 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:admin/collection/20--6194257481163143499 -> { numRecords: 2, dataSize: 104 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:_mdb_catalog -> { numRecords: 23, dataSize: 13932 }
2019-09-04T06:28:18.698+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer flush took 229 µs
2019-09-04T06:28:18.699+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2852
2019-09-04T06:28:18.699+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2852
2019-09-04T06:28:18.699+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578497, 2), t: 1 }({ ts: Timestamp(1567578497, 2), t: 1 })
2019-09-04T06:28:18.748+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:18.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
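The WiredTigerSizeStorer::flush entries above persist the cached record counts and data sizes per table; table local/collection/16... backs local.oplog.rs (1373 records / 309568 bytes at this point). The same figures are observable through collStats; a minimal PyMongo sketch, illustrative only, with the hostname taken from this log:

    # Illustrative sketch: collStats surfaces the counts the size storer
    # entries above are flushing for the oplog's backing table.
    from pymongo import MongoClient, ReadPreference

    client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
    local = client.get_database("local", read_preference=ReadPreference.SECONDARY_PREFERRED)

    stats = local.command("collStats", "oplog.rs")
    # count/size track numRecords/dataSize; maxSize is the 1 GB cap (oplogSizeMB: 1024).
    print(stats["count"], stats["size"], stats.get("maxSize"))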
2019-09-04T06:28:18.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 182) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:18.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 182 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:28.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:18.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:18.836+0000 D2 ASIO [Replication] Request 182 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), opTime: { ts: Timestamp(1567578497, 2), t: 1 }, wallTime: new Date(1567578497695), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578497, 2), $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 2) }
2019-09-04T06:28:18.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), opTime: { ts: Timestamp(1567578497, 2), t: 1 }, wallTime: new Date(1567578497695), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578497, 2), $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:28:18.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:18.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 182) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), opTime: { ts: Timestamp(1567578497, 2), t: 1 }, wallTime: new Date(1567578497695), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578497, 2), $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 2) }
2019-09-04T06:28:18.836+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary
2019-09-04T06:28:18.836+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:28:28.719+0000
2019-09-04T06:28:18.836+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:28:30.202+0000
2019-09-04T06:28:18.836+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:28:18.836+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:28:18.836+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:18.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:20.836Z
2019-09-04T06:28:18.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:18.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:18.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 183) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:18.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 183 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:28.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:18.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:18.837+0000 D2 ASIO [Replication] Request 183 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), opTime: { ts: Timestamp(1567578497, 2), t: 1 }, wallTime: new Date(1567578497695), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 2), $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 2) }
2019-09-04T06:28:18.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), opTime: { ts: Timestamp(1567578497, 2), t: 1 }, wallTime: new Date(1567578497695), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 2), $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:18.837+0000 D3 EXECUTOR [replexec-2] Executing a task on behalf of pool replexec
2019-09-04T06:28:18.837+0000 D2 REPL_HB [replexec-2] Received response to heartbeat (requestId: 183) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), opTime: { ts: Timestamp(1567578497, 2), t: 1 }, wallTime: new Date(1567578497695), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 2), $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 2) }
2019-09-04T06:28:18.837+0000 D3 REPL [replexec-2] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:28:18.837+0000 D2 REPL_HB [replexec-2] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:20.837Z
2019-09-04T06:28:18.837+0000 D3 EXECUTOR [replexec-2] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:18.848+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:18.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.948+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:18.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:18.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:18.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:19.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:19.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
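The heartbeat exchange above is rescheduled two seconds out after each response, and each heartbeat from the primary postpones the election timeout to roughly 10-11.5 seconds ahead (the jitter is the randomized election timeout offset). Those cadences come from the replica set settings, which replSetGetConfig exposes; a minimal PyMongo sketch, illustrative only, with the hostname taken from this log:

    # Illustrative sketch: the settings behind the heartbeat/election-timeout
    # scheduling seen in the REPL_HB and ELECTION entries above.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
    cfg = client.admin.command("replSetGetConfig")["config"]

    print(cfg["settings"]["heartbeatIntervalMillis"])  # default 2000
    print(cfg["settings"]["electionTimeoutMillis"])    # default 10000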
2019-09-04T06:28:19.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:19.048+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:19.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 94CE03391C7A75618FA90A787AE6914E32684345), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:19.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:28:19.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 94CE03391C7A75618FA90A787AE6914E32684345), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:19.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 94CE03391C7A75618FA90A787AE6914E32684345), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:19.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), opTime: { ts: Timestamp(1567578497, 2), t: 1 }, wallTime: new Date(1567578497695) }
2019-09-04T06:28:19.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578497, 2), signature: { hash: BinData(0, 94CE03391C7A75618FA90A787AE6914E32684345), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:19.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:19.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:19.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:19.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:19.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:19.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:19.139+0000 D2 ASIO [RS] Request 180 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578499, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578499137), o: { $v: 1, $set: { ping: new Date(1567578499134) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpApplied: { ts: Timestamp(1567578499, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 2), $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) }
2019-09-04T06:28:19.139+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578499, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578499137), o: { $v: 1, $set: { ping: new Date(1567578499134) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpApplied: { ts: Timestamp(1567578499, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 2), $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:19.139+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:19.139+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578499, 1) and ending at ts: Timestamp(1567578499, 1)
2019-09-04T06:28:19.139+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:30.202+0000
2019-09-04T06:28:19.139+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:29.512+0000
2019-09-04T06:28:19.139+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578499, 1), t: 1 }
2019-09-04T06:28:19.139+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:19.139+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:19.139+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:19.139+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:19.139+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578497, 2)
2019-09-04T06:28:19.139+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2868
2019-09-04T06:28:19.139+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:19.139+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:19.139+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2868
2019-09-04T06:28:19.139+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:19.139+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:28:19.139+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578499, 1) }
2019-09-04T06:28:19.139+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:19.139+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2853
2019-09-04T06:28:19.139+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578497, 2)
2019-09-04T06:28:19.140+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2871
2019-09-04T06:28:19.140+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:19.140+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:19.140+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2871
2019-09-04T06:28:19.140+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2853
2019-09-04T06:28:19.140+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2874
2019-09-04T06:28:19.140+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2874
2019-09-04T06:28:19.140+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:19.140+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 2876
2019-09-04T06:28:19.140+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578499, 1)
2019-09-04T06:28:19.140+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578499, 1)
2019-09-04T06:28:19.140+0000 D2 STORAGE [repl-writer-worker-11] WiredTigerSizeStorer::store Marking table:local/collection/16--6194257481163143499 dirty, numRecords: 1374, dataSize: 309804, use_count: 3
2019-09-04T06:28:19.140+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 2876
2019-09-04T06:28:19.140+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:19.140+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:28:19.140+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2875
2019-09-04T06:28:19.140+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2875
2019-09-04T06:28:19.140+0000 D3 STORAGE [rsSync-0] WT begin_transaction for
2019-09-04T06:28:19.140+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2878
2019-09-04T06:28:19.140+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578499, 1), t: 1 }({ ts: Timestamp(1567578499, 1), t: 1 })
2019-09-04T06:28:19.140+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578499, 1)
2019-09-04T06:28:19.140+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2879
2019-09-04T06:28:19.140+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578499, 1) } } ] } sort: {} projection: {}
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578499, 1) Sort: {} Proj: {} =============================
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578499, 1) || First: notFirst: full path: ts
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578499, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578499, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:19.140+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:19.141+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:19.141+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578499, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:19.141+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:19.141+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578499, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:19.141+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2879
2019-09-04T06:28:19.141+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:19.141+0000 D3 STORAGE [repl-writer-worker-13] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:19.141+0000 D3 REPL [repl-writer-worker-13] applying op: { ts: Timestamp(1567578499, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578499137), o: { $v: 1, $set: { ping: new Date(1567578499134) } } }, oplog application mode: Secondary
2019-09-04T06:28:19.141+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578499, 1)
2019-09-04T06:28:19.141+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 2881
2019-09-04T06:28:19.141+0000 D2 QUERY [repl-writer-worker-13] Using idhack: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }
2019-09-04T06:28:19.141+0000 D2 STORAGE [repl-writer-worker-13] WiredTigerSizeStorer::store Marking table:config/collection/28--6194257481163143499 dirty, numRecords: 22, dataSize: 1828, use_count: 3
2019-09-04T06:28:19.141+0000 D4 WRITE [repl-writer-worker-13] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:28:19.141+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 2881
2019-09-04T06:28:19.141+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:19.141+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578499, 1), t: 1 }({ ts: Timestamp(1567578499, 1), t: 1 })
2019-09-04T06:28:19.141+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578499, 1)
2019-09-04T06:28:19.141+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2880
2019-09-04T06:28:19.141+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:28:19.141+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:19.141+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:28:19.141+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:19.141+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:19.141+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:28:19.141+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 2880
2019-09-04T06:28:19.141+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578499, 1)
2019-09-04T06:28:19.141+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2884
2019-09-04T06:28:19.141+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2884
2019-09-04T06:28:19.141+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578499, 1), t: 1 }({ ts: Timestamp(1567578499, 1), t: 1 })
2019-09-04T06:28:19.141+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:19.141+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), appliedOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, appliedWallTime: new Date(1567578497695), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), appliedOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, appliedWallTime: new Date(1567578497695), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:19.141+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 184 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:49.141+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), appliedOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, appliedWallTime: new Date(1567578497695), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), appliedOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, appliedWallTime: new Date(1567578497695), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578497, 2), t: 1 }, lastCommittedWall: new Date(1567578497695), lastOpVisible: { ts: Timestamp(1567578497, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:19.141+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:49.141+0000
2019-09-04T06:28:19.141+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578499, 1), t: 1 }
2019-09-04T06:28:19.141+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 185 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:29.141+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578497, 2), t: 1 } }
2019-09-04T06:28:19.141+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:49.141+0000
2019-09-04T06:28:19.142+0000 D2 ASIO [RS] Request 184 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) }
2019-09-04T06:28:19.142+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:19.142+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:19.142+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:49.141+0000
2019-09-04T06:28:19.142+0000 D2 ASIO [RS] Request 185 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpApplied: { ts: Timestamp(1567578499, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) }
2019-09-04T06:28:19.142+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpApplied: { ts: Timestamp(1567578499, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:19.142+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:19.142+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:28:19.142+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578494, 1)
2019-09-04T06:28:19.142+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:29.512+0000
2019-09-04T06:28:19.142+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:30.216+0000
2019-09-04T06:28:19.142+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 186 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:29.142+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578499, 1), t: 1 } }
2019-09-04T06:28:19.142+0000 D3 REPL [conn111] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn111] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.896+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn114] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn114] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.488+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn120] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn120] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.040+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn93] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn93] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn124] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn124] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn98] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:19.142+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:20.836+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn98] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.768+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn99] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:49.141+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn99] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.054+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn92] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn92] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.664+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn103] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn103] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.919+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn107] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn107] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.871+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn95] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn95] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.662+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn119] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn119] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.033+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn94] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn94] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn108] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn108] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.879+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn102] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn102] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:29.874+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn110] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn110] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.881+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn91] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn91] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.645+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn123] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn123] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn97] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn97] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.670+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn96] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn96] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:21.663+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn100] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn100] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:22.595+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn112] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.142+0000 D3 REPL [conn112] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.921+0000
2019-09-04T06:28:19.143+0000 D3 REPL [conn109] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.143+0000 D3 REPL [conn109] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.877+0000
2019-09-04T06:28:19.143+0000 D3 REPL [conn115] Got notified of new snapshot: { ts: Timestamp(1567578499, 1), t: 1 }, 2019-09-04T06:28:19.137+0000
2019-09-04T06:28:19.143+0000 D3 REPL [conn115] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.292+0000
2019-09-04T06:28:19.147+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:28:19.147+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:19.147+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), appliedOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, appliedWallTime: new Date(1567578497695), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), appliedOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, appliedWallTime: new Date(1567578497695), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:19.148+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 187 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:49.148+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), appliedOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, appliedWallTime: new Date(1567578497695), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, durableWallTime: new Date(1567578497695), appliedOpTime: { ts: Timestamp(1567578497, 2), t: 1 }, appliedWallTime: new Date(1567578497695), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:19.148+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:49.141+0000
2019-09-04T06:28:19.148+0000 D2 ASIO [RS] Request 187 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) }
2019-09-04T06:28:19.148+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:19.148+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:19.148+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:49.141+0000
2019-09-04T06:28:19.148+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:19.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:19.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:19.228+0000 D3 STORAGE [WTCheckpointThread] setting timestamp read source: 6, provided timestamp: Timestamp(1567578499, 1)
2019-09-04T06:28:19.228+0000 D3 STORAGE [WTCheckpointThread] WT begin_transaction for snapshot id 2888
2019-09-04T06:28:19.228+0000 D3 STORAGE [WTCheckpointThread] WT rollback_transaction for snapshot id 2888
2019-09-04T06:28:19.228+0000 D2 RECOVERY [WTCheckpointThread] Performing stable checkpoint. StableTimestamp: Timestamp(1567578499, 1), OplogNeededForRollback: Timestamp(1567578499, 1)
2019-09-04T06:28:19.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:28:19.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:28:19.240+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578499, 1)
2019-09-04T06:28:19.250+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:19.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:19.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:19.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:19.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:19.317+0000 D3 STORAGE [FreeMonProcessor] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:19.320+0000 D3 INDEX [TTLMonitor] thread awake
2019-09-04T06:28:19.321+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms
2019-09-04T06:28:19.321+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms
2019-09-04T06:28:19.321+0000 D2 - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager
2019-09-04T06:28:19.321+0000 D3 COMMAND [PeriodicTaskRunner] task: UnusedLockCleaner took: 0ms
2019-09-04T06:28:19.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:19.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:19.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions.
2019-09-04T06:28:19.350+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:19.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:19.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:19.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry
2019-09-04T06:28:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:28:19.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" }
2019-09-04T06:28:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:19.370+0000 D5 QUERY [shard-registry-reload] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:28:19.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:28:19.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:28:19.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and
2019-09-04T06:28:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:19.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:28:19.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578499, 1)
2019-09-04T06:28:19.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 2900
2019-09-04T06:28:19.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 2900
2019-09-04T06:28:19.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:646 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:28:19.370+0000 D1 SHARDING [shard-registry-reload] found 3 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578499, 1), t: 1 }
2019-09-04T06:28:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018
2019-09-04T06:28:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018
2019-09-04T06:28:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018
2019-09-04T06:28:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018
2019-09-04T06:28:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018
2019-09-04T06:28:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018
2019-09-04T06:28:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS
2019-09-04T06:28:19.374+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578499374) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }
2019-09-04T06:28:19.374+0000 D4 - [replSetDistLockPinger] Taking ticket. Available: 1000000000
2019-09-04T06:28:19.374+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178
2019-09-04T06:28:19.374+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:28:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001
2019-09-04T06:28:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 188 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:28:24.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:28:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 189 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:28:24.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:28:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002
2019-09-04T06:28:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 190 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:28:24.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:28:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 191 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:28:24.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:28:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000
2019-09-04T06:28:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 192 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:28:24.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:28:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 193 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:28:24.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:28:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 189 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578497, 1), t: 1 }, lastWriteDate: new Date(1567578497000), majorityOpTime: { ts: Timestamp(1567578497, 1), t: 1 }, majorityWriteDate: new Date(1567578497000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578499386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 1), $configServerState: { opTime: { ts: Timestamp(1567578480, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578497, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 1) }
2019-09-04T06:28:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578497, 1), t: 1 }, lastWriteDate: new Date(1567578497000), majorityOpTime: { ts: Timestamp(1567578497, 1), t: 1 }, majorityWriteDate: new Date(1567578497000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578499386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578497, 1), $configServerState: { opTime: { ts: Timestamp(1567578480, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578497, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 1) } target: cmodb809.togewa.com:27018
2019-09-04T06:28:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 188 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578497, 1), t: 1 }, lastWriteDate: new Date(1567578497000), majorityOpTime: { ts: Timestamp(1567578497, 1), t: 1 }, majorityWriteDate: new Date(1567578497000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578499386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578497, 1), $configServerState: { opTime: { ts: Timestamp(1567578494, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578497, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 1) }
2019-09-04T06:28:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578497, 1), t: 1 }, lastWriteDate: new Date(1567578497000), majorityOpTime: { ts: Timestamp(1567578497, 1), t: 1 }, majorityWriteDate: new Date(1567578497000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578499386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578497, 1), $configServerState: { opTime: { ts: Timestamp(1567578494, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578497, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578497, 1) } target: cmodb808.togewa.com:27018
2019-09-04T06:28:19.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms
2019-09-04T06:28:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 190 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578489, 1), t: 1 }, lastWriteDate: new Date(1567578489000), majorityOpTime: { ts: Timestamp(1567578489, 1), t: 1 }, majorityWriteDate: new Date(1567578489000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578499386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578489, 1), $configServerState: { opTime: { ts: Timestamp(1567578494, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578489, 1) }
2019-09-04T06:28:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578489, 1), t: 1 }, lastWriteDate: new Date(1567578489000), majorityOpTime: { ts: Timestamp(1567578489, 1), t: 1 }, majorityWriteDate: new Date(1567578489000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578499386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578489, 1), $configServerState: { opTime: { ts: Timestamp(1567578494, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578489, 1) } target: cmodb810.togewa.com:27018
2019-09-04T06:28:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 192 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578493, 10), t: 1 }, lastWriteDate: new Date(1567578493000), majorityOpTime: { ts: Timestamp(1567578493, 10), t: 1 }, majorityWriteDate: new Date(1567578493000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578499385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578493, 10), $configServerState: { opTime: { ts: Timestamp(1567578494, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578493, 10) }
2019-09-04T06:28:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578493, 10), t: 1 }, lastWriteDate: new Date(1567578493000), majorityOpTime: { ts: Timestamp(1567578493, 10), t: 1 }, majorityWriteDate: new Date(1567578493000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578499385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578493, 10), $configServerState: { opTime: { ts: Timestamp(1567578494, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578493, 10) } target: cmodb806.togewa.com:27018
2019-09-04T06:28:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 193 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578493, 10), t: 1 }, lastWriteDate: new Date(1567578493000), majorityOpTime: { ts: Timestamp(1567578493, 10), t: 1 }, majorityWriteDate: new Date(1567578493000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578499386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578493, 10), $configServerState: { opTime: { ts: Timestamp(1567578488, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578493, 10) }
2019-09-04T06:28:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578493, 10), t: 1 }, lastWriteDate: new Date(1567578493000), majorityOpTime: { ts: Timestamp(1567578493, 10), t: 1 }, majorityWriteDate: new Date(1567578493000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578499386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578493, 10), $configServerState: { opTime: { ts: Timestamp(1567578488, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578493, 10) } target: cmodb807.togewa.com:27018
2019-09-04T06:28:19.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms
2019-09-04T06:28:19.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 191 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578489, 1), t: 1 }, lastWriteDate: new Date(1567578489000), majorityOpTime: { ts: Timestamp(1567578489, 1), t: 1 }, majorityWriteDate: new Date(1567578489000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578499386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578489, 1), $configServerState: { opTime: { ts: Timestamp(1567578488, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578489, 1) }
2019-09-04T06:28:19.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578489, 1), t: 1 }, lastWriteDate: new Date(1567578489000), majorityOpTime: { ts: Timestamp(1567578489, 1), t: 1 }, majorityWriteDate: new Date(1567578489000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578499386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578489, 1), $configServerState: { opTime: { ts: Timestamp(1567578488, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578494, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578489, 1) } target: cmodb811.togewa.com:27018
2019-09-04T06:28:19.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms
2019-09-04T06:28:19.393+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(+0xADB043) [0x561749a63043]
 mongod(+0x13B2606) [0x56174a33a606]
 mongod(+0x13B3A55) [0x56174a33ba55]
 mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894]
 mongod(+0x10FA899) [0x56174a082899]
 mongod(+0x10FBF53) [0x56174a083f53]
 mongod(+0x10FCE0E) [0x56174a084e0e]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(+0x1FBD2EE) [0x56174af452ee]
 mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa]
 mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2]
 mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b]
 mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e]
 mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc]
 mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1]
 mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a]
 mongod(+0x28A5BBF) [0x56174b82dbbf]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:28:19.393+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern.
OpTime: { ts: Timestamp(1567578499, 1), t: 1 }, write concern: { w: "majority", wtimeout: 15000 } 2019-09-04T06:28:19.393+0000 D4 STORAGE [replSetDistLockPinger] flushed journal 2019-09-04T06:28:19.393+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578499374) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:28:19.393+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578499374) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 19ms 2019-09-04T06:28:19.450+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:19.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:19.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:19.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:19.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:19.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:19.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:19.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:19.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:19.550+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:19.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:19.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:19.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:19.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:19.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:19.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:19.650+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:19.651+0000 D2 COMMAND [conn23] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:19.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:19.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:19.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:19.751+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:19.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:19.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:19.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:19.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:19.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:19.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:19.851+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:19.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:19.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:19.951+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:19.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:19.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:19.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:19.989+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:19.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:19.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:20.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:20.003+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:28:20.003+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:28:20.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:20.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 
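
The NotMaster failure a few records up is worth pausing on: the replSetDistLockPinger thread upserts its heartbeat document into config.lockpings through the local node (DBDirectClient in the backtrace), but findAndModify is a write and this member is currently a secondary, so the command fails with errCode 10107, and the configured traceAllExceptions: true turns that ordinary uassert into the full backtrace printed above. A minimal pymongo sketch of the same upsert, with the handling a client would need for this error (the URI is an assumption; the command document is copied from the log record):

# Hypothetical reproduction of the lockpings upsert; the URI is assumed.
from datetime import datetime, timezone

from pymongo import MongoClient
from pymongo.errors import OperationFailure

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
config_db = client["config"]

try:
    # Same command document as the log: findAndModify on config.lockpings
    # with an upsert and a majority write concern (wtimeout 15000 ms).
    config_db.command(
        "findAndModify",
        "lockpings",
        query={"_id": "ConfigServer"},
        update={"$set": {"ping": datetime.now(timezone.utc)}},
        upsert=True,
        writeConcern={"w": "majority", "wtimeout": 15000},
    )
except OperationFailure as exc:
    if exc.code == 10107:  # NotMaster: writes only succeed on the primary
        print("node is a secondary; rediscover the primary and retry")
    else:
        raise

The pinger behaves much the same way: because it issues the write through the local node, it fails by design on a secondary, logs the error, and simply retries on its next ping interval, so the record is noisy but harmless.
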
2019-09-04T06:28:20.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:28:20.011+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:28:20.011+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456
2019-09-04T06:28:20.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:28:20.011+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:28:20.012+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:28:20.013+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:28:20.013+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:28:20.013+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:28:20.014+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:28:20.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:20.014+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:20.014+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:28:20.014+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:28:20.014+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:28:20.014+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:28:20.014+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:28:20.014+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:28:20.014+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:20.014+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:20.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:28:20.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578499, 1)
2019-09-04T06:28:20.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2925
2019-09-04T06:28:20.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2925
2019-09-04T06:28:20.014+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:28:20.015+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:28:20.015+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:28:20.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:28:20.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:20.015+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:28:20.015+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:28:20.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:28:20.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578499, 1)
2019-09-04T06:28:20.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2928
2019-09-04T06:28:20.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2928
2019-09-04T06:28:20.015+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:28:20.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:28:20.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:20.015+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:28:20.015+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:28:20.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:28:20.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578499, 1)
2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2930
2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2930
2019-09-04T06:28:20.016+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:28:20.016+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:28:20.016+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist.
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:28:20.016+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:28:20.016+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:20.016+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2933 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2933 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2934 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2934 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2935 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2935 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2936 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2936 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2937 2019-09-04T06:28:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2937 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2938 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
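
Everything conn90 runs after authenticating, serverStatus, replSetGetStatus, a count of jumbo chunks, shardConnPoolStats, the first and last oplog entries, then listDatabases, is the typical probe sequence of a monitoring agent, and every command carries $readPreference: { mode: "secondaryPreferred" } so the probes work against this secondary. The find on local.oplog.$main above (the old master-slave oplog name) got an EOF plan because only oplog.rs exists; agents check both locations. A sketch of the same probes from pymongo (URI and credentials are assumptions; the commands are the ones in the log):

# Hypothetical monitoring probes mirroring conn90; URI/credentials assumed.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://dba_root:PASSWORD@cmodb803.togewa.com:27019/admin",
    readPreference="secondaryPreferred",
)

server_status = client.admin.command("serverStatus")    # reslen:35129 above
repl_status = client.admin.command("replSetGetStatus")

# The planner logged "outputted 0 indexed solutions" for { jumbo: true },
# so this count runs as a COLLSCAN over config.chunks.
jumbo_chunks = client.config.chunks.count_documents({"jumbo": True})

# Oldest and newest oplog entries; the hinted $natural sort forces the
# bounded table scan the planner logs (limit 1 keeps docsExamined at 1),
# and the two timestamps together give the oplog window.
oplog = client.local["oplog.rs"]
first_entry = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
last_entry = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])
if first_entry and last_entry:
    print("oplog window (s):", last_entry["ts"].time - first_entry["ts"].time)
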
2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2938 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2939 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2939 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2940 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, 
prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2940 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2941 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2941 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2942 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2942 2019-09-04T06:28:20.017+0000 D3 
STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2943 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:20.017+0000 D3 STORAGE 
[conn90] WT rollback_transaction for snapshot id 2943 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2944 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2944 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2945 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:28:20.017+0000 
D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2945 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2946 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2946 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2947 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] returning metadata: 
md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2947 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2948 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2948 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2949 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:28:20.017+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2949 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2950 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2950 2019-09-04T06:28:20.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2951 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2951 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] looking up metadata 
for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2952 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2952 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2953 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2953 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2954 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
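The D3 STORAGE entries above show conn90 walking the durable catalog: for each local.* namespace it opens a WiredTiger snapshot, fetches the collection catalog entry (CCE) metadata (UUID, index specs, idents), and rolls the read transaction back. A minimal pymongo sketch for inspecting the same internal namespaces from the outside; the host and port are taken from this log and are only an assumption about your deployment:

from pymongo import MongoClient

# Host/port as logged for this node; adjust for your deployment.
client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

# The namespaces the log is looking up live in the 'local' database.
local = client.local
print(local.list_collection_names())  # e.g. startup_log, oplog.rs, replset.minvalid, ...

# collStats surfaces the same capped-collection options the catalog returns
# (capped: true, size: 1073741824 for local.oplog.rs in this log).
print(local.command("collStats", "oplog.rs"))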
2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:28:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2954
2019-09-04T06:28:20.018+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:28:20.032+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:28:20.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2956
2019-09-04T06:28:20.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2956
2019-09-04T06:28:20.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2957
2019-09-04T06:28:20.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2957
2019-09-04T06:28:20.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2958
2019-09-04T06:28:20.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2958
2019-09-04T06:28:20.033+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:28:20.034+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:20.034+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:20.043+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2961
2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2961
2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2962
2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2962
2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2963
2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2963
2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2964
2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2964
2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2965
2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2965
2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2966
2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2966
2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id
2967 2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2967 2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2968 2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2968 2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2969 2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2969 2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2970 2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2970 2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2971 2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2971 2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2972 2019-09-04T06:28:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2972 2019-09-04T06:28:20.043+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:20.045+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:28:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2974 2019-09-04T06:28:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2974 2019-09-04T06:28:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2975 2019-09-04T06:28:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2975 2019-09-04T06:28:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2976 2019-09-04T06:28:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2976 2019-09-04T06:28:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2977 2019-09-04T06:28:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2977 2019-09-04T06:28:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2978 2019-09-04T06:28:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2978 2019-09-04T06:28:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 2979 2019-09-04T06:28:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 2979 2019-09-04T06:28:20.045+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:20.051+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:20.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { 
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.140+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:20.140+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:20.140+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578499, 1) 2019-09-04T06:28:20.140+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 2985 2019-09-04T06:28:20.140+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:20.140+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:20.140+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 2985 2019-09-04T06:28:20.141+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 2988 2019-09-04T06:28:20.141+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 2988 2019-09-04T06:28:20.141+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578499, 1), t: 1 }({ ts: Timestamp(1567578499, 1), t: 1 }) 2019-09-04T06:28:20.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.151+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:20.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, AC86A8AE2DCCE8C7EAE5EE883E2F96554B733218), keyId: 
6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:20.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:28:20.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, AC86A8AE2DCCE8C7EAE5EE883E2F96554B733218), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:20.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, AC86A8AE2DCCE8C7EAE5EE883E2F96554B733218), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:20.230+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137) }
2019-09-04T06:28:20.230+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, AC86A8AE2DCCE8C7EAE5EE883E2F96554B733218), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:20.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:28:20.235+0000 D4 - [FlowControlRefresher] Refreshing tickets.
Before: 999999999 Now: 1000000000 2019-09-04T06:28:20.251+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:20.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.351+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:20.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.452+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:20.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.549+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:28:20.549+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:28:20.549+0000 D2 COMMAND [conn90] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:20.549+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:28:20.552+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:20.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.568+0000 I COMMAND [conn45] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.652+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:20.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.752+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:20.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.836+0000 D1 EXECUTOR [replexec-2] Reaping this thread; next thread reaped no earlier than 2019-09-04T06:28:50.836+0000 2019-09-04T06:28:20.836+0000 D1 EXECUTOR [replexec-2] shutting down thread in pool replexec 2019-09-04T06:28:20.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:20.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:50.836+0000 2019-09-04T06:28:20.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 194) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:20.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 194 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:30.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:20.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:50.836+0000 
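The steady drumbeat of isMaster calls above (conn14 through conn52, roughly every 500 ms per connection) is driver and mongos server monitoring, not application traffic. A sketch of what each of those round-trips returns, using pymongo against the host this log belongs to; on 4.2 the command is still spelled isMaster, as logged:

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

# Equivalent to the { isMaster: 1 } commands flooding this log; each monitor
# connection repeats it on its heartbeat interval.
reply = client.admin.command("isMaster")
print(reply["ismaster"], reply["secondary"])      # False / True on this secondary
print(reply.get("setName"), reply.get("hosts"))   # "configrs" and its members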
2019-09-04T06:28:20.836+0000 D2 ASIO [Replication] Request 194 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } 2019-09-04T06:28:20.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:20.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:20.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 194) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } 2019-09-04T06:28:20.836+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:20.836+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:28:30.216+0000 2019-09-04T06:28:20.836+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:28:31.996+0000 
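Request 194 above is this node's outbound heartbeat to the primary (cmodb802, state: 1). The response carries the primary's opTime and $replData, and because it came from the primary the node postpones its election timeout, rescheduling the callback from 06:28:30.216 to 06:28:31.996. The member state these heartbeats maintain is visible through replSetGetStatus; a sketch, assuming the same host as above:

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

# replSetGetStatus reports the view that heartbeats like request 194 keep
# fresh: per-member state, optimes, and last-heartbeat times.
status = client.admin.command("replSetGetStatus")
for m in status["members"]:
    print(m["name"], m["stateStr"], m["optimeDate"],
          m.get("lastHeartbeatRecv"))  # absent on the self entry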
2019-09-04T06:28:20.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:20.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:50.836+0000 2019-09-04T06:28:20.836+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:20.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:22.836Z 2019-09-04T06:28:20.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:50.836+0000 2019-09-04T06:28:20.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:20.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 195) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:20.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 195 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:30.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:20.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:50.836+0000 2019-09-04T06:28:20.837+0000 D2 ASIO [Replication] Request 195 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } 2019-09-04T06:28:20.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:20.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:20.837+0000 D2 REPL_HB 
[replexec-1] Received response to heartbeat (requestId: 195) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } 2019-09-04T06:28:20.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:20.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:22.837Z 2019-09-04T06:28:20.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:50.836+0000 2019-09-04T06:28:20.852+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:20.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.952+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:20.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:20.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:20.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:21.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.052+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:21.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, AC86A8AE2DCCE8C7EAE5EE883E2F96554B733218), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:21.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:21.061+0000 D2 REPL_HB [conn34] Received 
heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, AC86A8AE2DCCE8C7EAE5EE883E2F96554B733218), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:21.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, AC86A8AE2DCCE8C7EAE5EE883E2F96554B733218), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:21.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137) } 2019-09-04T06:28:21.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578499, 1), signature: { hash: BinData(0, AC86A8AE2DCCE8C7EAE5EE883E2F96554B733218), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.140+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:21.140+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:21.140+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578499, 1) 2019-09-04T06:28:21.140+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3026 2019-09-04T06:28:21.140+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:21.140+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { 
uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:21.140+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3026 2019-09-04T06:28:21.141+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3029 2019-09-04T06:28:21.142+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3029 2019-09-04T06:28:21.142+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578499, 1), t: 1 }({ ts: Timestamp(1567578499, 1), t: 1 }) 2019-09-04T06:28:21.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.153+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:21.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:21.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:21.253+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:21.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.353+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:21.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.453+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:21.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.489+0000 D2 COMMAND [conn14] run command 
admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.553+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:21.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.573+0000 I NETWORK [listener] connection accepted from 10.20.102.80:61022 #126 (80 connections now open) 2019-09-04T06:28:21.573+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:21.573+0000 D2 COMMAND [conn126] run command admin.$cmd { isMaster: 1, client: { application: { name: "robo3t" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } 2019-09-04T06:28:21.573+0000 I NETWORK [conn126] received client metadata from 10.20.102.80:61022 conn126: { application: { name: "robo3t" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } } 2019-09-04T06:28:21.574+0000 I COMMAND [conn126] command admin.$cmd appName: "robo3t" command: isMaster { isMaster: 1, client: { application: { name: "robo3t" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:28:21.583+0000 D2 COMMAND [conn126] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:21.584+0000 D1 ACCESS [conn126] Returning user dba_root@admin from cache 2019-09-04T06:28:21.584+0000 I COMMAND [conn126] command admin.$cmd appName: "robo3t" command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:394 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.593+0000 D2 COMMAND [conn126] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:28:21.593+0000 I COMMAND [conn126] command admin.$cmd appName: "robo3t" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:323 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.603+0000 D2 COMMAND 
[conn126] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" }
2019-09-04T06:28:21.603+0000 D1 ACCESS [conn126] Returning user dba_root@admin from cache
2019-09-04T06:28:21.603+0000 I ACCESS [conn126] Successfully authenticated as principal dba_root on admin from client 10.20.102.80:61022
2019-09-04T06:28:21.603+0000 I COMMAND [conn126] command admin.$cmd appName: "robo3t" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:293 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:21.612+0000 D2 COMMAND [conn126] run command admin.$cmd { ping: 1, $db: "admin" }
2019-09-04T06:28:21.613+0000 I COMMAND [conn126] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:21.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:21.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:21.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:21.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:21.634+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38592 #127 (81 connections now open)
2019-09-04T06:28:21.634+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:28:21.635+0000 D2 COMMAND [conn127] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:28:21.635+0000 I NETWORK [conn127] received client metadata from 10.108.2.44:38592 conn127: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:28:21.635+0000 I COMMAND [conn127] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:28:21.648+0000 I COMMAND [conn91] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578468, 1), signature: { hash: BinData(0, 174FC9E8758591059B0F1359F37D9D72B5364FD1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:28:21.648+0000 D1 - [conn91] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:28:21.648+0000 W - [conn91] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:21.650+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:21.650+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:21.650+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49092 #128 (82 connections now open)
2019-09-04T06:28:21.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:28:21.650+0000 D2 COMMAND [conn128] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:28:21.650+0000 I NETWORK [conn128] received client metadata from 10.108.2.54:49092 conn128: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:28:21.650+0000 I COMMAND [conn128] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:28:21.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:21.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:21.653+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:21.656+0000 I NETWORK [listener] connection accepted from 10.108.2.47:56468 #129 (83 connections now open)
2019-09-04T06:28:21.656+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:28:21.656+0000 D2 COMMAND [conn129] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 },
hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:21.656+0000 I NETWORK [conn129] received client metadata from 10.108.2.47:56468 conn129: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:21.656+0000 I COMMAND [conn129] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:21.662+0000 I COMMAND [conn95] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578463, 1), signature: { hash: BinData(0, C1B62A1C3071B6FFFBA1E3E4221F429C7BC6B94F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:21.663+0000 D1 - [conn95] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:21.663+0000 W - [conn95] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.664+0000 I COMMAND [conn93] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578462, 1), signature: { hash: BinData(0, B297735DD8383F1BAD6EB580CC3032FDC0BD6BA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:21.665+0000 D1 - [conn93] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:21.665+0000 W - [conn93] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.665+0000 I - [conn91] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:21.665+0000 D1 COMMAND [conn91] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578468, 1), signature: { hash: BinData(0, 174FC9E8758591059B0F1359F37D9D72B5364FD1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.665+0000 D1 - [conn91] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:21.665+0000 W - [conn91] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.665+0000 I COMMAND [conn94] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578468, 1), signature: { hash: BinData(0, 174FC9E8758591059B0F1359F37D9D72B5364FD1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:21.666+0000 D1 - [conn94] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:21.666+0000 W - [conn94] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.666+0000 I COMMAND [conn96] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578462, 1), signature: { hash: BinData(0, B297735DD8383F1BAD6EB580CC3032FDC0BD6BA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:28:21.666+0000 D1 - [conn96] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:21.666+0000 W - [conn96] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.666+0000 I COMMAND [conn92] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, B6774A8FE7872486BBD7D13128F552CA92255F0B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:21.667+0000 D1 - [conn92] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:21.667+0000 W - [conn92] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.673+0000 I COMMAND [conn97] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578465, 1), signature: { hash: BinData(0, B4CC25BA8F9AE0B854E22D714AAD0C62F00EA853), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:21.673+0000 D1 - [conn97] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:21.673+0000 W - [conn97] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.690+0000 I - [conn91] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:21.691+0000 W COMMAND [conn91] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:21.691+0000 I COMMAND [conn91] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578468, 1), signature: { hash: BinData(0, 174FC9E8758591059B0F1359F37D9D72B5364FD1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:28:21.691+0000 D2 NETWORK [conn91] Session from 10.108.2.44:38574 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:21.691+0000 I NETWORK [conn91] end connection 10.108.2.44:38574 (82 connections now open) 2019-09-04T06:28:21.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.707+0000 I - [conn95] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"56174
8F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:21.707+0000 D1 COMMAND [conn95] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578463, 1), signature: { hash: BinData(0, C1B62A1C3071B6FFFBA1E3E4221F429C7BC6B94F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.707+0000 D1 - [conn95] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:21.707+0000 W - [conn95] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.724+0000 I - [conn96] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5
E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 
3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] 
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:21.724+0000 D1 COMMAND [conn96] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578462, 1), signature: { hash: BinData(0, B297735DD8383F1BAD6EB580CC3032FDC0BD6BA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.724+0000 D1 - [conn96] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:21.724+0000 W - [conn96] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.735+0000 I - [conn93] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b"
:"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : 
"/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] 
mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:21.735+0000 D1 COMMAND [conn93] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578462, 1), signature: { hash: BinData(0, B297735DD8383F1BAD6EB580CC3032FDC0BD6BA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.735+0000 D1 - [conn93] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:21.735+0000 W - [conn93] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47088 #130 (83 connections now open) 2019-09-04T06:28:21.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:21.743+0000 D2 COMMAND [conn130] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:21.743+0000 I NETWORK [conn130] received client metadata from 10.108.2.52:47088 conn130: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:21.743+0000 I COMMAND [conn130] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:21.743+0000 D2 COMMAND [conn130] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578492, 1), signature: { hash: BinData(0, 369D20320EAA8E78506D058308BAF1A8A714E0C9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:21.743+0000 D1 REPL [conn130] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578499, 1), t: 1 } 2019-09-04T06:28:21.743+0000 D3 REPL [conn130] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:51.753+0000 2019-09-04T06:28:21.753+0000 D4 STORAGE [WTJournalFlusher] flushed 
journal 2019-09-04T06:28:21.755+0000 I - [conn96] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : 
"561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : 
"F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) 
[0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:21.755+0000 W COMMAND [conn96] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:21.755+0000 I COMMAND [conn96] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578462, 1), signature: { hash: BinData(0, B297735DD8383F1BAD6EB580CC3032FDC0BD6BA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30070ms 2019-09-04T06:28:21.755+0000 D2 NETWORK [conn96] Session from 10.108.2.73:52042 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:21.755+0000 I NETWORK [conn96] end connection 10.108.2.73:52042 (82 connections now open) 2019-09-04T06:28:21.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.770+0000 I COMMAND [conn98] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578461, 1), signature: { hash: BinData(0, 692CD45BA7CC13DACC90D19B9475E230267CC4C6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:28:21.771+0000 D1 - [conn98] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:21.771+0000 W - [conn98] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.775+0000 I - [conn95] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:21.775+0000 W COMMAND [conn95] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:21.775+0000 I COMMAND [conn95] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578463, 1), signature: { hash: BinData(0, C1B62A1C3071B6FFFBA1E3E4221F429C7BC6B94F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30054ms 2019-09-04T06:28:21.775+0000 D2 NETWORK [conn95] Session from 10.108.2.58:52032 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:21.775+0000 I NETWORK [conn95] end connection 10.108.2.58:52032 (81 connections now open) 2019-09-04T06:28:21.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.792+0000 I - [conn94] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"56174
8F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:21.792+0000 D1 COMMAND [conn94] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578468, 1), signature: { hash: BinData(0, 174FC9E8758591059B0F1359F37D9D72B5364FD1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.792+0000 D1 - [conn94] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:21.792+0000 W - [conn94] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.812+0000 I - [conn93] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000
","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", 
"elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:21.812+0000 W COMMAND [conn93] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:21.812+0000 I COMMAND [conn93] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578462, 1), signature: { hash: BinData(0, B297735DD8383F1BAD6EB580CC3032FDC0BD6BA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30083ms 2019-09-04T06:28:21.812+0000 D2 NETWORK [conn93] Session from 10.108.2.72:45628 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:21.812+0000 I NETWORK [conn93] end connection 10.108.2.72:45628 (80 connections now open) 2019-09-04T06:28:21.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.828+0000 I - [conn97] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:21.828+0000 D1 COMMAND [conn97] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578465, 1), signature: { hash: BinData(0, B4CC25BA8F9AE0B854E22D714AAD0C62F00EA853), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.828+0000 D1 - [conn97] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:21.829+0000 W - [conn97] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.848+0000 I - [conn94] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 
0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : 
"88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" 
: "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:21.848+0000 W COMMAND [conn94] Unable to 
gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:21.848+0000 I COMMAND [conn94] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578468, 1), signature: { hash: BinData(0, 174FC9E8758591059B0F1359F37D9D72B5364FD1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30138ms 2019-09-04T06:28:21.849+0000 D2 NETWORK [conn94] Session from 10.108.2.54:49076 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:21.849+0000 I NETWORK [conn94] end connection 10.108.2.54:49076 (79 connections now open) 2019-09-04T06:28:21.853+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:21.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.868+0000 I - [conn97] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:21.868+0000 W COMMAND [conn97] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:21.868+0000 I COMMAND [conn97] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578465, 1), signature: { hash: BinData(0, B4CC25BA8F9AE0B854E22D714AAD0C62F00EA853), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30168ms 2019-09-04T06:28:21.868+0000 D2 NETWORK [conn97] Session from 10.108.2.47:56452 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:21.868+0000 I NETWORK [conn97] end connection 10.108.2.47:56452 (78 connections now open) 2019-09-04T06:28:21.886+0000 I - [conn92] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : 
"3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, 
{ "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:21.886+0000 D1 COMMAND [conn92] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, B6774A8FE7872486BBD7D13128F552CA92255F0B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.886+0000 D1 - [conn92] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:21.886+0000 W - [conn92] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.902+0000 I - [conn98] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : 
"Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:21.902+0000 D1 COMMAND [conn98] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { 
level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578461, 1), signature: { hash: BinData(0, 692CD45BA7CC13DACC90D19B9475E230267CC4C6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.902+0000 D1 - [conn98] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:21.902+0000 W - [conn98] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:21.922+0000 I - [conn92] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbac
kENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 
3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:21.922+0000 W COMMAND [conn92] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:21.922+0000 I COMMAND [conn92] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578470, 1), signature: { hash: BinData(0, B6774A8FE7872486BBD7D13128F552CA92255F0B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30232ms 2019-09-04T06:28:21.922+0000 D2 NETWORK [conn92] Session from 10.108.2.48:41984 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:21.922+0000 I NETWORK [conn92] end connection 10.108.2.48:41984 (77 connections now open) 2019-09-04T06:28:21.943+0000 I - [conn98] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:21.943+0000 W COMMAND [conn98] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:21.943+0000 I COMMAND [conn98] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578461, 1), signature: { hash: BinData(0, 692CD45BA7CC13DACC90D19B9475E230267CC4C6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30143ms 2019-09-04T06:28:21.943+0000 D2 NETWORK [conn98] Session from 10.108.2.59:48238 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:21.943+0000 I NETWORK [conn98] end connection 10.108.2.59:48238 (76 connections now open) 2019-09-04T06:28:21.953+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:21.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:21.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:21.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:22.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.054+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:22.056+0000 I COMMAND [conn99] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 48284AC78A7C2DCD17E674B25E8399518D944CFC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:22.056+0000 D1 - [conn99] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:22.056+0000 W - [conn99] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:22.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.073+0000 I - [conn99] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19Serv
iceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:22.073+0000 D1 COMMAND [conn99] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 48284AC78A7C2DCD17E674B25E8399518D944CFC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:22.073+0000 D1 - [conn99] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:22.073+0000 W - [conn99] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:22.093+0000 I - [conn99] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:22.093+0000 W COMMAND [conn99] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:22.093+0000 I COMMAND [conn99] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578469, 1), signature: { hash: BinData(0, 48284AC78A7C2DCD17E674B25E8399518D944CFC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:28:22.093+0000 D2 NETWORK [conn99] Session from 10.108.2.50:50008 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:22.093+0000 I NETWORK [conn99] end connection 10.108.2.50:50008 (75 connections now open) 2019-09-04T06:28:22.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.140+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:22.140+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:22.140+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578499, 1) 2019-09-04T06:28:22.140+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3076 2019-09-04T06:28:22.140+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:22.140+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:22.140+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3076 2019-09-04T06:28:22.142+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3079 2019-09-04T06:28:22.142+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3079 2019-09-04T06:28:22.142+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578499, 1), t: 1 }({ ts: Timestamp(1567578499, 1), t: 1 }) 2019-09-04T06:28:22.149+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.150+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.154+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:22.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, E1270D41C39AE151E57923F5E5D47AC9950519FD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:22.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:22.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, E1270D41C39AE151E57923F5E5D47AC9950519FD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:22.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, E1270D41C39AE151E57923F5E5D47AC9950519FD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:22.230+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137) } 2019-09-04T06:28:22.230+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, E1270D41C39AE151E57923F5E5D47AC9950519FD), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:22.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:22.254+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:22.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.354+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:22.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.454+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:22.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.554+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:22.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.599+0000 I COMMAND [conn100] Command on database config timed out waiting for read concern to be satisfied. 
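Note the term mismatch this section keeps recording: the waiting reads carry afterOpTime values tagged t: 92, while the replSetHeartbeat exchange above reports the set's current opTime in term 1. A hedged diagnostic sketch (mongo shell, on this node) that prints the member's current opTime for comparison; the command that timed out is dumped below:

    // Print this member's current opTime/term from replSetGetStatus.
    // OpTimes compare by term first, so an opTime tagged term 92 can never
    // become majority-committed while the set serves term 1, and waits on
    // it run out the 30s maxTimeMS.
    var status = rs.status();
    status.members
        .filter(function (m) { return m.self; })
        .forEach(function (m) { printjson(m.optime); });  // { ts: Timestamp(...), t: NumberLong(1) }
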
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578465, 1), signature: { hash: BinData(0, B4CC25BA8F9AE0B854E22D714AAD0C62F00EA853), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:28:22.599+0000 D1 - [conn100] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:22.599+0000 W - [conn100] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:22.616+0000 I - [conn100] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:22.616+0000 D1 COMMAND [conn100] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578465, 1), signature: { hash: BinData(0, B4CC25BA8F9AE0B854E22D714AAD0C62F00EA853), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:22.616+0000 D1 - [conn100] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:22.616+0000 W - [conn100] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:22.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.637+0000 I - [conn100] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:22.637+0000 W COMMAND [conn100] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:22.637+0000 I COMMAND [conn100] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578465, 1), signature: { hash: BinData(0, B4CC25BA8F9AE0B854E22D714AAD0C62F00EA853), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:28:22.637+0000 D2 NETWORK [conn100] Session from 10.108.2.74:51674 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:22.637+0000 I NETWORK [conn100] end connection 10.108.2.74:51674 (74 connections now open) 2019-09-04T06:28:22.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.654+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:22.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.755+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:22.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:22.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 196) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:22.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 196 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:32.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:22.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:50.836+0000 2019-09-04T06:28:22.836+0000 
D2 ASIO [Replication] Request 196 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } 2019-09-04T06:28:22.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:22.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:22.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 196) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } 2019-09-04T06:28:22.836+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:22.836+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:28:31.996+0000 2019-09-04T06:28:22.836+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:28:33.062+0000 2019-09-04T06:28:22.836+0000 D3 REPL [replexec-1] 
setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:22.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:24.836Z 2019-09-04T06:28:22.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:22.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:52.836+0000 2019-09-04T06:28:22.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:52.836+0000 2019-09-04T06:28:22.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:22.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 197) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:22.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 197 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:32.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:22.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:52.836+0000 2019-09-04T06:28:22.837+0000 D2 ASIO [Replication] Request 197 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } 2019-09-04T06:28:22.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:22.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:22.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat 
(requestId: 197) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } 2019-09-04T06:28:22.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:22.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:24.837Z 2019-09-04T06:28:22.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:52.836+0000 2019-09-04T06:28:22.855+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:22.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.955+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:22.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:22.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:22.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:23.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.055+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:23.061+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:23.061+0000 D3 REPL [replexec-1] memberData lastupdate is: 2019-09-04T06:28:22.836+0000 2019-09-04T06:28:23.061+0000 D3 REPL [replexec-1] memberData lastupdate is: 2019-09-04T06:28:22.837+0000 2019-09-04T06:28:23.061+0000 D3 REPL [replexec-1] stalest member MemberId(0) date: 2019-09-04T06:28:22.836+0000 2019-09-04T06:28:23.061+0000 D3 REPL [replexec-1] scheduling next check at 2019-09-04T06:28:32.836+0000 
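[Editor's note] The MaxTimeMSExpired failure logged above came from the balancer-settings read that mongos periodically sends to this config server. For reference, an equivalent read can be issued by hand; this is a minimal mongo shell sketch, assuming a direct connection to a config server member, with the internal gossip fields ($replData, $clusterTime, $configServerState) omitted because the shell and mongos supply those themselves:

    // Equivalent of the logged command: find config.settings { _id: "balancer" }
    // with the same 30s time budget and majority read concern.
    db.getSiblingDB("config").runCommand({
      find: "settings",
      filter: { _id: "balancer" },
      limit: 1,
      maxTimeMS: 30000,                    // the logged operation gave up after 30031ms
      readConcern: { level: "majority" }   // afterOpTime is attached internally by mongos
    })

If this hand-issued read also times out, the stall is on the config server itself (as the lock-acquire backtrace above suggests) rather than in the mongos routing layer.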
2019-09-04T06:28:23.061+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:52.836+0000 2019-09-04T06:28:23.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, E1270D41C39AE151E57923F5E5D47AC9950519FD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:23.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:23.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, E1270D41C39AE151E57923F5E5D47AC9950519FD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:23.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, E1270D41C39AE151E57923F5E5D47AC9950519FD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:23.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137) } 2019-09-04T06:28:23.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, E1270D41C39AE151E57923F5E5D47AC9950519FD), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.140+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:23.141+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 
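[Editor's note] The ReplBatcher records that follow fetch the catalog metadata for local.oplog.rs on every batch; the capped size of 1073741824 bytes matches the oplogSizeMB: 1024 in this node's startup options. A quick shell check of the same metadata, assuming a direct connection to this member:

    // Confirm the oplog collection options the ReplBatcher reports below.
    var local = db.getSiblingDB("local");
    local.getCollectionInfos({ name: "oplog.rs" });  // options: { capped: true, size: 1073741824, ... }
    local.oplog.rs.stats().maxSize;                  // capped size in bytes (1 GiB)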
2019-09-04T06:28:23.141+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578499, 1) 2019-09-04T06:28:23.141+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3118 2019-09-04T06:28:23.141+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:23.141+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:23.141+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3118 2019-09-04T06:28:23.142+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3121 2019-09-04T06:28:23.142+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3121 2019-09-04T06:28:23.142+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578499, 1), t: 1 }({ ts: Timestamp(1567578499, 1), t: 1 }) 2019-09-04T06:28:23.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.155+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:23.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:23.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:23.255+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:23.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.348+0000 D2 COMMAND [conn49] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578494, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578497, 1), signature: { hash: BinData(0, 94CE03391C7A75618FA90A787AE6914E32684345), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578494, 1), t: 1 } }, $db: "config" } 2019-09-04T06:28:23.348+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578494, 1), t: 1 } } } 2019-09-04T06:28:23.348+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:28:23.348+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578494, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578497, 1), signature: { hash: BinData(0, 94CE03391C7A75618FA90A787AE6914E32684345), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578494, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578499, 1) 2019-09-04T06:28:23.348+0000 D2 QUERY [conn49] Using idhack: query: { _id: "config.system.sessions" } sort: {} projection: {} limit: 1 2019-09-04T06:28:23.348+0000 D3 STORAGE [conn49] WT begin_transaction for snapshot id 3132 2019-09-04T06:28:23.349+0000 D3 STORAGE [conn49] WT rollback_transaction for snapshot id 3132 2019-09-04T06:28:23.349+0000 I COMMAND [conn49] command config.collections command: find { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578494, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578497, 1), signature: { hash: BinData(0, 94CE03391C7A75618FA90A787AE6914E32684345), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578494, 1), t: 1 } }, $db: "config" } 
planSummary: IDHACK keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:702 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:28:23.355+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:23.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.455+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:23.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.556+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:23.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.656+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:23.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.756+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:23.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.856+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:23.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.956+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:23.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:23.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:23.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:24.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.056+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:24.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.068+0000 I 
COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.141+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:24.141+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:24.141+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578499, 1) 2019-09-04T06:28:24.141+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3160 2019-09-04T06:28:24.141+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:24.141+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:24.141+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3160 2019-09-04T06:28:24.141+0000 D2 COMMAND [conn117] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578495, 1), signature: { hash: BinData(0, 9A809E4530E0A2460F67DC0FD6A8649E22A7A597), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:28:24.141+0000 D1 REPL [conn117] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578499, 1), t: 1 } 2019-09-04T06:28:24.141+0000 D3 REPL [conn117] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:54.151+0000 2019-09-04T06:28:24.142+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3164 2019-09-04T06:28:24.142+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3164 2019-09-04T06:28:24.142+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578499, 1), t: 1 }({ ts: Timestamp(1567578499, 1), t: 1 }) 2019-09-04T06:28:24.142+0000 D2 ASIO [RS] Request 186 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new 
Date(1567578499137), lastOpApplied: { ts: Timestamp(1567578499, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } 2019-09-04T06:28:24.142+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpApplied: { ts: Timestamp(1567578499, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:24.142+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:24.142+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:24.142+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:33.062+0000 2019-09-04T06:28:24.142+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:35.157+0000 2019-09-04T06:28:24.142+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 198 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:34.142+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578499, 1), t: 1 } } 2019-09-04T06:28:24.142+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:24.143+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:24.143+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:52.836+0000 2019-09-04T06:28:24.148+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:24.148+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: 
Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:24.148+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 199 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:54.148+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:24.148+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:24.148+0000 D2 ASIO [RS] Request 199 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } 2019-09-04T06:28:24.148+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:24.148+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:24.148+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:24.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.151+0000 I 
COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.157+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:24.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, E1270D41C39AE151E57923F5E5D47AC9950519FD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:24.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:24.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, E1270D41C39AE151E57923F5E5D47AC9950519FD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:24.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, E1270D41C39AE151E57923F5E5D47AC9950519FD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:24.230+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137) } 2019-09-04T06:28:24.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, E1270D41C39AE151E57923F5E5D47AC9950519FD), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:24.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:24.257+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:24.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.357+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:24.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.457+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:24.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.557+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:24.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms
2019-09-04T06:28:24.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:24.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:24.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:24.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:24.657+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:24.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:24.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:24.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:24.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:24.757+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:24.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:24.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:24.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:24.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:24.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:24.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:24.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:24.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:24.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:24.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 200) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:24.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 200 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:34.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:24.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:52.836+0000
2019-09-04T06:28:24.836+0000 D2 ASIO [Replication] Request 200 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) }
2019-09-04T06:28:24.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:28:24.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:24.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 200) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) }
2019-09-04T06:28:24.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:28:24.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:28:35.157+0000
2019-09-04T06:28:24.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:28:35.255+0000
2019-09-04T06:28:24.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:28:24.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:26.836Z
2019-09-04T06:28:24.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:24.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:54.836+0000
2019-09-04T06:28:24.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:54.836+0000
2019-09-04T06:28:24.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:24.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 201) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:24.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 201 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:34.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:24.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:54.836+0000
2019-09-04T06:28:24.837+0000 D2 ASIO [Replication] Request 201 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) }
2019-09-04T06:28:24.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:24.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:24.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 201) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), opTime: { ts: Timestamp(1567578499, 1), t: 1 }, wallTime: new Date(1567578499137), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578500, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578499, 1) }
2019-09-04T06:28:24.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:28:24.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:26.837Z
2019-09-04T06:28:24.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:54.836+0000
2019-09-04T06:28:24.858+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:24.858+0000 D2 ASIO [RS] Request 198 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578504, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578504855), o: { $v: 1, $set: { ping: new Date(1567578504852), up: 2405 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpApplied: { ts: Timestamp(1567578504, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578504, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578504, 1) }
2019-09-04T06:28:24.858+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578504, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578504855), o: { $v: 1, $set: { ping: new Date(1567578504852), up: 2405 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpApplied: { ts: Timestamp(1567578504, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578499, 1), $clusterTime: { clusterTime: Timestamp(1567578504, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578504, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:24.858+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:24.858+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578504, 1) and ending at ts: Timestamp(1567578504, 1)
2019-09-04T06:28:24.858+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:35.255+0000
2019-09-04T06:28:24.858+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:36.307+0000
2019-09-04T06:28:24.858+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578504, 1), t: 1 }
2019-09-04T06:28:24.858+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:24.858+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:24.858+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578499, 1)
2019-09-04T06:28:24.858+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3193
2019-09-04T06:28:24.858+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:24.858+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:24.858+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:24.858+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:54.836+0000
2019-09-04T06:28:24.858+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3193
2019-09-04T06:28:24.858+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:24.858+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:28:24.858+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:24.858+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578499, 1)
2019-09-04T06:28:24.858+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3196
2019-09-04T06:28:24.858+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:24.858+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578504, 1) }
2019-09-04T06:28:24.858+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:24.858+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3196
2019-09-04T06:28:24.858+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3165
2019-09-04T06:28:24.858+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3165
2019-09-04T06:28:24.858+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3199
2019-09-04T06:28:24.858+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3199
2019-09-04T06:28:24.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:24.859+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:24.859+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 3202
2019-09-04T06:28:24.859+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578504, 1)
2019-09-04T06:28:24.859+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578504, 1)
2019-09-04T06:28:24.859+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 3202
2019-09-04T06:28:24.859+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:24.859+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:28:24.859+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3200
2019-09-04T06:28:24.859+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3200
2019-09-04T06:28:24.859+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3204
2019-09-04T06:28:24.859+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3204
2019-09-04T06:28:24.859+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578504, 1), t: 1 }({ ts: Timestamp(1567578504, 1), t: 1 })
2019-09-04T06:28:24.859+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578504, 1)
2019-09-04T06:28:24.859+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3205
2019-09-04T06:28:24.859+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578504, 1) } } ] } sort: {} projection: {}
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578504, 1) Sort: {} Proj: {} =============================
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578504, 1) || First: notFirst: full path: ts
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578504, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578504, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578504, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:24.859+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578504, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:24.859+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3205 2019-09-04T06:28:24.859+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:24.860+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:24.860+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578504, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578504855), o: { $v: 1, $set: { ping: new Date(1567578504852), up: 2405 } } }, oplog application mode: Secondary 2019-09-04T06:28:24.860+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578504, 1) 2019-09-04T06:28:24.860+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 3207 2019-09-04T06:28:24.860+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:28:24.860+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:28:24.860+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 3207 2019-09-04T06:28:24.860+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:24.860+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578504, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f598802d1a496712d719f'), operName: "", parentOperId: "5d6f598802d1a496712d719d" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578504, 1), signature: { hash: BinData(0, 7C315C214FF76DBE8DFFAA95062C50550D2AFFC8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578504, 1), t: 1 } }, $db: "config" } 2019-09-04T06:28:24.860+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f598802d1a496712d719d|5d6f598802d1a496712d719f 2019-09-04T06:28:24.860+0000 D1 REPL [conn21] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1567578504, 1), t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578499, 1), t: 1 } 2019-09-04T06:28:24.860+0000 D3 REPL [conn21] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:54.870+0000 2019-09-04T06:28:24.860+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578504, 1), t: 1 }({ ts: Timestamp(1567578504, 1), t: 1 }) 2019-09-04T06:28:24.860+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578504, 1) 2019-09-04T06:28:24.860+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3206 2019-09-04T06:28:24.860+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.860+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:28:24.860+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:24.860+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:28:24.860+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:24.860+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:24.860+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:24.860+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3206 2019-09-04T06:28:24.860+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578504, 1) 2019-09-04T06:28:24.860+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578504, 1), t: 1 } 2019-09-04T06:28:24.860+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 202 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:34.860+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578499, 1), t: 1 } } 2019-09-04T06:28:24.860+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:24.860+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578504, 1), t: 1 }, appliedWallTime: new Date(1567578504855), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:24.860+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:54.860+0000 2019-09-04T06:28:24.860+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 203 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:54.860+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: 
Timestamp(1567578504, 1), t: 1 }, appliedWallTime: new Date(1567578504855), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578499, 1), t: 1 }, lastCommittedWall: new Date(1567578499137), lastOpVisible: { ts: Timestamp(1567578499, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:24.860+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:54.860+0000 2019-09-04T06:28:24.860+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3211 2019-09-04T06:28:24.860+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3211 2019-09-04T06:28:24.860+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578504, 1), t: 1 }({ ts: Timestamp(1567578504, 1), t: 1 }) 2019-09-04T06:28:24.860+0000 D2 ASIO [RS] Request 202 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578504, 1), t: 1 }, lastCommittedWall: new Date(1567578504855), lastOpVisible: { ts: Timestamp(1567578504, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578504, 1), t: 1 }, lastCommittedWall: new Date(1567578504855), lastOpApplied: { ts: Timestamp(1567578504, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578504, 1), $clusterTime: { clusterTime: Timestamp(1567578504, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578504, 1) } 2019-09-04T06:28:24.860+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578504, 1), t: 1 }, lastCommittedWall: new Date(1567578504855), lastOpVisible: { ts: Timestamp(1567578504, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578504, 1), t: 1 }, lastCommittedWall: new Date(1567578504855), lastOpApplied: { ts: Timestamp(1567578504, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578504, 1), $clusterTime: { clusterTime: Timestamp(1567578504, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578504, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:24.860+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:24.860+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:24.860+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.860+0000 D2 ASIO [RS] 
Request 203 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578504, 1), t: 1 }, lastCommittedWall: new Date(1567578504855), lastOpVisible: { ts: Timestamp(1567578504, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578504, 1), $clusterTime: { clusterTime: Timestamp(1567578504, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578504, 1) } 2019-09-04T06:28:24.860+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578504, 1), t: 1 }, lastCommittedWall: new Date(1567578504855), lastOpVisible: { ts: Timestamp(1567578504, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578504, 1), $clusterTime: { clusterTime: Timestamp(1567578504, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578504, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:24.860+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:24.861+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:54.860+0000 2019-09-04T06:28:24.860+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578499, 1) 2019-09-04T06:28:24.861+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:36.307+0000 2019-09-04T06:28:24.861+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:34.909+0000 2019-09-04T06:28:24.861+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 204 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:34.861+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578504, 1), t: 1 } } 2019-09-04T06:28:24.861+0000 D3 REPL [conn112] Got notified of new snapshot: { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn112] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.921+0000 2019-09-04T06:28:24.861+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:54.860+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn115] Got notified of new snapshot: { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn115] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.292+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn124] Got notified of new snapshot: { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn124] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn103] Got notified of 
new snapshot: { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn103] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.919+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn107] Got notified of new snapshot: { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn107] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.871+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn119] Got notified of new snapshot: { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn119] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.033+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn108] Got notified of new snapshot: { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn108] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.879+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn102] Got notified of new snapshot: { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn102] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:29.874+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn110] Got notified of new snapshot: { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn110] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.881+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn123] Got notified of new snapshot: { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn123] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn109] Got notified of new snapshot: { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn109] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.877+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn130] Got notified of new snapshot: { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn130] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:51.753+0000 2019-09-04T06:28:24.861+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:24.861+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:54.836+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn111] Got notified of new snapshot: { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn111] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.896+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn120] Got notified of new snapshot: { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn120] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.040+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn117] Got notified of new snapshot: { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn117] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:54.151+0000 2019-09-04T06:28:24.861+0000 D3 REPL [conn21] Got notified of new snapshot: { ts: Timestamp(1567578504, 1), t: 1 }, 
2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578504, 1), t: 1 } } } 2019-09-04T06:28:24.861+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:28:24.861+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578504, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f598802d1a496712d719f'), operName: "", parentOperId: "5d6f598802d1a496712d719d" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578504, 1), signature: { hash: BinData(0, 7C315C214FF76DBE8DFFAA95062C50550D2AFFC8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578504, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578504, 1) 2019-09-04T06:28:24.861+0000 D2 QUERY [conn21] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:28:24.861+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578504, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f598802d1a496712d719f'), operName: "", parentOperId: "5d6f598802d1a496712d719d" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578504, 1), signature: { hash: BinData(0, 7C315C214FF76DBE8DFFAA95062C50550D2AFFC8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578504, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 1ms 2019-09-04T06:28:24.861+0000 D3 REPL [conn114] Got notified of new snapshot: { ts: Timestamp(1567578504, 1), t: 1 }, 2019-09-04T06:28:24.855+0000 2019-09-04T06:28:24.861+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:28:24.861+0000 D3 REPL [conn114] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.488+0000 2019-09-04T06:28:24.861+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:24.861+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578504, 1), t: 1 }, durableWallTime: new Date(1567578504855), appliedOpTime: { ts: Timestamp(1567578504, 1), t: 1 }, appliedWallTime: new Date(1567578504855), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 2, cfgver: 2 } ], $replData: { term: 1, 
lastOpCommitted: { ts: Timestamp(1567578504, 1), t: 1 }, lastCommittedWall: new Date(1567578504855), lastOpVisible: { ts: Timestamp(1567578504, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:24.861+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 205 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:54.861+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578504, 1), t: 1 }, durableWallTime: new Date(1567578504855), appliedOpTime: { ts: Timestamp(1567578504, 1), t: 1 }, appliedWallTime: new Date(1567578504855), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578504, 1), t: 1 }, lastCommittedWall: new Date(1567578504855), lastOpVisible: { ts: Timestamp(1567578504, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:24.861+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:54.860+0000 2019-09-04T06:28:24.862+0000 D2 ASIO [RS] Request 205 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578504, 1), t: 1 }, lastCommittedWall: new Date(1567578504855), lastOpVisible: { ts: Timestamp(1567578504, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578504, 1), $clusterTime: { clusterTime: Timestamp(1567578504, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578504, 1) } 2019-09-04T06:28:24.862+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578504, 1), t: 1 }, lastCommittedWall: new Date(1567578504855), lastOpVisible: { ts: Timestamp(1567578504, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578504, 1), $clusterTime: { clusterTime: Timestamp(1567578504, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578504, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:24.862+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:24.862+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:54.860+0000 2019-09-04T06:28:24.958+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:24.959+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578504, 1) 2019-09-04T06:28:24.976+0000 D2 COMMAND [conn29] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:24.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:24.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:25.023+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.023+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.049+0000 D2 COMMAND [conn104] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578501, 1), signature: { hash: BinData(0, 457E83C320927CAB801EC3E140A2B78C5168E291), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:25.049+0000 D1 REPL [conn104] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578504, 1), t: 1 } 2019-09-04T06:28:25.049+0000 D3 REPL [conn104] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:55.059+0000 2019-09-04T06:28:25.058+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:25.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578504, 1), signature: { hash: BinData(0, 7C315C214FF76DBE8DFFAA95062C50550D2AFFC8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:25.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:25.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578504, 1), signature: { hash: BinData(0, 7C315C214FF76DBE8DFFAA95062C50550D2AFFC8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:25.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578504, 1), signature: { hash: BinData(0, 7C315C214FF76DBE8DFFAA95062C50550D2AFFC8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:25.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, 
primaryId: 0, durableOpTime: { ts: Timestamp(1567578504, 1), t: 1 }, durableWallTime: new Date(1567578504855), opTime: { ts: Timestamp(1567578504, 1), t: 1 }, wallTime: new Date(1567578504855) } 2019-09-04T06:28:25.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578504, 1), signature: { hash: BinData(0, 7C315C214FF76DBE8DFFAA95062C50550D2AFFC8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.123+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.123+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.132+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.132+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.158+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:25.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:25.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:25.258+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:25.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.358+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:25.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.458+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:25.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.523+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.523+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.559+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:25.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.588+0000 D2 ASIO [RS] Request 204 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578505, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: 
"cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578505582), o: { $v: 1, $set: { ping: new Date(1567578505581) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578504, 1), t: 1 }, lastCommittedWall: new Date(1567578504855), lastOpVisible: { ts: Timestamp(1567578504, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578504, 1), t: 1 }, lastCommittedWall: new Date(1567578504855), lastOpApplied: { ts: Timestamp(1567578505, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578504, 1), $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } 2019-09-04T06:28:25.588+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578505, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578505582), o: { $v: 1, $set: { ping: new Date(1567578505581) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578504, 1), t: 1 }, lastCommittedWall: new Date(1567578504855), lastOpVisible: { ts: Timestamp(1567578504, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578504, 1), t: 1 }, lastCommittedWall: new Date(1567578504855), lastOpApplied: { ts: Timestamp(1567578505, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578504, 1), $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:25.588+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:25.588+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578505, 1) and ending at ts: Timestamp(1567578505, 1) 2019-09-04T06:28:25.588+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:34.909+0000 2019-09-04T06:28:25.588+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:36.665+0000 2019-09-04T06:28:25.588+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:25.588+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578505, 1), t: 1 } 2019-09-04T06:28:25.589+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:25.589+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:25.589+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:54.836+0000 2019-09-04T06:28:25.589+0000 D3 STORAGE 
[ReplBatcher] begin_transaction on local snapshot Timestamp(1567578504, 1) 2019-09-04T06:28:25.589+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3241 2019-09-04T06:28:25.589+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:25.589+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:25.589+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3241 2019-09-04T06:28:25.589+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:28:25.589+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:25.589+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:25.589+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578504, 1) 2019-09-04T06:28:25.589+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578505, 1) } 2019-09-04T06:28:25.589+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3244 2019-09-04T06:28:25.589+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:25.589+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:25.589+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3213 2019-09-04T06:28:25.589+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3244 2019-09-04T06:28:25.589+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3213 2019-09-04T06:28:25.589+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3247 2019-09-04T06:28:25.589+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3247 2019-09-04T06:28:25.589+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:25.589+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 3249 2019-09-04T06:28:25.589+0000 D4 STORAGE [repl-writer-worker-12] inserting record with timestamp Timestamp(1567578505, 1) 2019-09-04T06:28:25.589+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578505, 1) 2019-09-04T06:28:25.589+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 3249 2019-09-04T06:28:25.589+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:25.589+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:28:25.589+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3248 2019-09-04T06:28:25.589+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3248 
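
[Editor's note: the rsSync-0 / ReplBatcher / repl-writer-worker entries above trace one secondary batch end to end: fetch the op from the sync source, set the oplog truncate-after point, write and apply the entry, reset the truncate point, then advance minvalid and appliedThrough. A minimal sketch of how to observe the same state from outside the server (Python with pymongo, assumed installed; host and port are taken from this log but stand in for any member, and credentials/connection details should be adjusted for your deployment):

    from pymongo import MongoClient

    # Connect directly to one replica-set member.
    client = MongoClient("cmodb803.togewa.com", 27019)

    # Newest applied oplog entry -- the counterpart of the
    # Timestamp(1567578505, 1) update being written and applied above.
    # Reading local.oplog.rs may require elevated privileges when auth is on.
    last_op = client.local["oplog.rs"].find_one(sort=[("$natural", -1)])
    print(last_op["ts"], last_op["op"], last_op["ns"])

    # replSetGetStatus exposes per-member applied/durable optimes, i.e. the
    # values that the replSetUpdatePosition traffic in this log propagates
    # upstream to the primary.
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        print(member["name"], member["stateStr"], member["optime"]["ts"])

End note.]
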
2019-09-04T06:28:25.589+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3251 2019-09-04T06:28:25.589+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3251 2019-09-04T06:28:25.589+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578505, 1), t: 1 }({ ts: Timestamp(1567578505, 1), t: 1 }) 2019-09-04T06:28:25.589+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578505, 1) 2019-09-04T06:28:25.589+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3252 2019-09-04T06:28:25.589+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578505, 1) } } ] } sort: {} projection: {} 2019-09-04T06:28:25.589+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:25.589+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:28:25.589+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578505, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:28:25.589+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:25.589+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:25.589+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:25.589+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578505, 1) || First: notFirst: full path: ts 2019-09-04T06:28:25.589+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:25.589+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578505, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:25.589+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:25.589+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:28:25.589+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:25.589+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:25.589+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:25.590+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:25.590+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:25.590+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:25.590+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:25.590+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578505, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:25.590+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:25.590+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:25.590+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:25.590+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578505, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:25.590+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:25.590+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578505, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:25.590+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3252 2019-09-04T06:28:25.590+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:25.590+0000 D3 STORAGE [repl-writer-worker-10] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:25.590+0000 D3 REPL [repl-writer-worker-10] applying op: { ts: Timestamp(1567578505, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578505582), o: { $v: 1, $set: { ping: new Date(1567578505581) } } }, oplog application mode: Secondary 2019-09-04T06:28:25.590+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578505, 1) 2019-09-04T06:28:25.590+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 3254 2019-09-04T06:28:25.590+0000 D2 QUERY [repl-writer-worker-10] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" } 2019-09-04T06:28:25.590+0000 D4 WRITE [repl-writer-worker-10] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:28:25.590+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 3254 2019-09-04T06:28:25.590+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:25.590+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578505, 1), t: 1 }({ ts: Timestamp(1567578505, 1), t: 1 }) 2019-09-04T06:28:25.590+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578505, 1) 2019-09-04T06:28:25.590+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3253 2019-09-04T06:28:25.590+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:28:25.590+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:25.590+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:28:25.590+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:25.590+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:25.590+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:25.590+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3253 2019-09-04T06:28:25.590+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578505, 1) 2019-09-04T06:28:25.590+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:25.590+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3257 2019-09-04T06:28:25.590+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578504, 1), t: 1 }, durableWallTime: new Date(1567578504855), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578504, 1), t: 1 }, lastCommittedWall: new Date(1567578504855), lastOpVisible: { ts: Timestamp(1567578504, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:25.590+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3257 2019-09-04T06:28:25.590+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578505, 1), t: 1 }({ ts: Timestamp(1567578505, 1), t: 1 }) 2019-09-04T06:28:25.590+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 206 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:55.590+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578504, 1), t: 1 }, durableWallTime: new Date(1567578504855), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 2, cfgver: 2 } ], 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578504, 1), t: 1 }, lastCommittedWall: new Date(1567578504855), lastOpVisible: { ts: Timestamp(1567578504, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:25.590+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:55.590+0000 2019-09-04T06:28:25.591+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578505, 1), t: 1 } 2019-09-04T06:28:25.591+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 207 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:35.591+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578504, 1), t: 1 } } 2019-09-04T06:28:25.591+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:55.590+0000 2019-09-04T06:28:25.591+0000 D2 ASIO [RS] Request 206 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } 2019-09-04T06:28:25.591+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:25.591+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:25.591+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:55.590+0000 2019-09-04T06:28:25.591+0000 D2 ASIO [RS] Request 207 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpApplied: { ts: Timestamp(1567578505, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } 2019-09-04T06:28:25.591+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpApplied: { ts: Timestamp(1567578505, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:25.591+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:25.591+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:25.591+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.591+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.591+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578500, 1) 2019-09-04T06:28:25.591+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:36.665+0000 2019-09-04T06:28:25.591+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:35.773+0000 2019-09-04T06:28:25.591+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:25.591+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:54.836+0000 2019-09-04T06:28:25.591+0000 D3 REPL [conn103] Got notified of new snapshot: { ts: Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.591+0000 D3 REPL [conn103] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.919+0000 2019-09-04T06:28:25.591+0000 D3 REPL [conn108] Got notified of new snapshot: { ts: Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.591+0000 D3 REPL [conn108] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.879+0000 2019-09-04T06:28:25.591+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 208 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:35.591+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578505, 1), t: 1 } } 2019-09-04T06:28:25.591+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:55.590+0000 2019-09-04T06:28:25.591+0000 D3 REPL [conn119] Got notified of new snapshot: { ts: 
Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn119] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.033+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn102] Got notified of new snapshot: { ts: Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn102] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:29.874+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn110] Got notified of new snapshot: { ts: Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn110] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.881+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn123] Got notified of new snapshot: { ts: Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn123] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn109] Got notified of new snapshot: { ts: Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn109] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.877+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn130] Got notified of new snapshot: { ts: Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn130] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:51.753+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn120] Got notified of new snapshot: { ts: Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn120] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.040+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn124] Got notified of new snapshot: { ts: Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn124] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn112] Got notified of new snapshot: { ts: Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn112] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.921+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn115] Got notified of new snapshot: { ts: Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn115] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.292+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn107] Got notified of new snapshot: { ts: Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn107] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.871+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn111] Got notified of new snapshot: { ts: Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn111] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.896+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn117] Got notified of new snapshot: { ts: Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn117] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:54.151+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn114] Got notified of new snapshot: { ts: 
Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn114] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.488+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn104] Got notified of new snapshot: { ts: Timestamp(1567578505, 1), t: 1 }, 2019-09-04T06:28:25.582+0000 2019-09-04T06:28:25.592+0000 D3 REPL [conn104] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:55.059+0000 2019-09-04T06:28:25.594+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:28:25.594+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:25.594+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:25.594+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 209 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:55.594+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, durableWallTime: new Date(1567578499137), appliedOpTime: { ts: Timestamp(1567578499, 1), t: 1 }, appliedWallTime: new Date(1567578499137), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:25.594+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:55.590+0000 2019-09-04T06:28:25.594+0000 D2 ASIO [RS] Request 209 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } 2019-09-04T06:28:25.594+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:25.594+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:25.594+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:55.590+0000 2019-09-04T06:28:25.623+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.623+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.632+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.632+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.659+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:25.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.689+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578505, 1) 2019-09-04T06:28:25.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.759+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:25.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 
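
[Editor's note: the recurring run command admin.$cmd { isMaster: 1, $db: "admin" } pairs (one D2 COMMAND plus one I COMMAND line per call, reslen:907, 0ms) are routine topology monitoring: each connected client re-polls isMaster on roughly a 500 ms cadence. A hedged one-liner in the same vein (Python with pymongo, assumed installed; on a 4.2-era server the command is still spelled isMaster, later renamed hello):

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    # Same request the log records as { isMaster: 1, $db: "admin" }.
    reply = client.admin.command("isMaster")
    # On a secondary member of this set, expect roughly:
    #   ismaster=False, secondary=True, setName='configrs'
    print(reply["ismaster"], reply.get("secondary"), reply.get("setName"))

End note.]
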
2019-09-04T06:28:25.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.859+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:25.959+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:25.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:25.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:25.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:26.059+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:26.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.159+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:26.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 
135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:26.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:26.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:26.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:26.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), opTime: { ts: Timestamp(1567578505, 1), t: 1 }, wallTime: new Date(1567578505582) } 2019-09-04T06:28:26.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:26.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:26.260+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:26.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.293+0000 I NETWORK [listener] connection accepted from 10.108.2.60:44776 #131 (75 connections now open) 2019-09-04T06:28:26.293+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:26.293+0000 D2 COMMAND [conn131] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:26.293+0000 I NETWORK [conn131] received client metadata from 10.108.2.60:44776 conn131: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:26.293+0000 I COMMAND [conn131] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:26.297+0000 D2 COMMAND [conn131] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578503, 1), signature: { hash: BinData(0, F866E98FA10478836B415F12129881EA7AA32552), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:26.297+0000 D1 REPL [conn131] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578505, 1), t: 1 } 2019-09-04T06:28:26.297+0000 D3 REPL [conn131] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:56.307+0000 2019-09-04T06:28:26.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:28:26.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.360+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:26.460+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:26.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.560+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:26.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.589+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:26.589+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:26.589+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578505, 1) 2019-09-04T06:28:26.589+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3294 2019-09-04T06:28:26.589+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:26.589+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:26.589+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3294 2019-09-04T06:28:26.590+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3297 2019-09-04T06:28:26.590+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3297 2019-09-04T06:28:26.590+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578505, 1), t: 1 }({ ts: 
Timestamp(1567578505, 1), t: 1 }) 2019-09-04T06:28:26.602+0000 D2 COMMAND [conn49] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578505, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578505, 1), t: 1 } }, $db: "config" } 2019-09-04T06:28:26.602+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578505, 1), t: 1 } } } 2019-09-04T06:28:26.602+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:28:26.602+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578505, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578505, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578505, 1) 2019-09-04T06:28:26.602+0000 D2 QUERY [conn49] Collection config.settings does not exist. Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:28:26.602+0000 I COMMAND [conn49] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578505, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578505, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:28:26.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.660+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:26.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.694+0000 D2 COMMAND [conn49] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578505, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, 
$configServerState: { opTime: { ts: Timestamp(1567578505, 1), t: 1 } }, $db: "config" } 2019-09-04T06:28:26.694+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578505, 1), t: 1 } } } 2019-09-04T06:28:26.694+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:28:26.694+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578505, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578505, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578505, 1) 2019-09-04T06:28:26.694+0000 D5 QUERY [conn49] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:28:26.694+0000 D5 QUERY [conn49] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:28:26.694+0000 D5 QUERY [conn49] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:28:26.694+0000 D5 QUERY [conn49] Rated tree: $and 2019-09-04T06:28:26.694+0000 D5 QUERY [conn49] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:26.694+0000 D5 QUERY [conn49] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:26.694+0000 D2 QUERY [conn49] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:26.694+0000 D3 STORAGE [conn49] WT begin_transaction for snapshot id 3302 2019-09-04T06:28:26.694+0000 D3 STORAGE [conn49] WT rollback_transaction for snapshot id 3302 2019-09-04T06:28:26.694+0000 I COMMAND [conn49] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578505, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578505, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:28:26.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.760+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:26.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:26.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 210) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:26.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 210 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:36.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:26.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:54.836+0000 2019-09-04T06:28:26.836+0000 D2 ASIO [Replication] Request 210 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, 
durableWallTime: new Date(1567578505582), opTime: { ts: Timestamp(1567578505, 1), t: 1 }, wallTime: new Date(1567578505582), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } 2019-09-04T06:28:26.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), opTime: { ts: Timestamp(1567578505, 1), t: 1 }, wallTime: new Date(1567578505582), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:26.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:26.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 210) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), opTime: { ts: Timestamp(1567578505, 1), t: 1 }, wallTime: new Date(1567578505582), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } 2019-09-04T06:28:26.836+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:26.836+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:28:35.773+0000 2019-09-04T06:28:26.836+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:28:37.178+0000 2019-09-04T06:28:26.836+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:26.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:28.836Z 2019-09-04T06:28:26.836+0000 D3 
EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:26.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:56.836+0000 2019-09-04T06:28:26.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:56.836+0000 2019-09-04T06:28:26.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:26.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 211) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:26.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 211 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:36.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:26.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:56.836+0000 2019-09-04T06:28:26.837+0000 D2 ASIO [Replication] Request 211 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), opTime: { ts: Timestamp(1567578505, 1), t: 1 }, wallTime: new Date(1567578505582), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } 2019-09-04T06:28:26.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), opTime: { ts: Timestamp(1567578505, 1), t: 1 }, wallTime: new Date(1567578505582), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:26.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:26.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 211) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new 
Date(1567578505582), opTime: { ts: Timestamp(1567578505, 1), t: 1 }, wallTime: new Date(1567578505582), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } 2019-09-04T06:28:26.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:26.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:28.837Z 2019-09-04T06:28:26.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:56.836+0000 2019-09-04T06:28:26.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.860+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:26.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.960+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:26.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.989+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:26.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:26.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:27.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.061+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:27.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: 
Timestamp(1567578505, 1), signature: { hash: BinData(0, 135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:27.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:27.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:27.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:27.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), opTime: { ts: Timestamp(1567578505, 1), t: 1 }, wallTime: new Date(1567578505582) } 2019-09-04T06:28:27.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.161+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:27.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:27.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:27.261+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:27.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.361+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:27.461+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:27.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.556+0000 D2 COMMAND [conn106] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578501, 1), signature: { hash: BinData(0, 457E83C320927CAB801EC3E140A2B78C5168E291), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:27.556+0000 D1 REPL [conn106] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578505, 1), t: 1 } 2019-09-04T06:28:27.556+0000 D3 REPL [conn106] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:57.566+0000 2019-09-04T06:28:27.561+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:28:27.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.589+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:27.589+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:27.589+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578505, 1) 2019-09-04T06:28:27.589+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3337 2019-09-04T06:28:27.589+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:27.589+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:27.589+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3337 2019-09-04T06:28:27.591+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3340 2019-09-04T06:28:27.591+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3340 2019-09-04T06:28:27.591+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578505, 1), t: 1 }({ ts: Timestamp(1567578505, 1), t: 1 }) 2019-09-04T06:28:27.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.661+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:27.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.761+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:27.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.823+0000 D2 COMMAND 
[conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.861+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:27.962+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:27.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:27.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:27.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:28.020+0000 D2 COMMAND [conn50] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.020+0000 I COMMAND [conn50] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.020+0000 D2 COMMAND [conn50] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578480, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578507, 1), signature: { hash: BinData(0, 371CD3B99D7842B7B2E9D8FA464345E95561D104), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578480, 1), t: 1 } }, $db: "config" } 2019-09-04T06:28:28.020+0000 D1 COMMAND [conn50] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578480, 1), t: 1 } } } 2019-09-04T06:28:28.020+0000 D3 STORAGE [conn50] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:28:28.020+0000 D1 COMMAND [conn50] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578480, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578507, 1), signature: { hash: BinData(0, 371CD3B99D7842B7B2E9D8FA464345E95561D104), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578480, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578505, 1) 2019-09-04T06:28:28.020+0000 D5 QUERY [conn50] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shards Tree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:28:28.020+0000 D5 QUERY [conn50] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:28:28.020+0000 D5 QUERY [conn50] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:28:28.020+0000 D5 QUERY [conn50] Rated tree: $and 2019-09-04T06:28:28.020+0000 D5 QUERY [conn50] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:28.020+0000 D5 QUERY [conn50] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:28.020+0000 D2 QUERY [conn50] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:28.020+0000 D3 STORAGE [conn50] WT begin_transaction for snapshot id 3354 2019-09-04T06:28:28.020+0000 D3 STORAGE [conn50] WT rollback_transaction for snapshot id 3354 2019-09-04T06:28:28.020+0000 I COMMAND [conn50] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578480, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578507, 1), signature: { hash: BinData(0, 371CD3B99D7842B7B2E9D8FA464345E95561D104), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578480, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:28:28.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.062+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:28.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.162+0000 D4 STORAGE [WTJournalFlusher] flushed
journal 2019-09-04T06:28:28.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:28.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:28.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:28.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:28.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), opTime: { ts: Timestamp(1567578505, 1), t: 1 }, wallTime: new Date(1567578505582) } 2019-09-04T06:28:28.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, 135D1A51FC1539B8C3660DCF530D7F785796EFAC), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:28.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:28.262+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:28.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.362+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:28.462+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:28.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.495+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.563+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:28.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.590+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:28.590+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:28.590+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578505, 1) 2019-09-04T06:28:28.590+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3375 2019-09-04T06:28:28.590+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: 
"local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:28.590+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:28.590+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3375 2019-09-04T06:28:28.591+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3378 2019-09-04T06:28:28.591+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3378 2019-09-04T06:28:28.591+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578505, 1), t: 1 }({ ts: Timestamp(1567578505, 1), t: 1 }) 2019-09-04T06:28:28.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.663+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:28.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.742+0000 D2 COMMAND [conn121] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, EE8BDA2660D95BC27BD45C346F1798888A2141A4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:28.742+0000 D1 REPL [conn121] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578505, 1), t: 1 } 2019-09-04T06:28:28.742+0000 D3 REPL [conn121] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:58.752+0000 2019-09-04T06:28:28.763+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:28.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.823+0000 D2 
COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:28.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 212) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:28.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 212 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:38.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:28.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:56.836+0000 2019-09-04T06:28:28.836+0000 D2 ASIO [Replication] Request 212 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), opTime: { ts: Timestamp(1567578505, 1), t: 1 }, wallTime: new Date(1567578505582), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578507, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } 2019-09-04T06:28:28.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), opTime: { ts: Timestamp(1567578505, 1), t: 1 }, wallTime: new Date(1567578505582), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578507, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:28.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:28.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 212) from cmodb802.togewa.com:27019, { 
ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), opTime: { ts: Timestamp(1567578505, 1), t: 1 }, wallTime: new Date(1567578505582), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578507, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } 2019-09-04T06:28:28.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:28.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:28:37.178+0000 2019-09-04T06:28:28.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:28:39.177+0000 2019-09-04T06:28:28.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:28.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:58.836+0000 2019-09-04T06:28:28.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:28.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:30.836Z 2019-09-04T06:28:28.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:58.836+0000 2019-09-04T06:28:28.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:28.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 213) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:28.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 213 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:38.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:28.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:58.836+0000 2019-09-04T06:28:28.837+0000 D2 ASIO [Replication] Request 213 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), opTime: { ts: Timestamp(1567578505, 1), t: 1 }, wallTime: new Date(1567578505582), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578507, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } 2019-09-04T06:28:28.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), opTime: { ts: Timestamp(1567578505, 1), t: 1 }, wallTime: new Date(1567578505582), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578507, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:28.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:28.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 213) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), opTime: { ts: Timestamp(1567578505, 1), t: 1 }, wallTime: new Date(1567578505582), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578507, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578505, 1) } 2019-09-04T06:28:28.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:28.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:30.837Z 2019-09-04T06:28:28.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:58.836+0000 2019-09-04T06:28:28.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.863+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:28.963+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:28.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:28.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:28.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:29.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578507, 1), signature: { hash: BinData(0, 371CD3B99D7842B7B2E9D8FA464345E95561D104), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:29.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:29.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578507, 1), signature: { hash: BinData(0, 371CD3B99D7842B7B2E9D8FA464345E95561D104), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:29.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578507, 1), signature: { hash: BinData(0, 371CD3B99D7842B7B2E9D8FA464345E95561D104), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:29.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), opTime: { ts: Timestamp(1567578505, 1), t: 1 }, wallTime: new Date(1567578505582) } 2019-09-04T06:28:29.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578507, 1), signature: { hash: BinData(0, 371CD3B99D7842B7B2E9D8FA464345E95561D104), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.063+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:29.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.160+0000 
I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.163+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:29.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:29.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:29.264+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:29.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.364+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:29.464+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:29.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.495+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.564+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:29.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.590+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:29.590+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:29.590+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578505, 1) 2019-09-04T06:28:29.590+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3414 2019-09-04T06:28:29.590+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:29.590+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:29.590+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3414 2019-09-04T06:28:29.591+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3417 2019-09-04T06:28:29.591+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3417 2019-09-04T06:28:29.591+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578505, 1), t: 1 }({ ts: Timestamp(1567578505, 1), t: 1 }) 2019-09-04T06:28:29.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.664+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:29.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.764+0000 
D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:29.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.864+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:29.878+0000 I COMMAND [conn102] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578472, 1), signature: { hash: BinData(0, 5F7DD3BA379B8A9729407EBC61070D727C8E5B57), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:29.878+0000 D1 - [conn102] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:29.878+0000 W - [conn102] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:29.895+0000 I - [conn102] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:29.895+0000 D1 COMMAND [conn102] assertion while executing command 'find' on database 'config' with arguments '{ find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578472, 1), signature: { hash: BinData(0, 5F7DD3BA379B8A9729407EBC61070D727C8E5B57), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:29.895+0000 D1 - [conn102] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:29.895+0000 W - [conn102] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:29.916+0000 I - [conn102] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 
0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, 
"buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : 
"2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:29.916+0000 W COMMAND [conn102] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:29.917+0000 I COMMAND [conn102] command config.$cmd command: find { find: "collections", filter: { _id: 
"config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578472, 1), signature: { hash: BinData(0, 5F7DD3BA379B8A9729407EBC61070D727C8E5B57), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:28:29.917+0000 D2 NETWORK [conn102] Session from 10.108.2.72:45636 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:29.917+0000 I NETWORK [conn102] end connection 10.108.2.72:45636 (74 connections now open) 2019-09-04T06:28:29.965+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:29.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:29.995+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:29.995+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:30.004+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:30.004+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:28:30.004+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:28:30.013+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:30.013+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:28:30.015+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:30.015+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:28:30.015+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:28:30.015+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:28:30.016+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:30.016+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: 
{ w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:28:30.018+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:28:30.018+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:28:30.018+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:28:30.018+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:28:30.018+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:30.018+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunksTree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:30.018+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:28:30.018+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:28:30.018+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:28:30.018+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:28:30.018+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:28:30.018+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:28:30.018+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:30.018+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:30.018+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:28:30.018+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578505, 1)
2019-09-04T06:28:30.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3436
2019-09-04T06:28:30.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3436
2019-09-04T06:28:30.018+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:28:30.031+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:28:30.031+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:28:30.031+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:28:30.031+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:30.031+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:28:30.031+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:28:30.031+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:28:30.031+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578505, 1)
2019-09-04T06:28:30.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3439
2019-09-04T06:28:30.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3439
2019-09-04T06:28:30.031+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:28:30.032+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:28:30.032+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:30.032+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:28:30.032+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:28:30.032+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:28:30.032+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578505, 1)
2019-09-04T06:28:30.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3441
2019-09-04T06:28:30.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3441
2019-09-04T06:28:30.032+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:28:30.032+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:28:30.032+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist.
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:28:30.032+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:28:30.032+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:30.032+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:28:30.032+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3444 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3444 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3445 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3445 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3446 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3446 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3447 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3447 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3448 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3448 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3449 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3449 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3450 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3450 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3451 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, 
prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3451 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3452 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3452 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3453 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3453 2019-09-04T06:28:30.033+0000 D3 
STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3454 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:30.033+0000 D3 STORAGE 
[conn90] WT rollback_transaction for snapshot id 3454 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3455 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3455 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3456 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:28:30.033+0000 
D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3456
2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3457
2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:28:30.033+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3457
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3458
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3458
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3459
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3459
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3460
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3460
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3461
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3461
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3462
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3462
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3463
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3463
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3464
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3464
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3465
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
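The D3 STORAGE entries above show the catalog lookups one client command triggers: for each collection touched, the server re-reads its durable catalog entry (namespace, UUID, index specs, and the WiredTiger idents backing the collection and its indexes) inside a short WT transaction that is then rolled back. A client can see the same catalog view through listCollections; the sketch below is illustrative only, assuming pymongo, direct access to one of the config-server members named in this log, and no authentication (all assumptions, adjust for the actual deployment):

    from pymongo import MongoClient

    # Hypothetical connection; host/port are one replica-set member from this
    # log, auth/TLS options are omitted on the assumption they are not needed.
    client = MongoClient("mongodb://cmodb804.togewa.com:27019/", directConnection=True)

    # listCollections surfaces the same fields logged in the md blobs above:
    # the collection name, its options (including the UUID), and, via
    # listIndexes, the index specs.
    for info in client["config"].list_collections():
        print(info["name"], info.get("info", {}).get("uuid"))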
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:28:30.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3465
2019-09-04T06:28:30.034+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:28:30.045+0000 D2 ASIO [RS] Request 208 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578510, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578510013), o: { $v: 1, $set: { ping: new Date(1567578510008) } } }, { ts: Timestamp(1567578510, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578510016), o: { $v: 1, $set: { ping: new Date(1567578510011) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpApplied: { ts: Timestamp(1567578510, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578510, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 2) }
2019-09-04T06:28:30.045+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578510, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578510013), o: { $v: 1, $set: { ping: new Date(1567578510008) } } }, { ts: Timestamp(1567578510, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578510016), o: { $v: 1, $set: { ping: new Date(1567578510011) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpApplied: { ts: Timestamp(1567578510, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578510, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:30.045+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:30.045+0000 D2 REPL [replication-0] oplog fetcher read 2 operations from remote oplog starting at ts: Timestamp(1567578510, 1) and ending at ts: Timestamp(1567578510, 2)
2019-09-04T06:28:30.045+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:39.177+0000
2019-09-04T06:28:30.045+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:41.329+0000
2019-09-04T06:28:30.045+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578510, 2), t: 1 }
2019-09-04T06:28:30.045+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:30.045+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:58.836+0000
2019-09-04T06:28:30.045+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:30.045+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:30.045+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578505, 1)
2019-09-04T06:28:30.045+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3468
2019-09-04T06:28:30.045+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:30.045+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:30.045+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3468
2019-09-04T06:28:30.045+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:30.045+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:30.045+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578505, 1)
2019-09-04T06:28:30.045+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3471
2019-09-04T06:28:30.045+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:30.045+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:30.045+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3471
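The I COMMAND entry above records a listDatabases issued with read preference secondaryPreferred (this member is applying oplog entries in Secondary mode), while the surrounding [RS] and [ReplBatcher] entries show the oplog fetcher pulling a 2-operation batch from the sync source cmodb804.togewa.com:27019. A minimal client-side sketch of the same listDatabases call, under the same assumptions as the earlier sketch (pymongo, direct connection, no auth):

    from pymongo import MongoClient
    from pymongo.read_preferences import SecondaryPreferred

    client = MongoClient("mongodb://cmodb804.togewa.com:27019/", directConnection=True)

    # Mirrors the logged command:
    # { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" } }
    result = client.admin.command("listDatabases", read_preference=SecondaryPreferred())
    for db in result["databases"]:
        print(db["name"], db["sizeOnDisk"])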
2019-09-04T06:28:30.045+0000 D2 REPL [rsSync-0] replication batch size is 2
2019-09-04T06:28:30.045+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578510, 1) }
2019-09-04T06:28:30.045+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3418
2019-09-04T06:28:30.045+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3418
2019-09-04T06:28:30.045+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3474
2019-09-04T06:28:30.045+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3474
2019-09-04T06:28:30.045+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:30.045+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 3476
2019-09-04T06:28:30.045+0000 D4 STORAGE [repl-writer-worker-8] inserting record with timestamp Timestamp(1567578510, 1)
2019-09-04T06:28:30.045+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578510, 1)
2019-09-04T06:28:30.045+0000 D4 STORAGE [repl-writer-worker-8] inserting record with timestamp Timestamp(1567578510, 2)
2019-09-04T06:28:30.045+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578510, 2)
2019-09-04T06:28:30.045+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 3476
2019-09-04T06:28:30.045+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:30.045+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:28:30.045+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3475
2019-09-04T06:28:30.045+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3475
2019-09-04T06:28:30.045+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3478
2019-09-04T06:28:30.045+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3478
2019-09-04T06:28:30.045+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578510, 2), t: 1 }({ ts: Timestamp(1567578510, 2), t: 1 })
2019-09-04T06:28:30.045+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578510, 2)
2019-09-04T06:28:30.045+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3479
2019-09-04T06:28:30.045+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578510, 2) } } ] } sort: {} projection: {}
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578510, 2) Sort: {} Proj: {} =============================
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578510, 2) || First: notFirst: full path: ts
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578510, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578510, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578510, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
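The D2/D5 QUERY entries above come from rsSync-0 updating local.replset.minvalid after applying the batch: the $or filter over { t, ts } has only the default _id index available, so the subplanner rates each branch, finds no indexed solution, and falls back to a COLLSCAN, which is harmless on a single-document collection. A sketch of inspecting that document directly for diagnostics, under the same connection assumptions as above (local.replset.minvalid is an internal replication collection; read it, don't write it):

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb804.togewa.com:27019/", directConnection=True)

    # One document holding the minvalid optime fields { ts, t }; any filter
    # on those fields can only collection-scan, as the planner output shows.
    print(client["local"]["replset.minvalid"].find_one())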
2019-09-04T06:28:30.045+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578510, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:30.046+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3479 2019-09-04T06:28:30.046+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:30.046+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:30.046+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578510, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578510013), o: { $v: 1, $set: { ping: new Date(1567578510008) } } }, oplog application mode: Secondary 2019-09-04T06:28:30.046+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578510, 1) 2019-09-04T06:28:30.046+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 3481 2019-09-04T06:28:30.046+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" } 2019-09-04T06:28:30.046+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:28:30.046+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 3481 2019-09-04T06:28:30.046+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:30.046+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:30.046+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578510, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578510016), o: { $v: 1, $set: { ping: new Date(1567578510011) } } }, oplog application mode: Secondary 2019-09-04T06:28:30.046+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578510, 2) 2019-09-04T06:28:30.046+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 3483 2019-09-04T06:28:30.046+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" } 2019-09-04T06:28:30.046+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:28:30.046+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 3483 2019-09-04T06:28:30.046+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:30.046+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:30.046+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578510, 2), t: 1 }({ ts: Timestamp(1567578510, 2), t: 1 }) 2019-09-04T06:28:30.046+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578510, 2) 2019-09-04T06:28:30.046+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot 
id 3480 2019-09-04T06:28:30.046+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:28:30.046+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:30.046+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:28:30.046+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:30.046+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:30.046+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:30.046+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3480 2019-09-04T06:28:30.046+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578510, 2) 2019-09-04T06:28:30.046+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3486 2019-09-04T06:28:30.046+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3486 2019-09-04T06:28:30.046+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578510, 2), t: 1 }({ ts: Timestamp(1567578510, 2), t: 1 }) 2019-09-04T06:28:30.046+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:30.046+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578510, 2), t: 1 }, appliedWallTime: new Date(1567578510016), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:30.046+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 214 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:00.046+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578510, 2), t: 1 }, appliedWallTime: new Date(1567578510016), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), 
t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:30.046+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:00.046+0000 2019-09-04T06:28:30.047+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578510, 2), t: 1 } 2019-09-04T06:28:30.047+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 215 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:40.047+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578505, 1), t: 1 } } 2019-09-04T06:28:30.047+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:00.046+0000 2019-09-04T06:28:30.048+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:28:30.049+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3489 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3489 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3490 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3490 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3491 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3491 2019-09-04T06:28:30.049+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:30.049+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3493 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3493 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3494 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3494 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3495 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3495 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3496 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3496 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3497 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3497 2019-09-04T06:28:30.049+0000 D3 STORAGE 
[conn90] WT begin_transaction for snapshot id 3498 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3498 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3499 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3499 2019-09-04T06:28:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3500 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3500 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3501 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3501 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3502 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3502 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3503 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3503 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3504 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3504 2019-09-04T06:28:30.050+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:30.050+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3506 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3506 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3507 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3507 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3508 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3508 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3509 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3509 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3510 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3510 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3511 2019-09-04T06:28:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3511 2019-09-04T06:28:30.050+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:30.051+0000 D2 ASIO [RS] 
Request 214 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578510, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 2) } 2019-09-04T06:28:30.051+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578510, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:30.051+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:30.051+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578510, 2), t: 1 }, durableWallTime: new Date(1567578510016), appliedOpTime: { ts: Timestamp(1567578510, 2), t: 1 }, appliedWallTime: new Date(1567578510016), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:30.051+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 216 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:00.051+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578510, 2), t: 1 }, durableWallTime: new Date(1567578510016), appliedOpTime: { ts: Timestamp(1567578510, 2), t: 1 }, appliedWallTime: new Date(1567578510016), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 2, cfgver: 2 } 
], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:30.051+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:00.046+0000 2019-09-04T06:28:30.051+0000 D2 ASIO [RS] Request 216 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578510, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 2) } 2019-09-04T06:28:30.051+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578505, 1), t: 1 }, lastCommittedWall: new Date(1567578505582), lastOpVisible: { ts: Timestamp(1567578505, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578505, 1), $clusterTime: { clusterTime: Timestamp(1567578510, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:30.051+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:30.051+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:00.046+0000 2019-09-04T06:28:30.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.058+0000 D2 ASIO [RS] Request 215 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 2), t: 1 }, lastCommittedWall: new Date(1567578510016), lastOpVisible: { ts: Timestamp(1567578510, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578510, 2), t: 1 }, lastCommittedWall: new Date(1567578510016), lastOpApplied: { ts: Timestamp(1567578510, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 2), $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 2) } 2019-09-04T06:28:30.058+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: 
"local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 2), t: 1 }, lastCommittedWall: new Date(1567578510016), lastOpVisible: { ts: Timestamp(1567578510, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578510, 2), t: 1 }, lastCommittedWall: new Date(1567578510016), lastOpApplied: { ts: Timestamp(1567578510, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 2), $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:30.058+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:30.058+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:30.058+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578505, 2) 2019-09-04T06:28:30.058+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:41.329+0000 2019-09-04T06:28:30.058+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:40.441+0000 2019-09-04T06:28:30.058+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 217 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:40.058+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578510, 2), t: 1 } } 2019-09-04T06:28:30.058+0000 D3 REPL [conn119] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:00.046+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn119] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.033+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn115] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn115] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.292+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn130] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn130] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:51.753+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn121] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn121] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:58.752+0000 2019-09-04T06:28:30.058+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:30.058+0000 D3 EXECUTOR [replexec-0] Not reaping because the 
earliest retirement date is 2019-09-04T06:28:58.836+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn124] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn124] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn107] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn107] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.871+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn106] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn106] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:57.566+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn103] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn103] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.919+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn108] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn108] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.879+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn110] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn110] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.881+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn109] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn109] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.877+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn112] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn112] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.921+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn123] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn123] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn111] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn111] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.896+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn117] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn117] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:54.151+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn114] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn114] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.488+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn104] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn104] waitUntilOpTime: waiting for a new 
snapshot until 2019-09-04T06:28:55.059+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn131] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn131] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:56.307+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn120] Got notified of new snapshot: { ts: Timestamp(1567578510, 2), t: 1 }, 2019-09-04T06:28:30.016+0000 2019-09-04T06:28:30.058+0000 D3 REPL [conn120] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.040+0000 2019-09-04T06:28:30.061+0000 D2 ASIO [RS] Request 217 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578510, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578510057), o: { $v: 1, $set: { ping: new Date(1567578510056) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 2), t: 1 }, lastCommittedWall: new Date(1567578510016), lastOpVisible: { ts: Timestamp(1567578510, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578510, 2), t: 1 }, lastCommittedWall: new Date(1567578510016), lastOpApplied: { ts: Timestamp(1567578510, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 2), $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } 2019-09-04T06:28:30.061+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578510, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578510057), o: { $v: 1, $set: { ping: new Date(1567578510056) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 2), t: 1 }, lastCommittedWall: new Date(1567578510016), lastOpVisible: { ts: Timestamp(1567578510, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578510, 2), t: 1 }, lastCommittedWall: new Date(1567578510016), lastOpApplied: { ts: Timestamp(1567578510, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 2), $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:30.061+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:30.061+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578510, 3) and ending at ts: Timestamp(1567578510, 3) 2019-09-04T06:28:30.061+0000 D4 REPL 
[replication-0] Canceling election timeout callback at 2019-09-04T06:28:40.441+0000 2019-09-04T06:28:30.061+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:40.205+0000 2019-09-04T06:28:30.061+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578510, 3), t: 1 } 2019-09-04T06:28:30.061+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:30.061+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:58.836+0000 2019-09-04T06:28:30.061+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:30.061+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:30.061+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578510, 2) 2019-09-04T06:28:30.061+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3515 2019-09-04T06:28:30.061+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:30.061+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:30.061+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3515 2019-09-04T06:28:30.061+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:30.061+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:30.061+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578510, 2) 2019-09-04T06:28:30.061+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3518 2019-09-04T06:28:30.061+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:30.061+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:30.061+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3518 2019-09-04T06:28:30.061+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:28:30.061+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578510, 3) } 2019-09-04T06:28:30.061+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3487 2019-09-04T06:28:30.061+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3487 2019-09-04T06:28:30.061+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3521 2019-09-04T06:28:30.061+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3521 2019-09-04T06:28:30.061+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:30.061+0000 D3 STORAGE [repl-writer-worker-2] WT 
begin_transaction for snapshot id 3523
2019-09-04T06:28:30.061+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578510, 3)
2019-09-04T06:28:30.061+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578510, 3)
2019-09-04T06:28:30.061+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 3523
2019-09-04T06:28:30.062+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:30.062+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:28:30.062+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3522
2019-09-04T06:28:30.062+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3522
2019-09-04T06:28:30.062+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3525
2019-09-04T06:28:30.062+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3525
2019-09-04T06:28:30.062+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578510, 3), t: 1 }({ ts: Timestamp(1567578510, 3), t: 1 })
2019-09-04T06:28:30.062+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578510, 3)
2019-09-04T06:28:30.062+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3526
2019-09-04T06:28:30.062+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578510, 3) } } ] } sort: {} projection: {}
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578510, 3)
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578510, 3) || First: notFirst: full path: ts
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578510, 3)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578510, 3)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578510, 3) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
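
The D5 QUERY entries above show the subplanner handling the minvalid read: local.replset.minvalid carries only the default _id index, so neither $or branch rates an indexed solution and every child plan degrades to a collection scan. The sketch below reproduces the same decision through the explain command; it is a minimal pymongo illustration, and the localhost:27019 address is an assumption, not taken from this log:

    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    # Assumed address; point this at any member holding local.replset.minvalid.
    client = MongoClient("mongodb://localhost:27019/")

    # Same predicate shape as the sub-query run by rsSync-0 above.
    plan = client.local.command(
        "explain",
        {
            "find": "replset.minvalid",
            "filter": {
                "$or": [
                    {"t": {"$lt": 1}},
                    {"t": 1, "ts": {"$lt": Timestamp(1567578510, 3)}},
                ]
            },
        },
        verbosity="queryPlanner",
    )
    # With only the _id index available, expect a SUBPLAN rooted over a
    # COLLSCAN, mirroring "Planner: outputted 0 indexed solutions." above.
    print(plan["queryPlanner"]["winningPlan"])
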
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578510, 3)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:30.062+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3526
2019-09-04T06:28:30.062+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:30.062+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:30.062+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578510, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578510057), o: { $v: 1, $set: { ping: new Date(1567578510056) } } }, oplog application mode: Secondary
2019-09-04T06:28:30.062+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578510, 3)
2019-09-04T06:28:30.062+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 3528
2019-09-04T06:28:30.062+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }
2019-09-04T06:28:30.062+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:28:30.062+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 3528
2019-09-04T06:28:30.062+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:30.062+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578510, 3), t: 1 }({ ts: Timestamp(1567578510, 3), t: 1 })
2019-09-04T06:28:30.062+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578510, 3)
2019-09-04T06:28:30.062+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3527
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:30.062+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:30.062+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:30.062+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3527 2019-09-04T06:28:30.062+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578510, 3) 2019-09-04T06:28:30.062+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3531 2019-09-04T06:28:30.062+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3531 2019-09-04T06:28:30.062+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578510, 3), t: 1 }({ ts: Timestamp(1567578510, 3), t: 1 }) 2019-09-04T06:28:30.063+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:30.063+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578510, 2), t: 1 }, durableWallTime: new Date(1567578510016), appliedOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, appliedWallTime: new Date(1567578510057), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 2), t: 1 }, lastCommittedWall: new Date(1567578510016), lastOpVisible: { ts: Timestamp(1567578510, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:30.063+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 218 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:00.063+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578510, 2), t: 1 }, durableWallTime: new Date(1567578510016), appliedOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, appliedWallTime: new Date(1567578510057), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 2), t: 1 }, lastCommittedWall: new Date(1567578510016), lastOpVisible: { ts: Timestamp(1567578510, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:30.063+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:00.062+0000 2019-09-04T06:28:30.063+0000 D2 ASIO [RS] Request 218 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 2), t: 1 }, lastCommittedWall: new Date(1567578510016), lastOpVisible: { ts: Timestamp(1567578510, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 2), $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } 2019-09-04T06:28:30.063+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 2), t: 1 }, lastCommittedWall: new Date(1567578510016), lastOpVisible: { ts: Timestamp(1567578510, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 2), $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:30.063+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:30.063+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:00.063+0000 2019-09-04T06:28:30.063+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578510, 3), t: 1 } 2019-09-04T06:28:30.063+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 219 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:40.063+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578510, 2), t: 1 } } 2019-09-04T06:28:30.063+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:00.063+0000 2019-09-04T06:28:30.064+0000 D2 ASIO [RS] Request 219 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpApplied: { ts: Timestamp(1567578510, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } 2019-09-04T06:28:30.064+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new 
Date(1567578510057), lastOpApplied: { ts: Timestamp(1567578510, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:30.064+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:30.064+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:30.064+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578505, 3) 2019-09-04T06:28:30.064+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:40.205+0000 2019-09-04T06:28:30.064+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:40.166+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn119] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn119] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.033+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn112] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn112] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.921+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn109] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 220 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:40.064+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578510, 3), t: 1 } } 2019-09-04T06:28:30.064+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:30.064+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:28:58.836+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn109] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.877+0000 2019-09-04T06:28:30.064+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:00.063+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn111] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn111] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.896+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn103] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn103] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.919+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn120] Got notified of new snapshot: { ts: 
Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn120] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.040+0000 2019-09-04T06:28:30.064+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:28:30.064+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:30.064+0000 D3 REPL [conn124] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn124] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn121] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn121] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:58.752+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn115] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn115] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.292+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn114] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn114] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.488+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn107] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn107] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.871+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn130] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn130] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:51.753+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn117] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn117] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:54.151+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn131] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn131] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:56.307+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn104] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn104] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:55.059+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn110] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn110] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.881+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn106] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn106] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:57.566+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn108] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 
REPL [conn108] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.879+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn123] Got notified of new snapshot: { ts: Timestamp(1567578510, 3), t: 1 }, 2019-09-04T06:28:30.057+0000 2019-09-04T06:28:30.064+0000 D3 REPL [conn123] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:30.065+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), appliedOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, appliedWallTime: new Date(1567578510057), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:30.065+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 221 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:00.065+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), appliedOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, appliedWallTime: new Date(1567578510057), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, durableWallTime: new Date(1567578505582), appliedOpTime: { ts: Timestamp(1567578505, 1), t: 1 }, appliedWallTime: new Date(1567578505582), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:30.065+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:00.063+0000 2019-09-04T06:28:30.065+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:30.065+0000 D2 ASIO [RS] Request 221 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), 
keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } 2019-09-04T06:28:30.065+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:30.065+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:30.065+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:00.063+0000 2019-09-04T06:28:30.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.065+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.145+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578510, 3) 2019-09-04T06:28:30.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.165+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:30.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0DD04A4D9F0FB37AB14D3FA1D511E8D7C511444F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:30.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:30.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0DD04A4D9F0FB37AB14D3FA1D511E8D7C511444F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:30.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0DD04A4D9F0FB37AB14D3FA1D511E8D7C511444F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:30.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), opTime: { ts: Timestamp(1567578510, 3), t: 1 }, wallTime: new Date(1567578510057) } 2019-09-04T06:28:30.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0DD04A4D9F0FB37AB14D3FA1D511E8D7C511444F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:30.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:30.265+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:30.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.365+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:30.423+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42016 #132 (75 connections now open) 2019-09-04T06:28:30.423+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:30.423+0000 D2 COMMAND [conn132] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" 
} 2019-09-04T06:28:30.423+0000 I NETWORK [conn132] received client metadata from 10.108.2.48:42016 conn132: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:30.423+0000 I COMMAND [conn132] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:30.423+0000 D2 COMMAND [conn132] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578510, 1), signature: { hash: BinData(0, 2F12003156507F6FFDC6E8CA92EB3C8A43793298), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:30.423+0000 D1 REPL [conn132] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578510, 3), t: 1 } 2019-09-04T06:28:30.423+0000 D3 REPL [conn132] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.433+0000 2019-09-04T06:28:30.465+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:30.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.565+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:30.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.665+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:30.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.691+0000 I 
NETWORK [listener] connection accepted from 10.108.2.74:51698 #133 (76 connections now open) 2019-09-04T06:28:30.691+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:30.692+0000 D2 COMMAND [conn133] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:30.692+0000 I NETWORK [conn133] received client metadata from 10.108.2.74:51698 conn133: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:30.692+0000 I COMMAND [conn133] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:30.692+0000 D2 COMMAND [conn133] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, EE8BDA2660D95BC27BD45C346F1798888A2141A4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:28:30.692+0000 D1 REPL [conn133] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578510, 3), t: 1 } 2019-09-04T06:28:30.692+0000 D3 REPL [conn133] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.702+0000 2019-09-04T06:28:30.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47094 #134 (77 connections now open) 2019-09-04T06:28:30.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:30.743+0000 D2 COMMAND [conn134] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, 
saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:30.743+0000 I NETWORK [conn134] received client metadata from 10.108.2.52:47094 conn134: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:30.743+0000 I COMMAND [conn134] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:30.743+0000 D2 COMMAND [conn134] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578502, 1), signature: { hash: BinData(0, 7B6594811839B8D4C40313150387F0AED9621701), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:30.743+0000 D1 REPL [conn134] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578510, 3), t: 1 } 2019-09-04T06:28:30.743+0000 D3 REPL [conn134] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.753+0000 2019-09-04T06:28:30.766+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:30.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:30.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 222) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:30.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 222 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:40.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:30.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:28:58.836+0000 2019-09-04T06:28:30.836+0000 D2 ASIO [Replication] Request 222 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: 
"configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), opTime: { ts: Timestamp(1567578510, 3), t: 1 }, wallTime: new Date(1567578510057), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } 2019-09-04T06:28:30.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), opTime: { ts: Timestamp(1567578510, 3), t: 1 }, wallTime: new Date(1567578510057), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:30.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:30.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 222) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), opTime: { ts: Timestamp(1567578510, 3), t: 1 }, wallTime: new Date(1567578510057), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } 2019-09-04T06:28:30.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:30.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:28:40.166+0000 2019-09-04T06:28:30.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:28:41.415+0000 2019-09-04T06:28:30.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:30.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to 
cmodb802.togewa.com:27019 at 2019-09-04T06:28:32.836Z 2019-09-04T06:28:30.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:30.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:00.836+0000 2019-09-04T06:28:30.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:00.836+0000 2019-09-04T06:28:30.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:30.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 223) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:30.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 223 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:40.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:30.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:00.836+0000 2019-09-04T06:28:30.837+0000 D2 ASIO [Replication] Request 223 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), opTime: { ts: Timestamp(1567578510, 3), t: 1 }, wallTime: new Date(1567578510057), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } 2019-09-04T06:28:30.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), opTime: { ts: Timestamp(1567578510, 3), t: 1 }, wallTime: new Date(1567578510057), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:30.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:30.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 223) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, 
primaryId: 0, durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), opTime: { ts: Timestamp(1567578510, 3), t: 1 }, wallTime: new Date(1567578510057), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } 2019-09-04T06:28:30.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:30.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:32.837Z 2019-09-04T06:28:30.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:00.836+0000 2019-09-04T06:28:30.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:30.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:30.866+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:30.915+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52064 #135 (78 connections now open) 2019-09-04T06:28:30.915+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:30.915+0000 D2 COMMAND [conn135] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:30.915+0000 I NETWORK [conn135] received client metadata from 10.108.2.73:52064 conn135: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:30.915+0000 I COMMAND [conn135] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:30.915+0000 D2 COMMAND [conn135] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578502, 1), signature: { hash: BinData(0, 7B6594811839B8D4C40313150387F0AED9621701), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 
3), t: 92 } }, $db: "config" } 2019-09-04T06:28:30.916+0000 D1 REPL [conn135] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578510, 3), t: 1 } 2019-09-04T06:28:30.916+0000 D3 REPL [conn135] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.925+0000 2019-09-04T06:28:30.966+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:31.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:31.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0DD04A4D9F0FB37AB14D3FA1D511E8D7C511444F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:31.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:31.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0DD04A4D9F0FB37AB14D3FA1D511E8D7C511444F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:31.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0DD04A4D9F0FB37AB14D3FA1D511E8D7C511444F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:31.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), opTime: { ts: Timestamp(1567578510, 3), t: 1 }, wallTime: new Date(1567578510057) } 2019-09-04T06:28:31.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0DD04A4D9F0FB37AB14D3FA1D511E8D7C511444F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.061+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:31.061+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:31.061+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578510, 3) 
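
The replSetHeartbeat exchange above carries each member's opTime and durableOpTime, which is the same per-member state that the replSetGetStatus command reports. A short observational sketch under the same assumptions as the earlier snippet (pymongo, with a hypothetical member at localhost:27019):

    from pymongo import MongoClient

    # Assumed address; any reachable member of the configrs set will answer.
    client = MongoClient("mongodb://localhost:27019/")

    status = client.admin.command("replSetGetStatus")
    print(status["set"], "term:", status.get("term"))
    for member in status["members"]:
        # name, stateStr, and optimeDate mirror the opTime/durableOpTime
        # fields exchanged in the heartbeat traffic logged here.
        print(member["name"], member["stateStr"], member.get("optimeDate"))
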
2019-09-04T06:28:31.061+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3570 2019-09-04T06:28:31.061+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:31.061+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:31.061+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3570 2019-09-04T06:28:31.063+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3573 2019-09-04T06:28:31.063+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3573 2019-09-04T06:28:31.063+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578510, 3), t: 1 }({ ts: Timestamp(1567578510, 3), t: 1 }) 2019-09-04T06:28:31.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.066+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:31.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.166+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:31.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:31.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:31.266+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:31.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.367+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:31.467+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:31.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.568+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:31.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.668+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:31.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.703+0000 I COMMAND [conn60] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.768+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:31.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:31.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:31.868+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:31.968+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:32.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:32.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.062+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:32.062+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:32.062+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578510, 3) 2019-09-04T06:28:32.062+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3602 2019-09-04T06:28:32.062+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:32.062+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:32.062+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3602 2019-09-04T06:28:32.063+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3605 2019-09-04T06:28:32.063+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3605 2019-09-04T06:28:32.063+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578510, 3), t: 1 }({ ts: Timestamp(1567578510, 3), t: 1 }) 2019-09-04T06:28:32.065+0000 D2 COMMAND [conn52] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.068+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:32.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.169+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:32.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0DD04A4D9F0FB37AB14D3FA1D511E8D7C511444F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:32.231+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:32.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0DD04A4D9F0FB37AB14D3FA1D511E8D7C511444F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:32.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0DD04A4D9F0FB37AB14D3FA1D511E8D7C511444F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:32.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), opTime: { ts: Timestamp(1567578510, 3), t: 1 }, wallTime: new Date(1567578510057) } 2019-09-04T06:28:32.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, 
term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578510, 3), signature: { hash: BinData(0, 0DD04A4D9F0FB37AB14D3FA1D511E8D7C511444F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:32.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:32.269+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:32.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.369+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:32.468+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48264 #136 (79 connections now open) 2019-09-04T06:28:32.468+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:32.468+0000 D2 COMMAND [conn136] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:32.468+0000 I NETWORK [conn136] received client metadata from 10.108.2.59:48264 conn136: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:32.468+0000 I COMMAND [conn136] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:32.468+0000 D2 COMMAND [conn136] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { 
clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, F56EFECF966613B7CF9F4E79C9426BAFAB58CE38), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:28:32.468+0000 D1 REPL [conn136] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578510, 3), t: 1 } 2019-09-04T06:28:32.468+0000 D3 REPL [conn136] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:02.478+0000 2019-09-04T06:28:32.469+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:32.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.569+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:32.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.669+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:32.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.769+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:32.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
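
[Editor's note] The find on config.collections above (conn136) carries readConcern level "majority" with an afterOpTime, which is why the server parks the operation in waitUntilOpTime until a majority-committed snapshot reaches that optime. afterOpTime itself is internal cluster metadata, but a regular driver can issue the same shape of read; causally consistent sessions are the client-side approximation. A sketch under those assumptions, with host, namespace, and filter copied from the log (reading the config database directly like this is unusual outside of debugging):

    # Sketch: a majority read on config.collections, nearest read preference,
    # mirroring the find that conn136 issued above.
    from pymongo import MongoClient, ReadPreference
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")
    config_db = client.get_database(
        "config",
        read_concern=ReadConcern("majority"),
        read_preference=ReadPreference.NEAREST,
    )
    doc = config_db["collections"].find_one({"_id": "config.system.sessions"})
    print(doc)
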
2019-09-04T06:28:32.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:32.836+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:28:31.061+0000 2019-09-04T06:28:32.836+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:28:32.231+0000 2019-09-04T06:28:32.836+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:28:31.061+0000 2019-09-04T06:28:32.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:32.836+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:28:41.061+0000 2019-09-04T06:28:32.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:02.836+0000 2019-09-04T06:28:32.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 224) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:32.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 224 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:42.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:32.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:02.836+0000 2019-09-04T06:28:32.836+0000 D2 ASIO [Replication] Request 224 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), opTime: { ts: Timestamp(1567578510, 3), t: 1 }, wallTime: new Date(1567578510057), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } 2019-09-04T06:28:32.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), opTime: { ts: Timestamp(1567578510, 3), t: 1 }, wallTime: new Date(1567578510057), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: 
Timestamp(1567578511, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:32.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:32.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 224) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), opTime: { ts: Timestamp(1567578510, 3), t: 1 }, wallTime: new Date(1567578510057), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } 2019-09-04T06:28:32.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:32.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:28:41.415+0000 2019-09-04T06:28:32.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:28:44.060+0000 2019-09-04T06:28:32.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:32.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:34.836Z 2019-09-04T06:28:32.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:32.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:02.836+0000 2019-09-04T06:28:32.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:02.836+0000 2019-09-04T06:28:32.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:32.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 225) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:32.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 225 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:42.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:32.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:02.836+0000 2019-09-04T06:28:32.837+0000 D2 ASIO [Replication] Request 225 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), opTime: { ts: Timestamp(1567578510, 3), t: 1 }, wallTime: new Date(1567578510057), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new 
Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } 2019-09-04T06:28:32.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), opTime: { ts: Timestamp(1567578510, 3), t: 1 }, wallTime: new Date(1567578510057), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:32.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:32.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 225) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), opTime: { ts: Timestamp(1567578510, 3), t: 1 }, wallTime: new Date(1567578510057), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578510, 3) } 2019-09-04T06:28:32.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:32.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:34.837Z 2019-09-04T06:28:32.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:02.836+0000 2019-09-04T06:28:32.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:32.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:32.869+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:32.970+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:33.000+0000 D3 STORAGE [ftdc] setting timestamp read 
source: 1, provided timestamp: none 2019-09-04T06:28:33.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, 667341AF5351C003C8BD613BBA08DF1FF6F46E12), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:33.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:33.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, 667341AF5351C003C8BD613BBA08DF1FF6F46E12), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:33.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, 667341AF5351C003C8BD613BBA08DF1FF6F46E12), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:33.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), opTime: { ts: Timestamp(1567578510, 3), t: 1 }, wallTime: new Date(1567578510057) } 2019-09-04T06:28:33.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, 667341AF5351C003C8BD613BBA08DF1FF6F46E12), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.062+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:33.062+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:33.062+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578510, 3) 2019-09-04T06:28:33.062+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3637 2019-09-04T06:28:33.062+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:33.062+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", 
options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:33.062+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3637 2019-09-04T06:28:33.063+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3640 2019-09-04T06:28:33.063+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3640 2019-09-04T06:28:33.063+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578510, 3), t: 1 }({ ts: Timestamp(1567578510, 3), t: 1 }) 2019-09-04T06:28:33.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.070+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:33.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.170+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:33.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:33.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:33.270+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:33.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.369+0000 D2 ASIO [RS] Request 220 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578513, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578513366), o: { $v: 1, $set: { ping: new Date(1567578513361) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpApplied: { ts: Timestamp(1567578513, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578513, 1) } 2019-09-04T06:28:33.369+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578513, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578513366), o: { $v: 1, $set: { ping: new Date(1567578513361) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpApplied: { ts: Timestamp(1567578513, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') 
}, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578513, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:33.369+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:33.369+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578513, 1) and ending at ts: Timestamp(1567578513, 1) 2019-09-04T06:28:33.369+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:44.060+0000 2019-09-04T06:28:33.369+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:43.523+0000 2019-09-04T06:28:33.369+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578513, 1), t: 1 } 2019-09-04T06:28:33.369+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:33.369+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:02.836+0000 2019-09-04T06:28:33.369+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:33.369+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:33.369+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578510, 3) 2019-09-04T06:28:33.370+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3654 2019-09-04T06:28:33.370+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:33.370+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:33.370+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3654 2019-09-04T06:28:33.370+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:33.370+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:33.370+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578510, 3) 2019-09-04T06:28:33.370+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3657 2019-09-04T06:28:33.370+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:33.370+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:33.370+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3657 2019-09-04T06:28:33.370+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:28:33.370+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : 
Timestamp(1567578513, 1) }
2019-09-04T06:28:33.370+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3641
2019-09-04T06:28:33.370+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3641
2019-09-04T06:28:33.370+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3660
2019-09-04T06:28:33.370+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3660
2019-09-04T06:28:33.370+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:33.370+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 3662
2019-09-04T06:28:33.370+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578513, 1)
2019-09-04T06:28:33.370+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578513, 1)
2019-09-04T06:28:33.370+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 3662
2019-09-04T06:28:33.370+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:33.370+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:28:33.370+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3661
2019-09-04T06:28:33.370+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3661
2019-09-04T06:28:33.370+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3664
2019-09-04T06:28:33.370+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3664
2019-09-04T06:28:33.370+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578513, 1), t: 1 }({ ts: Timestamp(1567578513, 1), t: 1 })
2019-09-04T06:28:33.370+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578513, 1)
2019-09-04T06:28:33.370+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3665
2019-09-04T06:28:33.370+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578513, 1) } } ] } sort: {} projection: {}
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578513, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1  || First: notFirst: full path: t
    ts $lt Timestamp(1567578513, 1)  || First: notFirst: full path: ts
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578513, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1  || First: notFirst: full path: t
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578513, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1  || First: notFirst: full path: t
        ts $lt Timestamp(1567578513, 1)  || First: notFirst: full path: ts
    t $lt 1  || First: notFirst: full path: t
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
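
[Editor's note] The D5 QUERY trace above shows the subplanner handling an $or over local.replset.minvalid: each branch is planned separately, the only available index (_id_) matches neither the t nor the ts predicate, so every branch falls back to a collection scan. The same plan can be inspected from a client with explain(); a sketch, with the filter copied from the log (local.replset.minvalid is an internal one-document collection, so a COLLSCAN here is expected and harmless):

    # Sketch: reproduce the subplanned $or above via explain().
    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)
    minvalid = client.local["replset.minvalid"]
    plan = minvalid.find({"$or": [
        {"t": {"$lt": 1}},
        {"t": 1, "ts": {"$lt": Timestamp(1567578513, 1)}},
    ]}).explain()
    print(plan["queryPlanner"]["winningPlan"])  # collection scan, as in the log
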
2019-09-04T06:28:33.370+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578513, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:33.370+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3665
2019-09-04T06:28:33.370+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:33.370+0000 D3 STORAGE [repl-writer-worker-1] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:33.370+0000 D3 REPL [repl-writer-worker-1] applying op: { ts: Timestamp(1567578513, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578513366), o: { $v: 1, $set: { ping: new Date(1567578513361) } } }, oplog application mode: Secondary
2019-09-04T06:28:33.371+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578513, 1)
2019-09-04T06:28:33.371+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 3667
2019-09-04T06:28:33.371+0000 D2 QUERY [repl-writer-worker-1] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }
2019-09-04T06:28:33.371+0000 D4 WRITE [repl-writer-worker-1] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:28:33.371+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 3667
2019-09-04T06:28:33.371+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:33.371+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578513, 1), t: 1 }({ ts: Timestamp(1567578513, 1), t: 1 })
2019-09-04T06:28:33.371+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578513, 1)
2019-09-04T06:28:33.371+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3666
2019-09-04T06:28:33.371+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:33.371+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:33.371+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:28:33.371+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:33.371+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:33.371+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:33.371+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3666 2019-09-04T06:28:33.371+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578513, 1) 2019-09-04T06:28:33.371+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3670 2019-09-04T06:28:33.371+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3670 2019-09-04T06:28:33.371+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578513, 1), t: 1 }({ ts: Timestamp(1567578513, 1), t: 1 }) 2019-09-04T06:28:33.371+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:33.371+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), appliedOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, appliedWallTime: new Date(1567578510057), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), appliedOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, appliedWallTime: new Date(1567578513366), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), appliedOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, appliedWallTime: new Date(1567578510057), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:33.371+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 226 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:03.371+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), appliedOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, appliedWallTime: new Date(1567578510057), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), appliedOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, appliedWallTime: new Date(1567578513366), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), appliedOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, appliedWallTime: new Date(1567578510057), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:33.371+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:03.371+0000 2019-09-04T06:28:33.371+0000 D2 ASIO [RS] Request 226 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578513, 1) } 2019-09-04T06:28:33.371+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578513, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:33.371+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:33.371+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:03.371+0000 2019-09-04T06:28:33.371+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578513, 1), t: 1 } 2019-09-04T06:28:33.372+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 227 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:43.372+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578510, 3), t: 1 } } 2019-09-04T06:28:33.372+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:03.371+0000 2019-09-04T06:28:33.372+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:33.372+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:28:33.372+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:33.372+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), appliedOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, appliedWallTime: new Date(1567578510057), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), appliedOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, appliedWallTime: new Date(1567578513366), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), appliedOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, appliedWallTime: new Date(1567578510057), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:33.372+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 228 -- target:[cmodb804.togewa.com:27019] db:admin 
expDate:2019-09-04T06:29:03.372+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), appliedOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, appliedWallTime: new Date(1567578510057), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), appliedOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, appliedWallTime: new Date(1567578513366), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, durableWallTime: new Date(1567578510057), appliedOpTime: { ts: Timestamp(1567578510, 3), t: 1 }, appliedWallTime: new Date(1567578510057), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:33.372+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:03.371+0000 2019-09-04T06:28:33.372+0000 D2 ASIO [RS] Request 228 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578513, 1) } 2019-09-04T06:28:33.373+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578510, 3), t: 1 }, lastCommittedWall: new Date(1567578510057), lastOpVisible: { ts: Timestamp(1567578510, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578510, 3), $clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578513, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:33.373+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:33.373+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:03.371+0000 2019-09-04T06:28:33.373+0000 D2 ASIO [RS] Request 227 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578513, 1), t: 1 }, lastCommittedWall: new Date(1567578513366), lastOpVisible: { ts: Timestamp(1567578513, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578513, 1), t: 1 }, lastCommittedWall: new Date(1567578513366), lastOpApplied: { ts: Timestamp(1567578513, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, 
$gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578513, 1), $clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578513, 1) } 2019-09-04T06:28:33.373+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578513, 1), t: 1 }, lastCommittedWall: new Date(1567578513366), lastOpVisible: { ts: Timestamp(1567578513, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578513, 1), t: 1 }, lastCommittedWall: new Date(1567578513366), lastOpApplied: { ts: Timestamp(1567578513, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578513, 1), $clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578513, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:33.373+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:33.373+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:33.373+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.373+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.373+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578508, 1) 2019-09-04T06:28:33.373+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:43.523+0000 2019-09-04T06:28:33.373+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:43.423+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn112] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn112] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.921+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn104] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn104] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:55.059+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn111] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn111] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.896+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn132] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn132] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.433+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn120] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.373+0000 D3 EXECUTOR [replexec-0] 
Executing a task on behalf of pool replexec 2019-09-04T06:28:33.373+0000 D3 REPL [conn120] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.040+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn121] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn121] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:58.752+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn115] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn115] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.292+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn114] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn114] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.488+0000 2019-09-04T06:28:33.373+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 229 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:43.373+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578513, 1), t: 1 } } 2019-09-04T06:28:33.373+0000 D3 REPL [conn134] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn134] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.753+0000 2019-09-04T06:28:33.373+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:03.371+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn131] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn131] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:56.307+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn110] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn110] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.881+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn108] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn108] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.879+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn135] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn135] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.925+0000 2019-09-04T06:28:33.373+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:02.836+0000 2019-09-04T06:28:33.373+0000 D3 REPL [conn119] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn119] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.033+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn136] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn136] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:02.478+0000 
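The getMore on local.oplog.rs above (cursor 2779728788818727477, maxTimeMS: 5000) is this node's oplog fetcher tailing its sync source, cmodb804.togewa.com:27019. A rough pymongo sketch of the same loop, started from the last fetched optime shown in the log; everything beyond the hostname, port, and timestamp is illustrative, not the server's internal fetcher:

    from pymongo import MongoClient, CursorType
    from bson.timestamp import Timestamp

    source = MongoClient("cmodb804.togewa.com", 27019)
    last_fetched = Timestamp(1567578513, 1)  # last fetched optime per the log
    cursor = source.local["oplog.rs"].find(
        {"ts": {"$gt": last_fetched}},
        cursor_type=CursorType.TAILABLE_AWAIT,  # each getMore blocks awaiting new entries
    ).max_await_time_ms(5000)                   # mirrors maxTimeMS: 5000 on the getMore above
    for op in cursor:
        print(op["ts"], op["op"], op["ns"])     # e.g. the config.mongos ping update below
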
2019-09-04T06:28:33.374+0000 D3 REPL [conn124] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn124] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn107] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn107] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.871+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn130] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn130] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:51.753+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn109] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn109] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.877+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn106] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn106] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:57.566+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn123] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn123] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn103] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn103] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.919+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn133] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn133] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.702+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn117] Got notified of new snapshot: { ts: Timestamp(1567578513, 1), t: 1 }, 2019-09-04T06:28:33.366+0000 2019-09-04T06:28:33.374+0000 D3 REPL [conn117] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:54.151+0000 2019-09-04T06:28:33.470+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578513, 1) 2019-09-04T06:28:33.472+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:33.508+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:28:33.508+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:28:33.508+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:33.508+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:28:33.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
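Each waitUntilOpTime line above is a client read parked on this secondary until its committed snapshot reaches the optime the client asked for. A minimal sketch of the kind of read that produces such a wait, assuming a causally consistent session (hostnames follow the log; the collection choice and session usage are illustrative):

    from pymongo import MongoClient, ReadPreference

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")
    coll = client.config.mongos.with_options(read_preference=ReadPreference.SECONDARY)
    with client.start_session(causal_consistency=True) as session:
        # Once the session has observed an operationTime, this read carries
        # afterClusterTime, and the secondary parks it (the waitUntilOpTime
        # entries above) until its snapshot catches up.
        doc = coll.find_one({}, session=session)
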
2019-09-04T06:28:33.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.565+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.572+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:33.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.672+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:33.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.772+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:33.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:33.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:33.873+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:33.973+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:34.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:34.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 
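The recurring isMaster commands are driver and mongos topology monitors (conn75 repeats at 06:28:33.516, 06:28:34.016, 06:28:34.516, i.e. roughly every 500 ms), and the logout/ping pair on conn90 is an ordinary health check. The same commands issued by hand, as a sketch:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    hello = client.admin.command("isMaster")   # same { isMaster: 1 } as conn75
    client.admin.command("ping")               # same { ping: 1 } as conn90
    print(hello["ismaster"], hello["secondary"], hello.get("primary"))
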
2019-09-04T06:28:34.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.073+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:34.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.173+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:34.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 39060484773AB9BA1557F7ABAC798B150F454A53), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:34.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:34.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 39060484773AB9BA1557F7ABAC798B150F454A53), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:34.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 39060484773AB9BA1557F7ABAC798B150F454A53), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:34.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to 
cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), opTime: { ts: Timestamp(1567578513, 1), t: 1 }, wallTime: new Date(1567578513366) } 2019-09-04T06:28:34.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 39060484773AB9BA1557F7ABAC798B150F454A53), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:34.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:34.273+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:34.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.370+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:34.370+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:34.370+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578513, 1) 2019-09-04T06:28:34.370+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3703 2019-09-04T06:28:34.370+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:34.370+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:34.370+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3703 2019-09-04T06:28:34.371+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3706 2019-09-04T06:28:34.371+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3706 2019-09-04T06:28:34.371+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578513, 1), t: 1 }({ ts: Timestamp(1567578513, 1), t: 1 }) 
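The replSetHeartbeat exchange above is the member-to-member heartbeat (rescheduled every 2 seconds, as the "Scheduling heartbeat ... at" entries below show): cmodb804 reports term 1, and this node answers with state: 2 (SECONDARY), its durable and applied optimes, and its syncingTo target. The same per-member optimes are visible to any client via replSetGetStatus; a sketch:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # stateStr is PRIMARY/SECONDARY; optime carries the { ts, t } pair
        # echoed in the heartbeat responses above.
        print(member["name"], member["stateStr"], member["optime"]["ts"])
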
2019-09-04T06:28:34.373+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:34.473+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:34.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.574+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:34.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.674+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:34.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.774+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:34.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:34.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 230) to cmodb802.togewa.com:27019, { 
replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:34.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 230 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:44.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:34.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:02.836+0000 2019-09-04T06:28:34.836+0000 D2 ASIO [Replication] Request 230 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), opTime: { ts: Timestamp(1567578513, 1), t: 1 }, wallTime: new Date(1567578513366), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578513, 1), t: 1 }, lastCommittedWall: new Date(1567578513366), lastOpVisible: { ts: Timestamp(1567578513, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578513, 1), $clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578513, 1) } 2019-09-04T06:28:34.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), opTime: { ts: Timestamp(1567578513, 1), t: 1 }, wallTime: new Date(1567578513366), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578513, 1), t: 1 }, lastCommittedWall: new Date(1567578513366), lastOpVisible: { ts: Timestamp(1567578513, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578513, 1), $clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578513, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:34.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:34.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 230) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), opTime: { ts: Timestamp(1567578513, 1), t: 1 }, wallTime: new Date(1567578513366), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578513, 1), t: 1 }, lastCommittedWall: new Date(1567578513366), lastOpVisible: { ts: Timestamp(1567578513, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578513, 1), 
$clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578513, 1) } 2019-09-04T06:28:34.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:34.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:28:43.423+0000 2019-09-04T06:28:34.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:28:45.553+0000 2019-09-04T06:28:34.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:34.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:34.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:04.836+0000 2019-09-04T06:28:34.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:36.836Z 2019-09-04T06:28:34.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:04.836+0000 2019-09-04T06:28:34.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:34.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 231) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:34.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 231 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:44.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:34.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:04.836+0000 2019-09-04T06:28:34.837+0000 D2 ASIO [Replication] Request 231 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), opTime: { ts: Timestamp(1567578513, 1), t: 1 }, wallTime: new Date(1567578513366), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578513, 1), t: 1 }, lastCommittedWall: new Date(1567578513366), lastOpVisible: { ts: Timestamp(1567578513, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578513, 1), $clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578513, 1) } 2019-09-04T06:28:34.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), opTime: { ts: Timestamp(1567578513, 1), t: 1 }, wallTime: new Date(1567578513366), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578513, 1), t: 1 }, lastCommittedWall: new Date(1567578513366), lastOpVisible: { ts: Timestamp(1567578513, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), 
primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578513, 1), $clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578513, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:34.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:34.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 231) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), opTime: { ts: Timestamp(1567578513, 1), t: 1 }, wallTime: new Date(1567578513366), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578513, 1), t: 1 }, lastCommittedWall: new Date(1567578513366), lastOpVisible: { ts: Timestamp(1567578513, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578513, 1), $clusterTime: { clusterTime: Timestamp(1567578513, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578513, 1) } 2019-09-04T06:28:34.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:34.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:36.837Z 2019-09-04T06:28:34.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:04.836+0000 2019-09-04T06:28:34.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:34.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:34.865+0000 D2 ASIO [RS] Request 229 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578514, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578514864), o: { $v: 1, $set: { ping: new Date(1567578514860), up: 2415 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578513, 1), t: 1 }, lastCommittedWall: new Date(1567578513366), lastOpVisible: { ts: Timestamp(1567578513, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578513, 1), t: 1 }, lastCommittedWall: new Date(1567578513366), lastOpApplied: { ts: Timestamp(1567578514, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578513, 1), $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578514, 1) } 2019-09-04T06:28:34.865+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: 
{ nextBatch: [ { ts: Timestamp(1567578514, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578514864), o: { $v: 1, $set: { ping: new Date(1567578514860), up: 2415 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578513, 1), t: 1 }, lastCommittedWall: new Date(1567578513366), lastOpVisible: { ts: Timestamp(1567578513, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578513, 1), t: 1 }, lastCommittedWall: new Date(1567578513366), lastOpApplied: { ts: Timestamp(1567578514, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578513, 1), $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578514, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:34.865+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:34.865+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578514, 1) and ending at ts: Timestamp(1567578514, 1) 2019-09-04T06:28:34.865+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:45.553+0000 2019-09-04T06:28:34.865+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:45.171+0000 2019-09-04T06:28:34.865+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:34.865+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:04.836+0000 2019-09-04T06:28:34.865+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:34.865+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:34.865+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578513, 1) 2019-09-04T06:28:34.865+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3721 2019-09-04T06:28:34.865+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:34.865+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:34.865+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3721 2019-09-04T06:28:34.865+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:34.865+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:28:34.865+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578514, 1) } 2019-09-04T06:28:34.865+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:34.865+0000 D3 STORAGE 
[ReplBatcher] begin_transaction on local snapshot Timestamp(1567578513, 1) 2019-09-04T06:28:34.865+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3724 2019-09-04T06:28:34.865+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:34.866+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:34.866+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3724 2019-09-04T06:28:34.865+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578514, 1), t: 1 } 2019-09-04T06:28:34.865+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3707 2019-09-04T06:28:34.866+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3707 2019-09-04T06:28:34.866+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3727 2019-09-04T06:28:34.866+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3727 2019-09-04T06:28:34.866+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:34.866+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 3729 2019-09-04T06:28:34.866+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578514, 1) 2019-09-04T06:28:34.866+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578514, 1) 2019-09-04T06:28:34.866+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 3729 2019-09-04T06:28:34.866+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:34.866+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:28:34.866+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3728 2019-09-04T06:28:34.866+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3728 2019-09-04T06:28:34.866+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3731 2019-09-04T06:28:34.866+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3731 2019-09-04T06:28:34.866+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578514, 1), t: 1 }({ ts: Timestamp(1567578514, 1), t: 1 }) 2019-09-04T06:28:34.866+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578514, 1) 2019-09-04T06:28:34.866+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3732 2019-09-04T06:28:34.866+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578514, 1) } } ] } sort: {} projection: {} 2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Beginning planning... 
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578514, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578514, 1) || First: notFirst: full path: ts
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578514, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578514, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578514, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
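In the planner output above, the subplanner plans each branch of the rooted $or separately; since local.replset.minvalid carries only the _id index, neither branch is indexable and every candidate solution is a COLLSCAN. The same shape can be replayed from a client with explain; a sketch, with the filter copied from the log:

    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("cmodb803.togewa.com", 27019)
    minvalid = client.local["replset.minvalid"]
    plan = minvalid.find(
        {"$or": [
            {"t": 1, "ts": {"$lt": Timestamp(1567578514, 1)}},
            {"t": {"$lt": 1}},
        ]}
    ).explain()
    # Expect a SUBPLAN (or plain COLLSCAN) winning plan, matching the log.
    print(plan["queryPlanner"]["winningPlan"])
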
2019-09-04T06:28:34.866+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578514, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:34.866+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3732
2019-09-04T06:28:34.866+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:34.866+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:34.866+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578514, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578514864), o: { $v: 1, $set: { ping: new Date(1567578514860), up: 2415 } } }, oplog application mode: Secondary
2019-09-04T06:28:34.867+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578514, 1)
2019-09-04T06:28:34.867+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 3734
2019-09-04T06:28:34.867+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:28:34.867+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:28:34.867+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 3734
2019-09-04T06:28:34.867+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:34.867+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578514, 1), t: 1 }({ ts: Timestamp(1567578514, 1), t: 1 })
2019-09-04T06:28:34.867+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578514, 1)
2019-09-04T06:28:34.867+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3733
2019-09-04T06:28:34.867+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:34.867+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:34.867+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:28:34.867+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:34.867+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:34.867+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:34.867+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3733 2019-09-04T06:28:34.867+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578514, 1) 2019-09-04T06:28:34.867+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:34.867+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3737 2019-09-04T06:28:34.867+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), appliedOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, appliedWallTime: new Date(1567578513366), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), appliedOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, appliedWallTime: new Date(1567578513366), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578513, 1), t: 1 }, lastCommittedWall: new Date(1567578513366), lastOpVisible: { ts: Timestamp(1567578513, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:34.867+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3737 2019-09-04T06:28:34.867+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578514, 1), t: 1 }({ ts: Timestamp(1567578514, 1), t: 1 }) 2019-09-04T06:28:34.867+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 232 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:04.867+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), appliedOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, appliedWallTime: new Date(1567578513366), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), appliedOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, appliedWallTime: new Date(1567578513366), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578513, 1), t: 1 }, lastCommittedWall: new Date(1567578513366), lastOpVisible: { ts: Timestamp(1567578513, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:34.867+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:04.867+0000 2019-09-04T06:28:34.868+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578514, 1), t: 1 } 2019-09-04T06:28:34.868+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 233 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:44.868+0000 cmd:{ getMore: 2779728788818727477, 
collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578513, 1), t: 1 } } 2019-09-04T06:28:34.868+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:04.867+0000 2019-09-04T06:28:34.868+0000 D2 ASIO [RS] Request 232 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578514, 1), $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578514, 1) } 2019-09-04T06:28:34.868+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578514, 1), $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578514, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:34.868+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:34.868+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:04.867+0000 2019-09-04T06:28:34.868+0000 D2 ASIO [RS] Request 233 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpApplied: { ts: Timestamp(1567578514, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578514, 1), $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578514, 1) } 2019-09-04T06:28:34.868+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new 
Date(1567578514864), lastOpApplied: { ts: Timestamp(1567578514, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578514, 1), $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578514, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:34.868+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:34.868+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:34.868+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.868+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.868+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578509, 1) 2019-09-04T06:28:34.868+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:45.171+0000 2019-09-04T06:28:34.868+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:45.336+0000 2019-09-04T06:28:34.868+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 234 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:44.868+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578514, 1), t: 1 } } 2019-09-04T06:28:34.868+0000 D3 REPL [conn111] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.868+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:04.867+0000 2019-09-04T06:28:34.868+0000 D3 REPL [conn111] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.896+0000 2019-09-04T06:28:34.868+0000 D3 REPL [conn132] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.868+0000 D3 REPL [conn132] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.433+0000 2019-09-04T06:28:34.868+0000 D3 REPL [conn135] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.868+0000 D3 REPL [conn135] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.925+0000 2019-09-04T06:28:34.868+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:34.868+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:04.836+0000 2019-09-04T06:28:34.868+0000 D3 REPL [conn115] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.868+0000 D3 REPL [conn115] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.292+0000 2019-09-04T06:28:34.868+0000 D3 REPL [conn114] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.868+0000 D3 REPL [conn114] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.488+0000 2019-09-04T06:28:34.868+0000 D3 REPL [conn131] Got notified of new snapshot: { ts: 
Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn131] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:56.307+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn110] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn110] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.881+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn108] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn108] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.879+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn119] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn119] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.033+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn109] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn109] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.877+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn103] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn103] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.919+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn112] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn112] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.921+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn136] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn136] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:02.478+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn104] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn104] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:55.059+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn120] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn120] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.040+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn121] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn121] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:58.752+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn124] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn124] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn107] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn107] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.871+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn130] Got notified of new snapshot: { ts: 
Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn130] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:51.753+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn106] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn106] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:57.566+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn123] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn123] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn133] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn133] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.702+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn117] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn117] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:54.151+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn134] Got notified of new snapshot: { ts: Timestamp(1567578514, 1), t: 1 }, 2019-09-04T06:28:34.864+0000 2019-09-04T06:28:34.869+0000 D3 REPL [conn134] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.753+0000 2019-09-04T06:28:34.872+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:28:34.872+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:34.872+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), appliedOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, appliedWallTime: new Date(1567578513366), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), appliedOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, appliedWallTime: new Date(1567578513366), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:34.872+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 235 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:04.872+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), appliedOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, appliedWallTime: new Date(1567578513366), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 1, cfgver: 2 }, 
{ durableOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, durableWallTime: new Date(1567578513366), appliedOpTime: { ts: Timestamp(1567578513, 1), t: 1 }, appliedWallTime: new Date(1567578513366), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:34.872+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:04.867+0000 2019-09-04T06:28:34.872+0000 D2 ASIO [RS] Request 235 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578514, 1), $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578514, 1) } 2019-09-04T06:28:34.872+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578514, 1), $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578514, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:34.872+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:34.872+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:04.867+0000 2019-09-04T06:28:34.874+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:34.966+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578514, 1) 2019-09-04T06:28:34.974+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:35.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:35.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, 
B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:35.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:35.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:35.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:35.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), opTime: { ts: Timestamp(1567578514, 1), t: 1 }, wallTime: new Date(1567578514864) } 2019-09-04T06:28:35.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.074+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:35.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.174+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:35.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 
reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:35.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:35.274+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:35.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.375+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:35.475+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:35.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.575+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:35.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.675+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:35.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:28:35.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.775+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:35.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:35.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:35.866+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:35.866+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:35.866+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578514, 1) 2019-09-04T06:28:35.866+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3768 2019-09-04T06:28:35.866+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:35.866+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:35.866+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3768 2019-09-04T06:28:35.867+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3771 2019-09-04T06:28:35.867+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3771 2019-09-04T06:28:35.867+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578514, 1), t: 1 }({ ts: Timestamp(1567578514, 1), t: 1 }) 2019-09-04T06:28:35.875+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:35.975+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:36.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:36.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.051+0000 I COMMAND 
[conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.068+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.068+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.076+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:36.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.176+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:36.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:36.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:36.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:36.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:36.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), opTime: { ts: Timestamp(1567578514, 1), t: 1 }, wallTime: new 
Date(1567578514864) } 2019-09-04T06:28:36.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:36.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:36.276+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:36.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.376+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:36.476+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:36.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.568+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.568+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.576+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:36.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.666+0000 I COMMAND [conn31] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.676+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:36.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.777+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:36.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:36.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 236) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:36.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 236 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:46.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:36.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:04.836+0000 2019-09-04T06:28:36.836+0000 D2 ASIO [Replication] Request 236 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), opTime: { ts: Timestamp(1567578514, 1), t: 1 }, wallTime: new Date(1567578514864), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578514, 1), $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578514, 1) } 2019-09-04T06:28:36.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, 
set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), opTime: { ts: Timestamp(1567578514, 1), t: 1 }, wallTime: new Date(1567578514864), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578514, 1), $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578514, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:36.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:36.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 236) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), opTime: { ts: Timestamp(1567578514, 1), t: 1 }, wallTime: new Date(1567578514864), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578514, 1), $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578514, 1) } 2019-09-04T06:28:36.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:36.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:28:45.336+0000 2019-09-04T06:28:36.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:28:47.400+0000 2019-09-04T06:28:36.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:36.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:06.836+0000 2019-09-04T06:28:36.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:36.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:38.836Z 2019-09-04T06:28:36.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:06.836+0000 2019-09-04T06:28:36.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:36.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 237) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:36.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 237 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:46.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, 
from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:36.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:06.836+0000 2019-09-04T06:28:36.837+0000 D2 ASIO [Replication] Request 237 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), opTime: { ts: Timestamp(1567578514, 1), t: 1 }, wallTime: new Date(1567578514864), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578514, 1), $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578514, 1) } 2019-09-04T06:28:36.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), opTime: { ts: Timestamp(1567578514, 1), t: 1 }, wallTime: new Date(1567578514864), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578514, 1), $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578514, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:36.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:36.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 237) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), opTime: { ts: Timestamp(1567578514, 1), t: 1 }, wallTime: new Date(1567578514864), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578514, 1), $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578514, 1) } 2019-09-04T06:28:36.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:36.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to 
cmodb804.togewa.com:27019 at 2019-09-04T06:28:38.837Z 2019-09-04T06:28:36.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:06.836+0000 2019-09-04T06:28:36.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:36.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:36.866+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:36.866+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:36.866+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578514, 1) 2019-09-04T06:28:36.866+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3800 2019-09-04T06:28:36.866+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:36.866+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:36.866+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3800 2019-09-04T06:28:36.867+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3803 2019-09-04T06:28:36.867+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3803 2019-09-04T06:28:36.867+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578514, 1), t: 1 }({ ts: Timestamp(1567578514, 1), t: 1 }) 2019-09-04T06:28:36.877+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:36.977+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:37.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:37.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:37.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:37.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $db: "admin" } 
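The stretch above captures one complete heartbeat round trip for the configrs set: this node (cmodb803.togewa.com:27019, fromId 1) sends replSetHeartbeat to cmodb802 (the primary, state: 1) and cmodb804 (a secondary, state: 2), marks both responses good, postpones its election timeout because the primary answered, and schedules the next round two seconds out. The member states and optimes these heartbeats carry can also be read back with replSetGetStatus; below is a minimal sketch, assuming pymongo is installed and the member named in this log is reachable:

    # Minimal sketch: read back the member state carried by the heartbeats above.
    # Assumes pymongo is installed and cmodb803.togewa.com:27019 (this log's host)
    # is reachable; directConnection pins the client to this one member.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019",
                         directConnection=True)
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # stateStr and optimeDate mirror the state / opTime fields seen in the
        # replSetHeartbeat responses logged above.
        print(member["name"], member["stateStr"], member["optimeDate"])
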
2019-09-04T06:28:37.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:37.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), opTime: { ts: Timestamp(1567578514, 1), t: 1 }, wallTime: new Date(1567578514864) } 2019-09-04T06:28:37.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.077+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:37.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.177+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:37.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:37.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:37.277+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:37.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.377+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:37.478+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:37.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.578+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:37.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.678+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:37.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.778+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:37.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:37.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:37.866+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: 
none
2019-09-04T06:28:37.866+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:37.866+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578514, 1)
2019-09-04T06:28:37.866+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3826
2019-09-04T06:28:37.867+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:37.867+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:37.867+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3826
2019-09-04T06:28:37.868+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3829
2019-09-04T06:28:37.868+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3829
2019-09-04T06:28:37.868+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578514, 1), t: 1 }({ ts: Timestamp(1567578514, 1), t: 1 })
2019-09-04T06:28:37.878+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:37.978+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:38.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:38.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:38.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:38.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:38.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:38.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:38.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:38.078+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:38.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:38.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:38.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:38.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:38.178+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:38.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:38.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:38.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:38.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:38.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:38.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:28:38.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:38.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:38.230+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), opTime: { ts: Timestamp(1567578514, 1), t: 1 }, wallTime: new Date(1567578514864) }
2019-09-04T06:28:38.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:38.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:28:38.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:28:38.239+0000 D2 ASIO [RS] Request 234 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578518, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578518234), o: { $v: 1, $set: { ping: new Date(1567578518233) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpApplied: { ts: Timestamp(1567578518, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578514, 1), $clusterTime: { clusterTime: Timestamp(1567578518, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 1) }
2019-09-04T06:28:38.239+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578518, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578518234), o: { $v: 1, $set: { ping: new Date(1567578518233) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpApplied: { ts: Timestamp(1567578518, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578514, 1), $clusterTime: { clusterTime: Timestamp(1567578518, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:38.239+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:38.239+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578518, 1) and ending at ts: Timestamp(1567578518, 1)
2019-09-04T06:28:38.239+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:47.400+0000
2019-09-04T06:28:38.239+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:48.730+0000
2019-09-04T06:28:38.239+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:38.239+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:06.836+0000
2019-09-04T06:28:38.239+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:38.239+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:38.239+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578514, 1)
2019-09-04T06:28:38.239+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3841
2019-09-04T06:28:38.239+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:38.239+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:38.239+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3841
2019-09-04T06:28:38.239+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:38.239+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:28:38.239+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578518, 1), t: 1 }
2019-09-04T06:28:38.239+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:38.239+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578514, 1)
2019-09-04T06:28:38.239+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578518, 1) }
2019-09-04T06:28:38.239+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3844
2019-09-04T06:28:38.239+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:38.239+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:38.239+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3844
2019-09-04T06:28:38.239+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3830
2019-09-04T06:28:38.239+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3830
2019-09-04T06:28:38.239+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3847
2019-09-04T06:28:38.239+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3847
2019-09-04T06:28:38.239+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:38.239+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 3849
2019-09-04T06:28:38.239+0000 D4 STORAGE [repl-writer-worker-7] inserting record with timestamp Timestamp(1567578518, 1)
2019-09-04T06:28:38.239+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578518, 1)
2019-09-04T06:28:38.239+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 3849
2019-09-04T06:28:38.239+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:38.239+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:28:38.239+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3848
2019-09-04T06:28:38.239+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3848
2019-09-04T06:28:38.239+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3851
2019-09-04T06:28:38.239+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3851
2019-09-04T06:28:38.239+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578518, 1), t: 1 }({ ts: Timestamp(1567578518, 1), t: 1 })
2019-09-04T06:28:38.239+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578518, 1)
2019-09-04T06:28:38.239+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3852
2019-09-04T06:28:38.239+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578518, 1) } } ] } sort: {} projection: {}
2019-09-04T06:28:38.239+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:38.239+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:28:38.239+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578518, 1) Sort: {} Proj: {} =============================
2019-09-04T06:28:38.239+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578518, 1) || First: notFirst: full path: ts
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578518, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578518, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578518, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578518, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:38.240+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3852
2019-09-04T06:28:38.240+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:38.240+0000 D3 STORAGE [repl-writer-worker-11] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:38.240+0000 D3 REPL [repl-writer-worker-11] applying op: { ts: Timestamp(1567578518, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578518234), o: { $v: 1, $set: { ping: new Date(1567578518233) } } }, oplog application mode: Secondary
2019-09-04T06:28:38.240+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578518, 1)
2019-09-04T06:28:38.240+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 3854
2019-09-04T06:28:38.240+0000 D2 QUERY [repl-writer-worker-11] Using idhack: { _id: "ConfigServer" }
2019-09-04T06:28:38.240+0000 D4 WRITE [repl-writer-worker-11] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:28:38.240+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 3854
2019-09-04T06:28:38.240+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:38.240+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578518, 1), t: 1 }({ ts: Timestamp(1567578518, 1), t: 1 })
2019-09-04T06:28:38.240+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578518, 1)
2019-09-04T06:28:38.240+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3853
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:38.240+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:38.240+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:38.240+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3853 2019-09-04T06:28:38.240+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578518, 1) 2019-09-04T06:28:38.240+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3858 2019-09-04T06:28:38.240+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3858 2019-09-04T06:28:38.240+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578518, 1), t: 1 }({ ts: Timestamp(1567578518, 1), t: 1 }) 2019-09-04T06:28:38.240+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:38.240+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578518, 1), t: 1 }, appliedWallTime: new Date(1567578518234), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:38.240+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 238 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:08.240+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578518, 1), t: 1 }, appliedWallTime: new Date(1567578518234), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:38.240+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.240+0000 2019-09-04T06:28:38.241+0000 D2 ASIO [RS] Request 238 finished with response: { ok: 
1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578514, 1), $clusterTime: { clusterTime: Timestamp(1567578518, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 1) } 2019-09-04T06:28:38.241+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578514, 1), t: 1 }, lastCommittedWall: new Date(1567578514864), lastOpVisible: { ts: Timestamp(1567578514, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578514, 1), $clusterTime: { clusterTime: Timestamp(1567578518, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:38.241+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:38.241+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.241+0000 2019-09-04T06:28:38.241+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578518, 1), t: 1 } 2019-09-04T06:28:38.241+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 239 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:48.241+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578514, 1), t: 1 } } 2019-09-04T06:28:38.241+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.241+0000 2019-09-04T06:28:38.243+0000 D2 ASIO [RS] Request 239 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 1), t: 1 }, lastCommittedWall: new Date(1567578518234), lastOpVisible: { ts: Timestamp(1567578518, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578518, 1), t: 1 }, lastCommittedWall: new Date(1567578518234), lastOpApplied: { ts: Timestamp(1567578518, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 1), $clusterTime: { clusterTime: Timestamp(1567578518, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 1) } 2019-09-04T06:28:38.243+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 1), t: 1 }, lastCommittedWall: 
new Date(1567578518234), lastOpVisible: { ts: Timestamp(1567578518, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578518, 1), t: 1 }, lastCommittedWall: new Date(1567578518234), lastOpApplied: { ts: Timestamp(1567578518, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 1), $clusterTime: { clusterTime: Timestamp(1567578518, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:38.243+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:38.243+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:38.243+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578513, 1) 2019-09-04T06:28:38.243+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:48.730+0000 2019-09-04T06:28:38.243+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:48.924+0000 2019-09-04T06:28:38.243+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:38.243+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 240 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:48.243+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578518, 1), t: 1 } } 2019-09-04T06:28:38.243+0000 D3 REPL [conn132] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn132] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.433+0000 2019-09-04T06:28:38.243+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:06.836+0000 2019-09-04T06:28:38.243+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.241+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn115] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn115] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.292+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn119] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn119] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.033+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn109] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn109] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.877+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn104] Got notified of new 
snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:28:38.243+0000 D3 REPL [conn104] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:55.059+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn120] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn120] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.040+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn121] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn121] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:58.752+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn124] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn124] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn107] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn107] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.871+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn130] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn130] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:51.753+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn106] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn106] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:57.566+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn123] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn123] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn133] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn133] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.702+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn117] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn117] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:54.151+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn134] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn134] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.753+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn135] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn135] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.925+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn111] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn111] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:28:39.896+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn131] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn131] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:56.307+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn110] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn110] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.881+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn108] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn108] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.879+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn114] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn114] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.488+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn136] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn136] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:02.478+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn103] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn103] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.919+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn112] Got notified of new snapshot: { ts: Timestamp(1567578518, 1), t: 1 }, 2019-09-04T06:28:38.234+0000 2019-09-04T06:28:38.243+0000 D3 REPL [conn112] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.921+0000 2019-09-04T06:28:38.243+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:38.243+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578518, 1), t: 1 }, durableWallTime: new Date(1567578518234), appliedOpTime: { ts: Timestamp(1567578518, 1), t: 1 }, appliedWallTime: new Date(1567578518234), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 1), t: 1 }, lastCommittedWall: new Date(1567578518234), lastOpVisible: { ts: Timestamp(1567578518, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:38.243+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 241 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:08.243+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), 
t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578518, 1), t: 1 }, durableWallTime: new Date(1567578518234), appliedOpTime: { ts: Timestamp(1567578518, 1), t: 1 }, appliedWallTime: new Date(1567578518234), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 1), t: 1 }, lastCommittedWall: new Date(1567578518234), lastOpVisible: { ts: Timestamp(1567578518, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:38.243+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.241+0000 2019-09-04T06:28:38.244+0000 D2 ASIO [RS] Request 241 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 1), t: 1 }, lastCommittedWall: new Date(1567578518234), lastOpVisible: { ts: Timestamp(1567578518, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 1), $clusterTime: { clusterTime: Timestamp(1567578518, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 1) } 2019-09-04T06:28:38.244+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 1), t: 1 }, lastCommittedWall: new Date(1567578518234), lastOpVisible: { ts: Timestamp(1567578518, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 1), $clusterTime: { clusterTime: Timestamp(1567578518, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:38.244+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:38.244+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.241+0000 2019-09-04T06:28:38.279+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:38.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:38.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:38.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:38.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:38.339+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578518, 1) 2019-09-04T06:28:38.379+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:38.398+0000 D2 ASIO [RS] Request 240 finished with response: { cursor: { 
nextBatch: [ { ts: Timestamp(1567578518, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578518389), o: { $v: 1, $set: { ping: new Date(1567578518388) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 1), t: 1 }, lastCommittedWall: new Date(1567578518234), lastOpVisible: { ts: Timestamp(1567578518, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578518, 1), t: 1 }, lastCommittedWall: new Date(1567578518234), lastOpApplied: { ts: Timestamp(1567578518, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 1), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:38.398+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578518, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578518389), o: { $v: 1, $set: { ping: new Date(1567578518388) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 1), t: 1 }, lastCommittedWall: new Date(1567578518234), lastOpVisible: { ts: Timestamp(1567578518, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578518, 1), t: 1 }, lastCommittedWall: new Date(1567578518234), lastOpApplied: { ts: Timestamp(1567578518, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 1), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:38.398+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:38.398+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578518, 2) and ending at ts: Timestamp(1567578518, 2) 2019-09-04T06:28:38.398+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:48.924+0000 2019-09-04T06:28:38.398+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:48.419+0000 2019-09-04T06:28:38.398+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:38.398+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:06.836+0000 2019-09-04T06:28:38.398+0000 D2 REPL [replication-1] oplog buffer has 0 bytes 2019-09-04T06:28:38.398+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578518, 2), t: 1 } 2019-09-04T06:28:38.398+0000 
D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:38.398+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:38.398+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578518, 1) 2019-09-04T06:28:38.398+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3864 2019-09-04T06:28:38.398+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:38.398+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:38.398+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3864 2019-09-04T06:28:38.398+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:38.398+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:28:38.398+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:38.398+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578518, 2) } 2019-09-04T06:28:38.398+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578518, 1) 2019-09-04T06:28:38.398+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3867 2019-09-04T06:28:38.398+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:38.398+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:38.398+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3867 2019-09-04T06:28:38.398+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3859 2019-09-04T06:28:38.399+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3859 2019-09-04T06:28:38.399+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3870 2019-09-04T06:28:38.399+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3870 2019-09-04T06:28:38.399+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:38.399+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 3872 2019-09-04T06:28:38.399+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578518, 2) 2019-09-04T06:28:38.399+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578518, 2) 2019-09-04T06:28:38.399+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 3872 2019-09-04T06:28:38.399+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:38.399+0000 D3 REPL [rsSync-0] setting oplog 
truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:28:38.399+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3871 2019-09-04T06:28:38.399+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3871 2019-09-04T06:28:38.399+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3874 2019-09-04T06:28:38.399+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3874 2019-09-04T06:28:38.399+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578518, 2), t: 1 }({ ts: Timestamp(1567578518, 2), t: 1 }) 2019-09-04T06:28:38.399+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578518, 2) 2019-09-04T06:28:38.399+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3875 2019-09-04T06:28:38.399+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578518, 2) } } ] } sort: {} projection: {} 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578518, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578518, 2) || First: notFirst: full path: ts 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578518, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578518, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578518, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578518, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:38.399+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3875 2019-09-04T06:28:38.399+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:38.399+0000 D3 STORAGE [repl-writer-worker-3] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:38.399+0000 D3 REPL [repl-writer-worker-3] applying op: { ts: Timestamp(1567578518, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578518389), o: { $v: 1, $set: { ping: new Date(1567578518388) } } }, oplog application mode: Secondary 2019-09-04T06:28:38.399+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578518, 2) 2019-09-04T06:28:38.399+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 3877 2019-09-04T06:28:38.399+0000 D2 QUERY [repl-writer-worker-3] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" } 2019-09-04T06:28:38.399+0000 D4 WRITE [repl-writer-worker-3] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:28:38.399+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 3877 2019-09-04T06:28:38.399+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:38.399+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578518, 2), t: 1 }({ ts: Timestamp(1567578518, 2), t: 1 }) 2019-09-04T06:28:38.399+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578518, 2) 2019-09-04T06:28:38.399+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3876 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:38.399+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:38.399+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:38.399+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 3876 2019-09-04T06:28:38.400+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578518, 2) 2019-09-04T06:28:38.400+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3880 2019-09-04T06:28:38.400+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3880 2019-09-04T06:28:38.400+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578518, 2), t: 1 }({ ts: Timestamp(1567578518, 2), t: 1 }) 2019-09-04T06:28:38.400+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:38.400+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578518, 1), t: 1 }, durableWallTime: new Date(1567578518234), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new Date(1567578518389), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 1), t: 1 }, lastCommittedWall: new Date(1567578518234), lastOpVisible: { ts: Timestamp(1567578518, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:38.400+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 242 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:08.400+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578518, 1), t: 1 }, durableWallTime: new Date(1567578518234), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new Date(1567578518389), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 2, cfgver: 2 } ], 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 1), t: 1 }, lastCommittedWall: new Date(1567578518234), lastOpVisible: { ts: Timestamp(1567578518, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:38.400+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.400+0000 2019-09-04T06:28:38.400+0000 D2 ASIO [RS] Request 242 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 1), t: 1 }, lastCommittedWall: new Date(1567578518234), lastOpVisible: { ts: Timestamp(1567578518, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 1), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:38.400+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 1), t: 1 }, lastCommittedWall: new Date(1567578518234), lastOpVisible: { ts: Timestamp(1567578518, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 1), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:38.400+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:38.400+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.400+0000 2019-09-04T06:28:38.400+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578518, 2), t: 1 } 2019-09-04T06:28:38.400+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 243 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:48.400+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578518, 1), t: 1 } } 2019-09-04T06:28:38.400+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.400+0000 2019-09-04T06:28:38.404+0000 D2 ASIO [RS] Request 243 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpApplied: { ts: Timestamp(1567578518, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:38.404+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpApplied: { ts: Timestamp(1567578518, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:38.404+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:38.404+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:38.404+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000 2019-09-04T06:28:38.404+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000 2019-09-04T06:28:38.404+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578513, 2) 2019-09-04T06:28:38.404+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:48.419+0000 2019-09-04T06:28:38.404+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:48.804+0000 2019-09-04T06:28:38.404+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:38.404+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:06.836+0000 2019-09-04T06:28:38.404+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 244 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:48.404+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578518, 2), t: 1 } } 2019-09-04T06:28:38.404+0000 D3 REPL [conn115] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000 2019-09-04T06:28:38.405+0000 D3 REPL [conn115] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.292+0000 2019-09-04T06:28:38.405+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.400+0000 2019-09-04T06:28:38.405+0000 D3 REPL [conn104] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000 2019-09-04T06:28:38.405+0000 D3 REPL [conn104] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:55.059+0000 2019-09-04T06:28:38.405+0000 D3 REPL [conn123] Got notified of new snapshot: { ts: 
Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn123] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn107] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn107] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.871+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn106] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn106] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:57.566+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn135] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn135] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.925+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn134] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn134] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.753+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn111] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn111] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.896+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn110] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn110] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.881+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn114] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn114] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:41.488+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn103] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn103] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.919+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn132] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn132] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.433+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn119] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn119] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.033+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn109] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn109] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.877+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn117] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn117] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:54.151+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn112] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn112] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.921+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn121] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn121] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:58.752+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn108] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn108] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:39.879+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn131] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn131] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:56.307+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn130] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn130] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:51.753+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn120] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn120] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.040+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn133] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn133] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.702+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn124] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn124] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn136] Got notified of new snapshot: { ts: Timestamp(1567578518, 2), t: 1 }, 2019-09-04T06:28:38.389+0000
2019-09-04T06:28:38.405+0000 D3 REPL [conn136] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:02.478+0000
2019-09-04T06:28:38.406+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:28:38.406+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:38.406+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new Date(1567578518389), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts:
Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:38.406+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 245 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:08.406+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new Date(1567578518389), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, durableWallTime: new Date(1567578514864), appliedOpTime: { ts: Timestamp(1567578514, 1), t: 1 }, appliedWallTime: new Date(1567578514864), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:38.406+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.400+0000 2019-09-04T06:28:38.406+0000 D2 ASIO [RS] Request 245 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:38.406+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:38.406+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:38.406+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.400+0000 2019-09-04T06:28:38.479+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:38.498+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578518, 2) 2019-09-04T06:28:38.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:38.516+0000 I COMMAND [conn75] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:38.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:38.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:38.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:38.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:38.579+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:38.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:38.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:38.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:38.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:38.679+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:38.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:38.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:38.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:38.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:38.779+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:38.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:38.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:38.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:38.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 246) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:38.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 246 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:48.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:38.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:06.836+0000
2019-09-04T06:28:38.836+0000 D2 ASIO [Replication] Request 246 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2,
replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:38.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:38.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:38.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 246) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:38.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:38.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:28:48.804+0000 2019-09-04T06:28:38.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:28:49.128+0000 2019-09-04T06:28:38.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:38.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:40.836Z 2019-09-04T06:28:38.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:38.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.836+0000 2019-09-04T06:28:38.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 
2019-09-04T06:29:08.836+0000 2019-09-04T06:28:38.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:38.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 247) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:38.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 247 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:48.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:38.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.836+0000 2019-09-04T06:28:38.837+0000 D2 ASIO [Replication] Request 247 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:38.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:38.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:38.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 247) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, 
replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:38.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:38.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:40.837Z 2019-09-04T06:28:38.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.836+0000 2019-09-04T06:28:38.879+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:38.980+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:39.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:39.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:39.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:39.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:39.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:39.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:39.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:39.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:39.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:39.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389) } 2019-09-04T06:28:39.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: 
BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:39.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:39.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:39.080+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:39.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:39.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:39.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:39.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:39.180+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:39.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:39.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:39.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:39.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:39.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:28:39.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
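The isMaster round-trips above and below (conn22, conn31, conn51, conn52, conn58, conn59, conn60, conn75) are routine topology monitoring: the timestamps show each connection re-probing this member roughly every 500 ms. A minimal PyMongo sketch of the same probe, assuming only that the host and port from this log are reachable:

    # Illustrative sketch only (not part of the log): replay the isMaster probe
    # seen above against this config server. Host/port come from the log;
    # everything else is an assumption.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)  # talk to this one member only
    reply = client.admin.command("isMaster")
    # On this secondary the reply carries ismaster: false, secondary: true and
    # setName: "configrs" -- the reslen:907 responses logged above.
    print(reply.get("setName"), reply.get("ismaster"), reply.get("secondary"))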
2019-09-04T06:28:39.280+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:39.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:39.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:39.380+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:39.399+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:39.399+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:39.399+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578518, 2)
2019-09-04T06:28:39.399+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 3904
2019-09-04T06:28:39.399+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:39.399+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:39.399+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 3904
2019-09-04T06:28:39.400+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 3907
2019-09-04T06:28:39.400+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 3907
2019-09-04T06:28:39.400+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578518, 2), t: 1 }({ ts: Timestamp(1567578518, 2), t: 1 })
2019-09-04T06:28:39.480+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:39.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:39.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:39.537+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578518, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578518, 2), t: 1 } }, $db: "config" }
2019-09-04T06:28:39.537+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578518, 2), t: 1 } } }
2019-09-04T06:28:39.537+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:28:39.537+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578518, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578518, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578518, 2)
2019-09-04T06:28:39.537+0000 D2 QUERY [conn61] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1
2019-09-04T06:28:39.537+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578518, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578518, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:28:39.538+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578518, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578518, 2), t: 1 } }, $db: "config" }
2019-09-04T06:28:39.538+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578518, 2), t: 1 } } }
2019-09-04T06:28:39.538+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:28:39.538+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578518, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578518, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578518, 2)
2019-09-04T06:28:39.538+0000 D2 QUERY [conn61] Collection config.settings does not exist. Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1
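The two finds above are the balancer on the primary polling config.settings for its "balancer" and "autosplit" documents, using a majority read concern pinned to afterOpTime { ts: Timestamp(1567578518, 2), t: 1 } and a 30-second maxTimeMS; the EOF plans simply mean neither document exists yet, so defaults apply. A rough PyMongo equivalent of the same reads (afterOpTime is an internal server-to-server field, so it is left out here; the connection string is an assumption built from the hosts in this log):

    # Illustrative sketch only: the same config.settings lookups issued as
    # plain majority reads from a client.
    from pymongo import MongoClient, ReadPreference
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb802.togewa.com:27019,"
                         "cmodb803.togewa.com:27019,"
                         "cmodb804.togewa.com:27019/?replicaSet=configrs")
    settings = client.get_database(
        "config",
        read_concern=ReadConcern("majority"),
        read_preference=ReadPreference.NEAREST,
    )["settings"]

    for doc_id in ("balancer", "autosplit"):
        # find_one returns None while the document is absent (the EOF plan above).
        print(doc_id, settings.find_one({"_id": doc_id}, max_time_ms=30000))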
Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:28:39.538+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578518, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578518, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:28:39.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:39.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:39.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:39.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:39.580+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:39.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:39.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:39.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:39.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:39.681+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:39.697+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:39.697+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:39.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:39.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:39.781+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:39.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:39.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:39.869+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45662 #137 (80 connections now open) 2019-09-04T06:28:39.869+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:39.869+0000 D2 COMMAND [conn137] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: 
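The internalClient field in the handshake above marks conn137 as another cluster member rather than an application driver. The entries that follow show older requests failing: each "timed out waiting for read concern to be satisfied" is a find on admin.system.keys whose afterOpTime carries term 92, apparently cached from an earlier incarnation of this config replica set (the current term is 1 and the signing keyId differs), so the majority snapshot can never reach it and the 30-second maxTimeMS expires with MaxTimeMSExpired. From a driver, that failure surfaces as ExecutionTimeout; a sketch of the error path, with an artificially small time limit standing in for the unsatisfiable wait:

    # Illustrative sketch only: how MaxTimeMSExpired (the error below) looks
    # from PyMongo. A 1 ms limit is used here to provoke the timeout; the
    # log's actual cause is an afterOpTime that can never be satisfied.
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)
    try:
        client.admin["system.keys"].find_one(
            {"purpose": "HMAC"}, sort=[("expiresAt", 1)], max_time_ms=1)
    except ExecutionTimeout as exc:
        # Server-side counterpart: "MaxTimeMSExpired: operation exceeded time limit".
        print("timed out:", exc)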
"Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:39.869+0000 I NETWORK [conn137] received client metadata from 10.108.2.72:45662 conn137: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:39.869+0000 I COMMAND [conn137] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:39.872+0000 I COMMAND [conn107] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578482, 1), signature: { hash: BinData(0, 3ACCBE001A4DF149C45D49591F00BE6F4AE6BCE4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:28:39.872+0000 D1 - [conn107] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:39.872+0000 W - [conn107] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:39.878+0000 I COMMAND [conn109] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578485, 1), signature: { hash: BinData(0, 52C21B0434F0E97A4117AE1AE4E5E5F2B2245704), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:28:39.878+0000 D1 - [conn109] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:39.878+0000 W - [conn109] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:39.880+0000 I COMMAND [conn108] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6D93D72CE5EBCF8DB8B943DC77DF5B5E8E3E9809), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:39.880+0000 D1 - [conn108] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:39.880+0000 W - [conn108] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:39.881+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:39.881+0000 I COMMAND [conn110] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578482, 1), signature: { hash: BinData(0, 3ACCBE001A4DF149C45D49591F00BE6F4AE6BCE4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:39.881+0000 D1 - [conn110] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:39.881+0000 W - [conn110] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:39.883+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34182 #138 (81 connections now open) 2019-09-04T06:28:39.883+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:39.883+0000 D2 COMMAND [conn138] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:39.883+0000 I NETWORK [conn138] received client metadata from 10.108.2.57:34182 conn138: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:39.883+0000 I COMMAND [conn138] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:39.889+0000 I - [conn107] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 
0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : 
"AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : 
"D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:39.889+0000 D1 COMMAND [conn107] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578482, 1), signature: { hash: BinData(0, 3ACCBE001A4DF149C45D49591F00BE6F4AE6BCE4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:39.889+0000 D1 - [conn107] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:39.889+0000 W - [conn107] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time 
limit 2019-09-04T06:28:39.896+0000 I COMMAND [conn111] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 1), signature: { hash: BinData(0, F4E758946912814F3BEF328731BD00334916A2A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:39.896+0000 D1 - [conn111] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:39.896+0000 W - [conn111] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:39.906+0000 I - [conn110] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"56
1748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:39.906+0000 D1 COMMAND [conn110] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578482, 1), signature: { hash: BinData(0, 3ACCBE001A4DF149C45D49591F00BE6F4AE6BCE4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:39.906+0000 D1 - [conn110] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:39.906+0000 W - [conn110] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:39.912+0000 I NETWORK [listener] connection accepted from 10.108.2.63:36222 #139 (82 connections now open) 2019-09-04T06:28:39.912+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:39.912+0000 D2 COMMAND [conn139] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:39.912+0000 I NETWORK [conn139] received client metadata from 10.108.2.63:36222 conn139: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:39.913+0000 I COMMAND [conn139] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:39.919+0000 I COMMAND [conn103] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578482, 1), signature: { hash: BinData(0, 3ACCBE001A4DF149C45D49591F00BE6F4AE6BCE4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:39.920+0000 D1 - [conn103] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:39.920+0000 W - [conn103] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:39.922+0000 I COMMAND [conn112] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578483, 1), signature: { hash: BinData(0, 621D9BEDA95DF18ED95436DD15D0B64BF8D938E4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:39.922+0000 D1 - [conn112] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:39.922+0000 W - [conn112] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:39.926+0000 I - [conn107] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:39.926+0000 W COMMAND [conn107] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:39.926+0000 I COMMAND [conn107] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578482, 1), signature: { hash: BinData(0, 3ACCBE001A4DF149C45D49591F00BE6F4AE6BCE4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:28:39.926+0000 D2 NETWORK [conn107] Session from 10.108.2.73:52050 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:39.926+0000 I NETWORK [conn107] end connection 10.108.2.73:52050 (81 connections now open) 2019-09-04T06:28:39.943+0000 I - [conn111] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:39.943+0000 D1 COMMAND [conn111] assertion while executing command 'find' on database 'admin' 
with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 1), signature: { hash: BinData(0, F4E758946912814F3BEF328731BD00334916A2A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:39.943+0000 D1 - [conn111] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:39.943+0000 W - [conn111] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:39.963+0000 I - [conn110] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13Sc
heduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", 
"elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:39.963+0000 W COMMAND [conn110] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:39.963+0000 I COMMAND [conn110] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578482, 1), signature: { hash: BinData(0, 3ACCBE001A4DF149C45D49591F00BE6F4AE6BCE4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30034ms 2019-09-04T06:28:39.963+0000 D2 NETWORK [conn110] Session from 10.108.2.72:45646 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:39.963+0000 I NETWORK [conn110] end connection 10.108.2.72:45646 (80 connections now open) 2019-09-04T06:28:39.980+0000 I - [conn103] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:39.980+0000 D1 COMMAND [conn103] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578482, 1), signature: { hash: BinData(0, 3ACCBE001A4DF149C45D49591F00BE6F4AE6BCE4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:39.980+0000 D1 - [conn103] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:39.980+0000 W - [conn103] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:39.981+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:39.996+0000 I - [conn112] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 
0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : 
"357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : 
"740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:39.996+0000 D1 COMMAND [conn112] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578483, 1), signature: { hash: BinData(0, 621D9BEDA95DF18ED95436DD15D0B64BF8D938E4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:39.996+0000 D1 - [conn112] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:39.996+0000 W - [conn112] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:40.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 
2019-09-04T06:28:40.005+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:40.005+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:28:40.005+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:28:40.016+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:40.016+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:28:40.017+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.017+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.018+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:40.018+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:28:40.018+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:28:40.018+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:28:40.030+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:40.031+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:40.032+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:40.032+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:28:40.032+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:28:40.032+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:28:40.032+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:40.032+0000 D5 QUERY [conn90] Beginning planning... 
============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:28:40.032+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:28:40.032+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:28:40.032+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:28:40.032+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:28:40.032+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:28:40.032+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:28:40.032+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:40.032+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:40.032+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:40.032+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578518, 2) 2019-09-04T06:28:40.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3929 2019-09-04T06:28:40.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3929 2019-09-04T06:28:40.032+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:40.032+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:40.032+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:28:40.033+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:28:40.033+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:40.033+0000 D5 QUERY [conn90] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:28:40.033+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:28:40.033+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:28:40.033+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578518, 2) 2019-09-04T06:28:40.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3932 2019-09-04T06:28:40.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3932 2019-09-04T06:28:40.033+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:40.033+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:28:40.033+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:40.033+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:28:40.033+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:28:40.033+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:28:40.033+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578518, 2) 2019-09-04T06:28:40.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3934 2019-09-04T06:28:40.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3934 2019-09-04T06:28:40.033+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:40.033+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:28:40.033+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:28:40.033+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:28:40.033+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:40.033+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:28:40.033+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:40.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3937 2019-09-04T06:28:40.033+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:28:40.033+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.033+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:40.033+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:40.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3937 2019-09-04T06:28:40.033+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:28:40.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3938 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3938 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3939 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3939 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3940 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3940 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3941 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 
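The conn90 traffic above reads like a monitoring agent's probe cycle: SCRAM-SHA-1 authentication as dba_root, then serverStatus, replSetGetStatus, a count of jumbo chunks, and head/tail reads of the oplog, all with secondaryPreferred read preference. A sketch of that cycle in PyMongo, assuming placeholder credentials and address:

    from pymongo import MongoClient

    # Placeholder URI; the log shows dba_root authenticating against the
    # admin database via SCRAM-SHA-1 and reading with secondaryPreferred.
    client = MongoClient(
        "mongodb://dba_root:PASSWORD@localhost:27019/"
        "?authSource=admin&authMechanism=SCRAM-SHA-1"
        "&readPreference=secondaryPreferred")

    status = client.admin.command("serverStatus", recordStats=0)
    repl = client.admin.command("replSetGetStatus")

    # No index covers {jumbo: 1}, which is why the planner above settles
    # on a COLLSCAN of config.chunks for this count.
    jumbo = client.config.chunks.count_documents({"jumbo": True})

    # Oldest and newest oplog entries bound the replication window; the
    # $natural sort is the hint that forces the logged table scan.
    oplog = client.local["oplog.rs"]
    first = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
    last = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])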
2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3941 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3942 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3942 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3943 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3943 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3944 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3944 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3945 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3945 
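This run of "looking up metadata" / "fetched CCE metadata" records on conn90 is the server answering the listDatabases command issued earlier: sizing each database means resolving every collection's catalog entry in turn. The originating call, sketched with PyMongo under the same placeholder assumptions as above:

    from pymongo import MongoClient
    from pymongo.read_preferences import SecondaryPreferred

    client = MongoClient("mongodb://localhost:27019")  # placeholder address
    result = client.admin.command("listDatabases",
                                  read_preference=SecondaryPreferred())
    for entry in result["databases"]:
        # Each reported database size is computed from the per-collection
        # metadata walk visible in the D3 STORAGE lines.
        print(entry["name"], entry["sizeOnDisk"])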
2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3946 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3946 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3947 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: 
BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3947 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3948 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3948 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3949 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3949 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3950 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3950 
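The index definitions embedded in each catalog entry (for example the unique host_1 index on config.shards just above) are the same specs a client can read back through listIndexes. A short PyMongo check, again with a placeholder address:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")  # placeholder address
    for name, spec in client.config.shards.index_information().items():
        # Expect _id_ plus the unique host_1 index recorded in the
        # fetched CCE metadata above.
        print(name, spec["key"], "unique" if spec.get("unique") else "")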
2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3951 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3951 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3952 2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 
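Note the pattern throughout this walk: every read-only lookup opens a WiredTiger snapshot (WT begin_transaction) and ends it with WT rollback_transaction. Rolling back is the normal way to discard a read-only snapshot, not a failure. Any single client read produces one such pair; a hypothetical read of the collection just resolved, with a placeholder address:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")  # placeholder address
    # One read opens one WT snapshot (begin_transaction) and, having
    # written nothing, discards it (rollback_transaction), as logged above.
    doc = client.local["replset.election"].find_one()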
2019-09-04T06:28:40.034+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.035+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:28:40.035+0000 I - [conn103] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19Servi
ceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : 
"45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] 
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:40.035+0000 W COMMAND [conn103] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:40.035+0000 I COMMAND [conn103] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578482, 1), signature: { hash: BinData(0, 3ACCBE001A4DF149C45D49591F00BE6F4AE6BCE4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30070ms 2019-09-04T06:28:40.039+0000 D2 NETWORK [conn103] Session from 10.108.2.63:36204 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:40.040+0000 I NETWORK [conn103] end connection 10.108.2.63:36204 (79 connections now open) 2019-09-04T06:28:40.040+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.040+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3952 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3954 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] looking up metadata for:
local.system.rollback.id @ RecordId(6) 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3954 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3955 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3955 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3956 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3956 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3957 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:40.040+0000 
D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3957 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3958 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3958 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3959 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3959 2019-09-04T06:28:40.040+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, 
ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 6ms 2019-09-04T06:28:40.040+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3961 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3961 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3962 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3962 2019-09-04T06:28:40.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3963 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3963 2019-09-04T06:28:40.041+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:40.041+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3965 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3965 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3966 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3966 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3967 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3967 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3968 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3968 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3969 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3969 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3970 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3970 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3971 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3971 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3972 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3972 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3973 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3973 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3974 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3974 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3975 
2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3975 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3976 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3976 2019-09-04T06:28:40.041+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:40.041+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3978 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3978 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3979 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3979 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3980 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3980 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3981 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3981 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3982 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3982 2019-09-04T06:28:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 3983 2019-09-04T06:28:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 3983 2019-09-04T06:28:40.042+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:40.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.055+0000 I - [conn112] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:40.055+0000 W COMMAND [conn112] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:40.055+0000 I COMMAND [conn112] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578483, 1), signature: { hash: BinData(0, 621D9BEDA95DF18ED95436DD15D0B64BF8D938E4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30084ms 2019-09-04T06:28:40.056+0000 D2 NETWORK [conn112] Session from 10.108.2.60:44768 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:40.056+0000 I NETWORK [conn112] end connection 10.108.2.60:44768 (78 connections now open) 2019-09-04T06:28:40.056+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.056+0000 I COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.065+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36580 #140 (79 connections now open) 2019-09-04T06:28:40.065+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:40.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.066+0000 D2 COMMAND [conn140] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:40.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.066+0000 I NETWORK [conn140] received client metadata from 10.108.2.55:36580 conn140: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:40.066+0000 I COMMAND [conn140] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:40.066+0000 D2 COMMAND [conn140] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, F56EFECF966613B7CF9F4E79C9426BAFAB58CE38), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:40.066+0000 D1 REPL [conn140] waitUntilOpTime: waiting for optime:{ 
ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578518, 2), t: 1 } 2019-09-04T06:28:40.066+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000 2019-09-04T06:28:40.067+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51704 #141 (80 connections now open) 2019-09-04T06:28:40.067+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:40.067+0000 D2 COMMAND [conn141] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:40.067+0000 I NETWORK [conn141] received client metadata from 10.108.2.74:51704 conn141: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:40.067+0000 I COMMAND [conn141] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:40.067+0000 D2 COMMAND [conn141] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578515, 1), signature: { hash: BinData(0, 687BE122806C404325073B712C2FE776A546E867), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:28:40.067+0000 D1 REPL [conn141] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578518, 2), t: 1 } 2019-09-04T06:28:40.067+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000 2019-09-04T06:28:40.069+0000 D2 COMMAND [conn137] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578512, 1), signature: { hash: BinData(0, 52188A2AB5758BF914786CB9D47A9D6B7C3891AC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:40.069+0000 D1 REPL [conn137] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578518, 2), t: 1 } 
2019-09-04T06:28:40.069+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:28:40.069+0000 D2 COMMAND [conn105] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, F56EFECF966613B7CF9F4E79C9426BAFAB58CE38), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:40.069+0000 D1 REPL [conn105] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578518, 2), t: 1 } 2019-09-04T06:28:40.069+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:28:40.076+0000 I - [conn111] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, {
"b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:40.076+0000 W COMMAND [conn111] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:40.076+0000 I COMMAND [conn111] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 1), signature: { hash: BinData(0, F4E758946912814F3BEF328731BD00334916A2A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30057ms 2019-09-04T06:28:40.076+0000 D2 NETWORK [conn111] Session from 10.108.2.57:34158 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:40.076+0000 I NETWORK [conn111] end connection 10.108.2.57:34158 (79 connections now open) 2019-09-04T06:28:40.081+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:40.110+0000 I - [conn109] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:40.110+0000 I - [conn108] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:40.110+0000 D1 COMMAND [conn109] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578485, 1), signature: { hash: BinData(0, 52C21B0434F0E97A4117AE1AE4E5E5F2B2245704), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:40.110+0000 D1 COMMAND [conn108] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6D93D72CE5EBCF8DB8B943DC77DF5B5E8E3E9809), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:40.110+0000 D1 - [conn108] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:28:40.110+0000 D1 - [conn109] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:28:40.110+0000 W - [conn108] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:40.110+0000 W - [conn109] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:40.118+0000 I NETWORK [listener] connection accepted from 10.108.2.61:37858 #142 (80 connections now open)
2019-09-04T06:28:40.118+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:28:40.118+0000 D2 COMMAND [conn142] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:28:40.118+0000 I NETWORK [conn142] received client metadata from 10.108.2.61:37858 conn142: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:28:40.118+0000 I COMMAND [conn142] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:28:40.122+0000 D2 COMMAND [conn142] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, F56EFECF966613B7CF9F4E79C9426BAFAB58CE38), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:28:40.122+0000 D1 REPL [conn142] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578518, 2), t: 1 }
2019-09-04T06:28:40.122+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000
2019-09-04T06:28:40.142+0000 I - [conn109] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab
0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:40.143+0000 W COMMAND [conn109] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:40.143+0000 I COMMAND [conn109] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: 
{ expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578485, 1), signature: { hash: BinData(0, 52C21B0434F0E97A4117AE1AE4E5E5F2B2245704), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30243ms 2019-09-04T06:28:40.143+0000 D2 NETWORK [conn109] Session from 10.108.2.74:51684 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:40.143+0000 I NETWORK [conn109] end connection 10.108.2.74:51684 (79 connections now open) 2019-09-04T06:28:40.151+0000 I - [conn108] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecu
torTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : 
"DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) 
[0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:40.151+0000 W COMMAND [conn108] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:40.151+0000 I COMMAND [conn108] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578480, 1), signature: { hash: BinData(0, 6D93D72CE5EBCF8DB8B943DC77DF5B5E8E3E9809), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30241ms 2019-09-04T06:28:40.151+0000 D2 NETWORK [conn108] Session from 10.108.2.48:42000 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:40.151+0000 I NETWORK [conn108] end connection 10.108.2.48:42000 (78 connections now open) 2019-09-04T06:28:40.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.159+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.166+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.166+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.181+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:40.197+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.197+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 
7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:40.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:28:40.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:40.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:40.230+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389) }
2019-09-04T06:28:40.230+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:40.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:28:40.235+0000 D4 - [FlowControlRefresher] Refreshing tickets.
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:40.281+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:40.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.381+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:40.399+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:40.399+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:40.399+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578518, 2) 2019-09-04T06:28:40.399+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4003 2019-09-04T06:28:40.399+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:40.399+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:40.399+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4003 2019-09-04T06:28:40.400+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4006 2019-09-04T06:28:40.400+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4006 2019-09-04T06:28:40.400+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578518, 2), t: 1 }({ ts: Timestamp(1567578518, 2), t: 1 }) 2019-09-04T06:28:40.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.465+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578518, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578518, 2), t: 1 } }, $db: "config" } 2019-09-04T06:28:40.465+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578518, 2), t: 1 } } } 2019-09-04T06:28:40.465+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:28:40.465+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578518, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578518, 2), t: 1 } }, $db: 
"config" } with readTs: Timestamp(1567578518, 2) 2019-09-04T06:28:40.465+0000 D2 QUERY [conn71] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:28:40.465+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578518, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578518, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:28:40.482+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:40.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.582+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:40.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.666+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.666+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.682+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:40.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.721+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578514, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578514, 1), t: 1 } }, $db: "config" } 2019-09-04T06:28:40.721+0000 D1 COMMAND [conn72] Waiting for 'committed' 
snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578514, 1), t: 1 } } } 2019-09-04T06:28:40.721+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:28:40.721+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578514, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578514, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578518, 2) 2019-09-04T06:28:40.721+0000 D2 QUERY [conn72] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:28:40.721+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578514, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578514, 1), signature: { hash: BinData(0, B888CBCD4C5ECFDB6BA6EAAC3AC8D2C0C4D8F297), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578514, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:28:40.782+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:40.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:40.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:40.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:40.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 248) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:40.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 248 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:50.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:40.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.836+0000 2019-09-04T06:28:40.836+0000 D2 ASIO [Replication] Request 248 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, 
configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:40.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:40.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:40.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 248) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:40.836+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:40.836+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:28:49.128+0000 2019-09-04T06:28:40.836+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:28:51.877+0000 2019-09-04T06:28:40.836+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:40.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:42.836Z 2019-09-04T06:28:40.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:40.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:10.836+0000 2019-09-04T06:28:40.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 
2019-09-04T06:28:40.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:10.836+0000
2019-09-04T06:28:40.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:40.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 249) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:40.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 249 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:50.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:40.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:10.836+0000
2019-09-04T06:28:40.837+0000 D2 ASIO [Replication] Request 249 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) }
2019-09-04T06:28:40.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:40.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:40.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 249) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2,
replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) }
2019-09-04T06:28:40.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:28:40.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:42.837Z
2019-09-04T06:28:40.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:10.836+0000
2019-09-04T06:28:40.882+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:40.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:40.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:40.945+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:40.945+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:40.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:40.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:40.982+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:41.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:41.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:41.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:41.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:41.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:41.061+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:41.061+0000 D3 REPL [replexec-1] memberData lastupdate is: 2019-09-04T06:28:40.836+0000
2019-09-04T06:28:41.061+0000 D3 REPL [replexec-1] memberData lastupdate is: 2019-09-04T06:28:40.837+0000
2019-09-04T06:28:41.061+0000 D3 REPL [replexec-1] stalest member MemberId(0) date: 2019-09-04T06:28:40.836+0000
2019-09-04T06:28:41.061+0000 D3 REPL [replexec-1] scheduling next check at 2019-09-04T06:28:50.836+0000
2019-09-04T06:28:41.061+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:10.836+0000
2019-09-04T06:28:41.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:41.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:28:41.061+0000 D2 REPL_HB [conn34]
Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:41.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:41.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389) }
2019-09-04T06:28:41.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 2), signature: { hash: BinData(0, 7FDE543E440912D5EBF05D092EFE3E413526E933), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:41.082+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:41.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:41.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:41.183+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:41.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:41.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:41.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:41.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:41.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:28:41.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:28:41.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:41.274+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:41.283+0000 D4 STORAGE [WTJournalFlusher] flushed journal
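The conn34 entries above are the receiving side of the same heartbeat protocol: the primary's replSetHeartbeat arrives, is processed, and this node answers in 0ms with its own state (state: 2, i.e. SECONDARY, syncingTo cmodb804.togewa.com:27019). The member table these heartbeats keep fresh can be read back with replSetGetStatus; a minimal sketch, assuming pymongo and the same direct connection as above:

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")

# Each entry in "members" carries the heartbeat-derived view of one node:
# state, optimes, and (in the 4.2 output) the sync source under "syncingTo".
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    print(member["name"], member["stateStr"], member.get("syncingTo", "-"))
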
2019-09-04T06:28:41.293+0000 I COMMAND [conn115] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578481, 1), signature: { hash: BinData(0, E9AAF9B781D488C5E3ACF0A29B982E7119C1C4F4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:28:41.293+0000 D1 - [conn115] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:28:41.293+0000 W - [conn115] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:41.310+0000 I - [conn115] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:41.310+0000 D1 COMMAND [conn115] 
2019-09-04T06:28:41.310+0000 D1 COMMAND [conn115] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578481, 1), signature: { hash: BinData(0, E9AAF9B781D488C5E3ACF0A29B982E7119C1C4F4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:41.310+0000 D1 - [conn115] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:28:41.310+0000 W - [conn115] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:41.330+0000 I - [conn115] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextW
ithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" 
}, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:28:41.330+0000 W COMMAND [conn115] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:28:41.330+0000 I COMMAND [conn115] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578481, 1), signature: { hash: BinData(0, E9AAF9B781D488C5E3ACF0A29B982E7119C1C4F4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms
2019-09-04T06:28:41.330+0000 D2 NETWORK [conn115] Session from 10.108.2.59:48252 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:28:41.330+0000 I NETWORK [conn115] end connection 10.108.2.59:48252 (77 connections now open)
2019-09-04T06:28:41.332+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:41.332+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:41.383+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:41.399+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:41.399+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:41.399+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578518, 2)
2019-09-04T06:28:41.399+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4031
2019-09-04T06:28:41.399+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:41.399+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
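The completion record compresses the whole failure into one line: ok:0, errName:MaxTimeMSExpired, errCode:50, locks:{}, and 30027ms elapsed against the 30000 ms budget, after which the client closed the connection (HostUnreachable: Connection closed by peer). The preceding warning is a side effect of the same expiry: having timed out, the operation also gave up waiting for the global lock needed to gather storage statistics for the slow-op report (the lock_state.cpp frames in the second backtrace). When a log contains many such records, scanning them mechanically beats reading; a minimal sketch, assuming this 4.2-era plain-text format and a hypothetical file path (the regex is a heuristic for these completion lines, not an official parser):

import re

# Pull connection id, error name, and duration out of "I COMMAND" completion
# lines such as: "... errName:MaxTimeMSExpired errCode:50 ... op_msg 30027ms".
PATTERN = re.compile(r"I COMMAND\s+\[(conn\d+)\].*?errName:(\w+).*?\s(\d+)ms\s*$")

def failed_commands(path):
    with open(path) as log:
        for line in log:
            match = PATTERN.search(line)
            if match:
                yield match.group(1), match.group(2), int(match.group(3))

# Example: list every command that finished with an error, slowest first.
# for conn, err, ms in sorted(failed_commands("mongod.log"), key=lambda t: -t[2]):
#     print(conn, err, ms)
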
2019-09-04T06:28:41.399+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4031
2019-09-04T06:28:41.400+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4034
2019-09-04T06:28:41.400+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4034
2019-09-04T06:28:41.400+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578518, 2), t: 1 }({ ts: Timestamp(1567578518, 2), t: 1 })
2019-09-04T06:28:41.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:41.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:41.483+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:41.491+0000 I COMMAND [conn114] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578482, 1), signature: { hash: BinData(0, 3ACCBE001A4DF149C45D49591F00BE6F4AE6BCE4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:28:41.491+0000 D1 - [conn114] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:28:41.491+0000 W - [conn114] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:41.509+0000 I - [conn114] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:41.509+0000 D1 COMMAND [conn114] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578482, 1), signature: { hash: BinData(0, 3ACCBE001A4DF149C45D49591F00BE6F4AE6BCE4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:41.509+0000 D1 - [conn114] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:41.509+0000 W - [conn114] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:41.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:41.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:41.530+0000 I - [conn114] 0x56174b707c81 
0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : 
"E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : 
"/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) 
[0x7f0ed833102d]
-----  END BACKTRACE  -----
2019-09-04T06:28:41.530+0000 W COMMAND [conn114] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:41.530+0000 I COMMAND [conn114] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578482, 1), signature: { hash: BinData(0, 3ACCBE001A4DF149C45D49591F00BE6F4AE6BCE4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:28:41.531+0000 D2 NETWORK [conn114] Session from 10.108.2.52:47078 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:41.531+0000 I NETWORK [conn114] end connection 10.108.2.52:47078 (76 connections now open) 2019-09-04T06:28:41.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:41.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:41.583+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:41.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:41.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:41.683+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:41.696+0000 D2 COMMAND [conn128] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 1), signature: { hash: BinData(0, 66B8FDDB4AF15B67A9013881B963975EBDEB24EF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:41.696+0000 D1 REPL [conn128] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578518, 2), t: 1 } 2019-09-04T06:28:41.696+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000 2019-09-04T06:28:41.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:41.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:41.784+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:41.832+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:41.832+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:41.884+0000 D4 STORAGE [WTJournalFlusher] flushed journal
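The two COMMAND entries above capture the failure itself: the internal key-refresh find on admin.system.keys (readConcern majority, maxTimeMS 30000) was killed with MaxTimeMSExpired after 30030ms because, as the backtrace shows, the operation was still queued on the global lock. A minimal pymongo sketch of the equivalent driver-side read, useful for reproducing the query shape against this node; the $replData, $clusterTime and $configServerState fields in the logged command are internal sharding metadata that a plain client does not send:

    # Hedged sketch: reissue the logged system.keys query from a plain client.
    # Hostname/port are taken from this log; add credentials if the deployment
    # requires authentication.
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    keys = client.admin.get_collection("system.keys",
                                       read_concern=ReadConcern("majority"))
    try:
        # Same filter, sort and 30s server-side time limit as the logged find.
        docs = list(keys.find(
            {"purpose": "HMAC", "expiresAt": {"$gt": Timestamp(1579858365, 0)}},
            sort=[("expiresAt", 1)],
            max_time_ms=30000,
        ))
    except ExecutionTimeout:
        # Corresponds to errName:MaxTimeMSExpired errCode:50 in the entry above.
        print("operation exceeded time limit")
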
2019-09-04T06:28:41.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:41.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:41.984+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:42.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:42.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:42.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:42.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:42.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:42.084+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:42.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:42.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:42.184+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:42.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:42.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:42.230+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 11343C686046946B15639CDDD1F6CD7209AB8683), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:42.230+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:42.230+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 11343C686046946B15639CDDD1F6CD7209AB8683), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:42.230+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 11343C686046946B15639CDDD1F6CD7209AB8683), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:42.230+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389) } 2019-09-04T06:28:42.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: 
Timestamp(1567578519, 1), signature: { hash: BinData(0, 11343C686046946B15639CDDD1F6CD7209AB8683), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:42.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:42.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:42.284+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:42.361+0000 I NETWORK [listener] connection accepted from 10.20.102.80:61129 #143 (77 connections now open) 2019-09-04T06:28:42.361+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:42.362+0000 D2 COMMAND [conn143] run command admin.$cmd { isMaster: 1, client: { application: { name: "robo3t" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } 2019-09-04T06:28:42.362+0000 I NETWORK [conn143] received client metadata from 10.20.102.80:61129 conn143: { application: { name: "robo3t" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } } 2019-09-04T06:28:42.362+0000 I COMMAND [conn143] command admin.$cmd appName: "robo3t" command: isMaster { isMaster: 1, client: { application: { name: "robo3t" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:28:42.372+0000 D2 COMMAND [conn143] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:42.372+0000 D1 ACCESS [conn143] Returning user dba_root@admin from cache 2019-09-04T06:28:42.372+0000 I COMMAND [conn143] command admin.$cmd appName: "robo3t" command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:394 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:42.379+0000 D2 COMMAND [conn116] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578512, 1), signature: { hash: BinData(0, 52188A2AB5758BF914786CB9D47A9D6B7C3891AC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:42.379+0000 D1 REPL [conn116] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578518, 2), t: 1 } 2019-09-04T06:28:42.379+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000 2019-09-04T06:28:42.382+0000 D2 COMMAND [conn143] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" }
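Connection 143 above is a Robo 3T session authenticating as dba_root: an isMaster carrying client metadata, then a SCRAM-SHA-1 conversation opened with saslStart and completed by the saslContinue exchanges, whose success ("Successfully authenticated as principal dba_root") is logged just below. The server redacts the payloads as "xxx". A driver runs this handshake automatically on connect; a hedged pymongo sketch, with an illustrative password that is not recoverable from the log:

    # Hedged sketch: the saslStart/saslContinue pair above is the SCRAM-SHA-1
    # handshake that a driver performs for you. "<password>" is a placeholder.
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://cmodb803.togewa.com:27019/",
        username="dba_root",
        password="<password>",        # placeholder; payloads are redacted above
        authSource="admin",
        authMechanism="SCRAM-SHA-1",
    )
    # Forces the handshake to run; conn143 issues the same ping right after
    # authenticating.
    client.admin.command("ping")
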
2019-09-04T06:28:42.382+0000 I COMMAND [conn143] command admin.$cmd appName: "robo3t" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:323 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:42.384+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:42.391+0000 D2 COMMAND [conn143] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:28:42.391+0000 D1 ACCESS [conn143] Returning user dba_root@admin from cache 2019-09-04T06:28:42.391+0000 I ACCESS [conn143] Successfully authenticated as principal dba_root on admin from client 10.20.102.80:61129 2019-09-04T06:28:42.391+0000 I COMMAND [conn143] command admin.$cmd appName: "robo3t" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:293 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:42.399+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:42.399+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:42.399+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578518, 2) 2019-09-04T06:28:42.399+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4056 2019-09-04T06:28:42.400+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:42.400+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:42.400+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4056 2019-09-04T06:28:42.400+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4059 2019-09-04T06:28:42.400+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4059 2019-09-04T06:28:42.400+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578518, 2), t: 1 }({ ts: Timestamp(1567578518, 2), t: 1 }) 2019-09-04T06:28:42.401+0000 D2 COMMAND [conn143] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:28:42.401+0000 I COMMAND [conn143] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:42.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:42.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:42.485+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:42.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:42.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:42.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:42.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:42.585+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:42.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:42.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, 
$db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:42.685+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:42.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:42.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:42.785+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:42.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:42.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 250) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:42.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 250 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:52.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:42.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:10.836+0000 2019-09-04T06:28:42.836+0000 D2 ASIO [Replication] Request 250 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:42.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:42.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:42.836+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 250) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: 
new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:42.836+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:42.836+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:28:51.877+0000 2019-09-04T06:28:42.836+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:28:54.165+0000 2019-09-04T06:28:42.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:42.836+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:42.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:12.836+0000 2019-09-04T06:28:42.836+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:44.836Z 2019-09-04T06:28:42.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:12.836+0000 2019-09-04T06:28:42.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:42.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 251) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:42.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 251 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:52.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:42.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:12.836+0000 2019-09-04T06:28:42.837+0000 D2 ASIO [Replication] Request 251 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), 
keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:42.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:42.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:42.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 251) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:42.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:42.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:44.837Z 2019-09-04T06:28:42.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:12.836+0000 2019-09-04T06:28:42.885+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:42.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:42.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:42.985+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:43.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:43.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:43.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:43.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:43.051+0000 I COMMAND [conn58] 
command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:43.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 11343C686046946B15639CDDD1F6CD7209AB8683), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:43.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:43.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 11343C686046946B15639CDDD1F6CD7209AB8683), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:43.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 11343C686046946B15639CDDD1F6CD7209AB8683), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:43.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389) } 2019-09-04T06:28:43.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 11343C686046946B15639CDDD1F6CD7209AB8683), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:43.085+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:43.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:43.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:43.185+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:43.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:43.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:43.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:43.205+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:43.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
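The conn34/conn28 traffic above is the replica-set heartbeat protocol among the configrs members: each node sends replSetHeartbeat roughly every two seconds (the executor entries show the next heartbeat being scheduled 2s out), and the response carries state, durableOpTime/opTime and syncingTo. replSetHeartbeat is internal to replication; the supported way to read the same member state from a client is replSetGetStatus. A short sketch, assuming direct access to any configrs member:

    # Hedged sketch: observe the member states/optimes that the REPL_HB entries
    # above exchange, via the public replSetGetStatus command.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        # stateStr mirrors state: 1 (PRIMARY) / state: 2 (SECONDARY) above;
        # optime mirrors the opTime documents in the heartbeat responses.
        print(m["name"], m["stateStr"], m["optime"]["ts"], m.get("syncingTo", "-"))
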
2019-09-04T06:28:43.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:43.286+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:43.386+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:43.400+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:43.400+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:43.400+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578518, 2) 2019-09-04T06:28:43.400+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4077 2019-09-04T06:28:43.400+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:43.400+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:43.400+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4077 2019-09-04T06:28:43.400+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4080 2019-09-04T06:28:43.400+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4080 2019-09-04T06:28:43.401+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578518, 2), t: 1 }({ ts: Timestamp(1567578518, 2), t: 1 }) 2019-09-04T06:28:43.404+0000 D2 ASIO [RS] Request 244 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpApplied: { ts: Timestamp(1567578518, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:43.404+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpApplied: { ts: Timestamp(1567578518, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:43.404+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:43.404+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:43.404+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:54.165+0000 2019-09-04T06:28:43.404+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:53.758+0000 2019-09-04T06:28:43.404+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:43.404+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:12.836+0000 2019-09-04T06:28:43.404+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 252 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:53.404+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578518, 2), t: 1 } } 2019-09-04T06:28:43.405+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.400+0000 2019-09-04T06:28:43.406+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:43.406+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new Date(1567578518389), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new Date(1567578518389), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new Date(1567578518389), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
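Above, the oplog fetcher's awaitData getMore (RemoteCommand 252) on the sync source's local.oplog.rs returns an empty batch after blocking up to maxTimeMS: 5000, and the Reporter then pushes replSetUpdatePosition upstream. A client can tail the oplog with the same cursor semantics; a hedged sketch (internal fields such as term and lastKnownCommittedOpTime are sent only by replication nodes themselves):

    # Hedged sketch: tail local.oplog.rs the way the oplog fetcher above does,
    # with a tailable awaitData cursor that blocks up to 5s per empty batch.
    from pymongo import MongoClient
    from pymongo.cursor import CursorType

    client = MongoClient("mongodb://cmodb804.togewa.com:27019/")
    oplog = client.local["oplog.rs"]

    cursor = oplog.find(
        {}, cursor_type=CursorType.TAILABLE_AWAIT
    ).max_await_time_ms(5000)
    for entry in cursor:
        # ts/op/ns match the fields visible in the nextBatch entries of this log.
        print(entry["ts"], entry["op"], entry["ns"])
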
2019-09-04T06:28:43.406+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 253 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:13.406+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new Date(1567578518389), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new Date(1567578518389), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new Date(1567578518389), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:43.406+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.400+0000 2019-09-04T06:28:43.406+0000 D2 ASIO [RS] Request 253 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:43.406+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:43.406+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:43.406+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:08.400+0000 2019-09-04T06:28:43.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:43.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:43.486+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:43.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:43.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:43.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:43.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:43.586+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:43.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:43.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:43.686+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:43.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:43.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: 
"admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:43.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:43.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:43.786+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:43.886+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:43.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:43.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:43.987+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:44.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:44.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:44.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:44.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:44.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:44.087+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:44.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:44.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:44.187+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:44.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:44.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:44.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:44.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:44.231+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 11343C686046946B15639CDDD1F6CD7209AB8683), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:44.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:44.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 11343C686046946B15639CDDD1F6CD7209AB8683), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:44.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: 
Timestamp(1567578519, 1), signature: { hash: BinData(0, 11343C686046946B15639CDDD1F6CD7209AB8683), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:44.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389) } 2019-09-04T06:28:44.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 11343C686046946B15639CDDD1F6CD7209AB8683), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:44.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:44.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:44.287+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:44.387+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:44.400+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:44.400+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:44.400+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578518, 2) 2019-09-04T06:28:44.400+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4098 2019-09-04T06:28:44.400+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:44.400+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:44.400+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4098 2019-09-04T06:28:44.401+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4101 2019-09-04T06:28:44.401+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4101 2019-09-04T06:28:44.401+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578518, 2), t: 1 }({ ts: Timestamp(1567578518, 2), t: 1 }) 2019-09-04T06:28:44.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:44.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:44.488+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:44.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:44.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:44.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:44.551+0000 I COMMAND [conn58] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:44.588+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:44.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:44.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:44.688+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:44.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:44.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:44.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:44.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:44.788+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:44.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:44.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 254) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:44.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 254 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:54.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:44.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:12.836+0000 2019-09-04T06:28:44.836+0000 D2 ASIO [Replication] Request 254 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:44.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), 
primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:44.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:44.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 254) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:44.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:44.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:28:53.758+0000 2019-09-04T06:28:44.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:28:55.086+0000 2019-09-04T06:28:44.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:44.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:44.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:14.836+0000 2019-09-04T06:28:44.836+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:46.836Z 2019-09-04T06:28:44.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:14.836+0000 2019-09-04T06:28:44.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:44.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 255) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:44.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 255 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:54.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:44.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:14.836+0000 2019-09-04T06:28:44.837+0000 D2 ASIO [Replication] Request 255 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new 
Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578522, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:44.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578522, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:44.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:44.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 255) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), opTime: { ts: Timestamp(1567578518, 2), t: 1 }, wallTime: new Date(1567578518389), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578522, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578518, 2) } 2019-09-04T06:28:44.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:44.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:46.837Z 2019-09-04T06:28:44.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:14.836+0000 2019-09-04T06:28:44.878+0000 D2 ASIO [RS] Request 252 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578524, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new 
Date(1567578524869), o: { $v: 1, $set: { ping: new Date(1567578524866), up: 2425 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpApplied: { ts: Timestamp(1567578524, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578524, 1) } 2019-09-04T06:28:44.878+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578524, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578524869), o: { $v: 1, $set: { ping: new Date(1567578524866), up: 2425 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpApplied: { ts: Timestamp(1567578524, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578518, 2), $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578524, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:44.878+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:44.878+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578524, 1) and ending at ts: Timestamp(1567578524, 1) 2019-09-04T06:28:44.878+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:55.086+0000 2019-09-04T06:28:44.878+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:56.087+0000 2019-09-04T06:28:44.878+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:44.878+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578524, 1), t: 1 } 2019-09-04T06:28:44.878+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:44.878+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:44.878+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578518, 2) 2019-09-04T06:28:44.878+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4110 2019-09-04T06:28:44.878+0000 D3 STORAGE 
[ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:44.878+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:44.878+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4110 2019-09-04T06:28:44.878+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:44.878+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:28:44.878+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:44.878+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578518, 2) 2019-09-04T06:28:44.878+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578524, 1) } 2019-09-04T06:28:44.878+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4113 2019-09-04T06:28:44.878+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:44.878+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:44.878+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4113 2019-09-04T06:28:44.878+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4102 2019-09-04T06:28:44.878+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:14.836+0000 2019-09-04T06:28:44.878+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4102 2019-09-04T06:28:44.878+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4116 2019-09-04T06:28:44.878+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4116 2019-09-04T06:28:44.878+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:44.878+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 4118 2019-09-04T06:28:44.878+0000 D4 STORAGE [repl-writer-worker-14] inserting record with timestamp Timestamp(1567578524, 1) 2019-09-04T06:28:44.878+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578524, 1) 2019-09-04T06:28:44.878+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 4118 2019-09-04T06:28:44.878+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:44.878+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:28:44.878+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4117 2019-09-04T06:28:44.878+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4117 2019-09-04T06:28:44.878+0000 D3 STORAGE [rsSync-0] WT begin_transaction for 
snapshot id 4120 2019-09-04T06:28:44.879+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4120 2019-09-04T06:28:44.879+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578524, 1), t: 1 }({ ts: Timestamp(1567578524, 1), t: 1 }) 2019-09-04T06:28:44.879+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578524, 1) 2019-09-04T06:28:44.879+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4121 2019-09-04T06:28:44.879+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578524, 1) } } ] } sort: {} projection: {} 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578524, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578524, 1) || First: notFirst: full path: ts 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578524, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578524, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578524, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578524, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:44.879+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4121 2019-09-04T06:28:44.879+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:44.879+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:44.879+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578524, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578524869), o: { $v: 1, $set: { ping: new Date(1567578524866), up: 2425 } } }, oplog application mode: Secondary 2019-09-04T06:28:44.879+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578524, 1) 2019-09-04T06:28:44.879+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 4123 2019-09-04T06:28:44.879+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:28:44.879+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:28:44.879+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 4123 2019-09-04T06:28:44.879+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:44.879+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578524, 1), t: 1 }({ ts: Timestamp(1567578524, 1), t: 1 }) 2019-09-04T06:28:44.879+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578524, 1) 2019-09-04T06:28:44.879+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4122 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:28:44.879+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:44.879+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:44.879+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4122 2019-09-04T06:28:44.879+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578524, 1) 2019-09-04T06:28:44.879+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4126 2019-09-04T06:28:44.879+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4126 2019-09-04T06:28:44.879+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578524, 1), t: 1 }({ ts: Timestamp(1567578524, 1), t: 1 }) 2019-09-04T06:28:44.879+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:44.879+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578524, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f599c02d1a496712d71a9'), operName: "", parentOperId: "5d6f599c02d1a496712d71a7" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0457228E9B6EEE0173CB2D74639B195F2A6F1062), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578524, 1), t: 1 } }, $db: "config" } 2019-09-04T06:28:44.879+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new Date(1567578518389), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new Date(1567578518389), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:44.879+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 256 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:14.879+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new Date(1567578518389), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: 
new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new Date(1567578518389), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578518, 2), t: 1 }, lastCommittedWall: new Date(1567578518389), lastOpVisible: { ts: Timestamp(1567578518, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:44.879+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f599c02d1a496712d71a7|5d6f599c02d1a496712d71a9 2019-09-04T06:28:44.879+0000 D1 REPL [conn21] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1567578524, 1), t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578518, 2), t: 1 } 2019-09-04T06:28:44.879+0000 D3 REPL [conn21] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:14.889+0000 2019-09-04T06:28:44.879+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:14.879+0000 2019-09-04T06:28:44.880+0000 D2 ASIO [RS] Request 256 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578524, 1), $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578524, 1) } 2019-09-04T06:28:44.880+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578524, 1), $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578524, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:44.880+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:44.880+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:14.880+0000 2019-09-04T06:28:44.880+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578524, 1), t: 1 } 2019-09-04T06:28:44.880+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 257 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:54.880+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578518, 2), t: 1 } } 2019-09-04T06:28:44.880+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:14.880+0000 2019-09-04T06:28:44.880+0000 D2 ASIO [RS] Request 257 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpApplied: { ts: Timestamp(1567578524, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578524, 1), $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578524, 1) } 2019-09-04T06:28:44.880+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpApplied: { ts: Timestamp(1567578524, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578524, 1), $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578524, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:44.880+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:44.880+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:44.880+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.880+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.880+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578519, 1) 2019-09-04T06:28:44.880+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:56.087+0000 2019-09-04T06:28:44.880+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:54.892+0000 2019-09-04T06:28:44.880+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 258 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:54.880+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578524, 1), t: 1 } } 2019-09-04T06:28:44.880+0000 D3 REPL [conn123] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.880+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:14.880+0000 2019-09-04T06:28:44.880+0000 D3 REPL [conn123] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 
2019-09-04T06:28:44.880+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.880+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000 2019-09-04T06:28:44.880+0000 D3 REPL [conn132] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn132] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.433+0000 2019-09-04T06:28:44.881+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:44.881+0000 D3 REPL [conn130] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:14.836+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn130] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:51.753+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn133] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn133] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.702+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn136] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn136] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:02.478+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn21] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578524, 1), t: 1 } } } 2019-09-04T06:28:44.881+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:28:44.881+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000 2019-09-04T06:28:44.881+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578524, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f599c02d1a496712d71a9'), operName: "", parentOperId: "5d6f599c02d1a496712d71a7" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0457228E9B6EEE0173CB2D74639B195F2A6F1062), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578524, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578524, 1) 2019-09-04T06:28:44.881+0000 D2 QUERY [conn21] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:28:44.881+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578524, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f599c02d1a496712d71a9'), operName: "", parentOperId: "5d6f599c02d1a496712d71a7" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0457228E9B6EEE0173CB2D74639B195F2A6F1062), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578524, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 1ms 2019-09-04T06:28:44.881+0000 D3 REPL [conn104] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn104] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:55.059+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn106] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn106] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:57.566+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn134] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn134] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.753+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn119] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn119] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.033+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn117] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn117] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:54.151+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn121] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn121] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:58.752+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn131] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn131] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:56.307+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn120] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn120] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.040+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn124] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn124] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:45.022+0000 
2019-09-04T06:28:44.881+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn135] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn135] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.925+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578524, 1), t: 1 }, 2019-09-04T06:28:44.869+0000 2019-09-04T06:28:44.881+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000 2019-09-04T06:28:44.881+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:28:44.881+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:44.881+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new Date(1567578518389), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new Date(1567578518389), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:44.881+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 259 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:14.881+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new Date(1567578518389), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, durableWallTime: new Date(1567578518389), appliedOpTime: { ts: Timestamp(1567578518, 2), t: 1 }, appliedWallTime: new 
Date(1567578518389), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:44.882+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:14.880+0000 2019-09-04T06:28:44.882+0000 D2 ASIO [RS] Request 259 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578524, 1), $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578524, 1) } 2019-09-04T06:28:44.882+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578524, 1), $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578524, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:44.882+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:44.882+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:14.880+0000 2019-09-04T06:28:44.888+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:44.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:44.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:44.978+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578524, 1) 2019-09-04T06:28:44.988+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:45.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:45.011+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52060 #144 (78 connections now open) 2019-09-04T06:28:45.011+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:45.011+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50038 #145 (79 connections now open) 2019-09-04T06:28:45.011+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:45.011+0000 D2 COMMAND [conn144] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 
3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:45.011+0000 D2 COMMAND [conn145] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:45.011+0000 I NETWORK [conn145] received client metadata from 10.108.2.50:50038 conn145: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:45.011+0000 I NETWORK [conn144] received client metadata from 10.108.2.58:52060 conn144: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:45.011+0000 I COMMAND [conn144] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:45.011+0000 I COMMAND [conn145] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:45.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:45.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:45.022+0000 I COMMAND [conn124] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 1), signature: { hash: BinData(0, 5E46B3B42F624DF9AB2FBC0649BD9C499C9A1173), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:45.022+0000 D1 - [conn124] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:45.022+0000 W - [conn124] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:45.023+0000 I COMMAND [conn123] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578493, 1), signature: { hash: BinData(0, C73D3AB40BC5B730663FB05640F6CBA1033C72E5), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:45.023+0000 D1 - [conn123] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:45.023+0000 W - [conn123] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:45.026+0000 I NETWORK [listener] connection accepted from 10.108.2.49:53306 #146 (80 connections now open) 2019-09-04T06:28:45.026+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:45.026+0000 D2 COMMAND [conn146] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:45.026+0000 I NETWORK [conn146] received client metadata from 10.108.2.49:53306 conn146: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:45.026+0000 I COMMAND [conn146] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:45.033+0000 I COMMAND [conn119] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578490, 1), signature: { hash: BinData(0, 2EEDB79AA71EFDCD4F74F12BA8B91BCD928A35AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:45.033+0000 D1 - [conn119] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:45.033+0000 W - [conn119] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:45.035+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36472 #147 (81 connections now open) 2019-09-04T06:28:45.035+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:45.035+0000 D2 COMMAND [conn147] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:45.035+0000 I NETWORK [conn147] received client metadata from 10.108.2.45:36472 conn147: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:45.035+0000 I COMMAND [conn147] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:45.039+0000 I - [conn124] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:45.039+0000 D1 COMMAND [conn124] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 1), signature: { hash: BinData(0, 5E46B3B42F624DF9AB2FBC0649BD9C499C9A1173), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:45.039+0000 D1 - [conn124] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:45.039+0000 W - [conn124] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:45.040+0000 I COMMAND [conn120] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 1), signature: { hash: BinData(0, F4E758946912814F3BEF328731BD00334916A2A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:45.040+0000 D1 - [conn120] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:45.040+0000 W - [conn120] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:45.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:45.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:45.057+0000 I - [conn119] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19
ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : 
"45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] 
mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:45.058+0000 D1 COMMAND [conn119] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578490, 1), signature: { hash: BinData(0, 2EEDB79AA71EFDCD4F74F12BA8B91BCD928A35AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:45.058+0000 D1 - [conn119] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:45.058+0000 W - [conn119] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:45.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0457228E9B6EEE0173CB2D74639B195F2A6F1062), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:45.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:45.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0457228E9B6EEE0173CB2D74639B195F2A6F1062), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:45.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0457228E9B6EEE0173CB2D74639B195F2A6F1062), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:45.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), opTime: { ts: Timestamp(1567578524, 1), t: 1 }, wallTime: new Date(1567578524869) } 2019-09-04T06:28:45.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0457228E9B6EEE0173CB2D74639B195F2A6F1062), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:45.074+0000 I - [conn123] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 
0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : 
"357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : 
"740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:45.074+0000 D1 COMMAND [conn123] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578493, 1), signature: { hash: BinData(0, C73D3AB40BC5B730663FB05640F6CBA1033C72E5), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:45.074+0000 D1 - [conn123] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:45.074+0000 W - [conn123] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:45.088+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:45.115+0000 I - [conn119] 
0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : 
"E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : 
"/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) 
[0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:45.115+0000 W COMMAND [conn119] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:45.115+0000 I COMMAND [conn119] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578490, 1), signature: { hash: BinData(0, 2EEDB79AA71EFDCD4F74F12BA8B91BCD928A35AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30034ms 2019-09-04T06:28:45.115+0000 D2 NETWORK [conn119] Session from 10.108.2.49:53292 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:45.115+0000 I NETWORK [conn119] end connection 10.108.2.49:53292 (80 connections now open) 2019-09-04T06:28:45.128+0000 I - [conn124] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C
"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : 
"E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:45.128+0000 W COMMAND [conn124] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:45.128+0000 I COMMAND [conn124] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578489, 1), signature: { hash: BinData(0, 5E46B3B42F624DF9AB2FBC0649BD9C499C9A1173), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:28:45.128+0000 D2 NETWORK [conn124] Session from 10.108.2.50:50020 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:45.128+0000 I NETWORK [conn124] end connection 10.108.2.50:50020 (79 connections now open) 2019-09-04T06:28:45.133+0000 I - [conn120] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:45.134+0000 D1 COMMAND [conn120] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 1), signature: { hash: BinData(0, F4E758946912814F3BEF328731BD00334916A2A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:45.134+0000 D1 - [conn120] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:45.134+0000 W - [conn120] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:45.159+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:45.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:45.174+0000 I - [conn123] 0x56174b707c81 
0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : 
"E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : 
"/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) 
[0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:45.174+0000 I - [conn120] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : 
"x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : 
"F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) 
[0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:45.174+0000 W COMMAND [conn123] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:45.174+0000 I COMMAND [conn123] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578493, 1), signature: { hash: BinData(0, C73D3AB40BC5B730663FB05640F6CBA1033C72E5), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30062ms 2019-09-04T06:28:45.174+0000 W COMMAND [conn120] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:28:45.174+0000 I COMMAND [conn120] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578488, 1), signature: { hash: BinData(0, F4E758946912814F3BEF328731BD00334916A2A7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30102ms 2019-09-04T06:28:45.174+0000 D2 NETWORK [conn123] Session from 10.108.2.58:52044 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:45.174+0000 D2 NETWORK [conn120] Session from 10.108.2.45:36458 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:45.174+0000 I NETWORK [conn120] end connection 10.108.2.45:36458 (78 connections now open) 2019-09-04T06:28:45.174+0000 I NETWORK [conn123] end connection 10.108.2.58:52044 (77 connections now open) 2019-09-04T06:28:45.189+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:45.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:45.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:45.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:45.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:45.210+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:45.210+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:45.214+0000 I NETWORK [listener] connection accepted from 10.108.2.46:40912 #148 (78 connections now open) 2019-09-04T06:28:45.214+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 
2019-09-04T06:28:45.214+0000 D2 COMMAND [conn148] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:45.214+0000 I NETWORK [conn148] received client metadata from 10.108.2.46:40912 conn148: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:45.214+0000 D2 COMMAND [conn118] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 87C50F1B8A58F46D74C3BCF7B0920458500C9D85), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:45.214+0000 D1 REPL [conn118] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578524, 1), t: 1 } 2019-09-04T06:28:45.214+0000 I COMMAND [conn148] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:45.214+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:28:45.214+0000 D2 COMMAND [conn148] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, A9EF6308D6B77986F6D15C773C57E66A2510FEA9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:28:45.214+0000 D1 REPL [conn148] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578524, 1), t: 1 } 2019-09-04T06:28:45.214+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:28:45.223+0000 D2 COMMAND [conn146] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578520, 1), signature: { hash: BinData(0, 4AC25D5CB9A6D9101F27355D0FB7D0FF04C668B6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:45.223+0000 D1 REPL [conn146] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578524, 1), t: 1 } 2019-09-04T06:28:45.223+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000 2019-09-04T06:28:45.224+0000 D2 COMMAND [conn122] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578522, 1), signature: { hash: BinData(0, 21A966CF5FD66B29B7A606E7014BBC74FFC0F15C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:45.224+0000 D1 REPL [conn122] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578524, 1), t: 1 } 2019-09-04T06:28:45.224+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000 2019-09-04T06:28:45.226+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46560 #149 (79 connections now open) 2019-09-04T06:28:45.226+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:45.226+0000 D2 COMMAND [conn149] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:45.226+0000 I NETWORK [conn149] received client metadata from 10.108.2.64:46560 conn149: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:45.226+0000 I COMMAND [conn149] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:45.229+0000 D2 COMMAND [conn149] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578515, 1), signature: { hash: BinData(0, 687BE122806C404325073B712C2FE776A546E867), keyId: 
6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:45.229+0000 D1 REPL [conn149] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578524, 1), t: 1 } 2019-09-04T06:28:45.229+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000 2019-09-04T06:28:45.230+0000 D2 COMMAND [conn129] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, A9EF6308D6B77986F6D15C773C57E66A2510FEA9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:28:45.230+0000 D1 REPL [conn129] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578524, 1), t: 1 } 2019-09-04T06:28:45.230+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000 2019-09-04T06:28:45.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:45.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:45.289+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:45.389+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:45.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:45.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:45.489+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:45.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:45.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:45.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:45.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:45.589+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:45.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:45.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:45.689+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:45.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:45.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:45.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:45.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:28:45.710+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:45.710+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:45.789+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:45.878+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:45.878+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:45.878+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578524, 1) 2019-09-04T06:28:45.878+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4159 2019-09-04T06:28:45.878+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:45.878+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:45.878+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4159 2019-09-04T06:28:45.879+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4162 2019-09-04T06:28:45.879+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4162 2019-09-04T06:28:45.879+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578524, 1), t: 1 }({ ts: Timestamp(1567578524, 1), t: 1 }) 2019-09-04T06:28:45.889+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:45.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:45.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:45.990+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:46.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:46.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:46.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:46.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:46.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:46.090+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:46.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:46.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:46.190+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:46.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:46.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:28:46.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:46.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:46.231+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0457228E9B6EEE0173CB2D74639B195F2A6F1062), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:46.231+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:46.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0457228E9B6EEE0173CB2D74639B195F2A6F1062), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:46.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0457228E9B6EEE0173CB2D74639B195F2A6F1062), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:46.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), opTime: { ts: Timestamp(1567578524, 1), t: 1 }, wallTime: new Date(1567578524869) } 2019-09-04T06:28:46.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0457228E9B6EEE0173CB2D74639B195F2A6F1062), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:46.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:46.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:46.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:46.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:46.290+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:46.390+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:46.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:46.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:46.490+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:46.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:46.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:46.551+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:46.551+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:46.591+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:46.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:46.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:46.691+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:46.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:46.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:46.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:46.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:46.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:46.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:46.791+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:46.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:46.836+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 260) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:46.836+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 260 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:56.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:46.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:14.836+0000 2019-09-04T06:28:46.836+0000 D2 ASIO [Replication] Request 260 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), 
state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), opTime: { ts: Timestamp(1567578524, 1), t: 1 }, wallTime: new Date(1567578524869), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578524, 1), $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578524, 1) } 2019-09-04T06:28:46.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), opTime: { ts: Timestamp(1567578524, 1), t: 1 }, wallTime: new Date(1567578524869), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578524, 1), $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578524, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:46.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:46.836+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 260) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), opTime: { ts: Timestamp(1567578524, 1), t: 1 }, wallTime: new Date(1567578524869), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578524, 1), $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578524, 1) } 2019-09-04T06:28:46.836+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:46.836+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:28:54.892+0000 2019-09-04T06:28:46.836+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:28:57.046+0000 2019-09-04T06:28:46.836+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:46.836+0000 D2 REPL_HB [replexec-0] 
Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:48.836Z 2019-09-04T06:28:46.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:46.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:16.836+0000 2019-09-04T06:28:46.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:16.836+0000 2019-09-04T06:28:46.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:46.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 261) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:46.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 261 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:56.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:46.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:16.836+0000 2019-09-04T06:28:46.837+0000 D2 ASIO [Replication] Request 261 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), opTime: { ts: Timestamp(1567578524, 1), t: 1 }, wallTime: new Date(1567578524869), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578524, 1), $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578524, 1) } 2019-09-04T06:28:46.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), opTime: { ts: Timestamp(1567578524, 1), t: 1 }, wallTime: new Date(1567578524869), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578524, 1), $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578524, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:46.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:46.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 261) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: 
"cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), opTime: { ts: Timestamp(1567578524, 1), t: 1 }, wallTime: new Date(1567578524869), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578524, 1), $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578524, 1) } 2019-09-04T06:28:46.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:46.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:48.837Z 2019-09-04T06:28:46.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:16.836+0000 2019-09-04T06:28:46.879+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:46.879+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:46.879+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578524, 1) 2019-09-04T06:28:46.879+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4181 2019-09-04T06:28:46.879+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:46.879+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:46.879+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4181 2019-09-04T06:28:46.880+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4184 2019-09-04T06:28:46.880+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4184 2019-09-04T06:28:46.880+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578524, 1), t: 1 }({ ts: Timestamp(1567578524, 1), t: 1 }) 2019-09-04T06:28:46.891+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:46.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:46.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:46.991+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:47.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:47.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:47.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:47.051+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: 
"admin" } 2019-09-04T06:28:47.051+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:47.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, 3B7824A53A7C42B4FC267B99E76066FD173C829F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:47.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:47.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, 3B7824A53A7C42B4FC267B99E76066FD173C829F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:47.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, 3B7824A53A7C42B4FC267B99E76066FD173C829F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:47.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), opTime: { ts: Timestamp(1567578524, 1), t: 1 }, wallTime: new Date(1567578524869) } 2019-09-04T06:28:47.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, 3B7824A53A7C42B4FC267B99E76066FD173C829F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:47.091+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:47.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:47.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:47.191+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:47.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:47.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:47.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:47.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:47.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:47.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
2019-09-04T06:28:47.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:47.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:47.292+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:47.356+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:47.356+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:47.392+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:47.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:47.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:47.448+0000 D2 ASIO [RS] Request 258 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578527, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578527442), o: { $v: 1, $set: { ping: new Date(1567578527441) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpApplied: { ts: Timestamp(1567578527, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578524, 1), $clusterTime: { clusterTime: Timestamp(1567578527, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 1) }
2019-09-04T06:28:47.448+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578527, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578527442), o: { $v: 1, $set: { ping: new Date(1567578527441) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpApplied: { ts: Timestamp(1567578527, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578524, 1), $clusterTime: { clusterTime: Timestamp(1567578527, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:47.448+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:47.448+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578527, 1) and ending at ts: Timestamp(1567578527, 1)
2019-09-04T06:28:47.448+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:57.046+0000
2019-09-04T06:28:47.448+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:57.545+0000
2019-09-04T06:28:47.448+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:47.448+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578527, 1), t: 1 }
2019-09-04T06:28:47.448+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:47.448+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:47.448+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578524, 1)
2019-09-04T06:28:47.448+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4200
2019-09-04T06:28:47.448+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:47.448+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:47.448+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4200
2019-09-04T06:28:47.448+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:47.448+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:28:47.448+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:47.448+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578524, 1)
2019-09-04T06:28:47.448+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578527, 1) }
2019-09-04T06:28:47.448+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4203
2019-09-04T06:28:47.448+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:47.448+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:47.448+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4185
2019-09-04T06:28:47.448+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4203
2019-09-04T06:28:47.448+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:16.836+0000
2019-09-04T06:28:47.449+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4185
2019-09-04T06:28:47.449+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4206
2019-09-04T06:28:47.449+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4206
2019-09-04T06:28:47.449+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:47.449+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 4208
2019-09-04T06:28:47.449+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578527, 1)
2019-09-04T06:28:47.449+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578527, 1)
2019-09-04T06:28:47.449+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 4208
2019-09-04T06:28:47.449+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:47.449+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:28:47.449+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4207
2019-09-04T06:28:47.449+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4207
2019-09-04T06:28:47.449+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4210
2019-09-04T06:28:47.449+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4210
2019-09-04T06:28:47.449+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578527, 1), t: 1 }({ ts: Timestamp(1567578527, 1), t: 1 })
2019-09-04T06:28:47.449+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578527, 1)
2019-09-04T06:28:47.449+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4211
2019-09-04T06:28:47.449+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578527, 1) } } ] } sort: {} projection: {}
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and t $eq 1 ts $lt Timestamp(1567578527, 1) Sort: {} Proj: {} =============================
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578527, 1) || First: notFirst: full path: ts
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578527, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $or $and t $eq 1 ts $lt Timestamp(1567578527, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578527, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:47.449+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578527, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:47.449+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4211
2019-09-04T06:28:47.449+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:47.449+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:47.449+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578527, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578527442), o: { $v: 1, $set: { ping: new Date(1567578527441) } } }, oplog application mode: Secondary
2019-09-04T06:28:47.449+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578527, 1)
2019-09-04T06:28:47.450+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 4213
2019-09-04T06:28:47.450+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }
2019-09-04T06:28:47.450+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:28:47.450+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 4213
2019-09-04T06:28:47.450+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:47.450+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578527, 1), t: 1 }({ ts: Timestamp(1567578527, 1), t: 1 })
2019-09-04T06:28:47.450+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578527, 1)
2019-09-04T06:28:47.450+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4212
2019-09-04T06:28:47.450+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and Sort: {} Proj: {} =============================
2019-09-04T06:28:47.450+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:47.450+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:28:47.450+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:47.450+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:47.450+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:28:47.450+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4212
2019-09-04T06:28:47.450+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578527, 1)
2019-09-04T06:28:47.450+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4216
2019-09-04T06:28:47.450+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4216
2019-09-04T06:28:47.450+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578527, 1), t: 1 }({ ts: Timestamp(1567578527, 1), t: 1 })
2019-09-04T06:28:47.450+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:47.450+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578527, 1), t: 1 }, appliedWallTime: new Date(1567578527442), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:47.450+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 262 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:17.450+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578527, 1), t: 1 }, appliedWallTime: new Date(1567578527442), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578524, 1), t: 1 }, lastCommittedWall: new Date(1567578524869), lastOpVisible: { ts: Timestamp(1567578524, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:47.450+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:17.450+0000
2019-09-04T06:28:47.450+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578527, 1), t: 1 }
2019-09-04T06:28:47.450+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 263 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:57.450+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578524, 1), t: 1 } }
2019-09-04T06:28:47.450+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:17.450+0000
2019-09-04T06:28:47.450+0000 D2 ASIO [RS] Request 262 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 1), t: 1 }, lastCommittedWall: new Date(1567578527442), lastOpVisible: { ts: Timestamp(1567578527, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 1), $clusterTime: { clusterTime: Timestamp(1567578527, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 1) }
2019-09-04T06:28:47.450+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 1), t: 1 }, lastCommittedWall: new Date(1567578527442), lastOpVisible: { ts: Timestamp(1567578527, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 1), $clusterTime: { clusterTime: Timestamp(1567578527, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:47.451+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:47.451+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:17.450+0000
2019-09-04T06:28:47.451+0000 D2 ASIO [RS] Request 263 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 1), t: 1 }, lastCommittedWall: new Date(1567578527442), lastOpVisible: { ts: Timestamp(1567578527, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578527, 1), t: 1 }, lastCommittedWall: new Date(1567578527442), lastOpApplied: { ts: Timestamp(1567578527, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 1), $clusterTime: { clusterTime: Timestamp(1567578527, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 1) }
2019-09-04T06:28:47.451+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 1), t: 1 }, lastCommittedWall: new Date(1567578527442), lastOpVisible: { ts: Timestamp(1567578527, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578527, 1), t: 1 }, lastCommittedWall: new Date(1567578527442), lastOpApplied: { ts: Timestamp(1567578527, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 1), $clusterTime: { clusterTime: Timestamp(1567578527, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:47.451+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:47.451+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:28:47.451+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578522, 1)
2019-09-04T06:28:47.451+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:57.545+0000
2019-09-04T06:28:47.451+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:58.338+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn130] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 264 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:57.451+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578527, 1), t: 1 } }
2019-09-04T06:28:47.451+0000 D3 REPL [conn130] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:51.753+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn136] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn136] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:02.478+0000
2019-09-04T06:28:47.451+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:17.450+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn104] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn104] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:55.059+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn134] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn134] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.753+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn131] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn131] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:56.307+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:28:47.451+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:47.451+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:16.836+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn132] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn132] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.433+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn133] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn133] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.702+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn117] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn117] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:54.151+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn106] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn106] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:57.566+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn121] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn121] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:58.752+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn135] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn135] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.925+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578527, 1), t: 1 }, 2019-09-04T06:28:47.442+0000
2019-09-04T06:28:47.451+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000
2019-09-04T06:28:47.453+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:28:47.453+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:47.453+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578527, 1), t: 1 }, durableWallTime: new Date(1567578527442), appliedOpTime: { ts: Timestamp(1567578527, 1), t: 1 }, appliedWallTime: new Date(1567578527442), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 1), t: 1 }, lastCommittedWall: new Date(1567578527442), lastOpVisible: { ts: Timestamp(1567578527, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:47.453+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 265 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:17.453+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578527, 1), t: 1 }, durableWallTime: new Date(1567578527442), appliedOpTime: { ts: Timestamp(1567578527, 1), t: 1 }, appliedWallTime: new Date(1567578527442), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 1), t: 1 }, lastCommittedWall: new Date(1567578527442), lastOpVisible: { ts: Timestamp(1567578527, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:47.453+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:17.450+0000
2019-09-04T06:28:47.453+0000 D2 ASIO [RS] Request 265 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 1), t: 1 }, lastCommittedWall: new Date(1567578527442), lastOpVisible: { ts: Timestamp(1567578527, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 1), $clusterTime: { clusterTime: Timestamp(1567578527, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 1) }
2019-09-04T06:28:47.453+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 1), t: 1 }, lastCommittedWall: new Date(1567578527442), lastOpVisible: { ts: Timestamp(1567578527, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 1), $clusterTime: { clusterTime: Timestamp(1567578527, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:47.453+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:47.453+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:17.450+0000
2019-09-04T06:28:47.492+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:47.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:47.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:47.549+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578527, 1)
2019-09-04T06:28:47.592+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:47.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:47.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:47.692+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:47.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:47.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:47.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:47.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:47.707+0000 D2 ASIO [RS] Request 264 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578527, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578527704), o: { $v: 1, $set: { ping: new Date(1567578527698) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 1), t: 1 }, lastCommittedWall: new Date(1567578527442), lastOpVisible: { ts: Timestamp(1567578527, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578527, 1), t: 1 }, lastCommittedWall: new Date(1567578527442), lastOpApplied: { ts: Timestamp(1567578527, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 1), $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 2) }
2019-09-04T06:28:47.707+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578527, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578527704), o: { $v: 1, $set: { ping: new Date(1567578527698) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 1), t: 1 }, lastCommittedWall: new Date(1567578527442), lastOpVisible: { ts: Timestamp(1567578527, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578527, 1), t: 1 }, lastCommittedWall: new Date(1567578527442), lastOpApplied: { ts: Timestamp(1567578527, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 1), $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:47.707+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:28:47.707+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578527, 2) and ending at ts: Timestamp(1567578527, 2)
2019-09-04T06:28:47.707+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:58.338+0000
2019-09-04T06:28:47.707+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:59.070+0000
2019-09-04T06:28:47.708+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:47.708+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578527, 2), t: 1 }
2019-09-04T06:28:47.708+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:16.836+0000
2019-09-04T06:28:47.708+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:47.708+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:47.708+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578527, 1)
2019-09-04T06:28:47.708+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4224
2019-09-04T06:28:47.708+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:47.708+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:47.708+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4224
2019-09-04T06:28:47.708+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:47.708+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:28:47.708+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:47.708+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578527, 1)
2019-09-04T06:28:47.708+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4227
2019-09-04T06:28:47.708+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578527, 2) }
2019-09-04T06:28:47.708+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:47.708+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:47.708+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4227
2019-09-04T06:28:47.708+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4217
2019-09-04T06:28:47.708+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4217
2019-09-04T06:28:47.708+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4230
2019-09-04T06:28:47.708+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4230
2019-09-04T06:28:47.708+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:47.708+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 4232
2019-09-04T06:28:47.708+0000 D4 STORAGE [repl-writer-worker-4] inserting record with timestamp Timestamp(1567578527, 2)
2019-09-04T06:28:47.708+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578527, 2)
2019-09-04T06:28:47.708+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 4232
2019-09-04T06:28:47.708+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:47.708+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:28:47.708+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4231
2019-09-04T06:28:47.708+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4231
2019-09-04T06:28:47.708+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4234
2019-09-04T06:28:47.708+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4234
2019-09-04T06:28:47.708+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578527, 2), t: 1 }({ ts: Timestamp(1567578527, 2), t: 1 })
2019-09-04T06:28:47.708+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578527, 2)
2019-09-04T06:28:47.708+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4235
2019-09-04T06:28:47.708+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578527, 2) } } ] } sort: {} projection: {}
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and t $eq 1 ts $lt Timestamp(1567578527, 2) Sort: {} Proj: {} =============================
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578527, 2) || First: notFirst: full path: ts
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578527, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
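The D2/D5 QUERY records around this point trace the planner subplanning the rooted $or that oplog application runs against local.replset.minvalid: each branch is planned separately, and since that collection carries only the _id index, every branch rates zero indexed solutions and falls back to a collection scan. A minimal shell sketch that reproduces the same plan shape (assumes a direct connection to this member; the timestamp value is copied from the records above):

    // Same rooted-$or predicate as in the trace; explain() reports the
    // COLLSCAN that the "Planner: outputting a collscan" records print.
    // rs.slaveOk() may be needed first when connected to a secondary.
    db.getSiblingDB("local")
      .getCollection("replset.minvalid")
      .find({ $or: [ { t: { $lt: 1 } },
                     { t: 1, ts: { $lt: Timestamp(1567578527, 2) } } ] })
      .explain("queryPlanner");
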
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $or $and t $eq 1 ts $lt Timestamp(1567578527, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578527, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:47.708+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578527, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:47.709+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4235
2019-09-04T06:28:47.709+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:47.709+0000 D3 STORAGE [repl-writer-worker-6] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:47.709+0000 D3 REPL [repl-writer-worker-6] applying op: { ts: Timestamp(1567578527, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578527704), o: { $v: 1, $set: { ping: new Date(1567578527698) } } }, oplog application mode: Secondary
2019-09-04T06:28:47.709+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578527, 2)
2019-09-04T06:28:47.709+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 4237
2019-09-04T06:28:47.709+0000 D2 QUERY [repl-writer-worker-6] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }
2019-09-04T06:28:47.709+0000 D4 WRITE [repl-writer-worker-6] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:28:47.709+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 4237
2019-09-04T06:28:47.709+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:47.709+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578527, 2), t: 1 }({ ts: Timestamp(1567578527, 2), t: 1 })
2019-09-04T06:28:47.709+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578527, 2)
2019-09-04T06:28:47.709+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4236
2019-09-04T06:28:47.709+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and Sort: {} Proj: {} =============================
2019-09-04T06:28:47.709+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:47.709+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:28:47.709+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:47.709+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:28:47.709+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:28:47.709+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4236
2019-09-04T06:28:47.709+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578527, 2)
2019-09-04T06:28:47.709+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4240
2019-09-04T06:28:47.709+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4240
2019-09-04T06:28:47.709+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578527, 2), t: 1 }({ ts: Timestamp(1567578527, 2), t: 1 })
2019-09-04T06:28:47.709+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:47.709+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578527, 1), t: 1 }, durableWallTime: new Date(1567578527442), appliedOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, appliedWallTime: new Date(1567578527704), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 1), t: 1 }, lastCommittedWall: new Date(1567578527442), lastOpVisible: { ts: Timestamp(1567578527, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:47.709+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 266 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:17.709+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578527, 1), t: 1 }, durableWallTime: new Date(1567578527442), appliedOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, appliedWallTime: new Date(1567578527704), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 1), t: 1 }, lastCommittedWall: new Date(1567578527442), lastOpVisible: { ts: Timestamp(1567578527, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:28:47.709+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:17.709+0000
2019-09-04T06:28:47.709+0000 D2 ASIO [RS] Request 266 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 2) }
2019-09-04T06:28:47.710+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:47.710+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:47.710+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:17.710+0000
2019-09-04T06:28:47.710+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578527, 2), t: 1 }
2019-09-04T06:28:47.710+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 267 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:57.710+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578527, 1), t: 1 } }
2019-09-04T06:28:47.710+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:17.710+0000
2019-09-04T06:28:47.710+0000 D2 ASIO [RS] Request 267 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpApplied: { ts: Timestamp(1567578527, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 2) }
2019-09-04T06:28:47.710+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpApplied: { ts: Timestamp(1567578527, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:47.710+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:28:47.710+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:28:47.710+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000
2019-09-04T06:28:47.710+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000
2019-09-04T06:28:47.710+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578522, 2)
2019-09-04T06:28:47.710+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:28:59.070+0000
2019-09-04T06:28:47.710+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:28:58.249+0000
2019-09-04T06:28:47.710+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 268 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:57.710+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578527, 2), t: 1 } }
2019-09-04T06:28:47.710+0000 D3 REPL [conn134] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn134] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.753+0000
2019-09-04T06:28:47.710+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:17.710+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn131] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn131] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:56.307+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn132] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn132] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.433+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn135] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn135] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.925+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000
2019-09-04T06:28:47.710+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:47.710+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:16.836+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn106] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn106] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:57.566+0000
2019-09-04T06:28:47.710+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:28:47.710+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn136] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn136] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:02.478+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn104] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn104] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:55.059+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000
2019-09-04T06:28:47.710+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
2019-09-04T06:28:47.710+0000 D3
REPL [conn130] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000 2019-09-04T06:28:47.710+0000 D3 REPL [conn130] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:51.753+0000 2019-09-04T06:28:47.710+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000 2019-09-04T06:28:47.710+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000 2019-09-04T06:28:47.711+0000 D3 REPL [conn117] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000 2019-09-04T06:28:47.711+0000 D3 REPL [conn117] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:54.151+0000 2019-09-04T06:28:47.711+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000 2019-09-04T06:28:47.711+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000 2019-09-04T06:28:47.711+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000 2019-09-04T06:28:47.711+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000 2019-09-04T06:28:47.711+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000 2019-09-04T06:28:47.711+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:28:47.711+0000 D3 REPL [conn133] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000 2019-09-04T06:28:47.711+0000 D3 REPL [conn133] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.702+0000 2019-09-04T06:28:47.711+0000 D3 REPL [conn121] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000 2019-09-04T06:28:47.711+0000 D3 REPL [conn121] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:58.752+0000 2019-09-04T06:28:47.711+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578527, 2), t: 1 }, 2019-09-04T06:28:47.704+0000 2019-09-04T06:28:47.711+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000 2019-09-04T06:28:47.711+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:47.711+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), appliedOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, appliedWallTime: new Date(1567578527704), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, 
configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:47.711+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 269 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:17.711+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), appliedOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, appliedWallTime: new Date(1567578527704), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, durableWallTime: new Date(1567578524869), appliedOpTime: { ts: Timestamp(1567578524, 1), t: 1 }, appliedWallTime: new Date(1567578524869), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:47.711+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:17.710+0000 2019-09-04T06:28:47.711+0000 D2 ASIO [RS] Request 269 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 2) } 2019-09-04T06:28:47.711+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:47.711+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:47.711+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:17.710+0000 2019-09-04T06:28:47.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:47.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:47.792+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:47.808+0000 D2 STORAGE [WTOplogJournalThread] No new 
oplog entries were made visible: Timestamp(1567578527, 2) 2019-09-04T06:28:47.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:47.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:47.893+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:47.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:47.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:47.993+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:48.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:48.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:48.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:48.093+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:48.130+0000 D2 COMMAND [conn20] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:48.130+0000 I COMMAND [conn20] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:48.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:48.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:48.193+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:48.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:48.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:48.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:48.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:48.231+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 63116C84CA6CC405D1A06C8270449024592EB8F8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:48.231+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:48.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 63116C84CA6CC405D1A06C8270449024592EB8F8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:48.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 
63116C84CA6CC405D1A06C8270449024592EB8F8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:48.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), opTime: { ts: Timestamp(1567578527, 2), t: 1 }, wallTime: new Date(1567578527704) } 2019-09-04T06:28:48.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 63116C84CA6CC405D1A06C8270449024592EB8F8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:48.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:48.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:48.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:48.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:48.293+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:48.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:48.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:48.394+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:48.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:48.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:48.494+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:48.509+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:28:48.509+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:28:48.509+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:48.509+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:28:48.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:48.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:48.594+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:48.659+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:48.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:48.694+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:48.703+0000 D2 COMMAND [conn60] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:28:48.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:48.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:48.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:48.708+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:48.708+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:48.708+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578527, 2) 2019-09-04T06:28:48.708+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4263 2019-09-04T06:28:48.708+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:48.708+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:48.708+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4263 2019-09-04T06:28:48.709+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4266 2019-09-04T06:28:48.709+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4266 2019-09-04T06:28:48.709+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578527, 2), t: 1 }({ ts: Timestamp(1567578527, 2), t: 1 }) 2019-09-04T06:28:48.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:48.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:48.794+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:48.836+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:48.836+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 270) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:48.836+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 270 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:28:58.836+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:48.836+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:16.836+0000 2019-09-04T06:28:48.836+0000 D2 ASIO [Replication] Request 270 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), opTime: { ts: Timestamp(1567578527, 2), t: 1 }, wallTime: new Date(1567578527704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: 
Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 2) } 2019-09-04T06:28:48.836+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), opTime: { ts: Timestamp(1567578527, 2), t: 1 }, wallTime: new Date(1567578527704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:48.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:48.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:48.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 270) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), opTime: { ts: Timestamp(1567578527, 2), t: 1 }, wallTime: new Date(1567578527704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 2) } 2019-09-04T06:28:48.837+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:48.837+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:28:58.249+0000 2019-09-04T06:28:48.837+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:28:59.384+0000 2019-09-04T06:28:48.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:48.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:50.837Z 2019-09-04T06:28:48.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:48.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 
2019-09-04T06:29:18.837+0000 2019-09-04T06:28:48.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 271) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:48.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 271 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:28:58.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:48.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:18.837+0000 2019-09-04T06:28:48.837+0000 D1 EXECUTOR [replexec-3] starting thread in pool replexec 2019-09-04T06:28:48.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:18.837+0000 2019-09-04T06:28:48.837+0000 D2 ASIO [Replication] Request 271 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), opTime: { ts: Timestamp(1567578527, 2), t: 1 }, wallTime: new Date(1567578527704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 2) } 2019-09-04T06:28:48.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), opTime: { ts: Timestamp(1567578527, 2), t: 1 }, wallTime: new Date(1567578527704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:48.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:48.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 271) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), opTime: { ts: Timestamp(1567578527, 2), t: 1 }, wallTime: new Date(1567578527704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, 
lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 2) } 2019-09-04T06:28:48.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:48.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:50.837Z 2019-09-04T06:28:48.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:18.837+0000 2019-09-04T06:28:48.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:48.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:48.894+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:48.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:48.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:48.995+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:49.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:49.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:49.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:49.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 63116C84CA6CC405D1A06C8270449024592EB8F8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:49.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:49.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 63116C84CA6CC405D1A06C8270449024592EB8F8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:49.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 63116C84CA6CC405D1A06C8270449024592EB8F8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:49.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new 
Date(1567578527704), opTime: { ts: Timestamp(1567578527, 2), t: 1 }, wallTime: new Date(1567578527704) } 2019-09-04T06:28:49.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 63116C84CA6CC405D1A06C8270449024592EB8F8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:49.095+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:49.151+0000 D2 ASIO [RS] Request 268 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578529, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578529141), o: { $v: 1, $set: { ping: new Date(1567578529138) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpApplied: { ts: Timestamp(1567578529, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } 2019-09-04T06:28:49.151+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578529, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578529141), o: { $v: 1, $set: { ping: new Date(1567578529138) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpApplied: { ts: Timestamp(1567578529, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:49.151+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:49.151+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578529, 1) and ending at ts: Timestamp(1567578529, 
1) 2019-09-04T06:28:49.151+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:59.384+0000 2019-09-04T06:28:49.151+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:28:59.272+0000 2019-09-04T06:28:49.151+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:49.151+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:18.837+0000 2019-09-04T06:28:49.151+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578529, 1), t: 1 } 2019-09-04T06:28:49.151+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:49.151+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:49.151+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578527, 2) 2019-09-04T06:28:49.151+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4275 2019-09-04T06:28:49.151+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:49.151+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:49.151+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4275 2019-09-04T06:28:49.151+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:49.151+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:49.151+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:28:49.151+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578527, 2) 2019-09-04T06:28:49.151+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4278 2019-09-04T06:28:49.151+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:49.151+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578529, 1) } 2019-09-04T06:28:49.151+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:49.151+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4278 2019-09-04T06:28:49.151+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4267 2019-09-04T06:28:49.151+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4267 2019-09-04T06:28:49.151+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4281 2019-09-04T06:28:49.151+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4281 2019-09-04T06:28:49.151+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:49.151+0000 D3 STORAGE 
[repl-writer-worker-2] WT begin_transaction for snapshot id 4283
2019-09-04T06:28:49.151+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578529, 1)
2019-09-04T06:28:49.151+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578529, 1)
2019-09-04T06:28:49.152+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 4283
2019-09-04T06:28:49.152+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:49.152+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:28:49.152+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4282
2019-09-04T06:28:49.152+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4282
2019-09-04T06:28:49.152+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4285
2019-09-04T06:28:49.152+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4285
2019-09-04T06:28:49.152+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578529, 1), t: 1 }({ ts: Timestamp(1567578529, 1), t: 1 })
2019-09-04T06:28:49.152+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578529, 1)
2019-09-04T06:28:49.152+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4286
2019-09-04T06:28:49.152+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578529, 1) } } ] } sort: {} projection: {}
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578529, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Rated tree:
$and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578529, 1) || First: notFirst: full path: ts
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578529, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Rated tree:
t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578529, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Rated tree:
$or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578529, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578529, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:49.152+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4286
2019-09-04T06:28:49.152+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:28:49.152+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:49.152+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578529, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578529141), o: { $v: 1, $set: { ping: new Date(1567578529138) } } }, oplog application mode: Secondary
2019-09-04T06:28:49.152+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578529, 1)
2019-09-04T06:28:49.152+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 4288
2019-09-04T06:28:49.152+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }
2019-09-04T06:28:49.152+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:28:49.152+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 4288
2019-09-04T06:28:49.152+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:28:49.152+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578529, 1), t: 1 }({ ts: Timestamp(1567578529, 1), t: 1 })
2019-09-04T06:28:49.152+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578529, 1)
2019-09-04T06:28:49.152+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4287
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:28:49.152+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:28:49.153+0000 D5 QUERY [rsSync-0] Rated tree:
$and
2019-09-04T06:28:49.153+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:28:49.153+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:28:49.153+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:49.153+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4287 2019-09-04T06:28:49.153+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578529, 1) 2019-09-04T06:28:49.153+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4291 2019-09-04T06:28:49.153+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4291 2019-09-04T06:28:49.153+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578529, 1), t: 1 }({ ts: Timestamp(1567578529, 1), t: 1 }) 2019-09-04T06:28:49.153+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:49.153+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), appliedOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, appliedWallTime: new Date(1567578527704), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), appliedOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, appliedWallTime: new Date(1567578527704), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:49.153+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 272 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:19.153+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), appliedOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, appliedWallTime: new Date(1567578527704), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), appliedOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, appliedWallTime: new Date(1567578527704), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:49.153+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:19.153+0000 2019-09-04T06:28:49.153+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578529, 1), t: 1 } 2019-09-04T06:28:49.153+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 273 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:59.153+0000 cmd:{ getMore: 2779728788818727477, 
collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578527, 2), t: 1 } } 2019-09-04T06:28:49.153+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:19.153+0000 2019-09-04T06:28:49.153+0000 D2 ASIO [RS] Request 272 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } 2019-09-04T06:28:49.153+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:49.153+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:49.153+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:19.153+0000 2019-09-04T06:28:49.155+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:28:49.155+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:49.155+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), appliedOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, appliedWallTime: new Date(1567578527704), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), appliedOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, appliedWallTime: new Date(1567578527704), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:49.155+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 274 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:19.155+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: 
{ ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), appliedOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, appliedWallTime: new Date(1567578527704), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, durableWallTime: new Date(1567578527704), appliedOpTime: { ts: Timestamp(1567578527, 2), t: 1 }, appliedWallTime: new Date(1567578527704), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:49.155+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:19.153+0000 2019-09-04T06:28:49.155+0000 D2 ASIO [RS] Request 274 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } 2019-09-04T06:28:49.155+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578527, 2), t: 1 }, lastCommittedWall: new Date(1567578527704), lastOpVisible: { ts: Timestamp(1567578527, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 2), $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:49.155+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:49.155+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:19.153+0000 2019-09-04T06:28:49.155+0000 D2 ASIO [RS] Request 273 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpApplied: { ts: Timestamp(1567578529, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } 2019-09-04T06:28:49.155+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpApplied: { ts: Timestamp(1567578529, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:49.155+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:49.155+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:49.156+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578524, 1) 2019-09-04T06:28:49.156+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:28:59.272+0000 2019-09-04T06:28:49.156+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:00.382+0000 2019-09-04T06:28:49.156+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:28:49.156+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 275 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:28:59.156+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578529, 1), t: 1 } } 2019-09-04T06:28:49.156+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:18.837+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn131] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn131] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:56.307+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn136] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn136] waitUntilOpTime: waiting for a new snapshot 
until 2019-09-04T06:29:02.478+0000 2019-09-04T06:28:49.156+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:19.153+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn104] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn104] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:55.059+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn130] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn130] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:51.753+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn117] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn117] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:54.151+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn133] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn133] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.702+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn134] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn134] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.753+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn132] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn132] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.433+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn135] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn135] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.925+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn106] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 
2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn106] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:57.566+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn121] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn121] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:58.752+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578529, 1), t: 1 }, 2019-09-04T06:28:49.141+0000 2019-09-04T06:28:49.156+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:28:49.195+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:49.203+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:49.203+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:49.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:49.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:49.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:49.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:49.251+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578529, 1) 2019-09-04T06:28:49.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:49.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:49.295+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:49.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions. 2019-09-04T06:28:49.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:49.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:49.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry 2019-09-04T06:28:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:28:49.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } 2019-09-04T06:28:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:49.370+0000 D5 QUERY [shard-registry-reload] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shards Tree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:28:49.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:28:49.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:28:49.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and 2019-09-04T06:28:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:49.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:49.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578529, 1) 2019-09-04T06:28:49.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 4301 2019-09-04T06:28:49.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 4301 2019-09-04T06:28:49.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:646 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:28:49.370+0000 D1 SHARDING [shard-registry-reload] found 3 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578529, 1), t: 1 } 2019-09-04T06:28:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:28:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:28:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:28:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:28:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:28:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:28:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS 2019-09-04T06:28:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001 2019-09-04T06:28:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 276 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:28:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:28:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 277 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:28:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:28:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000 2019-09-04T06:28:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 278 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:28:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:28:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 279 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:28:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:28:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002 2019-09-04T06:28:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 280 -- 
target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:28:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:28:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 281 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:28:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:28:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 276 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578527, 1), t: 1 }, lastWriteDate: new Date(1567578527000), majorityOpTime: { ts: Timestamp(1567578527, 1), t: 1 }, majorityWriteDate: new Date(1567578527000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578529386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578527, 1), $configServerState: { opTime: { ts: Timestamp(1567578524, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578527, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 1) } 2019-09-04T06:28:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578527, 1), t: 1 }, lastWriteDate: new Date(1567578527000), majorityOpTime: { ts: Timestamp(1567578527, 1), t: 1 }, majorityWriteDate: new Date(1567578527000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578529386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578527, 1), $configServerState: { opTime: { ts: Timestamp(1567578524, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578527, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 1) } target: cmodb808.togewa.com:27018 2019-09-04T06:28:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 277 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578527, 1), t: 1 }, lastWriteDate: new Date(1567578527000), majorityOpTime: { ts: Timestamp(1567578527, 1), t: 1 }, majorityWriteDate: new Date(1567578527000) }, 
maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578529386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 1), $configServerState: { opTime: { ts: Timestamp(1567578510, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578527, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 1) } 2019-09-04T06:28:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578527, 1), t: 1 }, lastWriteDate: new Date(1567578527000), majorityOpTime: { ts: Timestamp(1567578527, 1), t: 1 }, majorityWriteDate: new Date(1567578527000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578529386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578527, 1), $configServerState: { opTime: { ts: Timestamp(1567578510, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578527, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578527, 1) } target: cmodb809.togewa.com:27018 2019-09-04T06:28:49.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms 2019-09-04T06:28:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 278 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578523, 1), t: 1 }, lastWriteDate: new Date(1567578523000), majorityOpTime: { ts: Timestamp(1567578523, 1), t: 1 }, majorityWriteDate: new Date(1567578523000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578529385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578523, 1), $configServerState: { opTime: { ts: Timestamp(1567578524, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578523, 1) } 2019-09-04T06:28:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ 
"cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578523, 1), t: 1 }, lastWriteDate: new Date(1567578523000), majorityOpTime: { ts: Timestamp(1567578523, 1), t: 1 }, majorityWriteDate: new Date(1567578523000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578529385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578523, 1), $configServerState: { opTime: { ts: Timestamp(1567578524, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578523, 1) } target: cmodb806.togewa.com:27018 2019-09-04T06:28:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 279 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578523, 1), t: 1 }, lastWriteDate: new Date(1567578523000), majorityOpTime: { ts: Timestamp(1567578523, 1), t: 1 }, majorityWriteDate: new Date(1567578523000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578529386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578523, 1), $configServerState: { opTime: { ts: Timestamp(1567578518, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578523, 1) } 2019-09-04T06:28:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578523, 1), t: 1 }, lastWriteDate: new Date(1567578523000), majorityOpTime: { ts: Timestamp(1567578523, 1), t: 1 }, majorityWriteDate: new Date(1567578523000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578529386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578523, 1), $configServerState: { opTime: { ts: Timestamp(1567578518, 2), t: 1 } }, 
$clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578523, 1) } target: cmodb807.togewa.com:27018 2019-09-04T06:28:49.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms 2019-09-04T06:28:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 280 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578519, 1), t: 1 }, lastWriteDate: new Date(1567578519000), majorityOpTime: { ts: Timestamp(1567578519, 1), t: 1 }, majorityWriteDate: new Date(1567578519000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578529386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578519, 1), $configServerState: { opTime: { ts: Timestamp(1567578524, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578519, 1) } 2019-09-04T06:28:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578519, 1), t: 1 }, lastWriteDate: new Date(1567578519000), majorityOpTime: { ts: Timestamp(1567578519, 1), t: 1 }, majorityWriteDate: new Date(1567578519000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578529386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578519, 1), $configServerState: { opTime: { ts: Timestamp(1567578524, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578519, 1) } target: cmodb810.togewa.com:27018 2019-09-04T06:28:49.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 281 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578519, 1), t: 1 }, lastWriteDate: new Date(1567578519000), majorityOpTime: { ts: Timestamp(1567578519, 1), t: 1 }, majorityWriteDate: new Date(1567578519000) }, 
maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578529386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578519, 1), $configServerState: { opTime: { ts: Timestamp(1567578518, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578519, 1) } 2019-09-04T06:28:49.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578519, 1), t: 1 }, lastWriteDate: new Date(1567578519000), majorityOpTime: { ts: Timestamp(1567578519, 1), t: 1 }, majorityWriteDate: new Date(1567578519000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578529386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578519, 1), $configServerState: { opTime: { ts: Timestamp(1567578518, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578524, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578519, 1) } target: cmodb811.togewa.com:27018 2019-09-04T06:28:49.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms 2019-09-04T06:28:49.394+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578529394) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } 2019-09-04T06:28:49.394+0000 D4 - [replSetDistLockPinger] Taking ticket. 
Available: 1000000000 2019-09-04T06:28:49.394+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178 2019-09-04T06:28:49.394+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:28:49.395+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:49.416+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 
2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 
3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xADB043) [0x561749a63043] mongod(+0x13B2606) [0x56174a33a606] mongod(+0x13B3A55) [0x56174a33ba55] mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894] mongod(+0x10FA899) [0x56174a082899] mongod(+0x10FBF53) [0x56174a083f53] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(+0x1FBD2EE) [0x56174af452ee] mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa] mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2] mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b] mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e] mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc] mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1] mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:49.416+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. 
OpTime: { ts: Timestamp(1567578529, 1), t: 1 }, write concern: { w: "majority", wtimeout: 15000 } 2019-09-04T06:28:49.416+0000 D4 STORAGE [replSetDistLockPinger] flushed journal 2019-09-04T06:28:49.416+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578529394) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:28:49.416+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578529394) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 22ms 2019-09-04T06:28:49.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:49.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:49.495+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:49.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:49.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:49.595+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:49.696+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:49.703+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:49.703+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:49.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:49.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:49.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:49.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:49.796+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:49.836+0000 I NETWORK [listener] connection accepted from 10.108.2.51:59090 #150 (80 connections now open) 2019-09-04T06:28:49.836+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:49.836+0000 D2 COMMAND [conn150] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } 
}, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:49.836+0000 I NETWORK [conn150] received client metadata from 10.108.2.51:59090 conn150: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:49.836+0000 I COMMAND [conn150] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:49.841+0000 D2 COMMAND [conn150] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 46852229934D9D2582165361D79CD6C82E821B6B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:49.841+0000 D1 REPL [conn150] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578529, 1), t: 1 } 2019-09-04T06:28:49.841+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000 2019-09-04T06:28:49.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:49.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:49.896+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:49.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:49.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:49.996+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:50.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:50.008+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:50.008+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:28:50.008+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:28:50.021+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:50.021+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:50.023+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 
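The conn90 exchange beginning above is a monitoring client authenticating with SCRAM-SHA-1: one saslStart carrying the client-first message, then two saslContinue legs, completed just below by "Successfully authenticated as principal dba_root". The server redacts each SASL payload as "xxx". As a minimal sketch of the same handshake from the driver side (PyMongo shown; the URI and password are placeholders, not values from this log, and the driver runs the saslStart/saslContinue legs internally):

    import pymongo

    # Placeholder URI: point this at the config server that produced the log.
    # PyMongo performs the saslStart/saslContinue exchange shown above.
    client = pymongo.MongoClient(
        "mongodb://localhost:27019",
        username="dba_root",        # principal seen in the log
        password="<password>",      # never recorded in the log
        authSource="admin",
        authMechanism="SCRAM-SHA-1",
    )
    print(client.admin.command("isMaster")["ismaster"])
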
2019-09-04T06:28:50.023+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:28:50.048+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:50.048+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:28:50.048+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:28:50.048+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:28:50.048+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:50.049+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:50.049+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:50.049+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:28:50.050+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:28:50.050+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:28:50.050+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:50.050+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunks Tree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:28:50.050+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:28:50.050+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:28:50.050+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:28:50.050+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:28:50.050+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:28:50.050+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:28:50.050+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 
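The D5 QUERY trace above shows why the { jumbo: true } count ends in a collection scan: the planner enumerates all four indexes on config.chunks, rates the predicate over 'jumbo' against them, finds that none leads with that field, and so outputs zero indexed solutions; the next entry falls back to a COLLSCAN. On a config server config.chunks is tiny (docsExamined:1 below), so the scan is harmless. A sketch of reproducing the planner's choice with PyMongo's Cursor.explain(), under the same placeholder-URI assumption as above:

    import pymongo

    client = pymongo.MongoClient("mongodb://localhost:27019")  # placeholder URI
    # With no index leading on 'jumbo', the winning plan is a COLLSCAN,
    # matching the "Planner: outputting a collscan" entry in the log.
    plan = client.config.chunks.find({"jumbo": True}).explain()
    print(plan["queryPlanner"]["winningPlan"]["stage"])
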
2019-09-04T06:28:50.050+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:50.050+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:50.050+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578529, 1) 2019-09-04T06:28:50.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4320 2019-09-04T06:28:50.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4320 2019-09-04T06:28:50.050+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:50.050+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:50.050+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:28:50.050+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:28:50.050+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:50.050+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1 Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:28:50.050+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:28:50.050+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:28:50.050+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578529, 1) 2019-09-04T06:28:50.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4323 2019-09-04T06:28:50.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4323 2019-09-04T06:28:50.050+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:50.051+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:50.051+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1 Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:28:50.051+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:28:50.051+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578529, 1) 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4325 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4325 2019-09-04T06:28:50.051+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:28:50.051+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:28:50.051+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. 
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:28:50.051+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:28:50.051+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:28:50.051+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4328 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4328 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4329 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4329 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4330 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4330 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4331 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4331 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4332 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:28:50.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4332 2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4333 2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
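[Editor's note] Everything on conn90 up to this point is a read-only monitoring sweep: a count of jumbo chunks in config.chunks (a COLLSCAN, since no index covers { jumbo: 1 }), a shardConnPoolStats call, a probe of both ends of local.oplog.rs with single-document finds sorted by $natural (which is why the planner logs "Forcing a table scan"), and a listDatabases that drives the long run of "looking up metadata"/"fetched CCE metadata" pairs. A minimal pymongo sketch of the same reads follows; the connection URI is hypothetical, and secondaryPreferred mirrors the $readPreference seen in the log.

```python
# Sketch of the conn90 monitoring sweep (hypothetical URI; adjust to taste).
from pymongo import MongoClient

client = MongoClient(
    "mongodb://cmodb803.togewa.com:27019/?readPreference=secondaryPreferred")

# COLLSCAN count of jumbo chunks -- config.chunks has no { jumbo: 1 } index.
jumbo = client["config"]["chunks"].count_documents({"jumbo": True})

# Oplog window probe: first and last entries via $natural-sorted table scans.
oplog = client["local"]["oplog.rs"]
first = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
last = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])
window_secs = last["ts"].time - first["ts"].time  # bson.Timestamp exposes .time

# listDatabases is what triggers the catalog metadata lookups logged above.
dbs = client.admin.command("listDatabases")
```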
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4333
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4334
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4334
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4335
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4335
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4336
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4336
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4337
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4337
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4338
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4338
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4339
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4339
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4340
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4340
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4341
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4341
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4342
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4342
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4343
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4343
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4344
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4344
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4345
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:28:50.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4345
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4346
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4346
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4347
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4347
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4348
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4348
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4349
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
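[Editor's note] Each "fetched CCE metadata" document pairs a namespace with its WiredTiger ident (e.g. "config/collection/58--6194257481163143499") and one idxIdent per index. With directoryPerDB and directoryForIndexes enabled, as on this node, an ident is effectively a path relative to the dbPath plus a .wt suffix. A tiny helper sketch, assuming that layout (the function name is ours, not a MongoDB API):

```python
# Map a catalog ident to its on-disk WiredTiger file, assuming the
# directoryPerDB + directoryForIndexes layout shown in this log.
from pathlib import Path

def ident_to_file(dbpath: str, ident: str) -> Path:
    # e.g. ident_to_file("/data/db", "config/collection/58--6194257481163143499")
    #   -> /data/db/config/collection/58--6194257481163143499.wt
    return Path(dbpath) / f"{ident}.wt"
```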
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4349
2019-09-04T06:28:50.053+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:28:50.053+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4351
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4351
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4352
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4352
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4353
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4353
2019-09-04T06:28:50.053+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:28:50.053+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4355
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4355
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4356
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4356
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4357
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4357
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4358
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4358
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4359
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4359
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4360
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4360
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4361
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4361
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4362
2019-09-04T06:28:50.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4362
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4363
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4363
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4364
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4364
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4365
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4365
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4366
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4366
2019-09-04T06:28:50.054+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:28:50.054+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4368
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4368
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4369
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4369
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4370
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4370
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4371
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4371
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4372
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4372
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4373
2019-09-04T06:28:50.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4373
2019-09-04T06:28:50.054+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:28:50.096+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:50.151+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:50.151+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:50.151+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578529, 1)
2019-09-04T06:28:50.151+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4375
2019-09-04T06:28:50.151+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:50.152+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:50.152+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4375
2019-09-04T06:28:50.153+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4378
2019-09-04T06:28:50.153+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4378
2019-09-04T06:28:50.153+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578529, 1), t: 1 }({ ts: Timestamp(1567578529, 1), t: 1 })
2019-09-04T06:28:50.197+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:50.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:50.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:50.231+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 1DBD5EAF910346FCB59A8F70DF5F86A6BC7B1EDA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:50.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:28:50.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 1DBD5EAF910346FCB59A8F70DF5F86A6BC7B1EDA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:50.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 1DBD5EAF910346FCB59A8F70DF5F86A6BC7B1EDA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:50.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141) }
2019-09-04T06:28:50.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 1DBD5EAF910346FCB59A8F70DF5F86A6BC7B1EDA), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:50.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:28:50.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999999 Now: 1000000000
2019-09-04T06:28:50.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:50.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:50.297+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:50.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:50.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:50.397+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:50.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:50.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:50.497+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:50.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:50.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:50.597+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:50.698+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:50.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:50.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:50.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:50.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:50.798+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:50.836+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:50.836+0000 D3 REPL [replexec-1] memberData lastupdate is: 2019-09-04T06:28:49.061+0000
2019-09-04T06:28:50.836+0000 D3 REPL [replexec-1] memberData lastupdate is: 2019-09-04T06:28:50.232+0000
2019-09-04T06:28:50.836+0000 D3 REPL [replexec-1] stalest member MemberId(0) date: 2019-09-04T06:28:49.061+0000
2019-09-04T06:28:50.836+0000 D3 REPL [replexec-1] scheduling next check at 2019-09-04T06:28:59.061+0000
2019-09-04T06:28:50.836+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:18.837+0000
2019-09-04T06:28:50.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:50.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:28:50.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 282) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:50.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 282 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:00.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:50.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:18.837+0000
2019-09-04T06:28:50.837+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 283) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:50.837+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 283 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:00.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:50.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:18.837+0000
2019-09-04T06:28:50.837+0000 D2 ASIO [Replication] Request 282 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) }
2019-09-04T06:28:50.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:28:50.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:50.837+0000 D2 ASIO [Replication] Request 283 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) }
2019-09-04T06:28:50.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:50.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 282) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) }
2019-09-04T06:28:50.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:50.837+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary
2019-09-04T06:28:50.837+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:29:00.382+0000
2019-09-04T06:28:50.837+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:29:01.738+0000
2019-09-04T06:28:50.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:28:50.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:52.837Z
2019-09-04T06:28:50.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:50.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:20.837+0000
2019-09-04T06:28:50.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:20.837+0000
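[Editor's note] The heartbeat exchange above shows the default cadence: after the responses arrive at 06:28:50.837, the next heartbeats to both peers are scheduled for 06:28:52.837, i.e. two seconds later. Because request 282 came back from the primary, this secondary also cancels its pending election timeout (06:29:00.382) and reschedules it about ten seconds out plus a random offset (06:29:01.738). A simplified model of that rescheduling, assuming the stock defaults of electionTimeoutMillis=10000 and a 0.15 jitter fraction (this is a sketch of the behavior, not the server's actual code):

```python
# Pick the next election deadline after a heartbeat from the primary,
# mirroring the "Canceling ... / Scheduling election timeout" pair above.
import random

def next_election_deadline(now_ms: int,
                           election_timeout_ms: int = 10_000,
                           offset_fraction: float = 0.15) -> int:
    # Randomized offset keeps the secondaries from all timing out at once.
    jitter_ms = random.uniform(0, election_timeout_ms * offset_fraction)
    return now_ms + election_timeout_ms + int(jitter_ms)
```

The deadline logged here (50.837 + 10 s + 0.901 s of jitter) is consistent with that model.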
2019-09-04T06:28:50.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 283) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) }
2019-09-04T06:28:50.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:28:50.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:52.837Z
2019-09-04T06:28:50.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:20.837+0000
2019-09-04T06:28:50.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:50.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:50.898+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:50.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:50.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:50.998+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:51.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:51.016+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:51.016+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:51.050+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:28:51.050+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:28:51.050+0000 D2 COMMAND [conn90] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:28:51.050+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms
2019-09-04T06:28:51.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 1DBD5EAF910346FCB59A8F70DF5F86A6BC7B1EDA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:51.061+0000 D2 COMMAND [conn34] command:
replSetHeartbeat 2019-09-04T06:28:51.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 1DBD5EAF910346FCB59A8F70DF5F86A6BC7B1EDA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:51.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 1DBD5EAF910346FCB59A8F70DF5F86A6BC7B1EDA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:51.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141) } 2019-09-04T06:28:51.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 1DBD5EAF910346FCB59A8F70DF5F86A6BC7B1EDA), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:51.098+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:51.115+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35634 #151 (81 connections now open) 2019-09-04T06:28:51.115+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:51.116+0000 D2 COMMAND [conn151] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:51.116+0000 I NETWORK [conn151] received client metadata from 10.108.2.56:35634 conn151: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:51.116+0000 I COMMAND [conn151] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:51.119+0000 D2 COMMAND [conn151] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, 
$replData: 1, $clusterTime: { clusterTime: Timestamp(1567578521, 1), signature: { hash: BinData(0, 085CFC83A551012D6A72779653032EB1C623A5B1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:51.119+0000 D1 REPL [conn151] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578529, 1), t: 1 } 2019-09-04T06:28:51.119+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000 2019-09-04T06:28:51.152+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:51.152+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:51.152+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578529, 1) 2019-09-04T06:28:51.152+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4398 2019-09-04T06:28:51.152+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:51.152+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:51.152+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4398 2019-09-04T06:28:51.153+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4401 2019-09-04T06:28:51.153+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4401 2019-09-04T06:28:51.153+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578529, 1), t: 1 }({ ts: Timestamp(1567578529, 1), t: 1 }) 2019-09-04T06:28:51.199+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:51.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:51.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:51.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:51.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:51.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:51.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:51.299+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:51.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:51.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:51.399+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:51.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:51.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:51.499+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:51.516+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:51.516+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:51.599+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:51.634+0000 D2 COMMAND [conn127] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578528, 1), signature: { hash: BinData(0, 56096C741DA0E1098955367B7ABC9039CDF5E4B7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:51.634+0000 D1 REPL [conn127] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578529, 1), t: 1 } 2019-09-04T06:28:51.634+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000 2019-09-04T06:28:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42028 #152 (82 connections now open) 2019-09-04T06:28:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45670 #153 (83 connections now open) 2019-09-04T06:28:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49114 #154 (84 connections now open) 2019-09-04T06:28:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:51.650+0000 D2 COMMAND [conn153] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:51.650+0000 I NETWORK [conn153] received client metadata from 10.108.2.72:45670 conn153: { driver: { 
name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:51.650+0000 D2 COMMAND [conn154] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:51.650+0000 I COMMAND [conn153] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:51.650+0000 I NETWORK [conn154] received client metadata from 10.108.2.54:49114 conn154: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:51.650+0000 D2 COMMAND [conn152] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:51.651+0000 I NETWORK [conn152] received client metadata from 10.108.2.48:42028 conn152: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:51.651+0000 I COMMAND [conn154] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:51.651+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:51.651+0000 I COMMAND [conn152] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:51.651+0000 I COMMAND [conn15] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:51.651+0000 D2 COMMAND [conn153] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578522, 1), signature: { hash: BinData(0, 21A966CF5FD66B29B7A606E7014BBC74FFC0F15C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:51.651+0000 D1 REPL [conn153] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578529, 1), t: 1 } 2019-09-04T06:28:51.651+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:28:51.651+0000 D2 COMMAND [conn152] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 6CBB8EC6AAAB5F918296074E946EB69F170C4DFC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:51.651+0000 D1 REPL [conn152] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578529, 1), t: 1 } 2019-09-04T06:28:51.651+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:28:51.651+0000 D2 COMMAND [conn154] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578528, 1), signature: { hash: BinData(0, 56096C741DA0E1098955367B7ABC9039CDF5E4B7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:51.651+0000 D1 REPL [conn154] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578529, 1), t: 1 } 2019-09-04T06:28:51.651+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:28:51.652+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:51.652+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:51.652+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52076 #155 (85 connections now open) 2019-09-04T06:28:51.652+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:51.652+0000 D2 COMMAND [conn155] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", 
"zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:51.652+0000 I NETWORK [conn155] received client metadata from 10.108.2.73:52076 conn155: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:51.652+0000 I COMMAND [conn155] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:51.653+0000 D2 COMMAND [conn155] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578522, 1), signature: { hash: BinData(0, 21A966CF5FD66B29B7A606E7014BBC74FFC0F15C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:28:51.653+0000 D1 REPL [conn155] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578529, 1), t: 1 } 2019-09-04T06:28:51.653+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000 2019-09-04T06:28:51.699+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:51.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:51.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:51.756+0000 I COMMAND [conn130] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578492, 1), signature: { hash: BinData(0, 369D20320EAA8E78506D058308BAF1A8A714E0C9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:51.756+0000 D1 - [conn130] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:51.756+0000 W - [conn130] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:51.756+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:51.756+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:51.773+0000 I - [conn130] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b"
:"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : 
"/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] 
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:28:51.773+0000 D1 COMMAND [conn130] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578492, 1), signature: { hash: BinData(0, 369D20320EAA8E78506D058308BAF1A8A714E0C9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:51.773+0000 D1 - [conn130] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:28:51.773+0000 W - [conn130] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:51.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:51.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:51.793+0000 I - [conn130] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:51.793+0000 W COMMAND [conn130] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:51.793+0000 I COMMAND [conn130] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578492, 1), signature: { hash: BinData(0, 369D20320EAA8E78506D058308BAF1A8A714E0C9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms
2019-09-04T06:28:51.793+0000 D2 NETWORK [conn130] Session from 10.108.2.52:47088 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:28:51.793+0000 I NETWORK [conn130] end connection 10.108.2.52:47088 (84 connections now open)
2019-09-04T06:28:51.799+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:51.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:51.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:51.900+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:51.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:51.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:52.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:52.000+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:52.043+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:52.043+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:52.100+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:52.150+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:52.150+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:52.151+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:52.151+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:52.152+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:28:52.152+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:28:52.152+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578529, 1)
2019-09-04T06:28:52.152+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4430
2019-09-04T06:28:52.152+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:28:52.152+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:28:52.152+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction
for snapshot id 4430 2019-09-04T06:28:52.153+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4433 2019-09-04T06:28:52.153+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4433 2019-09-04T06:28:52.153+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578529, 1), t: 1 }({ ts: Timestamp(1567578529, 1), t: 1 }) 2019-09-04T06:28:52.200+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:52.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:52.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:52.231+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 1DBD5EAF910346FCB59A8F70DF5F86A6BC7B1EDA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:52.231+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:52.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 1DBD5EAF910346FCB59A8F70DF5F86A6BC7B1EDA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:52.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 1DBD5EAF910346FCB59A8F70DF5F86A6BC7B1EDA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:52.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141) } 2019-09-04T06:28:52.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 1DBD5EAF910346FCB59A8F70DF5F86A6BC7B1EDA), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:52.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:52.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:52.256+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:52.256+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:52.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:52.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:52.300+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:52.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:52.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:52.400+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:52.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:52.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:52.500+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:52.543+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:52.543+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:52.585+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51712 #156 (85 connections now open) 2019-09-04T06:28:52.585+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:52.585+0000 D2 COMMAND [conn156] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:52.585+0000 I NETWORK [conn156] received client metadata from 10.108.2.74:51712 conn156: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:52.585+0000 I COMMAND [conn156] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:52.585+0000 D2 COMMAND [conn156] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, 
A9EF6308D6B77986F6D15C773C57E66A2510FEA9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:28:52.585+0000 D1 REPL [conn156] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578529, 1), t: 1 } 2019-09-04T06:28:52.585+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000 2019-09-04T06:28:52.601+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:52.701+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:52.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:52.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:52.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:52.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:52.801+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:52.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:52.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:28:52.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 284) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:52.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 284 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:02.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:52.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:20.837+0000 2019-09-04T06:28:52.837+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 285) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:52.837+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 285 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:02.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:52.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:20.837+0000 2019-09-04T06:28:52.837+0000 D2 ASIO [Replication] Request 284 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } 2019-09-04T06:28:52.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:52.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:52.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 284) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } 2019-09-04T06:28:52.837+0000 D2 ASIO [Replication] Request 285 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } 2019-09-04T06:28:52.837+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 
2019-09-04T06:28:52.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:52.837+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:29:01.738+0000 2019-09-04T06:28:52.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:52.837+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:29:04.212+0000 2019-09-04T06:28:52.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:52.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:54.837Z 2019-09-04T06:28:52.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:52.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:22.837+0000 2019-09-04T06:28:52.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:22.837+0000 2019-09-04T06:28:52.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 285) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } 2019-09-04T06:28:52.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:52.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:54.837Z 2019-09-04T06:28:52.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:22.837+0000 2019-09-04T06:28:52.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:52.855+0000 I COMMAND [conn6] 
command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:52.901+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:52.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:52.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:53.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:53.001+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:53.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 0AE55A627A3D094EE988D3BE941197CA8E921D01), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:53.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:53.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 0AE55A627A3D094EE988D3BE941197CA8E921D01), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:53.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 0AE55A627A3D094EE988D3BE941197CA8E921D01), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:53.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141) } 2019-09-04T06:28:53.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 0AE55A627A3D094EE988D3BE941197CA8E921D01), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:53.101+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:53.152+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:53.152+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:53.152+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578529, 1) 2019-09-04T06:28:53.152+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4451 2019-09-04T06:28:53.152+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: 
false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:53.152+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:53.152+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4451 2019-09-04T06:28:53.153+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4454 2019-09-04T06:28:53.153+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4454 2019-09-04T06:28:53.154+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578529, 1), t: 1 }({ ts: Timestamp(1567578529, 1), t: 1 }) 2019-09-04T06:28:53.201+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:53.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:53.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:53.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:53.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:28:53.259+0000 D2 COMMAND [conn101] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:53.259+0000 I COMMAND [conn101] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:53.264+0000 D2 COMMAND [conn101] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578527, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 63116C84CA6CC405D1A06C8270449024592EB8F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578527, 2), t: 1 } }, $db: "config" } 2019-09-04T06:28:53.264+0000 D1 COMMAND [conn101] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578527, 2), t: 1 } } } 2019-09-04T06:28:53.264+0000 D3 STORAGE [conn101] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:28:53.264+0000 D1 COMMAND [conn101] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578527, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 63116C84CA6CC405D1A06C8270449024592EB8F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578527, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578529, 1) 2019-09-04T06:28:53.264+0000 D5 QUERY [conn101] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shards Tree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:28:53.264+0000 D5 QUERY [conn101] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:28:53.264+0000 D5 QUERY [conn101] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:28:53.264+0000 D5 QUERY [conn101] Rated tree: $and 2019-09-04T06:28:53.264+0000 D5 QUERY [conn101] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:53.264+0000 D5 QUERY [conn101] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:53.264+0000 D2 QUERY [conn101] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:53.264+0000 D3 STORAGE [conn101] WT begin_transaction for snapshot id 4459 2019-09-04T06:28:53.265+0000 D3 STORAGE [conn101] WT rollback_transaction for snapshot id 4459 2019-09-04T06:28:53.265+0000 I COMMAND [conn101] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578527, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578527, 2), signature: { hash: BinData(0, 63116C84CA6CC405D1A06C8270449024592EB8F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578527, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:28:53.271+0000 I NETWORK [listener] connection accepted from 10.20.102.80:61144 #157 (86 connections now open) 2019-09-04T06:28:53.271+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:53.271+0000 D2 COMMAND [conn157] run command admin.$cmd { isMaster: 1, client: { application: { name: "robo3t" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } 2019-09-04T06:28:53.271+0000 I NETWORK [conn157] received client metadata from 10.20.102.80:61144 conn157: { application: { name: "robo3t" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } } 2019-09-04T06:28:53.271+0000 I COMMAND [conn157] command admin.$cmd appName: "robo3t" command: isMaster { isMaster: 1, client: { application: { name: "robo3t" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:28:53.283+0000 D2 COMMAND [conn157] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:28:53.283+0000 D1 ACCESS [conn157] Returning user dba_root@admin from cache 2019-09-04T06:28:53.283+0000 I COMMAND [conn157] command admin.$cmd appName: "robo3t" command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:394 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:53.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:53.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:53.295+0000 D2 COMMAND [conn157] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:28:53.295+0000 I COMMAND [conn157] command admin.$cmd appName: "robo3t" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:323 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:53.301+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:53.307+0000 D2 COMMAND [conn157] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:28:53.307+0000 D1 ACCESS [conn157] Returning user dba_root@admin from cache 2019-09-04T06:28:53.307+0000 I ACCESS [conn157] Successfully authenticated as principal dba_root on admin from client 10.20.102.80:61144 2019-09-04T06:28:53.307+0000 I COMMAND [conn157] command admin.$cmd appName: "robo3t" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:293 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:53.319+0000 D2 COMMAND [conn157] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:28:53.319+0000 I COMMAND [conn157] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:53.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:53.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:53.402+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:53.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:53.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:53.502+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:53.602+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:53.702+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:53.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:53.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:53.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:53.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:53.802+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:53.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:53.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 
1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:53.902+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:53.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:53.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:54.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:54.003+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:54.103+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:54.141+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:54.141+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:54.142+0000 I NETWORK [listener] connection accepted from 10.108.2.46:40918 #158 (87 connections now open) 2019-09-04T06:28:54.142+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:54.142+0000 D2 COMMAND [conn158] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:54.142+0000 I NETWORK [conn158] received client metadata from 10.108.2.46:40918 conn158: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:54.142+0000 I COMMAND [conn158] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:54.142+0000 D2 COMMAND [conn158] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, A9EF6308D6B77986F6D15C773C57E66A2510FEA9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:28:54.143+0000 D1 REPL [conn158] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578529, 1), t: 1 } 2019-09-04T06:28:54.143+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000 2019-09-04T06:28:54.152+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:54.153+0000 D3 STORAGE [ReplBatcher] looking up metadata for: 
local.oplog.rs @ RecordId(10) 2019-09-04T06:28:54.153+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578529, 1) 2019-09-04T06:28:54.153+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4477 2019-09-04T06:28:54.153+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:54.153+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:54.153+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4477 2019-09-04T06:28:54.154+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4480 2019-09-04T06:28:54.154+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4480 2019-09-04T06:28:54.154+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578529, 1), t: 1 }({ ts: Timestamp(1567578529, 1), t: 1 }) 2019-09-04T06:28:54.155+0000 I COMMAND [conn117] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578495, 1), signature: { hash: BinData(0, 9A809E4530E0A2460F67DC0FD6A8649E22A7A597), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:28:54.155+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:54.155+0000 D1 - [conn117] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:54.155+0000 W - [conn117] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:54.155+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:54.155+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 286 -- target:[cmodb804.togewa.com:27019] db:admin 
expDate:2019-09-04T06:29:24.155+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:54.155+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:19.153+0000 2019-09-04T06:28:54.155+0000 D2 ASIO [RS] Request 286 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578532, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } 2019-09-04T06:28:54.155+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578532, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:54.155+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:54.155+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:19.153+0000 2019-09-04T06:28:54.155+0000 D2 ASIO [RS] Request 275 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpApplied: { ts: Timestamp(1567578529, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, 
$gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578532, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } 2019-09-04T06:28:54.155+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpApplied: { ts: Timestamp(1567578529, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578532, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:54.155+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:54.155+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:54.155+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:04.212+0000 2019-09-04T06:28:54.155+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:05.608+0000 2019-09-04T06:28:54.155+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 287 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:04.155+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578529, 1), t: 1 } } 2019-09-04T06:28:54.155+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:19.153+0000 2019-09-04T06:28:54.155+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:54.155+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:22.837+0000 2019-09-04T06:28:54.171+0000 I - [conn117] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:54.172+0000 D1 COMMAND [conn117] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578495, 1), signature: { hash: BinData(0, 9A809E4530E0A2460F67DC0FD6A8649E22A7A597), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:54.172+0000 D1 - [conn117] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:54.172+0000 W - [conn117] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:54.191+0000 I - [conn117] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:54.191+0000 W COMMAND [conn117] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:54.191+0000 I COMMAND [conn117] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578495, 1), signature: { hash: BinData(0, 9A809E4530E0A2460F67DC0FD6A8649E22A7A597), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:28:54.192+0000 D2 NETWORK [conn117] Session from 10.108.2.46:40886 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:54.192+0000 I NETWORK [conn117] end connection 10.108.2.46:40886 (86 connections now open) 2019-09-04T06:28:54.203+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:54.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:54.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:54.231+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578532, 1), signature: { hash: BinData(0, 6573B58EE4EB15C16AFF1037EFF043F59C9F7092), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:54.231+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:54.231+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578532, 1), signature: { hash: BinData(0, 6573B58EE4EB15C16AFF1037EFF043F59C9F7092), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:54.231+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578532, 1), signature: { hash: BinData(0, 6573B58EE4EB15C16AFF1037EFF043F59C9F7092), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:54.231+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141) } 2019-09-04T06:28:54.231+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578532, 1), signature: { hash: BinData(0, 6573B58EE4EB15C16AFF1037EFF043F59C9F7092), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:54.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:54.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:54.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:54.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:54.303+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:54.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:54.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:54.403+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:54.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:54.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:54.503+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:54.603+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:54.641+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:54.641+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:54.703+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:54.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:54.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:54.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:54.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:54.804+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:54.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:54.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:28:54.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 288) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:54.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 288 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:04.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:54.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:22.837+0000 2019-09-04T06:28:54.837+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 289) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:54.837+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 289 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:04.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:54.837+0000 D3 EXECUTOR 
[replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:22.837+0000 2019-09-04T06:28:54.837+0000 D2 ASIO [Replication] Request 288 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578532, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } 2019-09-04T06:28:54.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578532, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:54.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:54.837+0000 D2 ASIO [Replication] Request 289 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578532, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } 2019-09-04T06:28:54.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 288) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new 
Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578532, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } 2019-09-04T06:28:54.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578532, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:54.837+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:54.837+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:29:05.608+0000 2019-09-04T06:28:54.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:54.837+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:29:04.880+0000 2019-09-04T06:28:54.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:54.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:56.837Z 2019-09-04T06:28:54.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:54.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:24.837+0000 2019-09-04T06:28:54.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:24.837+0000 2019-09-04T06:28:54.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 289) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), opTime: { ts: Timestamp(1567578529, 1), t: 1 }, wallTime: new Date(1567578529141), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: 
Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578532, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578529, 1) } 2019-09-04T06:28:54.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:54.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:56.837Z 2019-09-04T06:28:54.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:24.837+0000 2019-09-04T06:28:54.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:54.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:54.900+0000 D2 ASIO [RS] Request 287 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578534, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578534883), o: { $v: 1, $set: { ping: new Date(1567578534880), up: 2435 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpApplied: { ts: Timestamp(1567578534, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578534, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578534, 1) } 2019-09-04T06:28:54.900+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578534, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578534883), o: { $v: 1, $set: { ping: new Date(1567578534880), up: 2435 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpApplied: { ts: Timestamp(1567578534, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578534, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578534, 1) } target: cmodb804.togewa.com:27019 
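[Editor's sketch, not part of the log] The getMore response above shows the oplog fetcher pulling a single config.mongos ping update out of the sync source's local.oplog.rs. A minimal PyMongo sketch of the same kind of oplog tailing, with the host, port, and last-fetched optime taken from the entries above (directConnection assumes a reasonably recent PyMongo):

    from pymongo import MongoClient, CursorType
    from bson.timestamp import Timestamp

    # Connect directly to the sync source named in the log.
    client = MongoClient("cmodb804.togewa.com", 27019, directConnection=True)
    oplog = client.local["oplog.rs"]

    # Resume after the last optime the fetcher reported: Timestamp(1567578534, 1).
    last_ts = Timestamp(1567578534, 1)
    cursor = oplog.find({"ts": {"$gt": last_ts}},
                        cursor_type=CursorType.TAILABLE_AWAIT)
    for entry in cursor:
        # Each document has the same shape as the nextBatch entry above:
        # ts, t, op ("u" = update), ns, o, o2, wall.
        print(entry["ts"], entry["op"], entry["ns"])
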
2019-09-04T06:28:54.900+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:54.900+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578534, 1) and ending at ts: Timestamp(1567578534, 1) 2019-09-04T06:28:54.900+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:04.880+0000 2019-09-04T06:28:54.900+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:04.920+0000 2019-09-04T06:28:54.900+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:54.900+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:24.837+0000 2019-09-04T06:28:54.900+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578534, 1), t: 1 } 2019-09-04T06:28:54.900+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:54.900+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:54.900+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578529, 1) 2019-09-04T06:28:54.900+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4494 2019-09-04T06:28:54.900+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:54.900+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:54.900+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4494 2019-09-04T06:28:54.900+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:54.900+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:28:54.900+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:54.900+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578529, 1) 2019-09-04T06:28:54.900+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4497 2019-09-04T06:28:54.900+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578534, 1) } 2019-09-04T06:28:54.900+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:54.900+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:54.900+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4497 2019-09-04T06:28:54.900+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4481 2019-09-04T06:28:54.900+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4481 2019-09-04T06:28:54.900+0000 D3 STORAGE [rsSync-0] WT 
begin_transaction for snapshot id 4500 2019-09-04T06:28:54.900+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4500 2019-09-04T06:28:54.900+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:54.900+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 4502 2019-09-04T06:28:54.900+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578534, 1) 2019-09-04T06:28:54.900+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578534, 1) 2019-09-04T06:28:54.900+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 4502 2019-09-04T06:28:54.900+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:54.900+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:28:54.900+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4501 2019-09-04T06:28:54.900+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4501 2019-09-04T06:28:54.900+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4504 2019-09-04T06:28:54.900+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4504 2019-09-04T06:28:54.900+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578534, 1), t: 1 }({ ts: Timestamp(1567578534, 1), t: 1 }) 2019-09-04T06:28:54.900+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578534, 1) 2019-09-04T06:28:54.900+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4505 2019-09-04T06:28:54.901+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578534, 1) } } ] } sort: {} projection: {} 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578534, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578534, 1) || First: notFirst: full path: ts 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578534, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578534, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578534, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
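[Editor's sketch, not part of the log] The D5 planner trace around this point rates each $or branch against the only available index ({ _id: 1 }), outputs zero indexed solutions, and, in the entry just below, falls back to a collection scan per branch. A hedged sketch of reproducing that choice with explain, assuming the same local.replset.minvalid collection and a direct connection to this node:

    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)

    # The same rooted-$or predicate the subplanner splits into two children.
    query = {"$or": [{"t": {"$lt": 1}},
                     {"t": 1, "ts": {"$lt": Timestamp(1567578534, 1)}}]}

    plan = client.local.command(
        "explain", {"find": "replset.minvalid", "filter": query})

    # With only the _id index present, neither branch rates an indexed
    # solution, so the winning plan bottoms out in COLLSCAN, matching the
    # D5 trace in the surrounding entries.
    print(plan["queryPlanner"]["winningPlan"])
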
2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578534, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:54.901+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4505 2019-09-04T06:28:54.901+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:54.901+0000 D3 STORAGE [repl-writer-worker-1] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:54.901+0000 D3 REPL [repl-writer-worker-1] applying op: { ts: Timestamp(1567578534, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578534883), o: { $v: 1, $set: { ping: new Date(1567578534880), up: 2435 } } }, oplog application mode: Secondary 2019-09-04T06:28:54.901+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578534, 1) 2019-09-04T06:28:54.901+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 4507 2019-09-04T06:28:54.901+0000 D2 QUERY [repl-writer-worker-1] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:28:54.901+0000 D4 WRITE [repl-writer-worker-1] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:28:54.901+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 4507 2019-09-04T06:28:54.901+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:54.901+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578534, 1), t: 1 }({ ts: Timestamp(1567578534, 1), t: 1 }) 2019-09-04T06:28:54.901+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578534, 1) 2019-09-04T06:28:54.901+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4506 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:54.901+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:54.901+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:54.901+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4506 2019-09-04T06:28:54.901+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578534, 1) 2019-09-04T06:28:54.901+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:54.901+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4510 2019-09-04T06:28:54.901+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578534, 1), t: 1 }, appliedWallTime: new Date(1567578534883), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:54.901+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4510 2019-09-04T06:28:54.901+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578534, 1), t: 1 }({ ts: Timestamp(1567578534, 1), t: 1 }) 2019-09-04T06:28:54.901+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 290 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:24.901+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578534, 1), t: 1 }, appliedWallTime: new Date(1567578534883), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:54.901+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:24.901+0000 2019-09-04T06:28:54.901+0000 D2 ASIO [RS] Request 290 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578534, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578534, 1) } 2019-09-04T06:28:54.901+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578529, 1), t: 1 }, lastCommittedWall: new Date(1567578529141), lastOpVisible: { ts: Timestamp(1567578529, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578529, 1), $clusterTime: { clusterTime: Timestamp(1567578534, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578534, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:54.901+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:54.901+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:24.901+0000 2019-09-04T06:28:54.902+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578534, 1), t: 1 } 2019-09-04T06:28:54.902+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 291 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:04.902+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578529, 1), t: 1 } } 2019-09-04T06:28:54.902+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:24.901+0000 2019-09-04T06:28:54.918+0000 D2 ASIO [RS] Request 291 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpVisible: { ts: Timestamp(1567578534, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpApplied: { ts: Timestamp(1567578534, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578534, 1), $clusterTime: { clusterTime: Timestamp(1567578534, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578534, 1) } 2019-09-04T06:28:54.918+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpVisible: { ts: Timestamp(1567578534, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new 
Date(1567578534883), lastOpApplied: { ts: Timestamp(1567578534, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578534, 1), $clusterTime: { clusterTime: Timestamp(1567578534, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578534, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:54.918+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:54.918+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:54.919+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578529, 1) 2019-09-04T06:28:54.919+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:04.920+0000 2019-09-04T06:28:54.919+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:05.549+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn121] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn121] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:58.752+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn133] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn133] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.702+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn106] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn106] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:57.566+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn135] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn135] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.925+0000 2019-09-04T06:28:54.919+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:28:54.919+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:24.837+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn149] 
Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000 2019-09-04T06:28:54.919+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578534, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59a602d1a496712d71b5'), operName: "", parentOperId: "5d6f59a602d1a496712d71b2" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578534, 1), signature: { hash: BinData(0, 6E487BB07DDC4AF59C86FCDC85720A2262122C67), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578534, 1), t: 1 } }, $db: "config" } 2019-09-04T06:28:54.919+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn151] waitUntilOpTime: waiting 
for a new snapshot until 2019-09-04T06:29:21.129+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:28:54.919+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 292 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:04.919+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578534, 1), t: 1 } } 2019-09-04T06:28:54.919+0000 D3 REPL [conn136] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn136] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:02.478+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:28:54.919+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:24.901+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn104] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn104] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:55.059+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn131] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn131] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:56.307+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn132] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn132] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.433+0000 
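[Editor's sketch, not part of the log] Each "waitUntilOpTime" entry above is a client read parked on readConcern { level: "majority", afterOpTime: ... } until the majority-committed snapshot advances far enough. Drivers do not expose afterOpTime (mongos injects it internally when talking to the config servers); a causally consistent session, which sends afterClusterTime, is the closest driver-level equivalent. A minimal sketch, with host and maxTimeMS taken from the log:

    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
    settings = client.config.get_collection(
        "settings", read_concern=ReadConcern("majority"))

    with client.start_session(causal_consistency=True) as session:
        # Blocks until a majority-committed snapshot at or after the
        # session's operation time is visible; on a lagging node this wait
        # can end in the MaxTimeMSExpired (code 50) failures recorded for
        # conn117 and conn104 in this log.
        doc = settings.find_one({"_id": "balancer"}, session=session,
                                max_time_ms=30000)
        print(doc)
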
2019-09-04T06:28:54.919+0000 D3 REPL [conn134] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn134] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.753+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578534, 1), t: 1 }, 2019-09-04T06:28:54.883+0000 2019-09-04T06:28:54.919+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000 2019-09-04T06:28:54.920+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f59a602d1a496712d71b2|5d6f59a602d1a496712d71b5 2019-09-04T06:28:54.920+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578534, 1), t: 1 } } } 2019-09-04T06:28:54.920+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:28:54.920+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578534, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59a602d1a496712d71b5'), operName: "", parentOperId: "5d6f59a602d1a496712d71b2" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578534, 1), signature: { hash: BinData(0, 6E487BB07DDC4AF59C86FCDC85720A2262122C67), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578534, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578534, 1) 2019-09-04T06:28:54.920+0000 D2 QUERY [conn21] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:28:54.920+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578534, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59a602d1a496712d71b5'), operName: "", parentOperId: "5d6f59a602d1a496712d71b2" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578534, 1), signature: { hash: BinData(0, 6E487BB07DDC4AF59C86FCDC85720A2262122C67), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578534, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:28:54.920+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578534, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59a602d1a496712d71b6'), operName: "", parentOperId: "5d6f59a602d1a496712d71b2" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578534, 1), signature: { hash: BinData(0, 6E487BB07DDC4AF59C86FCDC85720A2262122C67), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578534, 1), t: 1 } }, $db: "config" } 2019-09-04T06:28:54.920+0000 D1 
TRACKING [conn21] Cmd: find, TrackingId: 5d6f59a602d1a496712d71b2|5d6f59a602d1a496712d71b6 2019-09-04T06:28:54.920+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578534, 1), t: 1 } } } 2019-09-04T06:28:54.920+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:28:54.920+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578534, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59a602d1a496712d71b6'), operName: "", parentOperId: "5d6f59a602d1a496712d71b2" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578534, 1), signature: { hash: BinData(0, 6E487BB07DDC4AF59C86FCDC85720A2262122C67), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578534, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578534, 1) 2019-09-04T06:28:54.920+0000 D2 QUERY [conn21] Collection config.settings does not exist. Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:28:54.920+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578534, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59a602d1a496712d71b6'), operName: "", parentOperId: "5d6f59a602d1a496712d71b2" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578534, 1), signature: { hash: BinData(0, 6E487BB07DDC4AF59C86FCDC85720A2262122C67), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578534, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:28:54.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:54.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:54.938+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:28:54.938+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:54.938+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578534, 1), t: 1 }, durableWallTime: new Date(1567578534883), appliedOpTime: { ts: Timestamp(1567578534, 1), t: 1 }, appliedWallTime: new Date(1567578534883), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new 
Date(1567578529141), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpVisible: { ts: Timestamp(1567578534, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:54.938+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 293 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:24.938+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578534, 1), t: 1 }, durableWallTime: new Date(1567578534883), appliedOpTime: { ts: Timestamp(1567578534, 1), t: 1 }, appliedWallTime: new Date(1567578534883), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpVisible: { ts: Timestamp(1567578534, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:54.938+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:54.939+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:24.901+0000 2019-09-04T06:28:54.939+0000 D2 ASIO [RS] Request 293 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpVisible: { ts: Timestamp(1567578534, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578534, 1), $clusterTime: { clusterTime: Timestamp(1567578534, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578534, 1) } 2019-09-04T06:28:54.939+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpVisible: { ts: Timestamp(1567578534, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578534, 1), $clusterTime: { clusterTime: Timestamp(1567578534, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578534, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:54.939+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:54.939+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:24.901+0000 2019-09-04T06:28:55.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 
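[Editor's sketch, not part of the log] The replSetUpdatePosition exchange above is how this secondary reports per-member durable and applied optimes up to its sync source. The same optimes are visible from any configrs member via replSetGetStatus; a short sketch, assuming the member is reachable:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # optime / optimeDurable correspond to the appliedOpTime /
        # durableOpTime pairs carried in the replSetUpdatePosition payloads.
        print(member["name"], member["stateStr"],
              member["optime"]["ts"], member.get("optimeDurable", {}).get("ts"))
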
2019-09-04T06:28:55.000+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578534, 1)
2019-09-04T06:28:55.028+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:55.029+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:55.039+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:55.049+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:55.049+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:55.060+0000 I COMMAND [conn104] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578501, 1), signature: { hash: BinData(0, 457E83C320927CAB801EC3E140A2B78C5168E291), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:28:55.060+0000 D1 - [conn104] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:28:55.060+0000 W - [conn104] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:28:55.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578534, 1), signature: { hash: BinData(0, 6E487BB07DDC4AF59C86FCDC85720A2262122C67), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:55.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:28:55.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578534, 1), signature: { hash: BinData(0, 6E487BB07DDC4AF59C86FCDC85720A2262122C67), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:55.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578534, 1), signature: { hash: BinData(0, 6E487BB07DDC4AF59C86FCDC85720A2262122C67), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:55.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578534, 1), t: 1 }, durableWallTime: new Date(1567578534883), opTime: { ts: Timestamp(1567578534, 1), t: 1 }, wallTime: new Date(1567578534883) }
2019-09-04T06:28:55.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime:
Timestamp(1567578534, 1), signature: { hash: BinData(0, 6E487BB07DDC4AF59C86FCDC85720A2262122C67), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:55.077+0000 I - [conn104] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : 
"/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:55.078+0000 D1 COMMAND [conn104] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578501, 1), signature: { hash: BinData(0, 457E83C320927CAB801EC3E140A2B78C5168E291), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:55.078+0000 D1 - [conn104] User Assertion: 
MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:55.078+0000 W - [conn104] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:55.098+0000 I - [conn104] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:55.099+0000 W COMMAND [conn104] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:55.099+0000 I COMMAND [conn104] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578501, 1), signature: { hash: BinData(0, 457E83C320927CAB801EC3E140A2B78C5168E291), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:28:55.099+0000 D2 NETWORK [conn104] Session from 10.108.2.55:36558 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:55.099+0000 I NETWORK [conn104] end connection 10.108.2.55:36558 (85 connections now open) 2019-09-04T06:28:55.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:55.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:55.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:55.138+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:55.139+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:55.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:55.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:55.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:55.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:55.239+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:55.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:55.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:55.339+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:55.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:55.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:55.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:55.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:55.439+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:55.528+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:55.528+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:55.539+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:55.549+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:55.549+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:55.616+0000 D2 ASIO [RS] Request 292 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578535, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578535591), o: { $v: 1, $set: { ping: new Date(1567578535590) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpVisible: { ts: Timestamp(1567578534, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpApplied: { ts: Timestamp(1567578535, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578534, 1), $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) } 2019-09-04T06:28:55.616+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578535, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578535591), o: { $v: 1, $set: { ping: new Date(1567578535590) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new 
Date(1567578534883), lastOpVisible: { ts: Timestamp(1567578534, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpApplied: { ts: Timestamp(1567578535, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578534, 1), $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:55.616+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:55.616+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578535, 1) and ending at ts: Timestamp(1567578535, 1) 2019-09-04T06:28:55.616+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:05.549+0000 2019-09-04T06:28:55.616+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:06.363+0000 2019-09-04T06:28:55.616+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:55.616+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:24.837+0000 2019-09-04T06:28:55.616+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578535, 1), t: 1 } 2019-09-04T06:28:55.616+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:55.616+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:55.616+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578534, 1) 2019-09-04T06:28:55.616+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4531 2019-09-04T06:28:55.616+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:55.616+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:55.616+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4531 2019-09-04T06:28:55.616+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:55.616+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:28:55.616+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:55.616+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578534, 1) 2019-09-04T06:28:55.616+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578535, 1) } 2019-09-04T06:28:55.616+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4534 2019-09-04T06:28:55.616+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { 
ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:55.616+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:55.616+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4534 2019-09-04T06:28:55.616+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4511 2019-09-04T06:28:55.616+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4511 2019-09-04T06:28:55.616+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4537 2019-09-04T06:28:55.616+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4537 2019-09-04T06:28:55.616+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:55.616+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 4539 2019-09-04T06:28:55.616+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578535, 1) 2019-09-04T06:28:55.616+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578535, 1) 2019-09-04T06:28:55.617+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 4539 2019-09-04T06:28:55.617+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:55.617+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:28:55.617+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4538 2019-09-04T06:28:55.617+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4538 2019-09-04T06:28:55.617+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4541 2019-09-04T06:28:55.617+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4541 2019-09-04T06:28:55.617+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578535, 1), t: 1 }({ ts: Timestamp(1567578535, 1), t: 1 }) 2019-09-04T06:28:55.617+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578535, 1) 2019-09-04T06:28:55.617+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4542 2019-09-04T06:28:55.617+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578535, 1) } } ] } sort: {} projection: {} 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578535, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578535, 1) || First: notFirst: full path: ts 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578535, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578535, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578535, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
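The D5 QUERY planner trace here canonicalizes the $or filter over local.replset.minvalid, rates each predicate against the only available index (_id_), outputs zero indexed solutions for both children, and falls back to a collection scan (the rooted-$or fallback continues in the next entry). The same decision can be reproduced with a driver-side explain; a minimal sketch under the same connection assumptions as above (pymongo, host/port taken from this log):

    from bson.timestamp import Timestamp
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb804.togewa.com:27019/",
                         directConnection=True)
    # Mirror the logged canonical query:
    #   { $or: [ { t: { $lt: 1 } },
    #            { t: 1, ts: { $lt: Timestamp(1567578535, 1) } } ] }
    filt = {"$or": [{"t": {"$lt": 1}},
                    {"t": 1, "ts": {"$lt": Timestamp(1567578535, 1)}}]}
    plan = client["local"]["replset.minvalid"].find(filt).explain()
    # With only the _id_ index present, the winning plan should be a
    # collection scan, matching the "outputting a collscan" lines in
    # the trace.
    print(plan["queryPlanner"]["winningPlan"])

Since neither t nor ts is a prefix of the _id_ index, no indexed solution exists for either $or branch, which is exactly why the subplanner reports 0 indexed solutions before emitting the COLLSCAN plans seen in the trace.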
2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578535, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:55.617+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4542 2019-09-04T06:28:55.617+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:28:55.617+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:55.617+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578535, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578535591), o: { $v: 1, $set: { ping: new Date(1567578535590) } } }, oplog application mode: Secondary 2019-09-04T06:28:55.617+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578535, 1) 2019-09-04T06:28:55.617+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 4544 2019-09-04T06:28:55.617+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" } 2019-09-04T06:28:55.617+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:28:55.617+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 4544 2019-09-04T06:28:55.617+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:28:55.617+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578535, 1), t: 1 }({ ts: Timestamp(1567578535, 1), t: 1 }) 2019-09-04T06:28:55.617+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578535, 1) 2019-09-04T06:28:55.617+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4543 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:28:55.617+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:28:55.617+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:28:55.617+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4543 2019-09-04T06:28:55.617+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578535, 1) 2019-09-04T06:28:55.617+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4547 2019-09-04T06:28:55.617+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4547 2019-09-04T06:28:55.617+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578535, 1), t: 1 }({ ts: Timestamp(1567578535, 1), t: 1 }) 2019-09-04T06:28:55.617+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:55.617+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578534, 1), t: 1 }, durableWallTime: new Date(1567578534883), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpVisible: { ts: Timestamp(1567578534, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:55.617+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 294 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:25.617+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578534, 1), t: 1 }, durableWallTime: new Date(1567578534883), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpVisible: { ts: Timestamp(1567578534, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:55.618+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:25.617+0000 2019-09-04T06:28:55.618+0000 D2 ASIO [RS] Request 294 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpVisible: { ts: Timestamp(1567578534, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578534, 1), $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) } 2019-09-04T06:28:55.618+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpVisible: { ts: Timestamp(1567578534, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578534, 1), $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:55.618+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:55.618+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:25.618+0000 2019-09-04T06:28:55.618+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578535, 1), t: 1 } 2019-09-04T06:28:55.618+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 295 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:05.618+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578534, 1), t: 1 } } 2019-09-04T06:28:55.618+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:25.618+0000 2019-09-04T06:28:55.626+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:55.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:55.628+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:28:55.628+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:55.628+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpVisible: { ts: Timestamp(1567578534, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, 
syncSourceIndex: 2 } } 2019-09-04T06:28:55.628+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 296 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:25.628+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, durableWallTime: new Date(1567578529141), appliedOpTime: { ts: Timestamp(1567578529, 1), t: 1 }, appliedWallTime: new Date(1567578529141), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpVisible: { ts: Timestamp(1567578534, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:28:55.628+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:25.618+0000 2019-09-04T06:28:55.628+0000 D2 ASIO [RS] Request 296 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpVisible: { ts: Timestamp(1567578534, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578534, 1), $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) } 2019-09-04T06:28:55.629+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578534, 1), t: 1 }, lastCommittedWall: new Date(1567578534883), lastOpVisible: { ts: Timestamp(1567578534, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578534, 1), $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:55.629+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:28:55.629+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:25.618+0000 2019-09-04T06:28:55.629+0000 D2 ASIO [RS] Request 295 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: 
Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpApplied: { ts: Timestamp(1567578535, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) } 2019-09-04T06:28:55.629+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpApplied: { ts: Timestamp(1567578535, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:55.629+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:28:55.629+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:28:55.629+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.629+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.629+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578530, 1) 2019-09-04T06:28:55.629+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:06.363+0000 2019-09-04T06:28:55.629+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:06.094+0000 2019-09-04T06:28:55.629+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:55.629+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 297 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:05.629+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578535, 1), t: 1 } } 2019-09-04T06:28:55.629+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:24.837+0000 2019-09-04T06:28:55.629+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:25.618+0000 2019-09-04T06:28:55.629+0000 D3 REPL [conn133] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.629+0000 D3 REPL [conn133] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.702+0000 2019-09-04T06:28:55.629+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: 
Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.629+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:28:55.629+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.629+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000 2019-09-04T06:28:55.629+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.629+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000 2019-09-04T06:28:55.629+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.629+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000 2019-09-04T06:28:55.629+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.629+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000 2019-09-04T06:28:55.629+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.629+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000 2019-09-04T06:28:55.629+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.629+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn134] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn134] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.753+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: 
Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn121] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn121] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:58.752+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn136] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn136] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:02.478+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn132] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn132] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.433+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn131] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn131] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:56.307+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn106] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn106] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:28:57.566+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn135] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn135] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.925+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: 
Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578535, 1), t: 1 }, 2019-09-04T06:28:55.591+0000 2019-09-04T06:28:55.630+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000 2019-09-04T06:28:55.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:55.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:55.639+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:55.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:55.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:55.716+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578535, 1) 2019-09-04T06:28:55.740+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:55.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:55.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:55.840+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:55.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:55.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:55.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:55.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:55.940+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:56.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:56.028+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:56.028+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:56.040+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:56.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:56.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:56.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:56.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:56.140+0000 D4 STORAGE [WTJournalFlusher] 
flushed journal
2019-09-04T06:28:56.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:56.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:56.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 3EACE787785BF2AAA591414ACAAB63F6A354BD8F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:56.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:28:56.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 3EACE787785BF2AAA591414ACAAB63F6A354BD8F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:56.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 3EACE787785BF2AAA591414ACAAB63F6A354BD8F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:56.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), opTime: { ts: Timestamp(1567578535, 1), t: 1 }, wallTime: new Date(1567578535591) }
2019-09-04T06:28:56.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 3EACE787785BF2AAA591414ACAAB63F6A354BD8F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:56.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:28:56.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
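The FlowControlRefresher entries above show flow control sitting idle: the ticket pool is refreshed at its 1000000000 ceiling, so no majority-commit-point lag is throttling writers on this 4.2 node. A minimal pymongo sketch for checking the same state from a client, assuming this host/port and sufficient privileges (serverStatus exposes a flowControl section on 4.2; the exact fields read here are an assumption):

    # Sketch only: connection string taken from this log; adjust credentials as needed.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

    # serverStatus carries a flowControl subdocument on MongoDB 4.2+.
    flow = client.admin.command("serverStatus").get("flowControl", {})
    print("enabled:", flow.get("enabled"))
    print("isLagged:", flow.get("isLagged"))          # True would mean commit point is lagging
    print("targetRateLimit:", flow.get("targetRateLimit"))  # 1000000000 == effectively unthrottled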
2019-09-04T06:28:56.240+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:56.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:56.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:56.294+0000 I NETWORK [listener] connection accepted from 10.108.2.60:44800 #159 (86 connections now open)
2019-09-04T06:28:56.294+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:28:56.294+0000 D2 COMMAND [conn159] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:28:56.294+0000 I NETWORK [conn159] received client metadata from 10.108.2.60:44800 conn159: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:28:56.294+0000 I COMMAND [conn159] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:28:56.308+0000 I COMMAND [conn131] Command on database config timed out waiting for read concern to be satisfied.
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578503, 1), signature: { hash: BinData(0, F866E98FA10478836B415F12129881EA7AA32552), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:56.308+0000 D1 - [conn131] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:56.308+0000 W - [conn131] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:56.325+0000 I - [conn131] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:56.325+0000 D1 COMMAND [conn131] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578503, 1), signature: { hash: BinData(0, F866E98FA10478836B415F12129881EA7AA32552), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:56.325+0000 D1 - [conn131] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:56.325+0000 W - [conn131] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:56.340+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:56.345+0000 I - [conn131] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s
":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", 
"path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:28:56.345+0000 W COMMAND [conn131] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:28:56.345+0000 I COMMAND [conn131] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578503, 1), signature: { hash: BinData(0, F866E98FA10478836B415F12129881EA7AA32552), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms
2019-09-04T06:28:56.345+0000 D2 NETWORK [conn131] Session from 10.108.2.60:44776 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:28:56.345+0000 I NETWORK [conn131] end connection 10.108.2.60:44776 (85 connections now open)
2019-09-04T06:28:56.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:56.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:56.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:56.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:56.440+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:56.528+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:56.528+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:56.541+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:56.602+0000 D2 COMMAND [conn49] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578535, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 3EACE787785BF2AAA591414ACAAB63F6A354BD8F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578535, 1), t: 1 } }, $db: "config" }
2019-09-04T06:28:56.602+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578535, 1), t: 1 } } }
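The conn131 failure above is the client-visible shape of a stalled majority read: the find on config.shards carried readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } } and maxTimeMS: 30000, and the server waited the full 30 seconds (30027ms) for a majority snapshot satisfying that optime, then failed with MaxTimeMSExpired (errCode:50); note the requested term 92 against the set's current term 1, which no snapshot on this node can satisfy. A minimal pymongo sketch of the same pattern, assuming this host/port and credentials; afterOpTime is internal to cluster nodes and cannot be set by a driver, so a causally consistent session (which sends afterClusterTime) is used here as the nearest client-side analogue:

    # Sketch only: host from this log; credentials and privileges are assumed.
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    shards = client.get_database(
        "config", read_concern=ReadConcern("majority")
    ).get_collection("shards")

    try:
        with client.start_session(causal_consistency=True) as session:
            # max_time_ms mirrors the maxTimeMS: 30000 in the logged command. If no
            # majority-committed snapshot satisfying the session's afterClusterTime
            # becomes available in time, the server answers errName:MaxTimeMSExpired,
            # which pymongo raises as ExecutionTimeout.
            docs = list(shards.find({}, max_time_ms=30000, session=session))
    except ExecutionTimeout as exc:
        print("operation exceeded time limit:", exc)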
level: "majority", afterOpTime: { ts: Timestamp(1567578535, 1), t: 1 } } } 2019-09-04T06:28:56.602+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:28:56.602+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578535, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 3EACE787785BF2AAA591414ACAAB63F6A354BD8F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578535, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578535, 1) 2019-09-04T06:28:56.602+0000 D2 QUERY [conn49] Collection config.settings does not exist. Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:28:56.602+0000 I COMMAND [conn49] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578535, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 3EACE787785BF2AAA591414ACAAB63F6A354BD8F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578535, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:28:56.616+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:56.616+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:56.616+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578535, 1) 2019-09-04T06:28:56.616+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4569 2019-09-04T06:28:56.616+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:56.616+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:56.616+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4569 2019-09-04T06:28:56.617+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4572 2019-09-04T06:28:56.618+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4572 2019-09-04T06:28:56.618+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578535, 1), t: 1 }({ ts: Timestamp(1567578535, 1), t: 1 }) 2019-09-04T06:28:56.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:56.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:28:56.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:56.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:56.641+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:56.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:56.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:56.741+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:56.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:56.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:56.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:56.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 298) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:56.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 298 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:06.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:56.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:28:56.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 299) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:56.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 299 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:06.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:56.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:24.837+0000 2019-09-04T06:28:56.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:24.837+0000 2019-09-04T06:28:56.837+0000 D2 ASIO [Replication] Request 298 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), opTime: { ts: Timestamp(1567578535, 1), t: 1 }, wallTime: new Date(1567578535591), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) } 2019-09-04T06:28:56.837+0000 D3 EXECUTOR [Replication] 
Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), opTime: { ts: Timestamp(1567578535, 1), t: 1 }, wallTime: new Date(1567578535591), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:56.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:56.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 298) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), opTime: { ts: Timestamp(1567578535, 1), t: 1 }, wallTime: new Date(1567578535591), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) } 2019-09-04T06:28:56.837+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:56.837+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:29:06.094+0000 2019-09-04T06:28:56.837+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:29:07.387+0000 2019-09-04T06:28:56.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:56.837+0000 D2 ASIO [Replication] Request 299 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), opTime: { ts: Timestamp(1567578535, 1), t: 1 }, wallTime: new Date(1567578535591), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: 
Timestamp(1567578535, 1) }
2019-09-04T06:28:56.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:56.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:28:58.837Z
2019-09-04T06:28:56.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), opTime: { ts: Timestamp(1567578535, 1), t: 1 }, wallTime: new Date(1567578535591), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:28:56.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:28:56.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:26.837+0000
2019-09-04T06:28:56.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:26.837+0000
2019-09-04T06:28:56.837+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 299) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), opTime: { ts: Timestamp(1567578535, 1), t: 1 }, wallTime: new Date(1567578535591), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) }
2019-09-04T06:28:56.837+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:28:56.837+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:28:58.837Z
2019-09-04T06:28:56.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:26.837+0000
2019-09-04T06:28:56.841+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:56.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:56.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:56.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:56.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
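Most of the traffic in this log is exactly this pair of lines: internal clients and mongos routers re-issue isMaster on each pooled connection roughly every half second to track the primary, alongside the replSetHeartbeat exchange just completed with members 0 and 2. A minimal pymongo sketch, assuming this host/port, of the same probe and the fields those reslen:907 replies carry:

    # Sketch only: host from this log.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

    reply = client.admin.command("isMaster")
    print("ismaster:", reply["ismaster"])     # False here: this node is a secondary (state 2)
    print("secondary:", reply["secondary"])
    print("setName:", reply.get("setName"))   # "configrs" for this config server replica set
    print("primary:", reply.get("primary"))   # cmodb802.togewa.com:27019 per the heartbeats above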
2019-09-04T06:28:56.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:56.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:56.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:56.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:56.941+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:56.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:56.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:57.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:28:57.028+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:28:57.028+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:28:57.041+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:28:57.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 3EACE787785BF2AAA591414ACAAB63F6A354BD8F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:57.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:28:57.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 3EACE787785BF2AAA591414ACAAB63F6A354BD8F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:57.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 3EACE787785BF2AAA591414ACAAB63F6A354BD8F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:28:57.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), opTime: { ts: Timestamp(1567578535, 1), t: 1 }, wallTime: new Date(1567578535591) }
2019-09-04T06:28:57.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, 3EACE787785BF2AAA591414ACAAB63F6A354BD8F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717
locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:57.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:57.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.141+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:57.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:57.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.232+0000 I NETWORK [listener] connection accepted from 10.20.102.80:61164 #160 (86 connections now open) 2019-09-04T06:28:57.232+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:57.233+0000 D2 COMMAND [conn160] run command admin.$cmd { isMaster: 1, client: { application: { name: "robo3t" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } 2019-09-04T06:28:57.233+0000 I NETWORK [conn160] received client metadata from 10.20.102.80:61164 conn160: { application: { name: "robo3t" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } } 2019-09-04T06:28:57.233+0000 I COMMAND [conn160] command admin.$cmd appName: "robo3t" command: isMaster { isMaster: 1, client: { application: { name: "robo3t" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:28:57.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:57.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:57.241+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:57.242+0000 D2 COMMAND [conn160] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:28:57.242+0000 D1 ACCESS [conn160] Returning user dba_root@admin from cache 2019-09-04T06:28:57.242+0000 I COMMAND [conn160] command admin.$cmd appName: "robo3t" command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:394 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.251+0000 D2 COMMAND [conn160] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:28:57.251+0000 I COMMAND [conn160] command admin.$cmd appName: "robo3t" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:323 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.260+0000 D2 COMMAND [conn160] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:28:57.260+0000 D1 ACCESS [conn160] Returning user dba_root@admin from cache 2019-09-04T06:28:57.260+0000 I ACCESS [conn160] Successfully authenticated as principal dba_root on admin from client 10.20.102.80:61164 2019-09-04T06:28:57.260+0000 I COMMAND [conn160] command admin.$cmd appName: "robo3t" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:293 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.269+0000 D2 COMMAND [conn160] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:28:57.269+0000 I COMMAND [conn160] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:57.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.341+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:57.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:57.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:57.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.442+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:57.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:57.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.528+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:57.528+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.542+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:57.568+0000 I COMMAND [conn106] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578501, 1), signature: { hash: BinData(0, 457E83C320927CAB801EC3E140A2B78C5168E291), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:57.568+0000 D1 - [conn106] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:57.568+0000 W - [conn106] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:57.584+0000 I - [conn106] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:57.584+0000 D1 COMMAND [conn106] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578501, 1), signature: { hash: BinData(0, 457E83C320927CAB801EC3E140A2B78C5168E291), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:57.584+0000 D1 - [conn106] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:57.584+0000 W - [conn106] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:57.604+0000 I - [conn106] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"5617
48F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : 
"BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:57.604+0000 W COMMAND [conn106] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:28:57.604+0000 I COMMAND [conn106] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578501, 1), signature: { hash: BinData(0, 457E83C320927CAB801EC3E140A2B78C5168E291), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:28:57.604+0000 D2 NETWORK [conn106] Session from 10.108.2.61:37838 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:28:57.605+0000 I NETWORK [conn106] end connection 10.108.2.61:37838 (85 connections now open) 2019-09-04T06:28:57.617+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:57.617+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:57.617+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578535, 1) 2019-09-04T06:28:57.617+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4600 2019-09-04T06:28:57.617+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:57.617+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:57.617+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4600 2019-09-04T06:28:57.618+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4603 2019-09-04T06:28:57.618+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4603 2019-09-04T06:28:57.618+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578535, 1), t: 1 }({ ts: Timestamp(1567578535, 1), t: 1 }) 2019-09-04T06:28:57.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:57.627+0000 I COMMAND [conn19] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:57.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.642+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:57.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:57.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.742+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:57.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:57.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.842+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:57.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:57.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:57.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:57.942+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:57.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:57.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:58.028+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:58.028+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.042+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:58.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:58.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:58.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.143+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:58.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:58.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { 
clusterTime: Timestamp(1567578537, 1), signature: { hash: BinData(0, 76481168433B8420E944C1E24774737B1718841A), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:58.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:28:58.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578537, 1), signature: { hash: BinData(0, 76481168433B8420E944C1E24774737B1718841A), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:58.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578537, 1), signature: { hash: BinData(0, 76481168433B8420E944C1E24774737B1718841A), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:58.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), opTime: { ts: Timestamp(1567578535, 1), t: 1 }, wallTime: new Date(1567578535591) } 2019-09-04T06:28:58.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578537, 1), signature: { hash: BinData(0, 76481168433B8420E944C1E24774737B1718841A), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:58.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
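
The heartbeat exchange just above shows a replica set that is healthy at term 1, and that is very likely the crux of the conn106 failure earlier: the incoming find on config.shards demanded readConcern majority with afterOpTime { ts: Timestamp(1566459168, 1), t: 92 }, an optime from term 92 that a set now living in term 1 cannot reach, so the wait can only end by exhausting maxTimeMS (30028ms in the command summary). The BEGIN/END BACKTRACE blocks are traces of the MaxTimeMSExpired user assertion, not a crash; the process keeps serving afterwards. That every internal client carries $configServerState at t: 92 suggests the config server replica set was rebuilt while mongos and shard nodes still hold the old config opTime. From a driver, only the user-visible pieces of such a request can be reproduced (afterOpTime and $replData are internal, cluster-only fields); a hedged pymongo sketch against a hypothetical direct connection:

    from pymongo import MongoClient, ReadPreference
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient("cmodb803.togewa.com", 27019)
    shards = client.config.get_collection(
        "shards",
        read_concern=ReadConcern("majority"),
        read_preference=ReadPreference.NEAREST,
    )
    try:
        docs = list(shards.find({}).max_time_ms(30000))  # 30 s server-side budget
    except ExecutionTimeout:
        # Driver-side face of MaxTimeMSExpired (errCode 50), which conn106 hit
        # above after 30028 ms spent waiting on the majority read concern.
        print("operation exceeded time limit")
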
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:58.243+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:58.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:58.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.343+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:58.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:58.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:58.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.443+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:58.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:58.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.528+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:58.528+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.543+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:58.605+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49118 #161 (86 connections now open) 2019-09-04T06:28:58.605+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:58.605+0000 D2 COMMAND [conn161] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:58.605+0000 I NETWORK [conn161] received client metadata from 10.108.2.54:49118 conn161: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:58.605+0000 I COMMAND [conn161] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:58.605+0000 D2 COMMAND [conn161] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: 
{ clusterTime: Timestamp(1567578538, 1), signature: { hash: BinData(0, C6DF695CCD0A4881611104329A2D7ABCFFC191B9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:58.605+0000 D1 REPL [conn161] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578535, 1), t: 1 } 2019-09-04T06:28:58.605+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000 2019-09-04T06:28:58.617+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:58.617+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:58.617+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578535, 1) 2019-09-04T06:28:58.617+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4626 2019-09-04T06:28:58.617+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:58.617+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:58.617+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4626 2019-09-04T06:28:58.618+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4629 2019-09-04T06:28:58.618+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4629 2019-09-04T06:28:58.618+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578535, 1), t: 1 }({ ts: Timestamp(1567578535, 1), t: 1 }) 2019-09-04T06:28:58.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:58.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:58.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.643+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:58.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:58.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.743+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:58.755+0000 I COMMAND [conn121] Command on database config timed out waiting for read concern to be satisfied. 
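
conn161's waitUntilOpTime lines above spell out the server side of that same wait: the requested optime { ts: Timestamp(1566459168, 1), t: 92 } must appear in a majority snapshot, but this node's snapshot is pinned at { ts: Timestamp(1567578535, 1), t: 1 }, so conn161 parks until its 06:29:28 deadline while the ReplBatcher keeps polling an unchanged oplog. The closest user-level handle on this machinery is a causally consistent session: reads carry afterClusterTime and block in much the same way. A sketch under the same hypothetical-connection assumption as before:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    # A causally consistent session attaches afterClusterTime to each read; the
    # server then waits for its majority snapshot to catch up, much like the
    # waitUntilOpTime path logged above (which uses the internal afterOpTime).
    with client.start_session(causal_consistency=True) as session:
        coll = client.config.get_collection("collections")
        doc = coll.find_one({"_id": "config.system.sessions"}, session=session)
        print(doc)

The conn121 timeout announced in the final line above then plays out below exactly as conn106's did, down to the pair of backtraces and a 30031ms command summary.
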
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, EE8BDA2660D95BC27BD45C346F1798888A2141A4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:58.755+0000 D1 - [conn121] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:28:58.755+0000 W - [conn121] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:58.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:58.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.773+0000 I - [conn121] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"5617
48F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:28:58.773+0000 D1 COMMAND [conn121] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, EE8BDA2660D95BC27BD45C346F1798888A2141A4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:58.773+0000 D1 - [conn121] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:28:58.773+0000 W - [conn121] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:28:58.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:58.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.794+0000 I - [conn121] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b"
:"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : 
"E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:28:58.794+0000 W COMMAND [conn121] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:28:58.794+0000 I COMMAND [conn121] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, EE8BDA2660D95BC27BD45C346F1798888A2141A4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms
2019-09-04T06:28:58.794+0000 D2 NETWORK [conn121] Session from 10.108.2.64:46544 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:28:58.794+0000 I NETWORK [conn121] end connection 10.108.2.64:46544 (85 connections now open)
2019-09-04T06:28:58.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:28:58.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:28:58.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 300) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:58.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 300 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:08.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:58.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:26.837+0000
2019-09-04T06:28:58.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 301) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:28:58.837+0000 D3
EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 301 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:08.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:28:58.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:26.837+0000 2019-09-04T06:28:58.837+0000 D2 ASIO [Replication] Request 300 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), opTime: { ts: Timestamp(1567578535, 1), t: 1 }, wallTime: new Date(1567578535591), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578538, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) } 2019-09-04T06:28:58.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), opTime: { ts: Timestamp(1567578535, 1), t: 1 }, wallTime: new Date(1567578535591), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578538, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:28:58.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:28:58.837+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 300) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), opTime: { ts: Timestamp(1567578535, 1), t: 1 }, wallTime: new Date(1567578535591), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578538, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, 
operationTime: Timestamp(1567578535, 1) } 2019-09-04T06:28:58.837+0000 D2 ASIO [Replication] Request 301 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), opTime: { ts: Timestamp(1567578535, 1), t: 1 }, wallTime: new Date(1567578535591), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578538, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) } 2019-09-04T06:28:58.837+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:28:58.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), opTime: { ts: Timestamp(1567578535, 1), t: 1 }, wallTime: new Date(1567578535591), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578538, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:28:58.837+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:29:07.387+0000 2019-09-04T06:28:58.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:28:58.837+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:29:09.184+0000 2019-09-04T06:28:58.837+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:28:58.837+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:00.837Z 2019-09-04T06:28:58.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:28:58.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:28.837+0000 2019-09-04T06:28:58.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 301) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), opTime: { ts: Timestamp(1567578535, 1), t: 1 }, wallTime: new Date(1567578535591), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), 
lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578538, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578535, 1) } 2019-09-04T06:28:58.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:28.837+0000 2019-09-04T06:28:58.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:28:58.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:00.837Z 2019-09-04T06:28:58.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:28.837+0000 2019-09-04T06:28:58.844+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:58.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:58.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:58.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:58.944+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:58.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:58.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:28:59.028+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.028+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.044+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:59.061+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:28:59.061+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:28:58.837+0000 2019-09-04T06:28:59.061+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:28:58.837+0000 2019-09-04T06:28:59.061+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:28:58.837+0000 2019-09-04T06:28:59.061+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:29:08.837+0000 2019-09-04T06:28:59.061+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:28.837+0000 2019-09-04T06:28:59.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578538, 1), signature: { hash: BinData(0, 783301D384019A6E1B0D5EACC972F7FF7B5BFF17), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:59.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:28:59.061+0000 D2 
REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578538, 1), signature: { hash: BinData(0, 783301D384019A6E1B0D5EACC972F7FF7B5BFF17), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:59.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578538, 1), signature: { hash: BinData(0, 783301D384019A6E1B0D5EACC972F7FF7B5BFF17), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:28:59.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), opTime: { ts: Timestamp(1567578535, 1), t: 1 }, wallTime: new Date(1567578535591) } 2019-09-04T06:28:59.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578538, 1), signature: { hash: BinData(0, 783301D384019A6E1B0D5EACC972F7FF7B5BFF17), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.144+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:59.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:28:59.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:28:59.244+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:59.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.344+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:59.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.444+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:59.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.528+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.528+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.545+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:59.617+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:28:59.617+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:28:59.617+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578535, 1) 2019-09-04T06:28:59.617+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4656 2019-09-04T06:28:59.617+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:28:59.617+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:28:59.617+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4656 2019-09-04T06:28:59.618+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4659 2019-09-04T06:28:59.618+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4659 2019-09-04T06:28:59.618+0000 D3 REPL 
[rsSync-0] returning minvalid: { ts: Timestamp(1567578535, 1), t: 1 }({ ts: Timestamp(1567578535, 1), t: 1 }) 2019-09-04T06:28:59.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.645+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:59.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.735+0000 I NETWORK [listener] connection accepted from 10.108.2.49:53318 #162 (86 connections now open) 2019-09-04T06:28:59.735+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:28:59.735+0000 D2 COMMAND [conn162] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:28:59.735+0000 I NETWORK [conn162] received client metadata from 10.108.2.49:53318 conn162: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:28:59.735+0000 I COMMAND [conn162] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:28:59.740+0000 D2 COMMAND [conn162] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 6CBB8EC6AAAB5F918296074E946EB69F170C4DFC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:28:59.740+0000 D1 REPL [conn162] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578535, 1), t: 1 } 2019-09-04T06:28:59.740+0000 D3 REPL 
[conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000 2019-09-04T06:28:59.745+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:59.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.845+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:59.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.945+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:28:59.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:28:59.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:28:59.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:00.004+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:29:00.004+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:29:00.004+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:29:00.018+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:29:00.018+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:29:00.019+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:29:00.019+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:29:00.019+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 
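The SCRAM-SHA-1 exchange logged on conn90 around here (saslStart followed by two saslContinue round trips, ending in "Successfully authenticated as principal dba_root") is the standard driver handshake; the log redacts the payloads as "xxx". A minimal sketch of reproducing it with pymongo, assuming the host and user from the log and a password supplied via the environment (the password and its variable name are placeholders, not anything from this log):

    import os
    from pymongo import MongoClient

    # Only the user (dba_root), mechanism (SCRAM-SHA-1), authSource
    # (admin) and host appear in the log; the password is an assumption,
    # read here from the environment.
    client = MongoClient(
        host="cmodb803.togewa.com",
        port=27019,
        username="dba_root",
        password=os.environ["DBA_ROOT_PASSWORD"],
        authSource="admin",
        authMechanism="SCRAM-SHA-1",
    )
    # The first command forces the saslStart/saslContinue round trips
    # recorded above before the command itself runs.
    print(client.admin.command("ping"))
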
2019-09-04T06:29:00.019+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:29:00.034+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.034+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.045+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:00.045+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:00.046+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:00.048+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:00.048+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:29:00.048+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:29:00.049+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:29:00.049+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:00.049+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:29:00.049+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:29:00.049+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:29:00.049+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:29:00.049+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:29:00.049+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:29:00.049+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:29:00.049+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 
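These D5 QUERY lines show the planner rating the four config.chunks indexes against the predicate { jumbo: true }, producing zero indexed solutions, and (in the entries that follow) falling back to a collection scan for the count. A sketch of confirming that from a client with the explain command, assuming the same host as the log; the COLLSCAN expectation comes from the planner trace here, not from having run this:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)

    # Ask the server to explain the same count the monitoring client
    # issued; with no index covering "jumbo", the winning plan bottoms
    # out in a COLLSCAN stage, matching the planner lines in the log.
    plan = client.config.command(
        "explain",
        {"count": "chunks", "query": {"jumbo": True}},
        verbosity="queryPlanner",
    )
    print(plan["queryPlanner"]["winningPlan"])
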
2019-09-04T06:29:00.049+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:00.049+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:00.049+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578535, 1) 2019-09-04T06:29:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4681 2019-09-04T06:29:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4681 2019-09-04T06:29:00.049+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:00.049+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:00.049+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:29:00.049+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:29:00.049+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:00.049+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:29:00.049+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:29:00.049+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:29:00.049+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578535, 1) 2019-09-04T06:29:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4684 2019-09-04T06:29:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4684 2019-09-04T06:29:00.049+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:00.050+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:00.050+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:29:00.050+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:29:00.050+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578535, 1) 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4686 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4686 2019-09-04T06:29:00.050+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:00.050+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:29:00.050+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. 
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:29:00.050+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:29:00.050+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:00.050+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4689 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4689 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4690 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4690 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4691 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:29:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4691 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4692 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4692 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4693 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4693 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4694 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
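The D3 STORAGE lines in this stretch are the listDatabases command on conn90 walking the durable catalog: for each collection it opens a WiredTiger snapshot, fetches the CCE metadata (namespace, options, index specs, idents), and rolls the transaction back. The same catalog contents are visible through ordinary commands; a sketch, assuming the host from the log:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)

    # listDatabases is the command that triggered the catalog walk above.
    for db in client.admin.command("listDatabases")["databases"]:
        print(db["name"], db["sizeOnDisk"])

    # listCollections surfaces the per-collection options seen in the
    # fetched CCE metadata, e.g. config.changelog's capped size.
    for info in client.config.list_collections():
        print(info["name"], info.get("options", {}))
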
2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4694 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4695 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4695 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4696 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, 
prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4696 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4697 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4697 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4698 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4698 2019-09-04T06:29:00.051+0000 D3 
STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4699 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:00.051+0000 D3 STORAGE 
[conn90] WT rollback_transaction for snapshot id 4699 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4700 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4700 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4701 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:29:00.051+0000 
D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4701 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4702 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4702 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4703 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] returning metadata: 
md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4703 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4704 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4704 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4705 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:29:00.052+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4705 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4706 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4706 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4707 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4707 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] looking up metadata 
for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4708 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4708 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4709 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4709 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4710 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
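The per-collection "looking up metadata ... WT begin_transaction ... fetched CCE metadata ... WT rollback_transaction" cycles above are the catalog resolving each collection inside a short read-only WiredTiger snapshot while this node services the listDatabases command summarized in the entries that follow. The client that issued it is not identifiable from the log; a minimal PyMongo sketch that sends the same command with the same secondaryPreferred read preference might look like this (the connection string is a placeholder — cmodb804.togewa.com:27019 is simply the sync source named elsewhere in this log, and any member would do):

from pymongo import MongoClient

# Placeholder URI; secondaryPreferred lets a secondary such as this node
# serve the command, matching the $readPreference in the summary below.
client = MongoClient(
    "mongodb://cmodb804.togewa.com:27019/?readPreference=secondaryPreferred"
)

# listDatabases always runs against the admin database.
for d in client.admin.command("listDatabases")["databases"]:
    print(d["name"], d["sizeOnDisk"])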
2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4710 2019-09-04T06:29:00.052+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 2ms 2019-09-04T06:29:00.052+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4712 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4712 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4713 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4713 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4714 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4714 2019-09-04T06:29:00.053+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:00.053+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4716 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4716 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4717 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4717 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4718 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4718 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4719 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4719 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4720 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4720 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4721 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4721 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4722 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4722 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4723 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for 
snapshot id 4723 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4724 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4724 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4725 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4725 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4726 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4726 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4727 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4727 2019-09-04T06:29:00.053+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:00.053+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4729 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4729 2019-09-04T06:29:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4730 2019-09-04T06:29:00.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4730 2019-09-04T06:29:00.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4731 2019-09-04T06:29:00.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4731 2019-09-04T06:29:00.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4732 2019-09-04T06:29:00.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4732 2019-09-04T06:29:00.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4733 2019-09-04T06:29:00.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4733 2019-09-04T06:29:00.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 4734 2019-09-04T06:29:00.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 4734 2019-09-04T06:29:00.054+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:00.081+0000 D2 ASIO [RS] Request 297 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578540, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578540059), o: { $v: 1, $set: { ping: new Date(1567578540059) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, 
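The three command summaries above are dbStats against admin, config, and local in turn — the classic per-database sweep a monitoring client performs right after listDatabases; note how each summary's Collection acquireCount (r: 4, r: 13, r: 7) roughly tracks how many collections each database holds. A hedged PyMongo equivalent of that sweep, reusing the placeholder connection from the earlier sketch:

from pymongo import MongoClient

client = MongoClient(
    "mongodb://cmodb804.togewa.com:27019/?readPreference=secondaryPreferred"
)  # same placeholder URI as above

for name in ("admin", "config", "local"):
    stats = client[name].command("dbStats")
    # reslen ~490 in the summaries corresponds to a small result like this.
    print(name, stats["collections"], stats["dataSize"], stats["storageSize"])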
lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpApplied: { ts: Timestamp(1567578540, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 1) } 2019-09-04T06:29:00.081+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578540, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578540059), o: { $v: 1, $set: { ping: new Date(1567578540059) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpApplied: { ts: Timestamp(1567578540, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:00.081+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:00.081+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578540, 1) and ending at ts: Timestamp(1567578540, 1) 2019-09-04T06:29:00.081+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:09.184+0000 2019-09-04T06:29:00.081+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:10.362+0000 2019-09-04T06:29:00.081+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:00.081+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:28.837+0000 2019-09-04T06:29:00.082+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578540, 1), t: 1 } 2019-09-04T06:29:00.082+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:00.082+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:00.082+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578535, 1) 2019-09-04T06:29:00.082+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4737 2019-09-04T06:29:00.082+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: 
"local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:00.082+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:00.082+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4737 2019-09-04T06:29:00.082+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:00.082+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:29:00.082+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:00.082+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578535, 1) 2019-09-04T06:29:00.082+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4740 2019-09-04T06:29:00.082+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578540, 1) } 2019-09-04T06:29:00.082+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:00.082+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:00.082+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4740 2019-09-04T06:29:00.082+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4660 2019-09-04T06:29:00.082+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4660 2019-09-04T06:29:00.082+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4743 2019-09-04T06:29:00.082+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4743 2019-09-04T06:29:00.082+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:00.082+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 4745 2019-09-04T06:29:00.082+0000 D4 STORAGE [repl-writer-worker-7] inserting record with timestamp Timestamp(1567578540, 1) 2019-09-04T06:29:00.082+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578540, 1) 2019-09-04T06:29:00.082+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 4745 2019-09-04T06:29:00.082+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:00.082+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:00.082+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4744 2019-09-04T06:29:00.082+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4744 2019-09-04T06:29:00.082+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4747 2019-09-04T06:29:00.082+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4747 2019-09-04T06:29:00.082+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: 
Timestamp(1567578540, 1), t: 1 }({ ts: Timestamp(1567578540, 1), t: 1 }) 2019-09-04T06:29:00.082+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578540, 1) 2019-09-04T06:29:00.082+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4748 2019-09-04T06:29:00.082+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578540, 1) } } ] } sort: {} projection: {} 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578540, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578540, 1) || First: notFirst: full path: ts 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578540, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578540, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578540, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:00.082+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578540, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:00.082+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4748 2019-09-04T06:29:00.082+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:00.082+0000 D3 STORAGE [repl-writer-worker-11] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:00.083+0000 D3 REPL [repl-writer-worker-11] applying op: { ts: Timestamp(1567578540, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578540059), o: { $v: 1, $set: { ping: new Date(1567578540059) } } }, oplog application mode: Secondary 2019-09-04T06:29:00.083+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578540, 1) 2019-09-04T06:29:00.083+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 4750 2019-09-04T06:29:00.083+0000 D2 QUERY [repl-writer-worker-11] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" } 2019-09-04T06:29:00.083+0000 D4 WRITE [repl-writer-worker-11] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:29:00.083+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 4750 2019-09-04T06:29:00.083+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:00.083+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578540, 1), t: 1 }({ ts: Timestamp(1567578540, 1), t: 1 }) 2019-09-04T06:29:00.083+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578540, 1) 2019-09-04T06:29:00.083+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4749 2019-09-04T06:29:00.083+0000 D5 QUERY [rsSync-0] Beginning planning... 
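The D5 QUERY entries above walk the subplanner over { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578540, 1) } } ] } on local.replset.minvalid: each $or child is planned separately, the only index available is _id_, neither child's predicates (on t and ts) can use it, so every child — and the merged $or plan — falls back to a COLLSCAN, which is harmless here because minvalid holds a single document. The same choice can be reproduced from a client with explain; a sketch under the same placeholder connection as the earlier examples:

from pymongo import MongoClient
from bson.timestamp import Timestamp

client = MongoClient("mongodb://cmodb804.togewa.com:27019")  # placeholder URI

minvalid = client.local["replset.minvalid"]
query = {
    "$or": [
        {"t": {"$lt": 1}},
        {"t": 1, "ts": {"$lt": Timestamp(1567578540, 1)}},
    ]
}

# The $or is handled by the subplanner, so the winning plan is a SUBPLAN
# stage wrapping the per-child collection scans seen in the log.
plan = minvalid.find(query).explain()
print(plan["queryPlanner"]["winningPlan"]["stage"])

# The single bookkeeping document itself, e.g. { _id: ..., ts: Timestamp(...), t: 1 },
# matches the "returning minvalid" entries nearby.
print(minvalid.find_one())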
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:00.083+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:00.083+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:29:00.083+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:00.083+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:00.083+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:00.083+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4749 2019-09-04T06:29:00.083+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578540, 1) 2019-09-04T06:29:00.083+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4753 2019-09-04T06:29:00.083+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4753 2019-09-04T06:29:00.083+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578540, 1), t: 1 }({ ts: Timestamp(1567578540, 1), t: 1 }) 2019-09-04T06:29:00.083+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:00.083+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578540, 1), t: 1 }, appliedWallTime: new Date(1567578540059), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:00.083+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 302 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:30.083+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578540, 1), t: 1 }, appliedWallTime: new Date(1567578540059), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 2, cfgver: 2 } ], 
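Above, [replication-0]'s Reporter forwards replSetUpdatePosition to the upstream node cmodb804.togewa.com:27019 with a durable/applied optime pair for each of the set's three members (memberId 0 through 2). The same per-member optimes are visible to any client through replSetGetStatus; a minimal sketch, again with a placeholder URI:

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb804.togewa.com:27019")  # placeholder URI

status = client.admin.command("replSetGetStatus")
for m in status["members"]:
    # One optime per member, the same values the Reporter sends upstream
    # in the replSetUpdatePosition documents above.
    print(m["_id"], m["stateStr"], m["optime"])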
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:00.083+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.083+0000 2019-09-04T06:29:00.083+0000 D2 ASIO [RS] Request 302 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 1) } 2019-09-04T06:29:00.083+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:00.083+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:00.083+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.083+0000 2019-09-04T06:29:00.084+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578540, 1), t: 1 } 2019-09-04T06:29:00.084+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 303 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:10.084+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578535, 1), t: 1 } } 2019-09-04T06:29:00.084+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.083+0000 2019-09-04T06:29:00.084+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:29:00.084+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:00.084+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 1), t: 1 }, durableWallTime: new Date(1567578540059), appliedOpTime: { ts: Timestamp(1567578540, 1), t: 1 }, appliedWallTime: new 
Date(1567578540059), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:00.084+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 304 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:30.084+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 1), t: 1 }, durableWallTime: new Date(1567578540059), appliedOpTime: { ts: Timestamp(1567578540, 1), t: 1 }, appliedWallTime: new Date(1567578540059), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:00.084+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.083+0000 2019-09-04T06:29:00.085+0000 D2 ASIO [RS] Request 304 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 1) } 2019-09-04T06:29:00.085+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578535, 1), t: 1 }, lastCommittedWall: new Date(1567578535591), lastOpVisible: { ts: Timestamp(1567578535, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578535, 1), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:00.085+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:00.085+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement 
date is 2019-09-04T06:29:30.083+0000 2019-09-04T06:29:00.085+0000 D2 ASIO [RS] Request 303 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 1), t: 1 }, lastCommittedWall: new Date(1567578540059), lastOpVisible: { ts: Timestamp(1567578540, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578540, 1), t: 1 }, lastCommittedWall: new Date(1567578540059), lastOpApplied: { ts: Timestamp(1567578540, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 1), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 1) } 2019-09-04T06:29:00.085+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 1), t: 1 }, lastCommittedWall: new Date(1567578540059), lastOpVisible: { ts: Timestamp(1567578540, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578540, 1), t: 1 }, lastCommittedWall: new Date(1567578540059), lastOpApplied: { ts: Timestamp(1567578540, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 1), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:00.085+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:00.085+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:29:00.085+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.085+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.085+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578535, 1) 2019-09-04T06:29:00.085+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:10.362+0000 2019-09-04T06:29:00.085+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:10.122+0000 2019-09-04T06:29:00.085+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:00.085+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 305 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:10.085+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578540, 1), t: 1 } } 2019-09-04T06:29:00.085+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 
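Requests 297, 303, and 305 in this stretch are the oplog fetcher's awaitData getMore loop against cursor 2779728788818727477 on the sync source's local.oplog.rs, each call carrying the current term and lastKnownCommittedOpTime. An ordinary client can tail the same collection with a tailable cursor; a minimal sketch, where the resume timestamp is a placeholder taken from the batch above:

from pymongo import MongoClient, CursorType
from bson.timestamp import Timestamp

client = MongoClient("mongodb://cmodb804.togewa.com:27019")  # placeholder URI

# Resume after a known optime, as the fetcher does after each batch.
last_ts = Timestamp(1567578540, 1)
cursor = client.local["oplog.rs"].find(
    {"ts": {"$gt": last_ts}},
    cursor_type=CursorType.TAILABLE_AWAIT,  # block awaiting new entries
)
for op in cursor:
    # Each op looks like the batch entries above, e.g.
    # { ts: ..., t: 1, op: "u", ns: "config.lockpings", o2: {...}, o: {...} }
    print(op["ts"], op["op"], op["ns"])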
}, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.085+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000 2019-09-04T06:29:00.085+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.083+0000 2019-09-04T06:29:00.085+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.085+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000 2019-09-04T06:29:00.085+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.085+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000 2019-09-04T06:29:00.085+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.085+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:00.085+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.085+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000 2019-09-04T06:29:00.085+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:28.837+0000 2019-09-04T06:29:00.085+0000 D3 REPL [conn135] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.085+0000 D3 REPL [conn135] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.925+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn134] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn134] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.753+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn132] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn132] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.433+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 
2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn133] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn133] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.702+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn136] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn136] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:02.478+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 
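The long run of "Got notified of new snapshot ... waitUntilOpTime: waiting for a new snapshot until <deadline>" pairs around this point is dozens of parked reads being woken as the committed snapshot advances to { ts: Timestamp(1567578540, 1), t: 1 }. One common source of such waits is a causally consistent session, whose afterClusterTime read concern makes the server hold the read until its snapshot reaches the session's operationTime; a hedged sketch of that pattern, under the same placeholder connection:

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb804.togewa.com:27019")  # placeholder URI

# A causally consistent session attaches afterClusterTime to each read,
# which can produce server-side waitUntilOpTime waits like those above.
with client.start_session(causal_consistency=True) as session:
    coll = client.config["lockpings"]
    coll.find_one({}, session=session)        # establishes operationTime
    print(coll.find_one({}, session=session))  # waits for >= that optime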
2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578540, 1), t: 1 }, 2019-09-04T06:29:00.059+0000 2019-09-04T06:29:00.086+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000 2019-09-04T06:29:00.091+0000 D2 ASIO [RS] Request 305 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578540, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578540060), o: { $v: 1, $set: { ping: new Date(1567578540059) } } }, { ts: Timestamp(1567578540, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578540065), o: { $v: 1, $set: { ping: new Date(1567578540065) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 1), t: 1 }, lastCommittedWall: new Date(1567578540059), lastOpVisible: { ts: Timestamp(1567578540, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578540, 1), t: 1 }, lastCommittedWall: new Date(1567578540059), lastOpApplied: { ts: Timestamp(1567578540, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 1), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } 2019-09-04T06:29:00.091+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578540, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578540060), o: { $v: 1, $set: { ping: new Date(1567578540059) } } }, { ts: Timestamp(1567578540, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578540065), o: { $v: 1, $set: { ping: new Date(1567578540065) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 1), t: 1 }, lastCommittedWall: new Date(1567578540059), lastOpVisible: { ts: Timestamp(1567578540, 1), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578540, 1), t: 1 }, lastCommittedWall: new Date(1567578540059), lastOpApplied: { ts: Timestamp(1567578540, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 1), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:00.091+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:00.091+0000 D2 REPL [replication-1] oplog fetcher read 2 operations from remote oplog starting at ts: Timestamp(1567578540, 2) and ending at ts: Timestamp(1567578540, 3) 2019-09-04T06:29:00.091+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:10.122+0000 2019-09-04T06:29:00.091+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:10.739+0000 2019-09-04T06:29:00.091+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:00.091+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:28.837+0000 2019-09-04T06:29:00.091+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578540, 3), t: 1 } 2019-09-04T06:29:00.091+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:00.091+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:00.091+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578540, 1) 2019-09-04T06:29:00.091+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4757 2019-09-04T06:29:00.092+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:00.092+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:00.092+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4757 2019-09-04T06:29:00.092+0000 D2 REPL [rsSync-0] replication batch size is 2 2019-09-04T06:29:00.092+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:00.092+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578540, 2) } 2019-09-04T06:29:00.092+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:00.092+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578540, 1) 2019-09-04T06:29:00.092+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4760 2019-09-04T06:29:00.092+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 
1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:00.092+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:00.092+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4754 2019-09-04T06:29:00.092+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4760 2019-09-04T06:29:00.092+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4754 2019-09-04T06:29:00.092+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4763 2019-09-04T06:29:00.092+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4763 2019-09-04T06:29:00.092+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:00.092+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 4765 2019-09-04T06:29:00.092+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578540, 2) 2019-09-04T06:29:00.092+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578540, 2) 2019-09-04T06:29:00.092+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578540, 3) 2019-09-04T06:29:00.092+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578540, 3) 2019-09-04T06:29:00.092+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 4765 2019-09-04T06:29:00.092+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:00.092+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:00.092+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4764 2019-09-04T06:29:00.092+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4764 2019-09-04T06:29:00.092+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4767 2019-09-04T06:29:00.092+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4767 2019-09-04T06:29:00.092+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578540, 3), t: 1 }({ ts: Timestamp(1567578540, 3), t: 1 }) 2019-09-04T06:29:00.092+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578540, 3) 2019-09-04T06:29:00.092+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4768 2019-09-04T06:29:00.092+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578540, 3) } } ] } sort: {} projection: {} 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578540, 3) Sort: {} Proj: {} ============================= 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578540, 3) || First: notFirst: full path: ts 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578540, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578540, 3) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578540, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
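
Note on the subplanned predicate above: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578540, 3) } } ] } is just a lexicographic "optime less than (t: 1, ts: Timestamp(1567578540, 3))" test over local.replset.minvalid, either an earlier term, or the same term with an earlier timestamp. The planner falls back to a COLLSCAN each time because the collection carries only the default _id index. A minimal sketch of the ordering the filter encodes (OpTime and the tuple form of Timestamp below are illustrative, not mongod's actual types):

    # Sketch: an optime sorts by term first, then by timestamp, so
    # "optime < X" means "earlier term, OR same term and earlier timestamp".
    from typing import NamedTuple

    class OpTime(NamedTuple):
        term: int    # the "t" field
        ts: tuple    # the "ts" field as (seconds, increment)

    def optime_lt(a: OpTime, b: OpTime) -> bool:
        # Equivalent to { $or: [ { t: { $lt: b.term } },
        #                        { t: b.term, ts: { $lt: b.ts } } ] }
        return a.term < b.term or (a.term == b.term and a.ts < b.ts)

    assert optime_lt(OpTime(1, (1567578540, 1)), OpTime(1, (1567578540, 3)))
    assert not optime_lt(OpTime(1, (1567578540, 3)), OpTime(1, (1567578540, 3)))
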
2019-09-04T06:29:00.092+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578540, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:00.092+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4768 2019-09-04T06:29:00.092+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:00.092+0000 D3 STORAGE [repl-writer-worker-3] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:00.092+0000 D3 REPL [repl-writer-worker-3] applying op: { ts: Timestamp(1567578540, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578540065), o: { $v: 1, $set: { ping: new Date(1567578540065) } } }, oplog application mode: Secondary 2019-09-04T06:29:00.092+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578540, 3) 2019-09-04T06:29:00.092+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 4770 2019-09-04T06:29:00.092+0000 D2 QUERY [repl-writer-worker-3] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" } 2019-09-04T06:29:00.092+0000 D4 WRITE [repl-writer-worker-3] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:29:00.092+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 4770 2019-09-04T06:29:00.092+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:00.092+0000 D3 STORAGE [repl-writer-worker-3] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:00.092+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:00.093+0000 D3 REPL [repl-writer-worker-3] applying op: { ts: Timestamp(1567578540, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578540060), o: { $v: 1, $set: { ping: new Date(1567578540059) } } }, oplog application mode: Secondary 2019-09-04T06:29:00.093+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578540, 2) 2019-09-04T06:29:00.093+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 4772 2019-09-04T06:29:00.093+0000 D2 QUERY [repl-writer-worker-3] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" } 2019-09-04T06:29:00.093+0000 D4 WRITE [repl-writer-worker-3] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:29:00.093+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 4772 2019-09-04T06:29:00.093+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:00.093+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578540, 3), t: 1 }({ ts: Timestamp(1567578540, 3), t: 1 }) 2019-09-04T06:29:00.093+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578540, 3) 2019-09-04T06:29:00.093+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot 
id 4769 2019-09-04T06:29:00.093+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:00.093+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:00.093+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:29:00.093+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:00.093+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:00.093+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:00.093+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4769 2019-09-04T06:29:00.093+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578540, 3) 2019-09-04T06:29:00.093+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4775 2019-09-04T06:29:00.093+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4775 2019-09-04T06:29:00.093+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578540, 3), t: 1 }({ ts: Timestamp(1567578540, 3), t: 1 }) 2019-09-04T06:29:00.093+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:00.093+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 1), t: 1 }, durableWallTime: new Date(1567578540059), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 1), t: 1 }, lastCommittedWall: new Date(1567578540059), lastOpVisible: { ts: Timestamp(1567578540, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:00.093+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 306 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:30.093+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 1), t: 1 }, durableWallTime: new Date(1567578540059), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), 
t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 1), t: 1 }, lastCommittedWall: new Date(1567578540059), lastOpVisible: { ts: Timestamp(1567578540, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:00.093+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.093+0000 2019-09-04T06:29:00.093+0000 D2 ASIO [RS] Request 306 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 1), t: 1 }, lastCommittedWall: new Date(1567578540059), lastOpVisible: { ts: Timestamp(1567578540, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 1), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } 2019-09-04T06:29:00.093+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 1), t: 1 }, lastCommittedWall: new Date(1567578540059), lastOpVisible: { ts: Timestamp(1567578540, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 1), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:00.093+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:00.093+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.093+0000 2019-09-04T06:29:00.093+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578540, 3), t: 1 } 2019-09-04T06:29:00.094+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 307 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:10.094+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578540, 1), t: 1 } } 2019-09-04T06:29:00.094+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.093+0000 2019-09-04T06:29:00.095+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:29:00.095+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:00.095+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), 
appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 1), t: 1 }, lastCommittedWall: new Date(1567578540059), lastOpVisible: { ts: Timestamp(1567578540, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:00.095+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 308 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:30.095+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, durableWallTime: new Date(1567578535591), appliedOpTime: { ts: Timestamp(1567578535, 1), t: 1 }, appliedWallTime: new Date(1567578535591), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 1), t: 1 }, lastCommittedWall: new Date(1567578540059), lastOpVisible: { ts: Timestamp(1567578540, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:00.095+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.093+0000 2019-09-04T06:29:00.096+0000 D2 ASIO [RS] Request 308 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } 2019-09-04T06:29:00.096+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:00.096+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 
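
The replSetUpdatePosition payload in request 308 reports, per member, the durable and applied positions this secondary currently knows about. Comparing the appliedWallTime values gives a rough reading of how stale this node's view of each member is at the moment of the report (the other members may simply not have reported recently). A sketch, with the values copied from request 308 above:

    from datetime import datetime, timezone

    # appliedWallTime values copied from the optimes array of request 308.
    optimes = [
        {"memberId": 0, "appliedWallTime": 1567578535591},
        {"memberId": 1, "appliedWallTime": 1567578540065},
        {"memberId": 2, "appliedWallTime": 1567578535591},
    ]

    newest = max(o["appliedWallTime"] for o in optimes)
    for o in optimes:
        wall = datetime.fromtimestamp(o["appliedWallTime"] / 1000, tz=timezone.utc)
        lag = (newest - o["appliedWallTime"]) / 1000
        print(f"member {o['memberId']}: applied {wall:%H:%M:%S.%f} (~{lag:.3f}s behind)")
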
2019-09-04T06:29:00.096+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.093+0000 2019-09-04T06:29:00.096+0000 D2 ASIO [RS] Request 307 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpApplied: { ts: Timestamp(1567578540, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } 2019-09-04T06:29:00.096+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpApplied: { ts: Timestamp(1567578540, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:00.096+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:00.096+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:29:00.096+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578535, 3) 2019-09-04T06:29:00.096+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:10.739+0000 2019-09-04T06:29:00.096+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:10.150+0000 2019-09-04T06:29:00.096+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:00.096+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 309 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:10.096+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578540, 3), t: 1 } } 
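
Requests 305 through 309 above show the steady-state oplog tailing pattern: each awaitData getMore on local.oplog.rs carries the current term and lastKnownCommittedOpTime and waits up to maxTimeMS: 5000 for new entries (request 307 comes back with an empty nextBatch). Round-trip times can be pulled out of a log like this one by pairing each scheduling line with its completion line, for example:

    import re
    from datetime import datetime

    # Pair "Scheduling remote command request: RemoteCommand <n>" with
    # "Request <n> finished" to measure round trips; assumes the 4.2-style
    # "<ISO timestamp>+0000 <severity> <component> ..." line layout, and
    # "mongod.log" is a placeholder path.
    TS = r"(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3})\+0000"
    sched = re.compile(TS + r".*Scheduling remote command request: RemoteCommand (\d+)")
    done = re.compile(TS + r".*Request (\d+) finished")

    def ts(s):
        return datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%f")

    pending = {}
    with open("mongod.log") as f:
        for line in f:
            if m := sched.search(line):
                pending[m.group(2)] = ts(m.group(1))
            elif (m := done.search(line)) and m.group(2) in pending:
                rtt = ts(m.group(1)) - pending.pop(m.group(2))
                print(f"request {m.group(2)}: {rtt.total_seconds() * 1000:.1f} ms")
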
2019-09-04T06:29:00.096+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:00.096+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.093+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn135] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn135] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.925+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn132] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn132] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.433+0000 2019-09-04T06:29:00.096+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:28.837+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn136] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn136] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:02.478+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 
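
The "Canceling election timeout callback ... / Scheduling election timeout callback ..." pairs a few entries above fire on every batch heard from the sync source: each one pushes the election deadline roughly electionTimeoutMillis (10000 ms by default, settings.electionTimeoutMillis) past now, plus a randomized offset so the members do not time out in lockstep. A sketch of that bookkeeping, with an illustrative jitter fraction rather than mongod's exact formula:

    import random
    from datetime import datetime, timedelta

    # 10000 ms is the replica-set default; the jitter fraction below is
    # illustrative, not mongod's exact computation.
    ELECTION_TIMEOUT_MS = 10_000

    def next_election_deadline(now, jitter_fraction=0.15):
        jitter_ms = random.uniform(0, jitter_fraction * ELECTION_TIMEOUT_MS)
        return now + timedelta(milliseconds=ELECTION_TIMEOUT_MS + jitter_ms)

    now = datetime.fromisoformat("2019-09-04T06:29:00.096")
    # The reschedules observed above land at 06:29:10.150 and 06:29:10.739,
    # i.e. within now + 10s .. now + ~11.5s, consistent with this shape.
    print(next_election_deadline(now))
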
2019-09-04T06:29:00.096+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn134] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn134] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.753+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.096+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000 2019-09-04T06:29:00.097+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.097+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000 
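
Each waiter above logs its own deadline ("waiting for a new snapshot until ..."). Collecting those deadlines shows conn132 holding the earliest one, 06:29:00.433, and that is exactly the connection and instant at which the MaxTimeMSExpired assertion fires further down: its find on config.shards is waiting for afterOpTime { ts: Timestamp(1566459168, 1), t: 92 }, a term-92 optime that a replica set currently in term 1 can seemingly never reach, so the majority-read wait can only end by timeout. A sketch for pulling the deadlines out of the log:

    import re

    # "waiting for a new snapshot until <deadline>" lines, keyed by
    # connection; "mongod.log" is a placeholder path.
    pat = re.compile(r"\[(conn\d+)\] waitUntilOpTime: waiting for a new "
                     r"snapshot until (\S+)")

    deadlines = {}
    with open("mongod.log") as f:
        for line in f:
            if m := pat.search(line):
                deadlines[m.group(1)] = m.group(2)   # keep latest per conn

    # ISO-8601 strings with a fixed offset sort chronologically as text.
    for conn, dl in sorted(deadlines.items(), key=lambda kv: kv[1])[:5]:
        print(conn, dl)
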
2019-09-04T06:29:00.097+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.097+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000 2019-09-04T06:29:00.097+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.097+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000 2019-09-04T06:29:00.097+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.097+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000 2019-09-04T06:29:00.097+0000 D3 REPL [conn133] Got notified of new snapshot: { ts: Timestamp(1567578540, 3), t: 1 }, 2019-09-04T06:29:00.065+0000 2019-09-04T06:29:00.097+0000 D3 REPL [conn133] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:00.702+0000 2019-09-04T06:29:00.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.146+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:00.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.182+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578540, 3) 2019-09-04T06:29:00.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 7812434F444542B1441E1B3D8D9EE677E158ED76), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:00.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:00.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 7812434F444542B1441E1B3D8D9EE677E158ED76), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:00.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 
7812434F444542B1441E1B3D8D9EE677E158ED76), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:00.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), opTime: { ts: Timestamp(1567578540, 3), t: 1 }, wallTime: new Date(1567578540065) } 2019-09-04T06:29:00.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 7812434F444542B1441E1B3D8D9EE677E158ED76), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:00.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:00.246+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:00.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.346+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:00.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.428+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.428+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.433+0000 I COMMAND [conn132] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578510, 1), signature: { hash: BinData(0, 2F12003156507F6FFDC6E8CA92EB3C8A43793298), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:00.433+0000 D1 - [conn132] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:00.433+0000 W - [conn132] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:00.446+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:00.451+0000 I - [conn132] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", 
"gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { 
"b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:00.452+0000 D1 COMMAND [conn132] assertion while executing command 'find' on database 
'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578510, 1), signature: { hash: BinData(0, 2F12003156507F6FFDC6E8CA92EB3C8A43793298), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:00.452+0000 D1 - [conn132] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:00.452+0000 W - [conn132] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:00.473+0000 I - [conn132] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3",
"s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", 
"path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:00.473+0000 W COMMAND [conn132] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:29:00.473+0000 I COMMAND [conn132] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578510, 1), signature: { hash: BinData(0, 2F12003156507F6FFDC6E8CA92EB3C8A43793298), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:29:00.473+0000 D2 NETWORK [conn132] Session from 10.108.2.48:42016 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:00.473+0000 I NETWORK [conn132] end connection 10.108.2.48:42016 (85 connections now open) 2019-09-04T06:29:00.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.532+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.532+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.546+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:00.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1,
$db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.646+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:00.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.702+0000 I COMMAND [conn133] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, EE8BDA2660D95BC27BD45C346F1798888A2141A4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:29:00.702+0000 D1 - [conn133] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:00.702+0000 W - [conn133] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:00.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.722+0000 I - [conn133] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:00.722+0000 D1 COMMAND [conn133] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, EE8BDA2660D95BC27BD45C346F1798888A2141A4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:00.722+0000 D1 - [conn133] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:00.722+0000 W - [conn133] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:00.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47116 #163 (86 connections now open) 2019-09-04T06:29:00.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:00.743+0000 D2 COMMAND [conn163] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 
7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:00.743+0000 I NETWORK [conn163] received client metadata from 10.108.2.52:47116 conn163: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:00.743+0000 I COMMAND [conn163] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:00.745+0000 I - [conn133] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
[duplicate backtrace omitted; identical to the conn132 backtrace above]
----- END BACKTRACE ----- 2019-09-04T06:29:00.745+0000 W COMMAND [conn133] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:29:00.745+0000 I COMMAND [conn133] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578505, 1), signature: { hash: BinData(0, EE8BDA2660D95BC27BD45C346F1798888A2141A4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:29:00.745+0000 D2 NETWORK [conn133] Session from 10.108.2.74:51698 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:00.745+0000 I NETWORK [conn133] end connection 10.108.2.74:51698 (85 connections now open) 2019-09-04T06:29:00.746+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:00.754+0000 I COMMAND [conn134] Command on database config timed out waiting for read concern to be satisfied.
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578502, 1), signature: { hash: BinData(0, 7B6594811839B8D4C40313150387F0AED9621701), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:00.754+0000 D1 - [conn134] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:00.754+0000 W - [conn134] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:00.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.772+0000 I - [conn134] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"5617
48F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:00.772+0000 D1 COMMAND [conn134] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578502, 1), signature: { hash: BinData(0, 7B6594811839B8D4C40313150387F0AED9621701), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:00.772+0000 D1 - [conn134] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:00.772+0000 W - [conn134] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:00.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.793+0000 I - [conn134] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b"
:"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : 
"E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:29:00.793+0000 W COMMAND [conn134] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:29:00.793+0000 I COMMAND [conn134] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578502, 1), signature: { hash: BinData(0, 7B6594811839B8D4C40313150387F0AED9621701), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms
2019-09-04T06:29:00.793+0000 D2 NETWORK [conn134] Session from 10.108.2.52:47094 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:29:00.793+0000 I NETWORK [conn134] end connection 10.108.2.52:47094 (84 connections now open)
2019-09-04T06:29:00.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:00.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:29:00.837+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 310) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:00.837+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 310 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:10.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:00.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:28.837+0000
2019-09-04T06:29:00.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 311) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:00.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 311 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:10.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:00.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:28.837+0000
2019-09-04T06:29:00.837+0000 D2 ASIO [Replication] Request 310 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), opTime: { ts: Timestamp(1567578540, 3), t: 1 }, wallTime: new Date(1567578540065), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) }
2019-09-04T06:29:00.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), opTime: { ts: Timestamp(1567578540, 3), t: 1 }, wallTime: new Date(1567578540065), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } target: cmodb802.togewa.com:27019
2019-09-04T06:29:00.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:29:00.837+0000 D2 ASIO [Replication] Request 311 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), opTime: { ts: Timestamp(1567578540, 3), t: 1 }, wallTime: new Date(1567578540065), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) }
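The two COMMAND entries for conn134 above capture the failure pattern that recurs throughout this log: a find on config.shards carrying readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } } runs for 30028ms and fails with MaxTimeMSExpired (errCode:50), after which the router hangs up and the connection ends. Meanwhile the replSetHeartbeat round-trips (requests 310 and 311) complete instantly and show the set healthy in term 1 with lastOpCommitted { ts: Timestamp(1567578540, 3), t: 1 }. The afterOpTime the client waits on carries term 92 and a timestamp from roughly two weeks earlier, which suggests the router is holding a config optime from an earlier incarnation of this config server replica set; since replication optimes compare term first, a commit point in term 1 never satisfies a wait for a term-92 optime, so the read-concern wait can only end when maxTimeMS expires. (The BEGIN/END BACKTRACE blocks that follow each "DBException thrown :: caused by :: MaxTimeMSExpired" line are ordinary exception traces for that same status; each mongod(...) frame is a mangled C++ symbol that c++filt can decode, e.g. _ZN5mongo15printStackTraceERSo is mongo::printStackTrace(std::ostream&).) A rough way to see the mismatch is sketched below; this is a hypothetical shell session, assuming direct access to cmodb803.togewa.com:27019, not something taken from this log:

    // Current optime/term of every member, to compare against the
    // afterOpTime { ts: Timestamp(1566459168, 1), t: 92 } in the failing find.
    var st = db.adminCommand({ replSetGetStatus: 1 });
    st.members.forEach(function (m) {
        print(m.name + " state=" + m.stateStr + " optime=" + tojson(m.optime));
    });

    // The same query the router issued, minus the internal $replData /
    // $configServerState fields (those are added by mongos and are not
    // reproducible from a plain shell). Without the stale afterOpTime this
    // should return in milliseconds rather than timing out.
    db.getSiblingDB("config").runCommand({
        find: "shards",
        readConcern: { level: "majority" },
        maxTimeMS: 30000
    });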
2019-09-04T06:29:00.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), opTime: { ts: Timestamp(1567578540, 3), t: 1 }, wallTime: new Date(1567578540065), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:00.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:00.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 310) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), opTime: { ts: Timestamp(1567578540, 3), t: 1 }, wallTime: new Date(1567578540065), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } 2019-09-04T06:29:00.837+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:00.837+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:29:10.150+0000 2019-09-04T06:29:00.837+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:29:11.216+0000 2019-09-04T06:29:00.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:00.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:02.837Z 2019-09-04T06:29:00.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:00.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:00.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:00.837+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 311) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), opTime: { ts: Timestamp(1567578540, 3), t: 1 }, wallTime: new 
Date(1567578540065), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } 2019-09-04T06:29:00.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:00.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:02.838Z 2019-09-04T06:29:00.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:00.847+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:00.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.926+0000 I COMMAND [conn135] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578502, 1), signature: { hash: BinData(0, 7B6594811839B8D4C40313150387F0AED9621701), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:29:00.926+0000 D1 - [conn135] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:00.926+0000 W - [conn135] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:00.928+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.928+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.944+0000 I - [conn135] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:00.944+0000 D1 COMMAND [conn135] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578502, 1), signature: { hash: BinData(0, 7B6594811839B8D4C40313150387F0AED9621701), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:00.944+0000 D1 - [conn135] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:00.944+0000 W - [conn135] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:00.947+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:00.965+0000 I - [conn135] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 
0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", 
"elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : 
"2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:00.965+0000 W COMMAND [conn135] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:00.965+0000 I COMMAND [conn135] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578502, 1), signature: { hash: BinData(0, 7B6594811839B8D4C40313150387F0AED9621701), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:29:00.965+0000 D2 NETWORK [conn135] Session from 10.108.2.73:52064 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:00.965+0000 I NETWORK [conn135] end connection 10.108.2.73:52064 (83 connections now open) 2019-09-04T06:29:00.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:00.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:00.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:01.032+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:01.032+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.047+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:01.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 7812434F444542B1441E1B3D8D9EE677E158ED76), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:01.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:01.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 7812434F444542B1441E1B3D8D9EE677E158ED76), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:01.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 7812434F444542B1441E1B3D8D9EE677E158ED76), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:01.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { 
ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), opTime: { ts: Timestamp(1567578540, 3), t: 1 }, wallTime: new Date(1567578540065) } 2019-09-04T06:29:01.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 7812434F444542B1441E1B3D8D9EE677E158ED76), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.092+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:01.092+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:01.092+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578540, 3) 2019-09-04T06:29:01.092+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4807 2019-09-04T06:29:01.092+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:01.092+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:01.092+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4807 2019-09-04T06:29:01.093+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4810 2019-09-04T06:29:01.093+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4810 2019-09-04T06:29:01.093+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578540, 3), t: 1 }({ ts: Timestamp(1567578540, 3), t: 1 }) 2019-09-04T06:29:01.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:01.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:01.138+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.147+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:01.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:01.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:01.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:01.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:01.247+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:01.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:01.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:01.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.347+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:01.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:01.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.447+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:01.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:01.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:01.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:01.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.532+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:01.532+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.548+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:01.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:01.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:01.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.648+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:01.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:01.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:01.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:01.748+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:01.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 
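Between the failures, the log is dominated by paired "run command" / "command admin.$cmd command: isMaster" entries: each monitoring connection (conn5, conn6, conn14, conn15, conn17, conn18, conn19, conn23, conn25, conn26, conn29) repeats its isMaster poll at a steady ~500 ms cadence and gets its reslen:907 answer in 0ms. The node itself is therefore responsive; only the majority-read waits are stuck. What those pollers receive can be checked by hand with the same command; a minimal sketch of a hypothetical shell session on this node:

    // The same topology probe the conn5/conn6/... clients send every ~500 ms.
    // On a healthy secondary this returns immediately, with ismaster: false
    // and the current primary's address.
    var hello = db.adminCommand({ isMaster: 1 });
    printjson({ ismaster: hello.ismaster, secondary: hello.secondary,
                primary: hello.primary, setName: hello.setName });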
2019-09-04T06:29:01.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:01.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:01.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:01.848+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:01.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:01.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:01.948+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:01.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:01.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:01.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:01.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:01.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:01.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:02.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:02.032+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:02.032+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:02.048+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:02.092+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:02.092+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:02.092+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578540, 3)
2019-09-04T06:29:02.092+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4836
2019-09-04T06:29:02.092+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:02.092+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:02.092+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4836
2019-09-04T06:29:02.093+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4839
2019-09-04T06:29:02.093+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4839
2019-09-04T06:29:02.093+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578540, 3), t: 1 }({ ts: Timestamp(1567578540, 3), t: 1 })
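The ReplBatcher/rsSync-0 group above repeats once a second (compare the identical group at 06:29:01): the oplog batcher opens a WiredTiger snapshot on local.oplog.rs (a 1 GB capped collection, per the CCE metadata), finds nothing new to apply, rolls the transaction back, and rsSync-0 reports minvalid equal to the last applied optime { ts: Timestamp(1567578540, 3), t: 1 }. Together with the heartbeat responses, this indicates an idle, fully caught-up secondary rather than replication lag. The applied position can be read off the oplog directly; a hypothetical check in the shell, assuming access to this node:

    // Newest entry in the oplog; its ts/t should match the
    // "returning minvalid" optime logged by rsSync-0 above.
    db.getSiblingDB("local").oplog.rs
      .find({}, { ts: 1, t: 1, op: 1, ns: 1 })
      .sort({ $natural: -1 })
      .limit(1)
      .forEach(printjson);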
2019-09-04T06:29:02.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.149+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:02.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 7812434F444542B1441E1B3D8D9EE677E158ED76), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:02.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:02.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 7812434F444542B1441E1B3D8D9EE677E158ED76), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:02.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 7812434F444542B1441E1B3D8D9EE677E158ED76), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:02.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), opTime: { ts: Timestamp(1567578540, 3), t: 1 }, wallTime: new Date(1567578540065) } 2019-09-04T06:29:02.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 7812434F444542B1441E1B3D8D9EE677E158ED76), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:02.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:02.249+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:02.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.349+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:02.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.449+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:02.467+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48282 #164 (84 connections now open) 2019-09-04T06:29:02.467+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:02.467+0000 D2 COMMAND [conn164] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:02.467+0000 I NETWORK [conn164] received client metadata from 10.108.2.59:48282 conn164: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:02.467+0000 I COMMAND [conn164] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:02.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.481+0000 I COMMAND [conn136] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, F56EFECF966613B7CF9F4E79C9426BAFAB58CE38), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:29:02.481+0000 D1 - [conn136] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:02.481+0000 W - [conn136] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:02.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.498+0000 I - [conn136] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3",
"s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", 
"path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:02.498+0000 D1 COMMAND [conn136] assertion while executing command 'find' on database 'config' with arguments '{ find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, F56EFECF966613B7CF9F4E79C9426BAFAB58CE38), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:02.498+0000 D1 - [conn136] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:02.498+0000 W - [conn136] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:02.518+0000 I - [conn136] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:02.518+0000 W COMMAND [conn136] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:02.518+0000 I COMMAND [conn136] command config.$cmd command: find { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 
30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, F56EFECF966613B7CF9F4E79C9426BAFAB58CE38), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:29:02.518+0000 D2 NETWORK [conn136] Session from 10.108.2.59:48264 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:02.519+0000 I NETWORK [conn136] end connection 10.108.2.59:48264 (83 connections now open) 2019-09-04T06:29:02.532+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.532+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.549+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:02.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.649+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:02.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.750+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:02.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:02.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 312) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:02.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 312 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:12.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:02.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the 
earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:02.837+0000 D2 ASIO [Replication] Request 312 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), opTime: { ts: Timestamp(1567578540, 3), t: 1 }, wallTime: new Date(1567578540065), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } 2019-09-04T06:29:02.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), opTime: { ts: Timestamp(1567578540, 3), t: 1 }, wallTime: new Date(1567578540065), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:02.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:02.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 312) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), opTime: { ts: Timestamp(1567578540, 3), t: 1 }, wallTime: new Date(1567578540065), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } 2019-09-04T06:29:02.837+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:02.837+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:29:11.216+0000 2019-09-04T06:29:02.837+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback 
at 2019-09-04T06:29:13.518+0000 2019-09-04T06:29:02.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:02.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:02.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:04.837Z 2019-09-04T06:29:02.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:02.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:02.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:02.838+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 313) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:02.838+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 313 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:12.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:02.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:02.838+0000 D2 ASIO [Replication] Request 313 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), opTime: { ts: Timestamp(1567578540, 3), t: 1 }, wallTime: new Date(1567578540065), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } 2019-09-04T06:29:02.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), opTime: { ts: Timestamp(1567578540, 3), t: 1 }, wallTime: new Date(1567578540065), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:02.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 
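Note on the conn136 MaxTimeMSExpired failure above: the find asked for readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, but every heartbeat and snapshot in this log is in term 1. Replication optimes compare term first, so a term-92 afterOpTime can never fall inside a term-1 majority snapshot; waitForReadConcern (visible in the first DBException::traceIfNeeded stack) blocks for the full maxTimeMS and the command fails after 30029ms, and the second stack is only the follow-on GlobalLock timeout hit in CurOp::completeAndLogOperation while logging the slow operation. The stale term suggests the client (a router or shard node, judging by the $configServerState it sends) still holds a config optime from an earlier incarnation of this config server replica set; conn144 below starts the same wait at 06:29:03.247 against another term-92 optime. A minimal way to separate the optime wait from the query itself, assuming shell access to cmodb803.togewa.com:27019, is to re-issue the find without the afterOpTime (a sketch, not taken from this log):

    // Sketch: the conn136 find minus the stale afterOpTime. If this returns
    // promptly, the 30 s above was spent waiting for the term-92 optime,
    // not running the query itself.
    rs.slaveOk();  // permit reads on a secondary (4.2 shell helper)
    db.getSiblingDB("config").collections
      .find({ _id: "config.system.sessions" })
      .limit(1)
      .maxTimeMS(30000)
      .readConcern("majority");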
2019-09-04T06:29:02.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 313) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), opTime: { ts: Timestamp(1567578540, 3), t: 1 }, wallTime: new Date(1567578540065), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578540, 3) } 2019-09-04T06:29:02.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:02.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:04.838Z 2019-09-04T06:29:02.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:02.850+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:02.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.950+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:02.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:02.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:02.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:03.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:03.032+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:03.032+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:03.050+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:03.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 7812434F444542B1441E1B3D8D9EE677E158ED76), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:03.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 
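The surrounding REPL_HB traffic is steady state: this node (fromId: 1) heartbeats cmodb802 (primary, state: 1) and cmodb804 (secondary, state: 2) on a two-second cadence (request 312 sent at 06:29:02.837, the next one scheduled for 06:29:04.837Z), and each response from the primary pushes the election timeout out again (callback at 06:29:11.216 cancelled, a new one scheduled for 06:29:13.518, i.e. roughly 10 s plus the randomized offset mongod adds). Both intervals come from the replica set settings; a quick way to confirm them, assuming shell access (the values shown are the defaults mongod fills in, consistent with the spacing in this log):

    // Sketch: the settings behind the 2 s heartbeats and ~10 s election
    // timeout observed above.
    var cfg = rs.conf();
    printjson({
        heartbeatIntervalMillis: cfg.settings.heartbeatIntervalMillis, // default 2000
        electionTimeoutMillis: cfg.settings.electionTimeoutMillis      // default 10000
    });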
2019-09-04T06:29:03.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 7812434F444542B1441E1B3D8D9EE677E158ED76), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:03.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 7812434F444542B1441E1B3D8D9EE677E158ED76), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:03.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), opTime: { ts: Timestamp(1567578540, 3), t: 1 }, wallTime: new Date(1567578540065) } 2019-09-04T06:29:03.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578540, 3), signature: { hash: BinData(0, 7812434F444542B1441E1B3D8D9EE677E158ED76), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:03.092+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:03.092+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:03.093+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578540, 3) 2019-09-04T06:29:03.093+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4868 2019-09-04T06:29:03.093+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:03.093+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:03.093+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4868 2019-09-04T06:29:03.093+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4871 2019-09-04T06:29:03.093+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4871 2019-09-04T06:29:03.094+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578540, 3), t: 1 }({ ts: Timestamp(1567578540, 3), t: 1 }) 2019-09-04T06:29:03.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:03.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:03.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:03.137+0000 I COMMAND [conn18] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:03.150+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:03.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:03.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:03.204+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:03.204+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:03.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:03.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:03.247+0000 D2 COMMAND [conn144] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578543, 1), signature: { hash: BinData(0, 39254E2B47ABC88D4706F0088450D3CFF007B444), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:03.247+0000 D1 REPL [conn144] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578540, 3), t: 1 } 2019-09-04T06:29:03.247+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000 2019-09-04T06:29:03.250+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:03.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:03.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:03.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:03.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:03.350+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:03.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:03.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:03.385+0000 D2 ASIO [RS] Request 309 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578543, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578543379), o: { $v: 1, $set: { ping: new Date(1567578543374) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578540, 3), 
t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpApplied: { ts: Timestamp(1567578543, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578543, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578543, 1) } 2019-09-04T06:29:03.385+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578543, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578543379), o: { $v: 1, $set: { ping: new Date(1567578543374) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpApplied: { ts: Timestamp(1567578543, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578543, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578543, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:03.385+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:03.385+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578543, 1) and ending at ts: Timestamp(1567578543, 1) 2019-09-04T06:29:03.385+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:13.518+0000 2019-09-04T06:29:03.385+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:14.616+0000 2019-09-04T06:29:03.385+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:03.385+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:03.385+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578543, 1), t: 1 } 2019-09-04T06:29:03.385+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:03.385+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:03.385+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578540, 3) 2019-09-04T06:29:03.385+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4884 2019-09-04T06:29:03.385+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:03.385+0000 D3 STORAGE [ReplBatcher] returning 
metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:03.385+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4884 2019-09-04T06:29:03.385+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:03.385+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:29:03.385+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:03.385+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578540, 3) 2019-09-04T06:29:03.385+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4887 2019-09-04T06:29:03.385+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578543, 1) } 2019-09-04T06:29:03.385+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:03.385+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:03.386+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4887 2019-09-04T06:29:03.386+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4872 2019-09-04T06:29:03.386+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4872 2019-09-04T06:29:03.386+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4890 2019-09-04T06:29:03.386+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4890 2019-09-04T06:29:03.386+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:03.386+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 4892 2019-09-04T06:29:03.386+0000 D4 STORAGE [repl-writer-worker-12] inserting record with timestamp Timestamp(1567578543, 1) 2019-09-04T06:29:03.386+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578543, 1) 2019-09-04T06:29:03.386+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 4892 2019-09-04T06:29:03.386+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:03.386+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:03.386+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4891 2019-09-04T06:29:03.386+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4891 2019-09-04T06:29:03.386+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4894 2019-09-04T06:29:03.386+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4894 2019-09-04T06:29:03.386+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578543, 1), t: 1 }({ ts: Timestamp(1567578543, 1), t: 1 }) 2019-09-04T06:29:03.386+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578543, 1) 2019-09-04T06:29:03.386+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4895 
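Request 309 above through the replSetUpdatePosition traffic below is one full turn of the secondary apply loop: the oplog fetcher reads a single config.lockpings update from the sync source cmodb804, the batcher hands rsSync a batch of size 1, the oplog-truncate-after point brackets the write of the entry into local.oplog.rs, a repl-writer worker applies the update by _id (the "Using idhack" line below), minvalid and appliedThrough advance to Timestamp(1567578543, 1), and the new position is reported upstream. Once the batch lands, the entry is visible in the local oplog; a sketch for reading it back, assuming shell access to this node, with ns/op taken from the batch:

    // Sketch: tail the local oplog for the lockpings update this batch applied.
    rs.slaveOk();  // reads on a secondary (4.2 shell)
    db.getSiblingDB("local").oplog.rs
      .find({ ns: "config.lockpings", op: "u" })
      .sort({ $natural: -1 })
      .limit(1)
      .pretty();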
2019-09-04T06:29:03.386+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578543, 1) } } ] } sort: {} projection: {} 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578543, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578543, 1) || First: notFirst: full path: ts 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578543, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578543, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578543, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578543, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:03.386+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4895
2019-09-04T06:29:03.386+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:03.386+0000 D3 STORAGE [repl-writer-worker-10] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:03.386+0000 D3 REPL [repl-writer-worker-10] applying op: { ts: Timestamp(1567578543, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578543379), o: { $v: 1, $set: { ping: new Date(1567578543374) } } }, oplog application mode: Secondary
2019-09-04T06:29:03.386+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578543, 1)
2019-09-04T06:29:03.386+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 4897
2019-09-04T06:29:03.386+0000 D2 QUERY [repl-writer-worker-10] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }
2019-09-04T06:29:03.386+0000 D4 WRITE [repl-writer-worker-10] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:03.386+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 4897
2019-09-04T06:29:03.386+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:03.386+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578543, 1), t: 1 }({ ts: Timestamp(1567578543, 1), t: 1 })
2019-09-04T06:29:03.386+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578543, 1)
2019-09-04T06:29:03.386+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4896
2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:03.386+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:03.386+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:03.386+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4896
2019-09-04T06:29:03.386+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578543, 1)
2019-09-04T06:29:03.387+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:03.387+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4900
2019-09-04T06:29:03.387+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578543, 1), t: 1 }, appliedWallTime: new Date(1567578543379), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:03.387+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4900
2019-09-04T06:29:03.387+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578543, 1), t: 1 }({ ts: Timestamp(1567578543, 1), t: 1 })
2019-09-04T06:29:03.387+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 314 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:33.387+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578543, 1), t: 1 }, appliedWallTime: new Date(1567578543379), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:03.387+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:33.386+0000
2019-09-04T06:29:03.387+0000 D2 ASIO [RS] Request 314 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578543, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578543, 1) }
2019-09-04T06:29:03.387+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578543, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578543, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:03.387+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:03.387+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:33.387+0000
2019-09-04T06:29:03.387+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578543, 1), t: 1 }
2019-09-04T06:29:03.387+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 315 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:13.387+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578540, 3), t: 1 } }
2019-09-04T06:29:03.387+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:33.387+0000
2019-09-04T06:29:03.388+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:29:03.388+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:03.388+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578543, 1), t: 1 }, durableWallTime: new Date(1567578543379), appliedOpTime: { ts: Timestamp(1567578543, 1), t: 1 }, appliedWallTime: new Date(1567578543379), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:03.388+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 316 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:33.388+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578543, 1), t: 1 }, durableWallTime: new Date(1567578543379), appliedOpTime: { ts: Timestamp(1567578543, 1), t: 1 }, appliedWallTime: new Date(1567578543379), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:03.388+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:33.387+0000
2019-09-04T06:29:03.388+0000 D2 ASIO [RS] Request 316 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578543, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578543, 1) }
2019-09-04T06:29:03.389+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578540, 3), t: 1 }, lastCommittedWall: new Date(1567578540065), lastOpVisible: { ts: Timestamp(1567578540, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578540, 3), $clusterTime: { clusterTime: Timestamp(1567578543, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578543, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:03.389+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:03.389+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:33.387+0000
2019-09-04T06:29:03.389+0000 D2 ASIO [RS] Request 315 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578543, 1), t: 1 }, lastCommittedWall: new Date(1567578543379), lastOpVisible: { ts: Timestamp(1567578543, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578543, 1), t: 1 }, lastCommittedWall: new Date(1567578543379), lastOpApplied: { ts: Timestamp(1567578543, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578543, 1), $clusterTime: { clusterTime: Timestamp(1567578543, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578543, 1) }
2019-09-04T06:29:03.389+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578543, 1), t: 1 }, lastCommittedWall: new Date(1567578543379), lastOpVisible: { ts: Timestamp(1567578543, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578543, 1), t: 1 }, lastCommittedWall: new Date(1567578543379), lastOpApplied: { ts: Timestamp(1567578543, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578543, 1), $clusterTime: { clusterTime: Timestamp(1567578543, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578543, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:03.389+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:03.389+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:03.389+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.389+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.389+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578538, 1)
2019-09-04T06:29:03.389+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:14.616+0000
2019-09-04T06:29:03.389+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:14.457+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000
2019-09-04T06:29:03.389+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:29:03.389+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.389+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000
2019-09-04T06:29:03.390+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000
2019-09-04T06:29:03.390+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 317 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:13.390+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578543, 1), t: 1 } }
2019-09-04T06:29:03.390+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000
2019-09-04T06:29:03.390+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:33.387+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578543, 1), t: 1 }, 2019-09-04T06:29:03.379+0000
2019-09-04T06:29:03.390+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000
2019-09-04T06:29:03.451+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:03.464+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:03.465+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:03.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:03.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:03.485+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578543, 1)
2019-09-04T06:29:03.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:03.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:03.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:03.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:03.509+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:29:03.509+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:29:03.509+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:29:03.509+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:29:03.532+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:03.532+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:03.551+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:03.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:03.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:03.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:03.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:03.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:03.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:03.651+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:03.704+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:03.704+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:03.751+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:03.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:03.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:03.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:03.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:03.851+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:03.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:03.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:03.951+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:03.964+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:03.964+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:03.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:03.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:03.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:03.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:03.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:03.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:04.032+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.032+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.051+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:04.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.152+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:04.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578543, 1), signature: { hash: BinData(0, D5E4ECF66E06B8182581A34D0447544077D04F70), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:04.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:29:04.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578543, 1), signature: { hash: BinData(0, D5E4ECF66E06B8182581A34D0447544077D04F70), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:04.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578543, 1), signature: { hash: BinData(0, D5E4ECF66E06B8182581A34D0447544077D04F70), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:04.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578543, 1), t: 1 }, durableWallTime: new Date(1567578543379), opTime: { ts: Timestamp(1567578543, 1), t: 1 }, wallTime: new Date(1567578543379) }
2019-09-04T06:29:04.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578543, 1), signature: { hash: BinData(0, D5E4ECF66E06B8182581A34D0447544077D04F70), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:29:04.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:29:04.252+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:04.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.352+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:04.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.386+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:04.386+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:04.386+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578543, 1)
2019-09-04T06:29:04.386+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4932
2019-09-04T06:29:04.386+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:04.386+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:04.386+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4932
2019-09-04T06:29:04.387+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4935
2019-09-04T06:29:04.387+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4935
2019-09-04T06:29:04.387+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578543, 1), t: 1 }({ ts: Timestamp(1567578543, 1), t: 1 })
2019-09-04T06:29:04.452+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:04.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.503+0000 D2 ASIO [RS] Request 317 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578544, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578544498), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59b0ac9313827bca3d49'), when: new Date(1567578544498), who: "ConfigServer:conn9511" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578543, 1), t: 1 }, lastCommittedWall: new Date(1567578543379), lastOpVisible: { ts: Timestamp(1567578543, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578543, 1), t: 1 }, lastCommittedWall: new Date(1567578543379), lastOpApplied: { ts: Timestamp(1567578544, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578543, 1), $clusterTime: { clusterTime: Timestamp(1567578544, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 1) }
2019-09-04T06:29:04.503+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578544, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578544498), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59b0ac9313827bca3d49'), when: new Date(1567578544498), who: "ConfigServer:conn9511" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578543, 1), t: 1 }, lastCommittedWall: new Date(1567578543379), lastOpVisible: { ts: Timestamp(1567578543, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578543, 1), t: 1 }, lastCommittedWall: new Date(1567578543379), lastOpApplied: { ts: Timestamp(1567578544, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578543, 1), $clusterTime: { clusterTime: Timestamp(1567578544, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.503+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.503+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578544, 1) and ending at ts: Timestamp(1567578544, 1)
2019-09-04T06:29:04.503+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:14.457+0000
2019-09-04T06:29:04.503+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:15.597+0000
2019-09-04T06:29:04.503+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.503+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:04.503+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578544, 1), t: 1 }
2019-09-04T06:29:04.504+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:04.504+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:04.504+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578543, 1)
2019-09-04T06:29:04.504+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4941
2019-09-04T06:29:04.504+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:04.504+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:04.504+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4941
2019-09-04T06:29:04.504+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:04.504+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:04.504+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:29:04.504+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578543, 1)
2019-09-04T06:29:04.504+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4944
2019-09-04T06:29:04.504+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:04.504+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578544, 1) }
2019-09-04T06:29:04.504+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:04.504+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4944
2019-09-04T06:29:04.504+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4936
2019-09-04T06:29:04.504+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4936
2019-09-04T06:29:04.504+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4947
2019-09-04T06:29:04.504+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4947
2019-09-04T06:29:04.504+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:04.504+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 4949
2019-09-04T06:29:04.504+0000 D4 STORAGE [repl-writer-worker-8] inserting record with timestamp Timestamp(1567578544, 1)
2019-09-04T06:29:04.504+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578544, 1)
2019-09-04T06:29:04.504+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 4949
2019-09-04T06:29:04.504+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:04.504+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:29:04.504+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4948
2019-09-04T06:29:04.504+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4948
2019-09-04T06:29:04.504+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4951
2019-09-04T06:29:04.504+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4951
2019-09-04T06:29:04.504+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578544, 1), t: 1 }({ ts: Timestamp(1567578544, 1), t: 1 })
2019-09-04T06:29:04.504+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 1)
2019-09-04T06:29:04.504+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4952
2019-09-04T06:29:04.504+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578544, 1) } } ] } sort: {} projection: {}
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578544, 1) Sort: {} Proj: {} =============================
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578544, 1) || First: notFirst: full path: ts
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578544, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578544, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578544, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.504+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578544, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.504+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4952
2019-09-04T06:29:04.504+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:04.504+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:04.504+0000 I SHARDING [repl-writer-worker-4] Marking collection config.locks as collection version:
2019-09-04T06:29:04.504+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578544, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578544498), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59b0ac9313827bca3d49'), when: new Date(1567578544498), who: "ConfigServer:conn9511" } } }, oplog application mode: Secondary
2019-09-04T06:29:04.504+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578544, 1)
2019-09-04T06:29:04.504+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 4954
2019-09-04T06:29:04.504+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "config" }
2019-09-04T06:29:04.505+0000 D2 STORAGE [repl-writer-worker-4] WiredTigerSizeStorer::store Marking table:config/collection/42--6194257481163143499 dirty, numRecords: 2, dataSize: 307, use_count: 3
2019-09-04T06:29:04.505+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:04.505+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 4954
2019-09-04T06:29:04.506+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:04.506+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578544, 1), t: 1 }
2019-09-04T06:29:04.506+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 318 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.506+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578543, 1), t: 1 } }
2019-09-04T06:29:04.506+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:33.387+0000
2019-09-04T06:29:04.506+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578544, 1), t: 1 }({ ts: Timestamp(1567578544, 1), t: 1 })
2019-09-04T06:29:04.506+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 1)
2019-09-04T06:29:04.506+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4953
2019-09-04T06:29:04.506+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:29:04.506+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.506+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:04.506+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.506+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.506+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:04.506+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4953
2019-09-04T06:29:04.506+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578544, 1)
2019-09-04T06:29:04.506+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.506+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4957
2019-09-04T06:29:04.506+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578543, 1), t: 1 }, durableWallTime: new Date(1567578543379), appliedOpTime: { ts: Timestamp(1567578544, 1), t: 1 }, appliedWallTime: new Date(1567578544498), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578543, 1), t: 1 }, lastCommittedWall: new Date(1567578543379), lastOpVisible: { ts: Timestamp(1567578543, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.506+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 319 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.506+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578543, 1), t: 1 }, durableWallTime: new Date(1567578543379), appliedOpTime: { ts: Timestamp(1567578544, 1), t: 1 }, appliedWallTime: new Date(1567578544498), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578543, 1), t: 1 }, lastCommittedWall: new Date(1567578543379), lastOpVisible: { ts: Timestamp(1567578543, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.506+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:33.387+0000
2019-09-04T06:29:04.506+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4957
2019-09-04T06:29:04.506+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578544, 1), t: 1 }({ ts: Timestamp(1567578544, 1), t: 1 })
2019-09-04T06:29:04.506+0000 D2 ASIO [RS] Request 319 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578543, 1), t: 1 }, lastCommittedWall: new Date(1567578543379), lastOpVisible: { ts: Timestamp(1567578543, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578543, 1), $clusterTime: { clusterTime: Timestamp(1567578544, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 1) }
2019-09-04T06:29:04.506+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578543, 1), t: 1 }, lastCommittedWall: new Date(1567578543379), lastOpVisible: { ts: Timestamp(1567578543, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578543, 1), $clusterTime: { clusterTime: Timestamp(1567578544, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.506+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.506+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:33.387+0000
2019-09-04T06:29:04.507+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:29:04.507+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.507+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 1), t: 1 }, durableWallTime: new Date(1567578544498), appliedOpTime: { ts: Timestamp(1567578544, 1), t: 1 }, appliedWallTime: new Date(1567578544498), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578543, 1), t: 1 }, lastCommittedWall: new Date(1567578543379), lastOpVisible: { ts: Timestamp(1567578543, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.507+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 320 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.507+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 1), t: 1 }, durableWallTime: new Date(1567578544498), appliedOpTime: { ts: Timestamp(1567578544, 1), t: 1 }, appliedWallTime: new Date(1567578544498), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578543, 1), t: 1 }, lastCommittedWall: new Date(1567578543379), lastOpVisible: { ts: Timestamp(1567578543, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.507+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:33.387+0000
2019-09-04T06:29:04.507+0000 D2 ASIO [RS] Request 320 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578543, 1), t: 1 }, lastCommittedWall: new Date(1567578543379), lastOpVisible: { ts: Timestamp(1567578543, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578543, 1), $clusterTime: { clusterTime: Timestamp(1567578544, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 1) }
2019-09-04T06:29:04.507+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578543, 1), t: 1 }, lastCommittedWall: new Date(1567578543379), lastOpVisible: { ts: Timestamp(1567578543, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578543, 1), $clusterTime: { clusterTime: Timestamp(1567578544, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.508+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.508+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:33.387+0000
2019-09-04T06:29:04.508+0000 D2 ASIO [RS] Request 318 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 1), t: 1 }, lastCommittedWall: new Date(1567578544498), lastOpVisible: { ts: Timestamp(1567578544, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 1), t: 1 }, lastCommittedWall: new Date(1567578544498), lastOpApplied: { ts: Timestamp(1567578544, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 1), $clusterTime: { clusterTime: Timestamp(1567578544, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 1) }
2019-09-04T06:29:04.508+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 1), t: 1 }, lastCommittedWall: new Date(1567578544498), lastOpVisible: { ts: Timestamp(1567578544, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 1), t: 1 }, lastCommittedWall: new Date(1567578544498), lastOpApplied: { ts: Timestamp(1567578544, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 1), $clusterTime: { clusterTime: Timestamp(1567578544, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.508+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.508+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:04.508+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.508+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.508+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578539, 1)
2019-09-04T06:29:04.508+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:15.597+0000
2019-09-04T06:29:04.508+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:15.837+0000
2019-09-04T06:29:04.508+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.508+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:04.508+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 321 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.508+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 1), t: 1 } }
2019-09-04T06:29:04.508+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.508+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.508+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:33.387+0000
2019-09-04T06:29:04.508+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.508+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000
2019-09-04T06:29:04.508+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.508+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000
2019-09-04T06:29:04.508+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578544, 1), t: 1 }, 2019-09-04T06:29:04.498+0000
2019-09-04T06:29:04.509+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000
2019-09-04T06:29:04.511+0000 D2 ASIO [RS] Request 321 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578544, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578544508), o: { $v: 1, $set: { state: 2, ts:
ObjectId('5d6f59b0ac9313827bca3d50'), when: new Date(1567578544508), who: "ConfigServer:conn9511" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 1), t: 1 }, lastCommittedWall: new Date(1567578544498), lastOpVisible: { ts: Timestamp(1567578544, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 1), t: 1 }, lastCommittedWall: new Date(1567578544498), lastOpApplied: { ts: Timestamp(1567578544, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 1), $clusterTime: { clusterTime: Timestamp(1567578544, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 2) } 2019-09-04T06:29:04.511+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578544, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578544508), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59b0ac9313827bca3d50'), when: new Date(1567578544508), who: "ConfigServer:conn9511" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 1), t: 1 }, lastCommittedWall: new Date(1567578544498), lastOpVisible: { ts: Timestamp(1567578544, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 1), t: 1 }, lastCommittedWall: new Date(1567578544498), lastOpApplied: { ts: Timestamp(1567578544, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 1), $clusterTime: { clusterTime: Timestamp(1567578544, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:04.511+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:04.511+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578544, 2) and ending at ts: Timestamp(1567578544, 2) 2019-09-04T06:29:04.511+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:15.837+0000 2019-09-04T06:29:04.511+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:15.588+0000 2019-09-04T06:29:04.511+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578544, 2), t: 1 } 2019-09-04T06:29:04.511+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:04.511+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:04.511+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:04.511+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:04.511+0000 D3 STORAGE 
[ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 1) 2019-09-04T06:29:04.511+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4961 2019-09-04T06:29:04.511+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:04.511+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:04.511+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4961 2019-09-04T06:29:04.511+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:04.511+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:29:04.511+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:04.511+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578544, 2) } 2019-09-04T06:29:04.511+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 1) 2019-09-04T06:29:04.511+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4964 2019-09-04T06:29:04.511+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:04.511+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:04.511+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4964 2019-09-04T06:29:04.511+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4959 2019-09-04T06:29:04.511+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4959 2019-09-04T06:29:04.511+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4967 2019-09-04T06:29:04.511+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4967 2019-09-04T06:29:04.511+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:04.511+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 4969 2019-09-04T06:29:04.511+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578544, 2) 2019-09-04T06:29:04.511+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578544, 2) 2019-09-04T06:29:04.511+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 4969 2019-09-04T06:29:04.511+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:04.511+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:04.511+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4968 2019-09-04T06:29:04.511+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4968 
2019-09-04T06:29:04.511+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4971
2019-09-04T06:29:04.511+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4971
2019-09-04T06:29:04.511+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578544, 2), t: 1 }({ ts: Timestamp(1567578544, 2), t: 1 })
2019-09-04T06:29:04.511+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 2)
2019-09-04T06:29:04.511+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4972
2019-09-04T06:29:04.511+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578544, 2) } } ] } sort: {} projection: {}
2019-09-04T06:29:04.511+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.511+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:04.511+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578544, 2) Sort: {} Proj: {} =============================
2019-09-04T06:29:04.511+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.511+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:04.511+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.511+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578544, 2) || First: notFirst: full path: ts
2019-09-04T06:29:04.511+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578544, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578544, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578544, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578544, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.512+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4972
2019-09-04T06:29:04.512+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:04.512+0000 D3 STORAGE [repl-writer-worker-2] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:04.512+0000 D3 REPL [repl-writer-worker-2] applying op: { ts: Timestamp(1567578544, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578544508), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59b0ac9313827bca3d50'), when: new Date(1567578544508), who: "ConfigServer:conn9511" } } }, oplog application mode: Secondary
2019-09-04T06:29:04.512+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578544, 2)
2019-09-04T06:29:04.512+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 4974
2019-09-04T06:29:04.512+0000 D2 QUERY [repl-writer-worker-2] Using idhack: { _id: "config.system.sessions" }
2019-09-04T06:29:04.512+0000 D4 WRITE [repl-writer-worker-2] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:04.512+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 4974
2019-09-04T06:29:04.512+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:04.512+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578544, 2), t: 1 }({ ts: Timestamp(1567578544, 2), t: 1 })
2019-09-04T06:29:04.512+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 2)
2019-09-04T06:29:04.512+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4973
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.512+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.512+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:04.512+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4973
2019-09-04T06:29:04.512+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578544, 2)
2019-09-04T06:29:04.512+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.512+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4977
2019-09-04T06:29:04.512+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 1), t: 1 }, durableWallTime: new Date(1567578544498), appliedOpTime: { ts: Timestamp(1567578544, 2), t: 1 }, appliedWallTime: new Date(1567578544508), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 1), t: 1 }, lastCommittedWall: new Date(1567578544498), lastOpVisible: { ts: Timestamp(1567578544, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.512+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 322 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.512+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 1), t: 1 }, durableWallTime: new Date(1567578544498), appliedOpTime: { ts: Timestamp(1567578544, 2), t: 1 }, appliedWallTime: new Date(1567578544508), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 1), t: 1 }, lastCommittedWall: new Date(1567578544498), lastOpVisible: { ts: Timestamp(1567578544, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.512+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.512+0000
2019-09-04T06:29:04.512+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4977
2019-09-04T06:29:04.512+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578544, 2), t: 1 }({ ts: Timestamp(1567578544, 2), t: 1 })
2019-09-04T06:29:04.512+0000 D2 ASIO [RS] Request 322 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 1), t: 1 }, lastCommittedWall: new Date(1567578544498), lastOpVisible: { ts: Timestamp(1567578544, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 1), $clusterTime: { clusterTime: Timestamp(1567578544, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 2) }
2019-09-04T06:29:04.512+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 1), t: 1 }, lastCommittedWall: new Date(1567578544498), lastOpVisible: { ts: Timestamp(1567578544, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 1), $clusterTime: { clusterTime: Timestamp(1567578544, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.512+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.512+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.512+0000
2019-09-04T06:29:04.513+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578544, 2), t: 1 }
2019-09-04T06:29:04.513+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 323 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.513+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 1), t: 1 } }
2019-09-04T06:29:04.513+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.512+0000
2019-09-04T06:29:04.529+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:29:04.529+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.529+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 2), t: 1 }, durableWallTime: new Date(1567578544508), appliedOpTime: { ts: Timestamp(1567578544, 2), t: 1 }, appliedWallTime: new Date(1567578544508), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 1), t: 1 }, lastCommittedWall: new Date(1567578544498), lastOpVisible: { ts: Timestamp(1567578544, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.529+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 324 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.529+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 2), t: 1 }, durableWallTime: new Date(1567578544508), appliedOpTime: { ts: Timestamp(1567578544, 2), t: 1 }, appliedWallTime: new Date(1567578544508), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 1), t: 1 }, lastCommittedWall: new Date(1567578544498), lastOpVisible: { ts: Timestamp(1567578544, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.529+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.512+0000
2019-09-04T06:29:04.529+0000 D2 ASIO [RS] Request 324 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 1), t: 1 }, lastCommittedWall: new Date(1567578544498), lastOpVisible: { ts: Timestamp(1567578544, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 1), $clusterTime: { clusterTime: Timestamp(1567578544, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 2) }
2019-09-04T06:29:04.529+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 1), t: 1 }, lastCommittedWall: new Date(1567578544498), lastOpVisible: { ts: Timestamp(1567578544, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 1), $clusterTime: { clusterTime: Timestamp(1567578544, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.529+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.529+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.512+0000
2019-09-04T06:29:04.530+0000 D2 ASIO [RS] Request 323 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 2), t: 1 }, lastCommittedWall: new Date(1567578544508), lastOpVisible: { ts: Timestamp(1567578544, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 2), t: 1 }, lastCommittedWall: new Date(1567578544508), lastOpApplied: { ts: Timestamp(1567578544, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 2), $clusterTime: { clusterTime: Timestamp(1567578544, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 2) }
2019-09-04T06:29:04.530+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 2), t: 1 }, lastCommittedWall: new Date(1567578544508), lastOpVisible: { ts: Timestamp(1567578544, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 2), t: 1 }, lastCommittedWall: new Date(1567578544508), lastOpApplied: { ts: Timestamp(1567578544, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 2), $clusterTime: { clusterTime: Timestamp(1567578544, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.530+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.530+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:04.530+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.530+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578539, 2)
2019-09-04T06:29:04.531+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000
2019-09-04T06:29:04.531+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:15.588+0000
2019-09-04T06:29:04.531+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:15.814+0000
2019-09-04T06:29:04.531+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.531+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 325 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.531+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 2), t: 1 } }
2019-09-04T06:29:04.531+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.512+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000
2019-09-04T06:29:04.531+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578544, 2), t: 1 }, 2019-09-04T06:29:04.508+0000
2019-09-04T06:29:04.531+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000
2019-09-04T06:29:04.532+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.532+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.535+0000 D2 ASIO [RS] Request 325 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578544, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578544531), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 2), t: 1 }, lastCommittedWall: new Date(1567578544508), lastOpVisible: { ts: Timestamp(1567578544, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 2), t: 1 }, lastCommittedWall: new Date(1567578544508), lastOpApplied: { ts: Timestamp(1567578544, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 2), $clusterTime: { clusterTime: Timestamp(1567578544, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 3) }
2019-09-04T06:29:04.535+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578544, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578544531), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 2), t: 1 }, lastCommittedWall: new Date(1567578544508), lastOpVisible: { ts: Timestamp(1567578544, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 2), t: 1 }, lastCommittedWall: new Date(1567578544508), lastOpApplied: { ts: Timestamp(1567578544, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 2), $clusterTime: { clusterTime: Timestamp(1567578544, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.535+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.535+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578544, 3) and ending at ts: Timestamp(1567578544, 3)
2019-09-04T06:29:04.535+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:15.814+0000
2019-09-04T06:29:04.535+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:15.866+0000
2019-09-04T06:29:04.535+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.535+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578544, 3), t: 1 }
2019-09-04T06:29:04.535+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:04.535+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:04.535+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:04.535+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 2)
2019-09-04T06:29:04.535+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4982
2019-09-04T06:29:04.535+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:04.535+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:04.535+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4982
2019-09-04T06:29:04.535+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:04.535+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:29:04.535+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:04.535+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 2)
2019-09-04T06:29:04.535+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 4985
2019-09-04T06:29:04.535+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578544, 3) }
2019-09-04T06:29:04.535+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:04.535+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:04.535+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 4985
2019-09-04T06:29:04.535+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4979
2019-09-04T06:29:04.535+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4979
2019-09-04T06:29:04.535+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4988
2019-09-04T06:29:04.535+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4988
2019-09-04T06:29:04.535+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:04.536+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 4990
2019-09-04T06:29:04.536+0000 D4 STORAGE [repl-writer-worker-0] inserting record with timestamp Timestamp(1567578544, 3)
2019-09-04T06:29:04.536+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578544, 3)
2019-09-04T06:29:04.536+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 4990
2019-09-04T06:29:04.536+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:04.536+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:29:04.536+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4989
2019-09-04T06:29:04.536+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4989
2019-09-04T06:29:04.536+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4992
2019-09-04T06:29:04.536+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4992
2019-09-04T06:29:04.536+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578544, 3), t: 1 }({ ts: Timestamp(1567578544, 3), t: 1 })
2019-09-04T06:29:04.536+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 3)
2019-09-04T06:29:04.536+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4993
2019-09-04T06:29:04.536+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578544, 3) } } ] } sort: {} projection: {}
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578544, 3) Sort: {} Proj: {} =============================
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578544, 3) || First: notFirst: full path: ts
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578544, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578544, 3) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578544, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578544, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.536+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4993
2019-09-04T06:29:04.536+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:04.536+0000 D3 STORAGE [repl-writer-worker-15] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:04.536+0000 D3 REPL [repl-writer-worker-15] applying op: { ts: Timestamp(1567578544, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578544531), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary
2019-09-04T06:29:04.536+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578544, 3)
2019-09-04T06:29:04.536+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 4995
2019-09-04T06:29:04.536+0000 D2 QUERY [repl-writer-worker-15] Using idhack: { _id: "config.system.sessions" }
2019-09-04T06:29:04.536+0000 D4 WRITE [repl-writer-worker-15] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:04.536+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 4995
2019-09-04T06:29:04.536+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:04.536+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578544, 3), t: 1 }({ ts: Timestamp(1567578544, 3), t: 1 })
2019-09-04T06:29:04.536+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 3)
2019-09-04T06:29:04.536+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4994
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.536+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.536+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:04.536+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4994
2019-09-04T06:29:04.536+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578544, 3)
2019-09-04T06:29:04.536+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4998
2019-09-04T06:29:04.536+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 4998
2019-09-04T06:29:04.536+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578544, 3), t: 1 }({ ts: Timestamp(1567578544, 3), t: 1 })
2019-09-04T06:29:04.536+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.536+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 2), t: 1 }, durableWallTime: new Date(1567578544508), appliedOpTime: { ts: Timestamp(1567578544, 3), t: 1 }, appliedWallTime: new Date(1567578544531), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 2), t: 1 }, lastCommittedWall: new Date(1567578544508), lastOpVisible: { ts: Timestamp(1567578544, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.536+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 326 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.536+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 2), t: 1 }, durableWallTime: new Date(1567578544508), appliedOpTime: { ts: Timestamp(1567578544, 3), t: 1 }, appliedWallTime: new Date(1567578544531), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 2), t: 1 }, lastCommittedWall: new Date(1567578544508), lastOpVisible: { ts: Timestamp(1567578544, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.537+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.536+0000
2019-09-04T06:29:04.537+0000 D2 ASIO [RS] Request 326 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 2), t: 1 }, lastCommittedWall: new Date(1567578544508), lastOpVisible: { ts: Timestamp(1567578544, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 2), $clusterTime: { clusterTime: Timestamp(1567578544, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 3) }
2019-09-04T06:29:04.537+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 2), t: 1 }, lastCommittedWall: new Date(1567578544508), lastOpVisible: { ts: Timestamp(1567578544, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 2), $clusterTime: { clusterTime: Timestamp(1567578544, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.537+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.537+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.537+0000
2019-09-04T06:29:04.537+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578544, 3), t: 1 }
2019-09-04T06:29:04.537+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 327 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.537+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 2), t: 1 } }
2019-09-04T06:29:04.537+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.537+0000
2019-09-04T06:29:04.563+0000 D2 ASIO [RS] Request 327 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpVisible: { ts: Timestamp(1567578544, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpApplied: { ts: Timestamp(1567578544, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 3), $clusterTime: { clusterTime: Timestamp(1567578544, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 3) }
2019-09-04T06:29:04.563+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpVisible: { ts: Timestamp(1567578544, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpApplied: { ts: Timestamp(1567578544, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 3), $clusterTime: { clusterTime: Timestamp(1567578544, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.563+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.563+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:04.563+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.563+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.563+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578539, 3)
2019-09-04T06:29:04.563+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.563+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000
2019-09-04T06:29:04.563+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.563+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.563+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.563+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000
2019-09-04T06:29:04.563+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.563+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000
2019-09-04T06:29:04.563+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.563+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000
2019-09-04T06:29:04.563+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.563+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
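The D5 QUERY blocks above are the applier's minvalid bookkeeping read. local.replset.minvalid carries only the default _id index, so the planner rates both $or branches, finds no indexed solution, and falls back to a collection scan. A minimal shell sketch that reproduces the same plan (collection name and predicate taken from this log; run directly against this node):

    // mongo shell sketch: replay the minvalid read the applier logs above.
    // The collection only has the _id index, so explain() reports COLLSCAN,
    // matching "Planner: outputted 0 indexed solutions."
    var minvalid = db.getSiblingDB("local").getCollection("replset.minvalid");
    minvalid.find({ $or: [ { t: { $lt: 1 } },
                           { t: 1, ts: { $lt: Timestamp(1567578544, 3) } } ] })
            .explain("queryPlanner");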
2019-09-04T06:29:04.563+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.563+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000
2019-09-04T06:29:04.563+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.563+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000
2019-09-04T06:29:04.564+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:15.866+0000
2019-09-04T06:29:04.564+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:14.693+0000
2019-09-04T06:29:04.564+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 328 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.564+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 3), t: 1 } }
2019-09-04T06:29:04.564+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.537+0000
2019-09-04T06:29:04.564+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.564+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578544, 3), t: 1 }, 2019-09-04T06:29:04.531+0000
2019-09-04T06:29:04.564+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:29:04.566+0000 D2 ASIO [RS] Request 328 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578544, 4), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578544564), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpVisible: { ts: Timestamp(1567578544, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpApplied: { ts: Timestamp(1567578544, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 3), $clusterTime: { clusterTime: Timestamp(1567578544, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 4) }
2019-09-04T06:29:04.566+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578544, 4), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578544564), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpVisible: { ts: Timestamp(1567578544, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpApplied: { ts: Timestamp(1567578544, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 3), $clusterTime: { clusterTime: Timestamp(1567578544, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 4) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.566+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.566+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578544, 4) and ending at ts: Timestamp(1567578544, 4)
2019-09-04T06:29:04.566+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:14.693+0000
2019-09-04T06:29:04.566+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:15.324+0000
2019-09-04T06:29:04.566+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.566+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578544, 4), t: 1 }
2019-09-04T06:29:04.566+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:04.566+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:04.566+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 3)
2019-09-04T06:29:04.566+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5002
2019-09-04T06:29:04.566+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:04.566+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:04.566+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5002
2019-09-04T06:29:04.566+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:04.566+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:29:04.566+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:04.566+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578544, 4) }
2019-09-04T06:29:04.566+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 3)
2019-09-04T06:29:04.566+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5005
2019-09-04T06:29:04.566+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:04.566+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:04.566+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5005
2019-09-04T06:29:04.566+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 4999
2019-09-04T06:29:04.566+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:04.566+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 4999
2019-09-04T06:29:04.566+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5008
2019-09-04T06:29:04.566+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5008
2019-09-04T06:29:04.566+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:04.566+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 5010
2019-09-04T06:29:04.566+0000 D4 STORAGE [repl-writer-worker-1] inserting record with timestamp Timestamp(1567578544, 4)
2019-09-04T06:29:04.566+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578544, 4)
2019-09-04T06:29:04.566+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 5010
2019-09-04T06:29:04.566+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:04.566+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:29:04.566+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5009
2019-09-04T06:29:04.566+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5009
2019-09-04T06:29:04.566+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5012
2019-09-04T06:29:04.566+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5012
2019-09-04T06:29:04.566+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578544, 4), t: 1 }({ ts: Timestamp(1567578544, 4), t: 1 })
2019-09-04T06:29:04.566+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 4)
2019-09-04T06:29:04.566+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5013
2019-09-04T06:29:04.566+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578544, 4) } } ] } sort: {} projection: {}
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578544, 4) Sort: {} Proj: {} =============================
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578544, 4) || First: notFirst: full path: ts
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578544, 4) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578544, 4) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578544, 4) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
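Requests 327 and 328 are the oplog fetcher's getMore loop against the sync source: an empty nextBatch when nothing new is visible, then a one-document batch carrying the config.locks update at Timestamp(1567578544, 4). A hedged shell sketch of an equivalent manual read (host and timestamp values taken from this log; the fetcher itself holds a tailable awaitData cursor rather than issuing a fresh find):

    // mongo shell sketch: inspect the oplog window the fetcher pulled in
    // requests 327/328. Run against the sync source cmodb804.togewa.com:27019.
    var oplog = db.getSiblingDB("local").getCollection("oplog.rs");
    oplog.find({ ts: { $gte: Timestamp(1567578544, 3) } })
         .sort({ $natural: 1 })
         .limit(5);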
2019-09-04T06:29:04.566+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578544, 4) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.567+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5013
2019-09-04T06:29:04.567+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:04.567+0000 D3 STORAGE [repl-writer-worker-5] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:04.567+0000 D3 REPL [repl-writer-worker-5] applying op: { ts: Timestamp(1567578544, 4), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578544564), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary
2019-09-04T06:29:04.567+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578544, 4)
2019-09-04T06:29:04.567+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 5015
2019-09-04T06:29:04.567+0000 D2 QUERY [repl-writer-worker-5] Using idhack: { _id: "config" }
2019-09-04T06:29:04.567+0000 D4 WRITE [repl-writer-worker-5] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:04.567+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 5015
2019-09-04T06:29:04.567+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:04.567+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578544, 4), t: 1 }({ ts: Timestamp(1567578544, 4), t: 1 })
2019-09-04T06:29:04.567+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 4)
2019-09-04T06:29:04.567+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5014
2019-09-04T06:29:04.567+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:29:04.567+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.567+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:04.567+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.567+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.567+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:04.567+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5014
2019-09-04T06:29:04.567+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578544, 4)
2019-09-04T06:29:04.567+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.567+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5018
2019-09-04T06:29:04.567+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 2), t: 1 }, durableWallTime: new Date(1567578544508), appliedOpTime: { ts: Timestamp(1567578544, 4), t: 1 }, appliedWallTime: new Date(1567578544564), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpVisible: { ts: Timestamp(1567578544, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.567+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5018
2019-09-04T06:29:04.567+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578544, 4), t: 1 }({ ts: Timestamp(1567578544, 4), t: 1 })
2019-09-04T06:29:04.567+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 329 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.567+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 2), t: 1 }, durableWallTime: new Date(1567578544508), appliedOpTime: { ts: Timestamp(1567578544, 4), t: 1 }, appliedWallTime: new Date(1567578544564), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpVisible: { ts: Timestamp(1567578544, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.567+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.567+0000
2019-09-04T06:29:04.567+0000 D2 ASIO [RS] Request 329 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpVisible: { ts: Timestamp(1567578544, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 3), $clusterTime: { clusterTime: Timestamp(1567578544, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 4) }
2019-09-04T06:29:04.567+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpVisible: { ts: Timestamp(1567578544, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 3), $clusterTime: { clusterTime: Timestamp(1567578544, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 4) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.567+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.567+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.567+0000
2019-09-04T06:29:04.568+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578544, 4), t: 1 }
2019-09-04T06:29:04.568+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 330 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.568+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 3), t: 1 } }
2019-09-04T06:29:04.568+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.567+0000
2019-09-04T06:29:04.574+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:29:04.574+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.574+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 3), t: 1 }, durableWallTime: new Date(1567578544531), appliedOpTime: { ts: Timestamp(1567578544, 4), t: 1 }, appliedWallTime: new Date(1567578544564), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpVisible: { ts: Timestamp(1567578544, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.574+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 331 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.574+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 3), t: 1 }, durableWallTime: new Date(1567578544531), appliedOpTime: { ts: Timestamp(1567578544, 4), t: 1 }, appliedWallTime: new Date(1567578544564), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpVisible: { ts: Timestamp(1567578544, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.574+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.567+0000
2019-09-04T06:29:04.575+0000 D2 ASIO [RS] Request 331 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpVisible: { ts: Timestamp(1567578544, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 3), $clusterTime: { clusterTime: Timestamp(1567578544, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 4) }
2019-09-04T06:29:04.575+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpVisible: { ts: Timestamp(1567578544, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 3), $clusterTime: { clusterTime: Timestamp(1567578544, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 4) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.575+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.575+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.567+0000
2019-09-04T06:29:04.577+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:29:04.577+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.577+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 4), t: 1 }, durableWallTime: new Date(1567578544564), appliedOpTime: { ts: Timestamp(1567578544, 4), t: 1 }, appliedWallTime: new Date(1567578544564), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpVisible: { ts: Timestamp(1567578544, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.577+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 332 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.577+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 4), t: 1 }, durableWallTime: new Date(1567578544564), appliedOpTime: { ts: Timestamp(1567578544, 4), t: 1 }, appliedWallTime: new Date(1567578544564), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpVisible: { ts: Timestamp(1567578544, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.577+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.567+0000
2019-09-04T06:29:04.577+0000 D2 ASIO [RS] Request 332 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpVisible: { ts: Timestamp(1567578544, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 3), $clusterTime: { clusterTime: Timestamp(1567578544, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 4) }
2019-09-04T06:29:04.577+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 3), t: 1 }, lastCommittedWall: new Date(1567578544531), lastOpVisible: { ts: Timestamp(1567578544, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 3), $clusterTime: { clusterTime: Timestamp(1567578544, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 4) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.577+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.577+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.567+0000
2019-09-04T06:29:04.577+0000 D2 ASIO [RS] Request 330 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 4), t: 1 }, lastCommittedWall: new Date(1567578544564), lastOpVisible: { ts: Timestamp(1567578544, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 4), t: 1 }, lastCommittedWall: new Date(1567578544564), lastOpApplied: { ts: Timestamp(1567578544, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 4), $clusterTime: { clusterTime: Timestamp(1567578544, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 4) }
2019-09-04T06:29:04.577+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 4), t: 1 }, lastCommittedWall: new Date(1567578544564), lastOpVisible: { ts: Timestamp(1567578544, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 4), t: 1 }, lastCommittedWall: new Date(1567578544564), lastOpApplied: { ts: Timestamp(1567578544, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 4), $clusterTime: { clusterTime: Timestamp(1567578544, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 4) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.577+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.578+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:04.578+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000
2019-09-04T06:29:04.578+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000
2019-09-04T06:29:04.578+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578539, 4)
2019-09-04T06:29:04.578+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000
2019-09-04T06:29:04.578+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000
2019-09-04T06:29:04.578+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000
2019-09-04T06:29:04.578+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000
2019-09-04T06:29:04.578+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000
2019-09-04T06:29:04.578+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000
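The three replSetUpdatePosition reports above (requests 329, 331, 332) differ only in member 1's durableOpTime, which trails the appliedOpTime until each "flushed journal" line from ApplyBatchFinalizerForJournal lands. The same applied/durable split can be read from the shell; a sketch, assuming the field names replSetGetStatus reports on 4.2:

    // mongo shell sketch: per-member applied vs durable optimes, the same
    // values the Reporter forwards upstream in replSetUpdatePosition.
    rs.status().members.forEach(function (m) {
        print(m.name,
              "applied:", tojson(m.optime && m.optime.ts),
              "durable:", tojson(m.optimeDurable && m.optimeDurable.ts));
    });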
2019-09-04T06:29:04.578+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000 2019-09-04T06:29:04.578+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:15.324+0000 2019-09-04T06:29:04.578+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:15.280+0000 2019-09-04T06:29:04.578+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:04.578+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 333 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.578+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 4), t: 1 } } 2019-09-04T06:29:04.578+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 
2019-09-04T06:29:30.837+0000 2019-09-04T06:29:04.578+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.567+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578544, 4), t: 1 }, 2019-09-04T06:29:04.564+0000 2019-09-04T06:29:04.578+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000 2019-09-04T06:29:04.585+0000 D2 ASIO [RS] Request 333 finished with response: { cursor: { nextBatch: [ { ts: 
Timestamp(1567578544, 5), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578544578), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59b0ac9313827bca3d68'), when: new Date(1567578544578), who: "ConfigServer:conn10279" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 4), t: 1 }, lastCommittedWall: new Date(1567578544564), lastOpVisible: { ts: Timestamp(1567578544, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 4), t: 1 }, lastCommittedWall: new Date(1567578544564), lastOpApplied: { ts: Timestamp(1567578544, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 4), $clusterTime: { clusterTime: Timestamp(1567578544, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 5) } 2019-09-04T06:29:04.585+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578544, 5), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578544578), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59b0ac9313827bca3d68'), when: new Date(1567578544578), who: "ConfigServer:conn10279" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 4), t: 1 }, lastCommittedWall: new Date(1567578544564), lastOpVisible: { ts: Timestamp(1567578544, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 4), t: 1 }, lastCommittedWall: new Date(1567578544564), lastOpApplied: { ts: Timestamp(1567578544, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 4), $clusterTime: { clusterTime: Timestamp(1567578544, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 5) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:04.585+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:04.585+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578544, 5) and ending at ts: Timestamp(1567578544, 5) 2019-09-04T06:29:04.585+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:15.280+0000 2019-09-04T06:29:04.585+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:15.221+0000 2019-09-04T06:29:04.585+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:04.585+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:04.585+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:04.585+0000 D3 STORAGE [ReplBatcher] looking up metadata for: 
local.oplog.rs @ RecordId(10) 2019-09-04T06:29:04.585+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 4) 2019-09-04T06:29:04.585+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5022 2019-09-04T06:29:04.585+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:04.585+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:04.585+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5022 2019-09-04T06:29:04.585+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:04.585+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:29:04.585+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:04.586+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578544, 5) } 2019-09-04T06:29:04.585+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578544, 5), t: 1 } 2019-09-04T06:29:04.586+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 4) 2019-09-04T06:29:04.586+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5025 2019-09-04T06:29:04.586+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:04.586+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:04.586+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5025 2019-09-04T06:29:04.586+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5019 2019-09-04T06:29:04.586+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5019 2019-09-04T06:29:04.586+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5028 2019-09-04T06:29:04.586+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5028 2019-09-04T06:29:04.586+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:04.586+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 5030 2019-09-04T06:29:04.586+0000 D4 STORAGE [repl-writer-worker-9] inserting record with timestamp Timestamp(1567578544, 5) 2019-09-04T06:29:04.586+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578544, 5) 2019-09-04T06:29:04.586+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 5030 2019-09-04T06:29:04.586+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:04.586+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 
2019-09-04T06:29:04.586+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:29:04.586+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5029
2019-09-04T06:29:04.586+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5029
2019-09-04T06:29:04.586+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5032
2019-09-04T06:29:04.586+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5032
2019-09-04T06:29:04.586+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578544, 5), t: 1 }({ ts: Timestamp(1567578544, 5), t: 1 })
2019-09-04T06:29:04.586+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 5)
2019-09-04T06:29:04.586+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5033
2019-09-04T06:29:04.586+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578544, 5) } } ] } sort: {} projection: {}
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578544, 5)
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1  || First: notFirst: full path: t
    ts $lt Timestamp(1567578544, 5)  || First: notFirst: full path: ts
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578544, 5)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1  || First: notFirst: full path: t
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
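[Editor's note] The $or being subplanned here is the minvalid bookkeeping read on local.replset.minvalid. Re-issued from a client it looks like the sketch below; with only the _id index present, neither branch is indexable, which is why the planner emits the COLLSCANs above. Sketch only, assuming a direct connection to this node:

```python
import pymongo
from bson.timestamp import Timestamp

client = pymongo.MongoClient("mongodb://cmodb803.togewa.com:27019")  # direct connection
doc = client.local["replset.minvalid"].find_one(
    {"$or": [{"t": {"$lt": 1}}, {"t": 1, "ts": {"$lt": Timestamp(1567578544, 5)}}]}
)
print(doc)
```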
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578544, 5)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1  || First: notFirst: full path: t
        ts $lt Timestamp(1567578544, 5)  || First: notFirst: full path: ts
    t $lt 1  || First: notFirst: full path: t
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578544, 5)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:04.586+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5033
2019-09-04T06:29:04.586+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:04.586+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:04.586+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578544, 5), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578544578), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59b0ac9313827bca3d68'), when: new Date(1567578544578), who: "ConfigServer:conn10279" } } }, oplog application mode: Secondary
2019-09-04T06:29:04.586+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578544, 5)
2019-09-04T06:29:04.586+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 5035
2019-09-04T06:29:04.586+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "config" }
2019-09-04T06:29:04.586+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:04.586+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 5035
2019-09-04T06:29:04.586+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:04.586+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578544, 5), t: 1 }({ ts: Timestamp(1567578544, 5), t: 1 })
2019-09-04T06:29:04.586+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 5)
2019-09-04T06:29:04.586+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5034
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
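[Editor's note] The oplog entry applied by repl-writer-worker-7 is an _id-point update ("Using idhack" skips the planner entirely). On the primary it would have originated as something like the following; this is a re-expression for illustration only, since a secondary such as this node applies the op via the oplog, not via a client write:

```python
import pymongo
from bson import ObjectId
from datetime import datetime, timezone

client = pymongo.MongoClient("mongodb://cmodb804.togewa.com:27019/?replicaSet=configrs")
client.config.locks.update_one(
    {"_id": "config"},
    {"$set": {
        "state": 2,
        "ts": ObjectId("5d6f59b0ac9313827bca3d68"),
        "when": datetime.fromtimestamp(1567578544.578, tz=timezone.utc),
        "who": "ConfigServer:conn10279",
    }},
)
# matches the log: "UpdateResult -- ... numDocsModified: 1 numMatched: 1"
```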
2019-09-04T06:29:04.586+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:04.586+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:04.586+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5034
2019-09-04T06:29:04.586+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578544, 5)
2019-09-04T06:29:04.586+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.586+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5038
2019-09-04T06:29:04.586+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 4), t: 1 }, durableWallTime: new Date(1567578544564), appliedOpTime: { ts: Timestamp(1567578544, 5), t: 1 }, appliedWallTime: new Date(1567578544578), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 4), t: 1 }, lastCommittedWall: new Date(1567578544564), lastOpVisible: { ts: Timestamp(1567578544, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.587+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 334 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.586+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 4), t: 1 }, durableWallTime: new Date(1567578544564), appliedOpTime: { ts: Timestamp(1567578544, 5), t: 1 }, appliedWallTime: new Date(1567578544578), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 4), t: 1 }, lastCommittedWall: new Date(1567578544564), lastOpVisible: { ts: Timestamp(1567578544, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.587+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.586+0000
2019-09-04T06:29:04.586+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5038
2019-09-04T06:29:04.587+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578544, 5), t: 1 }({ ts: Timestamp(1567578544, 5), t: 1 })
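[Editor's note] The reporter pushes this node's durable/applied optimes upstream with replSetUpdatePosition (memberId: 1 is this node; members 0 and 2 still sit at Timestamp(1567578540, 3)). The aggregate per-member view those updates feed can be read back with replSetGetStatus; a small sketch:

```python
import pymongo

client = pymongo.MongoClient("mongodb://cmodb804.togewa.com:27019/?replicaSet=configrs")
for m in client.admin.command("replSetGetStatus")["members"]:
    print(m["name"], m["stateStr"], m.get("optimeDate"))
```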
2019-09-04T06:29:04.587+0000 D2 ASIO [RS] Request 334 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 4), t: 1 }, lastCommittedWall: new Date(1567578544564), lastOpVisible: { ts: Timestamp(1567578544, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 4), $clusterTime: { clusterTime: Timestamp(1567578544, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 5) }
2019-09-04T06:29:04.587+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 4), t: 1 }, lastCommittedWall: new Date(1567578544564), lastOpVisible: { ts: Timestamp(1567578544, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 4), $clusterTime: { clusterTime: Timestamp(1567578544, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 5) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.587+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.587+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.587+0000
2019-09-04T06:29:04.588+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578544, 5), t: 1 }
2019-09-04T06:29:04.588+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 335 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.588+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 4), t: 1 } }
2019-09-04T06:29:04.588+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.587+0000
2019-09-04T06:29:04.588+0000 D2 ASIO [RS] Request 335 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 5), t: 1 }, lastCommittedWall: new Date(1567578544578), lastOpVisible: { ts: Timestamp(1567578544, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 5), t: 1 }, lastCommittedWall: new Date(1567578544578), lastOpApplied: { ts: Timestamp(1567578544, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 5), $clusterTime: { clusterTime: Timestamp(1567578544, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 5) }
2019-09-04T06:29:04.588+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 5), t: 1 }, lastCommittedWall: new Date(1567578544578), lastOpVisible: { ts: Timestamp(1567578544, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 5), t: 1 }, lastCommittedWall: new Date(1567578544578), lastOpApplied: { ts: Timestamp(1567578544, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 5), $clusterTime: { clusterTime: Timestamp(1567578544, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 5) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.588+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.588+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:04.588+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578539, 5)
2019-09-04T06:29:04.588+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:15.221+0000
2019-09-04T06:29:04.588+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:15.003+0000
2019-09-04T06:29:04.588+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 336 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.588+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 5), t: 1 } }
2019-09-04T06:29:04.588+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.587+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000
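[Editor's note] Two mechanisms are visible just above. First, the stable optime follows the majority commit point into WiredTiger, with oldest_timestamp trailing it (here by roughly 5 seconds: 1567578544 -> 1567578539). Second, every batch from the sync source cancels and reschedules the election timer, which is why a healthy secondary never starts an election; the reschedule lands a bit over 10 s out because of the configured timeout plus jitter. The timeout itself is readable from the replica-set config; a sketch assuming a direct connection to this node:

```python
import pymongo

client = pymongo.MongoClient("mongodb://cmodb803.togewa.com:27019")  # direct connection
cfg = client.admin.command("replSetGetConfig")["config"]
print(cfg["settings"]["electionTimeoutMillis"])  # 10000 by default
```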
2019-09-04T06:29:04.588+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.588+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:29:04.588+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.588+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.589+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.589+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.589+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000
2019-09-04T06:29:04.589+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578544, 5), t: 1 }, 2019-09-04T06:29:04.578+0000
2019-09-04T06:29:04.589+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
2019-09-04T06:29:04.589+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:29:04.589+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.589+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 5), t: 1 }, durableWallTime: new Date(1567578544578), appliedOpTime: { ts: Timestamp(1567578544, 5), t: 1 }, appliedWallTime: new Date(1567578544578), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 5), t: 1 }, lastCommittedWall: new Date(1567578544578), lastOpVisible: { ts: Timestamp(1567578544, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
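[Editor's note] The conn105-conn162 threads above are client operations parked in waitUntilOpTime; each applied batch wakes them to re-check whether the node's snapshot has reached their target optime. A plausible (not log-confirmed) source of such waiters is a causally consistent session whose afterClusterTime is ahead of this node's last applied snapshot; a sketch:

```python
import pymongo

client = pymongo.MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")
with client.start_session(causal_consistency=True) as s:
    client.config.locks.find_one({"_id": "config"}, session=s)
    # A follow-up read in this session against a lagging member parks in
    # waitUntilOpTime until the member's snapshot catches up, as logged above.
    client.config.locks.find_one({"_id": "config.system.sessions"}, session=s)
```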
2019-09-04T06:29:04.589+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 337 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.589+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 5), t: 1 }, durableWallTime: new Date(1567578544578), appliedOpTime: { ts: Timestamp(1567578544, 5), t: 1 }, appliedWallTime: new Date(1567578544578), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 5), t: 1 }, lastCommittedWall: new Date(1567578544578), lastOpVisible: { ts: Timestamp(1567578544, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.589+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.587+0000
2019-09-04T06:29:04.589+0000 D2 ASIO [RS] Request 337 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 5), t: 1 }, lastCommittedWall: new Date(1567578544578), lastOpVisible: { ts: Timestamp(1567578544, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 5), $clusterTime: { clusterTime: Timestamp(1567578544, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 5) }
2019-09-04T06:29:04.589+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 5), t: 1 }, lastCommittedWall: new Date(1567578544578), lastOpVisible: { ts: Timestamp(1567578544, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 5), $clusterTime: { clusterTime: Timestamp(1567578544, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 5) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.589+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.589+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.587+0000
2019-09-04T06:29:04.593+0000 D2 ASIO [RS] Request 336 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578544, 6), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578544588), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59b0ac9313827bca3d70'), when: new Date(1567578544588), who: "ConfigServer:conn10279" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 5), t: 1 }, lastCommittedWall: new Date(1567578544578), lastOpVisible: { ts: Timestamp(1567578544, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 5), t: 1 }, lastCommittedWall: new Date(1567578544578), lastOpApplied: { ts: Timestamp(1567578544, 6), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 5), $clusterTime: { clusterTime: Timestamp(1567578544, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 6) }
2019-09-04T06:29:04.593+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578544, 6), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578544588), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59b0ac9313827bca3d70'), when: new Date(1567578544588), who: "ConfigServer:conn10279" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 5), t: 1 }, lastCommittedWall: new Date(1567578544578), lastOpVisible: { ts: Timestamp(1567578544, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 5), t: 1 }, lastCommittedWall: new Date(1567578544578), lastOpApplied: { ts: Timestamp(1567578544, 6), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 5), $clusterTime: { clusterTime: Timestamp(1567578544, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 6) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.593+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.593+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578544, 6) and ending at ts: Timestamp(1567578544, 6)
2019-09-04T06:29:04.593+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:15.003+0000
2019-09-04T06:29:04.593+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:15.897+0000
2019-09-04T06:29:04.593+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.593+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578544, 6), t: 1 }
2019-09-04T06:29:04.593+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:04.593+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:04.593+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:04.593+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 5)
2019-09-04T06:29:04.593+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5042
2019-09-04T06:29:04.593+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:04.593+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:04.593+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5042
2019-09-04T06:29:04.593+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:04.593+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:04.593+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:29:04.593+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 5)
2019-09-04T06:29:04.593+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5045
2019-09-04T06:29:04.593+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578544, 6) }
2019-09-04T06:29:04.593+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:04.593+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:04.593+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5045
2019-09-04T06:29:04.593+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5040
2019-09-04T06:29:04.593+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5040
2019-09-04T06:29:04.593+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5048
2019-09-04T06:29:04.593+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5048
2019-09-04T06:29:04.593+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:04.593+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 5050
2019-09-04T06:29:04.593+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578544, 6)
2019-09-04T06:29:04.593+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578544, 6)
2019-09-04T06:29:04.593+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 5050
2019-09-04T06:29:04.593+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:04.593+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:29:04.593+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5049
2019-09-04T06:29:04.593+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5049
2019-09-04T06:29:04.593+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5052
2019-09-04T06:29:04.593+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5052
2019-09-04T06:29:04.593+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578544, 6), t: 1 }({ ts: Timestamp(1567578544, 6), t: 1 })
2019-09-04T06:29:04.593+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 6)
2019-09-04T06:29:04.593+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5053
2019-09-04T06:29:04.593+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578544, 6) } } ] } sort: {} projection: {}
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578544, 6)
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1  || First: notFirst: full path: t
    ts $lt Timestamp(1567578544, 6)  || First: notFirst: full path: ts
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578544, 6)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1  || First: notFirst: full path: t
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578544, 6)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.593+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1  || First: notFirst: full path: t
        ts $lt Timestamp(1567578544, 6)  || First: notFirst: full path: ts
    t $lt 1  || First: notFirst: full path: t
2019-09-04T06:29:04.594+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.594+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578544, 6)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:04.594+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5053
2019-09-04T06:29:04.594+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:04.594+0000 D3 STORAGE [repl-writer-worker-13] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:04.594+0000 D3 REPL [repl-writer-worker-13] applying op: { ts: Timestamp(1567578544, 6), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578544588), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59b0ac9313827bca3d70'), when: new Date(1567578544588), who: "ConfigServer:conn10279" } } }, oplog application mode: Secondary
2019-09-04T06:29:04.594+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578544, 6)
2019-09-04T06:29:04.594+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 5055
2019-09-04T06:29:04.594+0000 D2 QUERY [repl-writer-worker-13] Using idhack: { _id: "config.system.sessions" }
2019-09-04T06:29:04.594+0000 D4 WRITE [repl-writer-worker-13] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:04.594+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 5055
2019-09-04T06:29:04.594+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:04.594+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578544, 6), t: 1 }({ ts: Timestamp(1567578544, 6), t: 1 })
2019-09-04T06:29:04.594+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 6)
2019-09-04T06:29:04.594+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5054
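[Editor's note] The "I am one of 16 thread(s)" lines come from the fixed pool of repl writer workers that applies each batch in parallel before the batcher advances appliedThrough/minvalid. A toy illustration of that barrier pattern, not MongoDB source:

```python
from concurrent.futures import ThreadPoolExecutor

POOL = ThreadPoolExecutor(max_workers=16)  # cf. "the minimum number of threads is 16"

def apply_op(op):
    """Stand-in for applying a single oplog entry."""
    ...

def apply_batch(batch):
    # Barrier semantics: every op in the batch is applied before the batch
    # is finalized and appliedThrough can be advanced, as logged above.
    list(POOL.map(apply_op, batch))
```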
2019-09-04T06:29:04.594+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:04.594+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.594+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:04.594+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.594+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:04.594+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:04.594+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5054
2019-09-04T06:29:04.594+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578544, 6)
2019-09-04T06:29:04.594+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.594+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5058
2019-09-04T06:29:04.594+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 5), t: 1 }, durableWallTime: new Date(1567578544578), appliedOpTime: { ts: Timestamp(1567578544, 6), t: 1 }, appliedWallTime: new Date(1567578544588), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 5), t: 1 }, lastCommittedWall: new Date(1567578544578), lastOpVisible: { ts: Timestamp(1567578544, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
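[Editor's note] The replicated updates in this stretch form a config-server distributed-lock cycle on config.locks: state 2 appears to mark the lock acquired (seen for "config" and "config.system.sessions"), and the state 0 update in the next batch below releases it. The lock documents are ordinary documents and can be inspected directly; a sketch:

```python
import pymongo

client = pymongo.MongoClient("mongodb://cmodb804.togewa.com:27019/?replicaSet=configrs")
for lock in client.config.locks.find({}, {"state": 1, "who": 1, "when": 1}):
    print(lock)
```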
2019-09-04T06:29:04.594+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 338 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.594+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 5), t: 1 }, durableWallTime: new Date(1567578544578), appliedOpTime: { ts: Timestamp(1567578544, 6), t: 1 }, appliedWallTime: new Date(1567578544588), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 5), t: 1 }, lastCommittedWall: new Date(1567578544578), lastOpVisible: { ts: Timestamp(1567578544, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.594+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.594+0000
2019-09-04T06:29:04.594+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5058
2019-09-04T06:29:04.594+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578544, 6), t: 1 }({ ts: Timestamp(1567578544, 6), t: 1 })
2019-09-04T06:29:04.594+0000 D2 ASIO [RS] Request 338 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 5), t: 1 }, lastCommittedWall: new Date(1567578544578), lastOpVisible: { ts: Timestamp(1567578544, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 5), $clusterTime: { clusterTime: Timestamp(1567578544, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 6) }
2019-09-04T06:29:04.594+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 5), t: 1 }, lastCommittedWall: new Date(1567578544578), lastOpVisible: { ts: Timestamp(1567578544, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 5), $clusterTime: { clusterTime: Timestamp(1567578544, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 6) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.594+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.594+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.594+0000
2019-09-04T06:29:04.595+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578544, 6), t: 1 }
2019-09-04T06:29:04.595+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 339 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.595+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 5), t: 1 } }
2019-09-04T06:29:04.595+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.594+0000
2019-09-04T06:29:04.595+0000 D2 ASIO [RS] Request 339 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpVisible: { ts: Timestamp(1567578544, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpApplied: { ts: Timestamp(1567578544, 6), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 6), $clusterTime: { clusterTime: Timestamp(1567578544, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 6) }
2019-09-04T06:29:04.595+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpVisible: { ts: Timestamp(1567578544, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpApplied: { ts: Timestamp(1567578544, 6), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 6), $clusterTime: { clusterTime: Timestamp(1567578544, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 6) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.595+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.595+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:04.595+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000
2019-09-04T06:29:04.595+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000
2019-09-04T06:29:04.595+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578539, 6)
2019-09-04T06:29:04.595+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:15.897+0000
2019-09-04T06:29:04.595+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:15.199+0000
2019-09-04T06:29:04.595+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.595+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 340 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.595+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 6), t: 1 } }
2019-09-04T06:29:04.595+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000
2019-09-04T06:29:04.595+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000
2019-09-04T06:29:04.595+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.594+0000
2019-09-04T06:29:04.595+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000
2019-09-04T06:29:04.595+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000
2019-09-04T06:29:04.595+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000
waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000 2019-09-04T06:29:04.595+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.595+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000 2019-09-04T06:29:04.595+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.595+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:04.595+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.595+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:29:04.595+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.595+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000 2019-09-04T06:29:04.595+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:04.595+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.595+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000 2019-09-04T06:29:04.595+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.595+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000 2019-09-04T06:29:04.595+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.595+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn116] Got notified of new snapshot: 
{ ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000 2019-09-04T06:29:04.596+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:29:04.596+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578544, 6), t: 1 }, 2019-09-04T06:29:04.588+0000 2019-09-04T06:29:04.596+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000 2019-09-04T06:29:04.596+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:04.596+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 6), t: 1 }, durableWallTime: new Date(1567578544588), appliedOpTime: { ts: Timestamp(1567578544, 6), t: 1 }, appliedWallTime: new Date(1567578544588), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), 
appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpVisible: { ts: Timestamp(1567578544, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.596+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 341 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.596+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 6), t: 1 }, durableWallTime: new Date(1567578544588), appliedOpTime: { ts: Timestamp(1567578544, 6), t: 1 }, appliedWallTime: new Date(1567578544588), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpVisible: { ts: Timestamp(1567578544, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.596+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.594+0000
2019-09-04T06:29:04.596+0000 D2 ASIO [RS] Request 341 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpVisible: { ts: Timestamp(1567578544, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 6), $clusterTime: { clusterTime: Timestamp(1567578544, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 6) }
2019-09-04T06:29:04.596+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpVisible: { ts: Timestamp(1567578544, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 6), $clusterTime: { clusterTime: Timestamp(1567578544, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 6) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.596+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.596+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.594+0000
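
The round trip above (RemoteCommand 341 and its response) is the secondary's progress report: after each applied batch the reporter thread sends replSetUpdatePosition to its sync source, cmodb804.togewa.com:27019, carrying every member's durableOpTime and appliedOpTime, and the reply echoes the commit point (lastOpCommitted) the upstream node was able to advance. The same per-member optimes can be read from any member with replSetGetStatus; a minimal PyMongo sketch, under the assumption that this node is reachable directly and without authentication (PyMongo 3.11+ for directConnection):

    from pymongo import MongoClient

    # Connect straight to one member; host and port are taken from this log.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        # optime / optimeDurable mirror the appliedOpTime / durableOpTime
        # fields in the replSetUpdatePosition payload above.
        print(m["name"], m["stateStr"], m.get("optime"), m.get("optimeDurable"))

2019-09-04T06:29:04.597+0000 D2 ASIO [RS] Request 340 finished with response: { cursor: { nextBatch: [ { ts: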
Timestamp(1567578544, 7), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578544595), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpVisible: { ts: Timestamp(1567578544, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpApplied: { ts: Timestamp(1567578544, 7), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 6), $clusterTime: { clusterTime: Timestamp(1567578544, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 7) } 2019-09-04T06:29:04.597+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578544, 7), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578544595), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpVisible: { ts: Timestamp(1567578544, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpApplied: { ts: Timestamp(1567578544, 7), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 6), $clusterTime: { clusterTime: Timestamp(1567578544, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 7) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:04.597+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:04.597+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578544, 7) and ending at ts: Timestamp(1567578544, 7) 2019-09-04T06:29:04.597+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:15.199+0000 2019-09-04T06:29:04.597+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:15.154+0000 2019-09-04T06:29:04.597+0000 D2 REPL [replication-1] oplog buffer has 0 bytes 2019-09-04T06:29:04.597+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578544, 7), t: 1 } 2019-09-04T06:29:04.597+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:04.597+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:04.597+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:04.597+0000 D3 STORAGE 
[ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:04.597+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 6) 2019-09-04T06:29:04.597+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5062 2019-09-04T06:29:04.597+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:04.597+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:04.597+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5062 2019-09-04T06:29:04.597+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:29:04.597+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:04.597+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578544, 7) } 2019-09-04T06:29:04.597+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:04.598+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 6) 2019-09-04T06:29:04.598+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5065 2019-09-04T06:29:04.598+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5060 2019-09-04T06:29:04.598+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:04.598+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:04.598+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5065 2019-09-04T06:29:04.598+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5060 2019-09-04T06:29:04.598+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5068 2019-09-04T06:29:04.598+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5068 2019-09-04T06:29:04.598+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:04.598+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 5070 2019-09-04T06:29:04.598+0000 D4 STORAGE [repl-writer-worker-14] inserting record with timestamp Timestamp(1567578544, 7) 2019-09-04T06:29:04.598+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578544, 7) 2019-09-04T06:29:04.598+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 5070 2019-09-04T06:29:04.598+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:04.598+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:04.598+0000 D3 STORAGE [rsSync-0] WT begin_transaction for 
snapshot id 5069 2019-09-04T06:29:04.598+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5069 2019-09-04T06:29:04.598+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5072 2019-09-04T06:29:04.598+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5072 2019-09-04T06:29:04.598+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578544, 7), t: 1 }({ ts: Timestamp(1567578544, 7), t: 1 }) 2019-09-04T06:29:04.598+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 7) 2019-09-04T06:29:04.598+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5073 2019-09-04T06:29:04.598+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578544, 7) } } ] } sort: {} projection: {} 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578544, 7) Sort: {} Proj: {} ============================= 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578544, 7) || First: notFirst: full path: ts 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578544, 7) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578544, 7) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578544, 7) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578544, 7) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:04.598+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5073 2019-09-04T06:29:04.598+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:04.598+0000 D3 STORAGE [repl-writer-worker-3] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:04.598+0000 D3 REPL [repl-writer-worker-3] applying op: { ts: Timestamp(1567578544, 7), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578544595), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary 2019-09-04T06:29:04.598+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578544, 7) 2019-09-04T06:29:04.598+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 5075 2019-09-04T06:29:04.598+0000 D2 QUERY [repl-writer-worker-3] Using idhack: { _id: "config.system.sessions" } 2019-09-04T06:29:04.598+0000 D4 WRITE [repl-writer-worker-3] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:29:04.598+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 5075 2019-09-04T06:29:04.598+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:04.598+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578544, 7), t: 1 }({ ts: Timestamp(1567578544, 7), t: 1 }) 2019-09-04T06:29:04.598+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 7) 2019-09-04T06:29:04.598+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5074 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
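
The planner activity above (it recurs for every minvalid update in this log) comes from the { $or: [ ... ] } filter on local.replset.minvalid: the stored optime is compared against the batch's last optime, each $or branch is planned independently by the subplanner, and since the collection carries only the _id index every branch falls back to a COLLSCAN, which is harmless for a single-document collection; the trace resumes just below with the collscan that the trivial read-back plan settles on. The $or is the standard lexicographic "optime less-than" predicate; a short sketch of its shape (the helper name is illustrative, not from the server source):

    from bson import Timestamp

    def optime_lt(term, ts):
        # "optime < (term, ts)" as a rooted $or, exactly the shape the
        # subplanner receives above: a strictly older term, or the same
        # term with an earlier timestamp.
        return {"$or": [
            {"t": {"$lt": term}},
            {"t": term, "ts": {"$lt": ts}},
        ]}

    print(optime_lt(1, Timestamp(1567578544, 7)))
    # {'$or': [{'t': {'$lt': 1}}, {'t': 1, 'ts': {'$lt': Timestamp(1567578544, 7)}}]}
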
2019-09-04T06:29:04.598+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:04.598+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:04.598+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5074 2019-09-04T06:29:04.598+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578544, 7) 2019-09-04T06:29:04.598+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5078 2019-09-04T06:29:04.598+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5078 2019-09-04T06:29:04.598+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578544, 7), t: 1 }({ ts: Timestamp(1567578544, 7), t: 1 }) 2019-09-04T06:29:04.598+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:04.598+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 6), t: 1 }, durableWallTime: new Date(1567578544588), appliedOpTime: { ts: Timestamp(1567578544, 7), t: 1 }, appliedWallTime: new Date(1567578544595), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpVisible: { ts: Timestamp(1567578544, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:04.598+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 342 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.598+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 6), t: 1 }, durableWallTime: new Date(1567578544588), appliedOpTime: { ts: Timestamp(1567578544, 7), t: 1 }, appliedWallTime: new Date(1567578544595), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpVisible: { ts: Timestamp(1567578544, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:04.598+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.598+0000 2019-09-04T06:29:04.599+0000 D2 ASIO [RS] Request 342 finished with response: { ok: 
1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpVisible: { ts: Timestamp(1567578544, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 6), $clusterTime: { clusterTime: Timestamp(1567578544, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 7) } 2019-09-04T06:29:04.599+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpVisible: { ts: Timestamp(1567578544, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 6), $clusterTime: { clusterTime: Timestamp(1567578544, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 7) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:04.599+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:04.599+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.599+0000 2019-09-04T06:29:04.599+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:29:04.599+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:04.599+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 7), t: 1 }, durableWallTime: new Date(1567578544595), appliedOpTime: { ts: Timestamp(1567578544, 7), t: 1 }, appliedWallTime: new Date(1567578544595), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpVisible: { ts: Timestamp(1567578544, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:04.599+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 343 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.599+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 7), t: 1 }, durableWallTime: new Date(1567578544595), appliedOpTime: { ts: Timestamp(1567578544, 7), t: 1 }, 
appliedWallTime: new Date(1567578544595), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 6), t: 1 }, lastCommittedWall: new Date(1567578544588), lastOpVisible: { ts: Timestamp(1567578544, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:04.599+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578544, 7), t: 1 } 2019-09-04T06:29:04.599+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.599+0000 2019-09-04T06:29:04.599+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 344 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.599+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 6), t: 1 } } 2019-09-04T06:29:04.599+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.599+0000 2019-09-04T06:29:04.600+0000 D2 ASIO [RS] Request 343 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpVisible: { ts: Timestamp(1567578544, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 7), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 7) } 2019-09-04T06:29:04.600+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpVisible: { ts: Timestamp(1567578544, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 7), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 7) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:04.600+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:04.600+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.599+0000 2019-09-04T06:29:04.600+0000 D2 ASIO [RS] Request 344 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpVisible: { ts: Timestamp(1567578544, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { 
lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpApplied: { ts: Timestamp(1567578544, 7), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 7), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 7) }
2019-09-04T06:29:04.600+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpVisible: { ts: Timestamp(1567578544, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpApplied: { ts: Timestamp(1567578544, 7), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 7), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 7) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.600+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.600+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:04.600+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000
2019-09-04T06:29:04.600+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000
2019-09-04T06:29:04.600+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578539, 7)
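
Three pieces of bookkeeping land together above once the fetcher reports an empty batch: the commit point (_lastCommittedOpTimeAndWallTime) advances to { ts: Timestamp(1567578544, 7), t: 1 }, the stable optime follows it, and the storage engine's oldest_timestamp is dragged forward to Timestamp(1567578539, 7). Note that oldest trails stable by exactly five seconds here (and again at the next advance further down), which is the window of point-in-time snapshot history this node keeps readable. A self-contained check using only values from the log:

    from bson import Timestamp

    stable = Timestamp(1567578544, 7)  # "Setting replication's stable optime"
    oldest = Timestamp(1567578539, 7)  # "oldest_timestamp set to"
    # The seconds component of a BSON Timestamp is .time; the retained
    # snapshot-history window on this node works out to five seconds.
    assert stable.time - oldest.time == 5
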
2019-09-04T06:29:04.600+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000
2019-09-04T06:29:04.600+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:15.154+0000
2019-09-04T06:29:04.600+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:14.786+0000
2019-09-04T06:29:04.600+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.600+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 345 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.600+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 7), t: 1 } }
2019-09-04T06:29:04.600+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000
2019-09-04T06:29:04.600+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:04.600+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000
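
RemoteCommand 345 above is the oplog fetcher's steady state: the moment a batch is drained it re-issues getMore on the same tailable cursor (id 2779728788818727477) over local.oplog.rs, with maxTimeMS: 5000 so the sync source can block until new writes arrive, and with lastKnownCommittedOpTime piggybacked so the commit point keeps propagating even when no data flows. The same await pattern is available from a driver; a minimal PyMongo sketch, again assuming direct, unauthenticated access to the sync source named in the log:

    from pymongo import CursorType, MongoClient
    from bson import Timestamp

    client = MongoClient("mongodb://cmodb804.togewa.com:27019/",
                         directConnection=True)
    last_seen = Timestamp(1567578544, 7)
    cursor = client.local["oplog.rs"].find(
        {"ts": {"$gt": last_seen}},
        cursor_type=CursorType.TAILABLE_AWAIT,  # server-side await, like the fetcher
        max_await_time_ms=5000,                 # mirrors maxTimeMS: 5000 on the getMore
    )
    for entry in cursor:                        # blocks up to 5s per empty getMore
        print(entry["ts"], entry["op"], entry["ns"])
        last_seen = entry["ts"]

2019-09-04T06:29:04.600+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new 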
snapshot until 2019-09-04T06:29:11.706+0000 2019-09-04T06:29:04.600+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.599+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578544, 7), t: 1 }, 2019-09-04T06:29:04.595+0000 2019-09-04T06:29:04.600+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:04.601+0000 D2 ASIO [RS] Request 345 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578544, 8), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578544599), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpVisible: { ts: Timestamp(1567578544, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, 
syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpApplied: { ts: Timestamp(1567578544, 8), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 7), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 8) } 2019-09-04T06:29:04.601+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578544, 8), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578544599), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpVisible: { ts: Timestamp(1567578544, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpApplied: { ts: Timestamp(1567578544, 8), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 7), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 8) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:04.601+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:04.601+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578544, 8) and ending at ts: Timestamp(1567578544, 8) 2019-09-04T06:29:04.601+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:14.786+0000 2019-09-04T06:29:04.601+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:15.967+0000 2019-09-04T06:29:04.601+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:04.601+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578544, 8), t: 1 } 2019-09-04T06:29:04.601+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:04.601+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:04.601+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 7) 2019-09-04T06:29:04.601+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5082 2019-09-04T06:29:04.601+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:04.601+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 
1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:04.601+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5082 2019-09-04T06:29:04.601+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:04.601+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:29:04.601+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:04.601+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 7) 2019-09-04T06:29:04.601+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578544, 8) } 2019-09-04T06:29:04.601+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:04.601+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5085 2019-09-04T06:29:04.601+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:04.601+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:04.601+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5080 2019-09-04T06:29:04.601+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5085 2019-09-04T06:29:04.601+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5080 2019-09-04T06:29:04.601+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5088 2019-09-04T06:29:04.601+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5088 2019-09-04T06:29:04.601+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:04.601+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 5090 2019-09-04T06:29:04.601+0000 D4 STORAGE [repl-writer-worker-12] inserting record with timestamp Timestamp(1567578544, 8) 2019-09-04T06:29:04.601+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578544, 8) 2019-09-04T06:29:04.601+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 5090 2019-09-04T06:29:04.601+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:04.601+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:04.601+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5089 2019-09-04T06:29:04.601+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5089 2019-09-04T06:29:04.601+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5092 2019-09-04T06:29:04.601+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5092 2019-09-04T06:29:04.601+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578544, 8), t: 1 }({ ts: Timestamp(1567578544, 8), t: 1 }) 2019-09-04T06:29:04.601+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 8) 2019-09-04T06:29:04.601+0000 D3 STORAGE [rsSync-0] WT begin_transaction for 
snapshot id 5093 2019-09-04T06:29:04.601+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578544, 8) } } ] } sort: {} projection: {} 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578544, 8) Sort: {} Proj: {} ============================= 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578544, 8) || First: notFirst: full path: ts 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578544, 8) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578544, 8) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578544, 8) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:04.601+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578544, 8) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:04.601+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5093 2019-09-04T06:29:04.601+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:04.601+0000 D3 STORAGE [repl-writer-worker-10] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:04.601+0000 D3 REPL [repl-writer-worker-10] applying op: { ts: Timestamp(1567578544, 8), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578544599), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary 2019-09-04T06:29:04.601+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578544, 8) 2019-09-04T06:29:04.601+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 5095 2019-09-04T06:29:04.601+0000 D2 QUERY [repl-writer-worker-10] Using idhack: { _id: "config" } 2019-09-04T06:29:04.602+0000 D4 WRITE [repl-writer-worker-10] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:29:04.602+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 5095 2019-09-04T06:29:04.602+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:04.602+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578544, 8), t: 1 }({ ts: Timestamp(1567578544, 8), t: 1 }) 2019-09-04T06:29:04.602+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 8) 2019-09-04T06:29:04.602+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5094 2019-09-04T06:29:04.602+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:04.602+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:04.602+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:29:04.602+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
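
The cycle that just finished for the entry at ts Timestamp(1567578544, 8) shows the durability bracket placed around every batch: the oplog truncate-after point is set to the batch's last timestamp before the entries are written locally, reset to Timestamp(0, 0) once they are safely in the local oplog, minvalid is raised to the batch boundary, the update itself is serviced through the _id fast path ("Using idhack"), and appliedThrough is stamped last. Because the op is an _id-addressed $set, replaying it after an unclean shutdown converges to the same document. A self-contained sketch of that idempotence (plain Python with illustrative names, and a pre-image with state: 2 assumed for the example; not server code):

    def apply_update_op(doc, op):
        # Replay an "op: u" oplog entry onto its target document. The entry
        # is addressed by o2._id (the "idhack" path above) and its mutation
        # is a $set, so applying it any number of times gives one result.
        assert op["op"] == "u" and doc["_id"] == op["o2"]["_id"]
        doc.update(op["o"]["$set"])
        return doc

    lock = {"_id": "config", "state": 2}
    op = {"op": "u", "ns": "config.locks",
          "o2": {"_id": "config"},
          "o": {"$v": 1, "$set": {"state": 0}}}
    once = apply_update_op(dict(lock), op)
    twice = apply_update_op(dict(once), op)
    assert once == twice == {"_id": "config", "state": 0}
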
2019-09-04T06:29:04.602+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.602+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:04.602+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5094
2019-09-04T06:29:04.602+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578544, 8)
2019-09-04T06:29:04.602+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5098
2019-09-04T06:29:04.602+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5098
2019-09-04T06:29:04.602+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578544, 8), t: 1 }({ ts: Timestamp(1567578544, 8), t: 1 })
2019-09-04T06:29:04.602+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.602+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 7), t: 1 }, durableWallTime: new Date(1567578544595), appliedOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, appliedWallTime: new Date(1567578544599), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpVisible: { ts: Timestamp(1567578544, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.602+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 346 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.602+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 7), t: 1 }, durableWallTime: new Date(1567578544595), appliedOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, appliedWallTime: new Date(1567578544599), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpVisible: { ts: Timestamp(1567578544, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.602+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.602+0000
2019-09-04T06:29:04.602+0000 D2 ASIO [RS] Request 346 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpVisible: { ts: Timestamp(1567578544, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 7), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 8) }
2019-09-04T06:29:04.602+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpVisible: { ts: Timestamp(1567578544, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 7), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 8) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.602+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.602+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.602+0000
2019-09-04T06:29:04.602+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:29:04.602+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.602+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), appliedOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, appliedWallTime: new Date(1567578544599), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpVisible: { ts: Timestamp(1567578544, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.602+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 347 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.602+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), appliedOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, appliedWallTime: new Date(1567578544599), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, durableWallTime: new Date(1567578540065), appliedOpTime: { ts: Timestamp(1567578540, 3), t: 1 }, appliedWallTime: new Date(1567578540065), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpVisible: { ts: Timestamp(1567578544, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.602+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.602+0000
2019-09-04T06:29:04.603+0000 D2 ASIO [RS] Request 347 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpVisible: { ts: Timestamp(1567578544, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 7), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 8) }
2019-09-04T06:29:04.603+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 7), t: 1 }, lastCommittedWall: new Date(1567578544595), lastOpVisible: { ts: Timestamp(1567578544, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 7), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 8) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.603+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.603+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.603+0000
2019-09-04T06:29:04.603+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578544, 8), t: 1 }
2019-09-04T06:29:04.603+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 348 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.603+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 7), t: 1 } }
2019-09-04T06:29:04.603+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.603+0000
2019-09-04T06:29:04.603+0000 D2 ASIO [RS] Request 348 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 8), t: 1 }, lastCommittedWall: new Date(1567578544599), lastOpVisible: { ts: Timestamp(1567578544, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 8), t: 1 }, lastCommittedWall: new Date(1567578544599), lastOpApplied: { ts: Timestamp(1567578544, 8), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 8), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 8) }
2019-09-04T06:29:04.603+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 8), t: 1 }, lastCommittedWall: new Date(1567578544599), lastOpVisible: { ts: Timestamp(1567578544, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 8), t: 1 }, lastCommittedWall: new Date(1567578544599), lastOpApplied: { ts: Timestamp(1567578544, 8), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 8), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 8) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.603+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.603+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:04.603+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.603+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.603+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578539, 8)
2019-09-04T06:29:04.603+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000
2019-09-04T06:29:04.603+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578544, 8), t: 1 }, 2019-09-04T06:29:04.599+0000
2019-09-04T06:29:04.604+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000
2019-09-04T06:29:04.604+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:15.967+0000
2019-09-04T06:29:04.604+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:15.001+0000
2019-09-04T06:29:04.604+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 349 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.604+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 8), t: 1 } }
2019-09-04T06:29:04.604+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.603+0000
2019-09-04T06:29:04.604+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.604+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:04.605+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578544, 8)
2019-09-04T06:29:04.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.677+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:04.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.777+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:04.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 350) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:04.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 350 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:14.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:04.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:04.837+0000 D2 ASIO [Replication] Request 350 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), opTime: { ts: Timestamp(1567578544, 8), t: 1 }, wallTime: new Date(1567578544599), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 8), t: 1 }, lastCommittedWall: new Date(1567578544599), lastOpVisible: { ts: Timestamp(1567578544, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578544, 8), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 8) }
2019-09-04T06:29:04.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), opTime: { ts: Timestamp(1567578544, 8), t: 1 }, wallTime: new Date(1567578544599), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 8), t: 1 }, lastCommittedWall: new Date(1567578544599), lastOpVisible: { ts: Timestamp(1567578544, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578544, 8), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 8) } target: cmodb802.togewa.com:27019
2019-09-04T06:29:04.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 350) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), opTime: { ts: Timestamp(1567578544, 8), t: 1 }, wallTime: new Date(1567578544599), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 8), t: 1 }, lastCommittedWall: new Date(1567578544599), lastOpVisible: { ts: Timestamp(1567578544, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578544, 8), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 8) }
2019-09-04T06:29:04.837+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary
2019-09-04T06:29:04.837+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:29:15.001+0000
2019-09-04T06:29:04.837+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:29:16.215+0000
2019-09-04T06:29:04.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:29:04.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:04.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:06.837Z
2019-09-04T06:29:04.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:04.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 351) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:04.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 351 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:14.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:04.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:04.838+0000 D2 ASIO [Replication] Request 351 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), opTime: { ts: Timestamp(1567578544, 8), t: 1 }, wallTime: new Date(1567578544599), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 8), t: 1 }, lastCommittedWall: new Date(1567578544599), lastOpVisible: { ts: Timestamp(1567578544, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 8), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 8) }
2019-09-04T06:29:04.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), opTime: { ts: Timestamp(1567578544, 8), t: 1 }, wallTime: new Date(1567578544599), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 8), t: 1 }, lastCommittedWall: new Date(1567578544599), lastOpVisible: { ts: Timestamp(1567578544, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 8), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 8) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 351) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), opTime: { ts: Timestamp(1567578544, 8), t: 1 }, wallTime: new Date(1567578544599), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 8), t: 1 }, lastCommittedWall: new Date(1567578544599), lastOpVisible: { ts: Timestamp(1567578544, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 8), $clusterTime: { clusterTime: Timestamp(1567578544, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 8) }
2019-09-04T06:29:04.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:29:04.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:06.838Z
2019-09-04T06:29:04.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:04.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.877+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:04.923+0000 D2 ASIO [RS] Request 349 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578544, 9), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578544922), o: { $v: 1, $set: { ping: new Date(1567578544919), up: 2445 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 8), t: 1 }, lastCommittedWall: new Date(1567578544599), lastOpVisible: { ts: Timestamp(1567578544, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 8), t: 1 }, lastCommittedWall: new Date(1567578544599), lastOpApplied: { ts: Timestamp(1567578544, 9), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 8), $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 9) }
2019-09-04T06:29:04.923+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578544, 9), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578544922), o: { $v: 1, $set: { ping: new Date(1567578544919), up: 2445 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 8), t: 1 }, lastCommittedWall: new Date(1567578544599), lastOpVisible: { ts: Timestamp(1567578544, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 8), t: 1 }, lastCommittedWall: new Date(1567578544599), lastOpApplied: { ts: Timestamp(1567578544, 9), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 8), $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 9) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.924+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.924+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578544, 9) and ending at ts: Timestamp(1567578544, 9)
2019-09-04T06:29:04.924+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:16.215+0000
2019-09-04T06:29:04.924+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:16.395+0000
2019-09-04T06:29:04.924+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.924+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:04.924+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:04.924+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:04.924+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 8)
2019-09-04T06:29:04.924+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5108
2019-09-04T06:29:04.924+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:04.924+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:04.924+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5108
2019-09-04T06:29:04.924+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:04.924+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:29:04.924+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:04.924+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578544, 9) }
2019-09-04T06:29:04.924+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 8)
2019-09-04T06:29:04.924+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5111
2019-09-04T06:29:04.924+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:04.924+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:04.924+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5100
2019-09-04T06:29:04.924+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5111
2019-09-04T06:29:04.924+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578544, 9), t: 1 }
2019-09-04T06:29:04.924+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5100
2019-09-04T06:29:04.924+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5114
2019-09-04T06:29:04.924+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5114
2019-09-04T06:29:04.924+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:04.924+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 5116
2019-09-04T06:29:04.924+0000 D4 STORAGE [repl-writer-worker-8] inserting record with timestamp Timestamp(1567578544, 9)
2019-09-04T06:29:04.924+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578544, 9)
2019-09-04T06:29:04.924+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 5116
2019-09-04T06:29:04.924+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:04.924+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:29:04.924+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5115
2019-09-04T06:29:04.924+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5115
2019-09-04T06:29:04.924+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5118
2019-09-04T06:29:04.924+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5118
2019-09-04T06:29:04.924+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578544, 9), t: 1 }({ ts: Timestamp(1567578544, 9), t: 1 })
2019-09-04T06:29:04.924+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 9)
2019-09-04T06:29:04.924+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5119
2019-09-04T06:29:04.924+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578544, 9) } } ] } sort: {} projection: {}
2019-09-04T06:29:04.924+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.924+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:04.924+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578544, 9) Sort: {} Proj: {} =============================
2019-09-04T06:29:04.924+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.924+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.924+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:04.924+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578544, 9) || First: notFirst: full path: ts
2019-09-04T06:29:04.924+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.924+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578544, 9) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.924+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:04.924+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:04.924+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:04.924+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.924+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.925+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:04.925+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.925+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.925+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:04.925+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578544, 9) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:04.925+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.925+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:04.925+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:04.925+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578544, 9) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:04.925+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.925+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578544, 9) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.925+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5119
2019-09-04T06:29:04.925+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:04.925+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:04.925+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578544, 9), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578544922), o: { $v: 1, $set: { ping: new Date(1567578544919), up: 2445 } } }, oplog application mode: Secondary
2019-09-04T06:29:04.925+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578544, 9)
2019-09-04T06:29:04.925+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 5121
2019-09-04T06:29:04.925+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:29:04.925+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:04.925+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 5121
2019-09-04T06:29:04.925+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:04.925+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578544, 9), t: 1 }({ ts: Timestamp(1567578544, 9), t: 1 })
2019-09-04T06:29:04.925+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578544, 9)
2019-09-04T06:29:04.925+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5120
2019-09-04T06:29:04.925+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:29:04.925+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:04.925+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:04.925+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:04.925+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:04.925+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:04.925+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5120
2019-09-04T06:29:04.925+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578544, 9)
2019-09-04T06:29:04.925+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.925+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), appliedOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, appliedWallTime: new Date(1567578544599), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), appliedOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, appliedWallTime: new Date(1567578544599), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 8), t: 1 }, lastCommittedWall: new Date(1567578544599), lastOpVisible: { ts: Timestamp(1567578544, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.925+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 352 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.925+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), appliedOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, appliedWallTime: new Date(1567578544599), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), appliedOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, appliedWallTime: new Date(1567578544599), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 8), t: 1 }, lastCommittedWall: new Date(1567578544599), lastOpVisible: { ts: Timestamp(1567578544, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.925+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.925+0000
2019-09-04T06:29:04.925+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5124
2019-09-04T06:29:04.925+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5124
2019-09-04T06:29:04.925+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578544, 9), t: 1 }({ ts: Timestamp(1567578544, 9), t: 1 })
2019-09-04T06:29:04.925+0000 D2 ASIO [RS] Request 352 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 9), $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 9) }
2019-09-04T06:29:04.925+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 9), $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 9) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.926+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.926+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.926+0000
2019-09-04T06:29:04.926+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578544, 9), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59b002d1a496712d71ba'), operName: "", parentOperId: "5d6f59b002d1a496712d71b7" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578544, 9), t: 1 } }, $db: "config" }
2019-09-04T06:29:04.926+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f59b002d1a496712d71b7|5d6f59b002d1a496712d71ba
2019-09-04T06:29:04.926+0000 D1 REPL [conn21] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1567578544, 9), t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578544, 8), t: 1 }
2019-09-04T06:29:04.926+0000 D3 REPL [conn21] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:34.936+0000
2019-09-04T06:29:04.926+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578544, 9), t: 1 }
2019-09-04T06:29:04.926+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 353 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.926+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 8), t: 1 } }
2019-09-04T06:29:04.926+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.926+0000
2019-09-04T06:29:04.926+0000 D2 ASIO [RS] Request 353 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpApplied: { ts: Timestamp(1567578544, 9), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 9), $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 9) }
2019-09-04T06:29:04.926+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpApplied: { ts: Timestamp(1567578544, 9), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 9), $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 9) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.926+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.926+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:04.926+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.926+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578539, 9)
2019-09-04T06:29:04.927+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:16.395+0000
2019-09-04T06:29:04.927+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:15.435+0000
2019-09-04T06:29:04.927+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 354 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:14.927+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 9), t: 1 } }
2019-09-04T06:29:04.927+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000
2019-09-04T06:29:04.927+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.926+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:29:04.927+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:29:04.927+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:29:04.927+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.927+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:04.927+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:04.927+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn21] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578544, 9), t: 1 } } }
2019-09-04T06:29:04.927+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000
2019-09-04T06:29:04.927+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:29:04.927+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000
2019-09-04T06:29:04.927+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578544, 9), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59b002d1a496712d71ba'), operName: "", parentOperId: "5d6f59b002d1a496712d71b7" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578544, 9), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578544, 9)
2019-09-04T06:29:04.927+0000 D2 QUERY [conn21] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1
2019-09-04T06:29:04.927+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578544, 9), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59b002d1a496712d71ba'), operName: "", parentOperId: "5d6f59b002d1a496712d71b7" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578544, 9), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 1ms
2019-09-04T06:29:04.927+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578544, 9), t: 1 }, 2019-09-04T06:29:04.922+0000
2019-09-04T06:29:04.927+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000
2019-09-04T06:29:04.927+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), appliedOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, appliedWallTime: new Date(1567578544599), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), appliedOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, appliedWallTime: new Date(1567578544599), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.927+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 355 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.927+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), appliedOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, appliedWallTime: new Date(1567578544599), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, durableWallTime: new Date(1567578544599), appliedOpTime: { ts: Timestamp(1567578544, 8), t: 1 }, appliedWallTime: new Date(1567578544599), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:04.927+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.926+0000
2019-09-04T06:29:04.928+0000 D2 ASIO [RS] Request 355 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 9), $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 9) }
2019-09-04T06:29:04.928+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 9), $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 9) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:04.928+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:04.928+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:34.926+0000
2019-09-04T06:29:04.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.977+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:04.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:04.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:04.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:05.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:05.024+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578544, 9)
2019-09-04T06:29:05.032+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:05.032+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:05.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:05.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:29:05.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:05.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:05.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), opTime: { ts: Timestamp(1567578544, 9), t: 1 }, wallTime: new Date(1567578544922) }
2019-09-04T06:29:05.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:05.077+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:05.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:05.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:05.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:05.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:05.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:05.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster {
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.171+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.171+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.172+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51722 #165 (84 connections now open) 2019-09-04T06:29:05.172+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:05.172+0000 D2 COMMAND [conn165] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:05.172+0000 I NETWORK [conn165] received client metadata from 10.108.2.74:51722 conn165: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:05.172+0000 I COMMAND [conn165] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:05.172+0000 D2 COMMAND [conn165] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, C98F7F1FBA3C8F1EFEDAE23E1990CC12FB8D9F3E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:29:05.172+0000 D1 REPL [conn165] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578544, 9), t: 1 } 2019-09-04T06:29:05.172+0000 D3 REPL [conn165] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:35.182+0000 2019-09-04T06:29:05.177+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:05.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:05.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:05.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.278+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:05.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.378+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:05.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.478+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:05.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.532+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.532+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.578+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:05.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.670+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.670+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.678+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:05.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster 
{ isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.778+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:05.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.879+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:05.924+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:05.924+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:05.924+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 9) 2019-09-04T06:29:05.924+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5155 2019-09-04T06:29:05.924+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:05.924+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:05.924+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5155 2019-09-04T06:29:05.925+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5158 2019-09-04T06:29:05.925+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5158 2019-09-04T06:29:05.925+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578544, 9), t: 1 }({ ts: Timestamp(1567578544, 9), t: 1 }) 2019-09-04T06:29:05.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.979+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:05.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:05.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:05.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:06.032+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.032+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.074+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, 
$db: "admin" } 2019-09-04T06:29:06.074+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.079+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:06.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.179+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:06.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:06.233+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:06.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:06.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:06.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), opTime: { ts: Timestamp(1567578544, 9), t: 1 }, wallTime: new Date(1567578544922) } 2019-09-04T06:29:06.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:06.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:06.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.279+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:06.286+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.286+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.360+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.379+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:06.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.479+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:06.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.532+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.532+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.580+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:06.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:29:06.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.680+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:06.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.780+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:06.786+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.786+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:06.837+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 356) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:06.837+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 356 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:16.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:06.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:06.837+0000 D2 ASIO [Replication] Request 356 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), opTime: { ts: Timestamp(1567578544, 9), t: 1 }, wallTime: new Date(1567578544922), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578544, 9), $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 9) } 2019-09-04T06:29:06.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), opTime: { ts: Timestamp(1567578544, 9), t: 1 }, wallTime: new Date(1567578544922), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { 
ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578544, 9), $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 9) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:06.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:06.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 356) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), opTime: { ts: Timestamp(1567578544, 9), t: 1 }, wallTime: new Date(1567578544922), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578544, 9), $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 9) } 2019-09-04T06:29:06.837+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:06.837+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:29:15.435+0000 2019-09-04T06:29:06.837+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:29:17.546+0000 2019-09-04T06:29:06.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:06.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:06.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:06.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:08.837Z 2019-09-04T06:29:06.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:06.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:06.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 357) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:06.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 357 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:16.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:06.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:06.838+0000 D2 ASIO [Replication] Request 357 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", 
term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), opTime: { ts: Timestamp(1567578544, 9), t: 1 }, wallTime: new Date(1567578544922), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 9), $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 9) } 2019-09-04T06:29:06.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), opTime: { ts: Timestamp(1567578544, 9), t: 1 }, wallTime: new Date(1567578544922), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 9), $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 9) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:06.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:06.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 357) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), opTime: { ts: Timestamp(1567578544, 9), t: 1 }, wallTime: new Date(1567578544922), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 9), $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578544, 9) } 2019-09-04T06:29:06.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:06.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:08.838Z 2019-09-04T06:29:06.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:06.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { 
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.880+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:06.924+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:06.925+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:06.925+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 9) 2019-09-04T06:29:06.925+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5189 2019-09-04T06:29:06.925+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:06.925+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:06.925+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5189 2019-09-04T06:29:06.926+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5192 2019-09-04T06:29:06.926+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5192 2019-09-04T06:29:06.926+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578544, 9), t: 1 }({ ts: Timestamp(1567578544, 9), t: 1 }) 2019-09-04T06:29:06.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.980+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:06.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:06.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:06.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:07.032+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.032+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:07.061+0000 D2 COMMAND [conn34] command: 
replSetHeartbeat 2019-09-04T06:29:07.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:07.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:07.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), opTime: { ts: Timestamp(1567578544, 9), t: 1 }, wallTime: new Date(1567578544922) } 2019-09-04T06:29:07.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.080+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:07.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.181+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:07.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:07.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:07.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.281+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:07.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.381+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:07.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.481+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:07.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.532+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.532+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.581+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:07.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:29:07.681+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:07.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.782+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:07.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.882+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:07.925+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:07.925+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:07.925+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 9) 2019-09-04T06:29:07.925+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5223 2019-09-04T06:29:07.925+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:07.925+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:07.925+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5223 2019-09-04T06:29:07.926+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5226 2019-09-04T06:29:07.926+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5226 2019-09-04T06:29:07.926+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578544, 9), t: 1 }({ ts: Timestamp(1567578544, 9), t: 1 }) 2019-09-04T06:29:07.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.982+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:07.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.990+0000 I COMMAND [conn14] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:07.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:07.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:08.032+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.032+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.082+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:08.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.182+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:08.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:08.233+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:08.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:08.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:08.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), opTime: { 
ts: Timestamp(1567578544, 9), t: 1 }, wallTime: new Date(1567578544922) } 2019-09-04T06:29:08.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578544, 9), signature: { hash: BinData(0, A631763849ED0F125A6A031BD1895D0D9878E645), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:08.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:08.263+0000 D2 ASIO [RS] Request 354 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578548, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578548243), o: { $v: 1, $set: { ping: new Date(1567578548243) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpApplied: { ts: Timestamp(1567578548, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 9), $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 1) } 2019-09-04T06:29:08.263+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578548, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578548243), o: { $v: 1, $set: { ping: new Date(1567578548243) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpApplied: { ts: Timestamp(1567578548, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 9), $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:08.263+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:08.263+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578548, 1) and ending at ts: 
Timestamp(1567578548, 1) 2019-09-04T06:29:08.263+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:17.546+0000 2019-09-04T06:29:08.263+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:18.523+0000 2019-09-04T06:29:08.263+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:08.263+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:08.263+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:08.263+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:08.263+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 9) 2019-09-04T06:29:08.263+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5240 2019-09-04T06:29:08.263+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:08.263+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:08.263+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5240 2019-09-04T06:29:08.263+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:08.263+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:29:08.263+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:08.263+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578544, 9) 2019-09-04T06:29:08.263+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5243 2019-09-04T06:29:08.263+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578548, 1) } 2019-09-04T06:29:08.263+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:08.263+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:08.263+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5243 2019-09-04T06:29:08.263+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578548, 1), t: 1 } 2019-09-04T06:29:08.263+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5227 2019-09-04T06:29:08.264+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5227 2019-09-04T06:29:08.264+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5246 2019-09-04T06:29:08.264+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5246 2019-09-04T06:29:08.264+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool 
2019-09-04T06:29:08.264+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 5248 2019-09-04T06:29:08.264+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578548, 1) 2019-09-04T06:29:08.264+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578548, 1) 2019-09-04T06:29:08.264+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 5248 2019-09-04T06:29:08.264+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:08.264+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:08.264+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5247 2019-09-04T06:29:08.264+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5247 2019-09-04T06:29:08.264+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5250 2019-09-04T06:29:08.264+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5250 2019-09-04T06:29:08.264+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578548, 1), t: 1 }({ ts: Timestamp(1567578548, 1), t: 1 }) 2019-09-04T06:29:08.264+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578548, 1) 2019-09-04T06:29:08.264+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5251 2019-09-04T06:29:08.264+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578548, 1) } } ] } sort: {} projection: {} 2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578548, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578548, 1) || First: notFirst: full path: ts 2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578548, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Beginning planning... 
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query: ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query: ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578548, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578548, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
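[editor's note] The D5 QUERY traces above show why every minvalid lookup ends in a collection scan: local.replset.minvalid carries only the default _id index, so neither $or branch over t/ts can be answered from an index; the planner reports "outputted 0 indexed solutions" and, as the next entry shows, falls back to COLLSCAN. Because the collection holds a single document, this is harmless. A hedged shell sketch that reproduces the decision (the Timestamp literal below is illustrative):

    var mv = db.getSiblingDB("local").getCollection("replset.minvalid");
    // Same predicate shape rsSync issues; expect winningPlan.stage == "COLLSCAN".
    mv.find({ $or: [ { t: { $lt: 1 } },
                     { t: 1, ts: { $lt: Timestamp(1567578548, 1) } } ] })
      .explain().queryPlanner.winningPlan;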
2019-09-04T06:29:08.264+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578548, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:08.264+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5251 2019-09-04T06:29:08.264+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:08.264+0000 D3 STORAGE [repl-writer-worker-2] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:08.264+0000 D3 REPL [repl-writer-worker-2] applying op: { ts: Timestamp(1567578548, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578548243), o: { $v: 1, $set: { ping: new Date(1567578548243) } } }, oplog application mode: Secondary 2019-09-04T06:29:08.264+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578548, 1) 2019-09-04T06:29:08.264+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 5253 2019-09-04T06:29:08.264+0000 D2 QUERY [repl-writer-worker-2] Using idhack: { _id: "ConfigServer" } 2019-09-04T06:29:08.264+0000 D4 WRITE [repl-writer-worker-2] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:29:08.264+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 5253 2019-09-04T06:29:08.265+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:08.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.265+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578548, 1), t: 1 }({ ts: Timestamp(1567578548, 1), t: 1 }) 2019-09-04T06:29:08.265+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578548, 1) 2019-09-04T06:29:08.265+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5252 2019-09-04T06:29:08.265+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:08.265+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:08.265+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:29:08.265+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:08.265+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:08.265+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:08.265+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5252 2019-09-04T06:29:08.265+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578548, 1) 2019-09-04T06:29:08.265+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5257 2019-09-04T06:29:08.265+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5257 2019-09-04T06:29:08.265+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578548, 1), t: 1 }({ ts: Timestamp(1567578548, 1), t: 1 }) 2019-09-04T06:29:08.265+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:08.265+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578548, 1), t: 1 }, appliedWallTime: new Date(1567578548243), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:08.265+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 358 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:38.265+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578548, 1), t: 1 }, appliedWallTime: new Date(1567578548243), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:08.265+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:38.265+0000 2019-09-04T06:29:08.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.265+0000 D2 ASIO [RS] Request 358 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), 
lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 9), $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 1) } 2019-09-04T06:29:08.266+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578544, 9), t: 1 }, lastCommittedWall: new Date(1567578544922), lastOpVisible: { ts: Timestamp(1567578544, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578544, 9), $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:08.266+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578548, 1), t: 1 } 2019-09-04T06:29:08.266+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:08.266+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 359 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:18.266+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578544, 9), t: 1 } } 2019-09-04T06:29:08.266+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:38.266+0000 2019-09-04T06:29:08.266+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:38.266+0000 2019-09-04T06:29:08.276+0000 D2 ASIO [RS] Request 359 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 1), t: 1 }, lastCommittedWall: new Date(1567578548243), lastOpVisible: { ts: Timestamp(1567578548, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578548, 1), t: 1 }, lastCommittedWall: new Date(1567578548243), lastOpApplied: { ts: Timestamp(1567578548, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 1), $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 1) } 2019-09-04T06:29:08.276+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 1), t: 1 }, lastCommittedWall: new Date(1567578548243), lastOpVisible: { ts: Timestamp(1567578548, 1), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578548, 1), t: 1 }, lastCommittedWall: new Date(1567578548243), lastOpApplied: { ts: Timestamp(1567578548, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 1), $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:08.276+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:08.276+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:29:08.276+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.276+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.276+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578543, 1) 2019-09-04T06:29:08.276+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:18.523+0000 2019-09-04T06:29:08.276+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:19.353+0000 2019-09-04T06:29:08.276+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.276+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000 2019-09-04T06:29:08.276+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.276+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:08.276+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:08.276+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:08.276+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.276+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000 2019-09-04T06:29:08.276+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.276+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000 2019-09-04T06:29:08.276+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.276+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 
2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn165] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn165] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:35.182+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 
2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 360 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:18.277+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578548, 1), t: 1 } } 2019-09-04T06:29:08.277+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:38.266+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578548, 1), t: 1 }, 2019-09-04T06:29:08.243+0000 2019-09-04T06:29:08.277+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:29:08.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.317+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:29:08.317+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:08.317+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:08.317+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578548, 1), t: 1 }, durableWallTime: new Date(1567578548243), appliedOpTime: { ts: Timestamp(1567578548, 1), t: 1 }, appliedWallTime: new Date(1567578548243), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), 
appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 1), t: 1 }, lastCommittedWall: new Date(1567578548243), lastOpVisible: { ts: Timestamp(1567578548, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:08.317+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 361 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:38.317+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578548, 1), t: 1 }, durableWallTime: new Date(1567578548243), appliedOpTime: { ts: Timestamp(1567578548, 1), t: 1 }, appliedWallTime: new Date(1567578548243), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 1), t: 1 }, lastCommittedWall: new Date(1567578548243), lastOpVisible: { ts: Timestamp(1567578548, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:08.317+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:38.266+0000 2019-09-04T06:29:08.317+0000 D2 ASIO [RS] Request 361 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 1), t: 1 }, lastCommittedWall: new Date(1567578548243), lastOpVisible: { ts: Timestamp(1567578548, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 1), $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 1) } 2019-09-04T06:29:08.317+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 1), t: 1 }, lastCommittedWall: new Date(1567578548243), lastOpVisible: { ts: Timestamp(1567578548, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 1), $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:08.317+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:08.317+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:38.266+0000 2019-09-04T06:29:08.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 
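[editor's note] Requests 358/361/362 above are the secondary's replSetUpdatePosition reports: after each batch is applied, and again once the journal flush makes it durable, the Reporter pushes per-member durableOpTime/appliedOpTime upstream to cmodb804.togewa.com:27019, which the primary uses to advance lastOpCommitted. A hedged shell sketch of the operator-facing view of the same optimes (field names are the standard replSetGetStatus ones):

    // Hedged sketch: compare applied vs. durable optimes across members.
    var s = db.adminCommand({ replSetGetStatus: 1 });
    s.members.forEach(function (m) {
        print(m.name, m.stateStr,
              "applied:", tojson(m.optime && m.optime.ts),
              "durable:", tojson(m.optimeDurable && m.optimeDurable.ts));
    });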
2019-09-04T06:29:08.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.364+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578548, 1) 2019-09-04T06:29:08.416+0000 D2 ASIO [RS] Request 360 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578548, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578548405), o: { $v: 1, $set: { ping: new Date(1567578548405) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 1), t: 1 }, lastCommittedWall: new Date(1567578548243), lastOpVisible: { ts: Timestamp(1567578548, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578548, 1), t: 1 }, lastCommittedWall: new Date(1567578548243), lastOpApplied: { ts: Timestamp(1567578548, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 1), $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } 2019-09-04T06:29:08.416+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578548, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578548405), o: { $v: 1, $set: { ping: new Date(1567578548405) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 1), t: 1 }, lastCommittedWall: new Date(1567578548243), lastOpVisible: { ts: Timestamp(1567578548, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578548, 1), t: 1 }, lastCommittedWall: new Date(1567578548243), lastOpApplied: { ts: Timestamp(1567578548, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 1), $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:08.416+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:08.416+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578548, 2) and ending at ts: Timestamp(1567578548, 2) 2019-09-04T06:29:08.416+0000 D4 REPL [replication-0] 
Canceling election timeout callback at 2019-09-04T06:29:19.353+0000 2019-09-04T06:29:08.416+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:19.203+0000 2019-09-04T06:29:08.416+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578548, 2), t: 1 } 2019-09-04T06:29:08.416+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:08.416+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:08.416+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:08.416+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:08.416+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578548, 1) 2019-09-04T06:29:08.416+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5264 2019-09-04T06:29:08.417+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:08.417+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:08.417+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5264 2019-09-04T06:29:08.417+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:08.417+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:29:08.417+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:08.417+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578548, 2) } 2019-09-04T06:29:08.417+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578548, 1) 2019-09-04T06:29:08.417+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5267 2019-09-04T06:29:08.417+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:08.417+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:08.417+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5258 2019-09-04T06:29:08.417+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5267 2019-09-04T06:29:08.417+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5258 2019-09-04T06:29:08.417+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5270 2019-09-04T06:29:08.417+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5270 2019-09-04T06:29:08.417+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:08.417+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot 
id 5272 2019-09-04T06:29:08.417+0000 D4 STORAGE [repl-writer-worker-0] inserting record with timestamp Timestamp(1567578548, 2) 2019-09-04T06:29:08.417+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578548, 2) 2019-09-04T06:29:08.417+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 5272 2019-09-04T06:29:08.417+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:08.417+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:08.417+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5271 2019-09-04T06:29:08.417+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5271 2019-09-04T06:29:08.417+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5274 2019-09-04T06:29:08.417+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5274 2019-09-04T06:29:08.417+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578548, 2), t: 1 }({ ts: Timestamp(1567578548, 2), t: 1 }) 2019-09-04T06:29:08.417+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578548, 2) 2019-09-04T06:29:08.417+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5275 2019-09-04T06:29:08.417+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578548, 2) } } ] } sort: {} projection: {} 2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578548, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578548, 2) || First: notFirst: full path: ts 2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578548, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Beginning planning... 
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query: ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query: ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578548, 2)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578548, 2) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
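[editor's note] This planning pass is structurally identical to the one at Timestamp(1567578548, 1) and repeats for every applied batch, concluding with the same COLLSCAN fallback in the entries that follow. These D5 QUERY dumps, like the D2-D4 traffic throughout, appear only at elevated component verbosity and dominate the log volume; they can be turned down at runtime without a restart. A hedged shell sketch:

    db.getLogComponents();             // show current per-component verbosity
    db.setLogLevel(0, "query");        // drop the planner traces
    db.setLogLevel(1, "replication");  // keep moderate REPL detail, if desired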
2019-09-04T06:29:08.417+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578548, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:08.417+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5275 2019-09-04T06:29:08.417+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:08.417+0000 D3 STORAGE [repl-writer-worker-15] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:08.417+0000 D3 REPL [repl-writer-worker-15] applying op: { ts: Timestamp(1567578548, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578548405), o: { $v: 1, $set: { ping: new Date(1567578548405) } } }, oplog application mode: Secondary 2019-09-04T06:29:08.417+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578548, 2) 2019-09-04T06:29:08.417+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 5277 2019-09-04T06:29:08.418+0000 D2 QUERY [repl-writer-worker-15] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" } 2019-09-04T06:29:08.418+0000 D4 WRITE [repl-writer-worker-15] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:29:08.418+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 5277 2019-09-04T06:29:08.418+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:08.418+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578548, 2), t: 1 }({ ts: Timestamp(1567578548, 2), t: 1 }) 2019-09-04T06:29:08.418+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578548, 2) 2019-09-04T06:29:08.418+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5276 2019-09-04T06:29:08.418+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:08.418+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:08.418+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:29:08.418+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:08.418+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:08.418+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:08.418+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5276 2019-09-04T06:29:08.418+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578548, 2) 2019-09-04T06:29:08.418+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5281 2019-09-04T06:29:08.418+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5281 2019-09-04T06:29:08.418+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578548, 2), t: 1 }({ ts: Timestamp(1567578548, 2), t: 1 }) 2019-09-04T06:29:08.418+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:08.418+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578548, 1), t: 1 }, durableWallTime: new Date(1567578548243), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 1), t: 1 }, lastCommittedWall: new Date(1567578548243), lastOpVisible: { ts: Timestamp(1567578548, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:08.418+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 362 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:38.418+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578548, 1), t: 1 }, durableWallTime: new Date(1567578548243), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 1), t: 1 }, lastCommittedWall: new Date(1567578548243), lastOpVisible: { ts: Timestamp(1567578548, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:08.418+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:38.418+0000 2019-09-04T06:29:08.418+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578548, 2), t: 1 } 2019-09-04T06:29:08.418+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 363 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:18.418+0000 cmd:{ getMore: 2779728788818727477, 
collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578548, 1), t: 1 } } 2019-09-04T06:29:08.418+0000 D2 ASIO [RS] Request 362 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 1), t: 1 }, lastCommittedWall: new Date(1567578548243), lastOpVisible: { ts: Timestamp(1567578548, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 1), $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } 2019-09-04T06:29:08.419+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:38.418+0000 2019-09-04T06:29:08.419+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 1), t: 1 }, lastCommittedWall: new Date(1567578548243), lastOpVisible: { ts: Timestamp(1567578548, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 1), $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:08.419+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:08.419+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:38.418+0000 2019-09-04T06:29:08.422+0000 D2 ASIO [RS] Request 363 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpApplied: { ts: Timestamp(1567578548, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } 2019-09-04T06:29:08.422+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new 
Date(1567578548405), lastOpApplied: { ts: Timestamp(1567578548, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:08.422+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:08.422+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:29:08.422+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.422+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.422+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578543, 2) 2019-09-04T06:29:08.422+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:19.203+0000 2019-09-04T06:29:08.422+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:19.893+0000 2019-09-04T06:29:08.422+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:08.422+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 364 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:18.422+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578548, 2), t: 1 } } 2019-09-04T06:29:08.422+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:08.422+0000 D3 REPL [conn141] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.422+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:38.418+0000 2019-09-04T06:29:08.422+0000 D3 REPL [conn141] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.077+0000 2019-09-04T06:29:08.422+0000 D3 REPL [conn142] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn142] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.132+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: 
Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn137] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn137] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn165] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn165] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:35.182+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn140] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn140] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.076+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn128] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn128] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:11.706+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn116] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn116] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:12.389+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: 
Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn105] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn105] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:10.079+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578548, 2), t: 1 }, 2019-09-04T06:29:08.405+0000 2019-09-04T06:29:08.423+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000 2019-09-04T06:29:08.424+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:08.429+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:29:08.429+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:08.429+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:08.429+0000 D3 EXECUTOR [replication-1] 
Scheduling remote command request: RemoteCommand 365 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:38.429+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, durableWallTime: new Date(1567578544922), appliedOpTime: { ts: Timestamp(1567578544, 9), t: 1 }, appliedWallTime: new Date(1567578544922), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:08.429+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:38.418+0000 2019-09-04T06:29:08.429+0000 D2 ASIO [RS] Request 365 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } 2019-09-04T06:29:08.429+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:08.430+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:08.430+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:38.418+0000 2019-09-04T06:29:08.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.496+0000 D2 COMMAND 
[conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.517+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578548, 2) 2019-09-04T06:29:08.524+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:08.532+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.532+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.624+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:08.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.724+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:08.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.825+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:08.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:08.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:08.837+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 366) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:08.837+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 366 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:18.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 
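The getMore above (RemoteCommand 364: getMore on local.oplog.rs with an oversized batchSize, maxTimeMS: 5000 and lastKnownCommittedOpTime) is this secondary's oplog fetcher holding an awaitData cursor open against its sync source; the batch returned 0 operations, so only the commit point and stable optime advanced. A minimal PyMongo sketch of the same tailing pattern, assuming direct access to this node (the host and starting timestamp are taken from the log; authentication is omitted):

from bson.timestamp import Timestamp
from pymongo import CursorType, MongoClient

# Tail local.oplog.rs the way the oplog fetcher's getMore does: a tailable
# awaitData cursor that blocks up to 5000 ms waiting for new entries.
client = MongoClient("mongodb://cmodb803.togewa.com:27019")
start = Timestamp(1567578548, 2)  # last committed optime seen in the log
cursor = client.local["oplog.rs"].find(
    {"ts": {"$gt": start}},
    cursor_type=CursorType.TAILABLE_AWAIT,
    max_await_time_ms=5000,  # mirrors maxTimeMS on the logged getMore
)
for entry in cursor:
    print(entry["ts"], entry["op"], entry.get("ns"))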
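The Reporter traffic above (replSetUpdatePosition carrying durableOpTime/appliedOpTime for all three members) is how this secondary forwards replication progress to its sync source, cmodb804.togewa.com:27019, which relays it toward the primary so the commit point can advance. The same per-member optimes are visible to any client through replSetGetStatus; a sketch, with the hostname taken from the log:

from pymongo import MongoClient

# Read the per-member applied/durable optimes that replSetUpdatePosition
# carries upstream in the entries above.
client = MongoClient("mongodb://cmodb803.togewa.com:27019")
status = client.admin.command("replSetGetStatus")
for m in status["members"]:
    print(m["_id"], m["name"], m["stateStr"],
          m.get("optime"), m.get("optimeDurable"))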
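The isMaster round trips on conn14, conn15, conn17 and the rest repeat at roughly half-second intervals per connection; they are topology-monitoring probes from peers watching this config server replica set (mongos and driver server-discovery), not user traffic. Any client can issue the same command; a sketch:

from pymongo import MongoClient

# The same probe the monitoring connections above send; spelled "hello"
# on newer servers.
client = MongoClient("mongodb://cmodb803.togewa.com:27019")
reply = client.admin.command("isMaster")
print(reply["ismaster"], reply["secondary"], reply.get("setName"))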
2019-09-04T06:29:08.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:08.837+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:29:07.061+0000 2019-09-04T06:29:08.837+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:29:08.233+0000 2019-09-04T06:29:08.837+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:29:07.061+0000 2019-09-04T06:29:08.837+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:29:17.061+0000 2019-09-04T06:29:08.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:08.837+0000 D2 ASIO [Replication] Request 366 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } 2019-09-04T06:29:08.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:08.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:08.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 366) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, 
$gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } 2019-09-04T06:29:08.837+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:08.837+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:29:19.893+0000 2019-09-04T06:29:08.837+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:29:19.526+0000 2019-09-04T06:29:08.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:08.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:10.837Z 2019-09-04T06:29:08.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:08.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:08.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:08.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:08.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 367) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:08.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 367 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:18.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:08.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:08.838+0000 D2 ASIO [Replication] Request 367 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } 2019-09-04T06:29:08.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new 
Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:08.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:08.838+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 367) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } 2019-09-04T06:29:08.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:08.838+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:10.838Z 2019-09-04T06:29:08.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:08.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.925+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:08.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:08.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:08.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:09.025+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:09.032+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.032+0000 I COMMAND [conn17] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:09.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:09.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:09.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:09.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405) } 2019-09-04T06:29:09.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.125+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:09.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.225+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:09.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. 
Num: 0 2019-09-04T06:29:09.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:09.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.325+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:09.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.417+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:09.417+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:09.417+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578548, 2) 2019-09-04T06:29:09.417+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5311 2019-09-04T06:29:09.417+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:09.417+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:09.417+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5311 2019-09-04T06:29:09.418+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5314 2019-09-04T06:29:09.418+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5314 2019-09-04T06:29:09.418+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578548, 2), t: 1 }({ ts: Timestamp(1567578548, 2), t: 1 }) 2019-09-04T06:29:09.426+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:09.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 
reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.526+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:09.532+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.532+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.538+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" } 2019-09-04T06:29:09.538+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } } } 2019-09-04T06:29:09.538+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:29:09.538+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578548, 2) 2019-09-04T06:29:09.538+0000 D2 QUERY [conn61] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:29:09.538+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:29:09.538+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" } 2019-09-04T06:29:09.538+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } } } 2019-09-04T06:29:09.538+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:29:09.538+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578548, 2) 2019-09-04T06:29:09.538+0000 D2 QUERY [conn61] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:29:09.538+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:29:09.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.626+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:09.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.726+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:09.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.826+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:09.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: 
"admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.926+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:09.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:09.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:09.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:10.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:10.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:29:10.003+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:29:10.003+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:29:10.018+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:29:10.019+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:29:10.034+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:10.034+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:10.034+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:10.045+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:29:10.045+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:29:10.045+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:29:10.045+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:29:10.047+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:10.047+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:10.048+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:10.048+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ 
ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:10.049+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:10.049+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:29:10.050+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:29:10.050+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:29:10.050+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:10.050+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunksTree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:10.050+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:29:10.050+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:29:10.050+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:29:10.050+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:29:10.050+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:29:10.050+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:29:10.050+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:10.050+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:10.050+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:10.050+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578548, 2) 2019-09-04T06:29:10.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5342 2019-09-04T06:29:10.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5342 2019-09-04T06:29:10.050+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:10.050+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:10.050+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:29:10.051+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:29:10.051+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:10.051+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:29:10.051+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:29:10.051+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:29:10.051+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578548, 2) 2019-09-04T06:29:10.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5345 2019-09-04T06:29:10.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5345 2019-09-04T06:29:10.051+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:10.051+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:29:10.051+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:10.051+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:29:10.051+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:29:10.051+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:29:10.051+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578548, 2) 2019-09-04T06:29:10.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5347 2019-09-04T06:29:10.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5347 2019-09-04T06:29:10.051+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:10.051+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:29:10.051+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist.
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:29:10.051+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:29:10.051+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:10.051+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5350 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5350 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5351 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5351 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5352 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5352 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5353 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5353 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5354 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5354 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5355 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
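Working back through the replication traffic above: heartbeats go out to each peer every 2 seconds (requests 366/367 sent at 06:29:08.837/08.838 are rescheduled for 06:29:10.837/10.838), and each heartbeat from the primary postpones the election timeout callback by roughly 10 seconds plus a randomized offset (06:29:19.893 becomes 06:29:19.526). Both intervals come from the replica-set configuration; a sketch that reads them, assuming sufficient privileges:

from pymongo import MongoClient

# The 2 s heartbeat cadence and ~10 s election timeout seen above are the
# defaults in the replica-set settings subdocument.
client = MongoClient("mongodb://cmodb803.togewa.com:27019")
cfg = client.admin.command("replSetGetConfig")["config"]
print(cfg["settings"]["heartbeatIntervalMillis"],  # 2000 by default
      cfg["settings"]["electionTimeoutMillis"])    # 10000 by default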
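The conn90 session above also shows a complete SCRAM-SHA-1 login: a saslStart, a saslContinue carrying the client proof, and a final saslContinue, after which dba_root is authenticated (the payloads are redacted as "xxx" in the log). A driver runs the same three-step handshake from its connection options; a sketch with a placeholder password, since the real one never appears in the log:

from pymongo import MongoClient

# Reproduces the saslStart/saslContinue exchange logged for conn90.
client = MongoClient(
    "mongodb://cmodb803.togewa.com:27019",
    username="dba_root",
    password="<placeholder>",   # not from the log
    authSource="admin",
    authMechanism="SCRAM-SHA-1",
)
client.admin.command("ping")    # first command triggers the handshake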
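The conn61 finds on config.settings gate on readConcern level "majority" with afterOpTime, which is why each one logs "Waiting for 'committed' snapshot" before reading at readTs Timestamp(1567578548, 2); both return nothing ("Using EOF plan") because no chunksize or autosplit documents exist, so the balancer defaults apply (for servers of this era, 64 MB chunks with autosplitting enabled). The equivalent client-side read, as a sketch:

from pymongo import MongoClient
from pymongo.read_concern import ReadConcern

# Same majority-read-concern lookups as the conn61 entries above.
client = MongoClient("mongodb://cmodb803.togewa.com:27019")
settings = client.get_database(
    "config", read_concern=ReadConcern("majority"))["settings"]
print(settings.find_one({"_id": "chunksize"}))  # None -> default chunk size
print(settings.find_one({"_id": "autosplit"}))  # None -> autosplit enabled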
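For the jumbo-chunk count, the verbose planner output shows why a collection scan is chosen: all four indexes on config.chunks (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1, _id_) rate zero indexed solutions for the predicate on jumbo, so the planner outputs the COLLSCAN plan, and with docsExamined:1 the scan is trivial. The same count from a client, as a sketch:

from pymongo import MongoClient

# Same predicate as the logged count; with no index covering { jumbo: 1 }
# the planner falls back to the COLLSCAN shown in the D5 QUERY lines.
client = MongoClient("mongodb://cmodb803.togewa.com:27019")
print(client.config.chunks.count_documents({"jumbo": True}))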
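The paired single-document finds on local.oplog.rs sorted by { $natural: 1 } and { $natural: -1 } fetch the oldest and newest oplog entries (each forcing a table scan, as hinted $natural requires); the spread between their ts values is the replication window that monitoring tools report. The follow-up probe of local.oplog.$main, the old master-slave oplog name, predictably finds nothing. A sketch of the same measurement:

from pymongo import MongoClient

# Oldest and newest oplog entries, as in the paired $natural queries above;
# the ts difference approximates the oplog window in seconds.
client = MongoClient("mongodb://cmodb803.togewa.com:27019")
oplog = client.local["oplog.rs"]
first = oplog.find_one(sort=[("$natural", 1)])
last = oplog.find_one(sort=[("$natural", -1)])
print("oplog window (s):", last["ts"].time - first["ts"].time)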
2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5355 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5356 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5356 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5357 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, 
prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5357 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5358 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5358 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5359 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.052+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5359 2019-09-04T06:29:10.053+0000 D3 
STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5360 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:10.053+0000 D3 STORAGE 
[conn90] WT rollback_transaction for snapshot id 5360 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5361 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5361 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5362 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:29:10.053+0000 
D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5362 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5363 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5363 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5364 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] returning metadata: 
md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5364 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5365 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5365 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5366 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:29:10.053+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5366 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5367 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5367 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5368 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:29:10.053+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5368 2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] looking up metadata 
for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5369 2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5369 2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5370 2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5370 2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5371 2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
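The catalog walk above ends with the local database (local.oplog.rs, local.replset.minvalid, local.replset.oplogTruncateAfterPoint), and the entries just below show what apparently drove it: a listDatabases command on conn90 with a secondaryPreferred read preference, followed by dbStats against admin, config, and local on the same connection, each completing in 2ms or less. A minimal client-side sketch of the same sequence follows, assuming pymongo; host and port are illustrative placeholders.

# Illustrative sketch of the logged listDatabases + per-database dbStats
# sequence, using the same secondaryPreferred read preference.
from pymongo import MongoClient, ReadPreference

client = MongoClient("mongodb://localhost:27019/")

dbs = client.admin.command(
    "listDatabases", read_preference=ReadPreference.SECONDARY_PREFERRED
)

for name in (d["name"] for d in dbs["databases"]):
    stats = client[name].command(
        "dbStats", read_preference=ReadPreference.SECONDARY_PREFERRED
    )
    print(name, stats["collections"], stats["dataSize"])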
2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5371
2019-09-04T06:29:10.054+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 2ms
2019-09-04T06:29:10.054+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5373
2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5373
2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5374
2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5374
2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5375
2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5375
2019-09-04T06:29:10.054+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:29:10.054+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5377
2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5377
2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5378
2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5378
2019-09-04T06:29:10.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5379
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5379
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5380
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5380
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5381
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5381
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5382
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5382
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5383
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5383
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5384
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5384
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5385
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5385
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5386
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5386
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5387
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5387
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5388
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5388
2019-09-04T06:29:10.055+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:29:10.055+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5390
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5390
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5391
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5391
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5392
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5392
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5393
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5393
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5394
2019-09-04T06:29:10.055+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5394
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5396
2019-09-04T06:29:10.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5396
2019-09-04T06:29:10.055+0000 I COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.056+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
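Next, a new inbound connection (#166) identifies itself with an isMaster handshake carrying client metadata; the driver name NetworkInterfaceTL and the internalClient field mark it as another node of this cluster rather than a user application. Any client can issue the same handshake command; a minimal pymongo sketch follows, in which the host, port, and appname are illustrative assumptions.

# Illustrative sketch: the isMaster handshake seen below. Drivers send it
# automatically on connect, attaching client metadata like the blocks
# logged for conn166; appname ends up in that metadata.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27019/", appname="log-inspection")
reply = client.admin.command("isMaster")
print(reply.get("ismaster"), reply.get("setName"), reply.get("maxWireVersion"))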
2019-09-04T06:29:10.065+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36598 #166 (85 connections now open)
2019-09-04T06:29:10.065+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:29:10.065+0000 D2 COMMAND [conn166] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:29:10.065+0000 I NETWORK [conn166] received client metadata from 10.108.2.55:36598 conn166: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:29:10.065+0000 I COMMAND [conn166] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:29:10.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.078+0000 I COMMAND [conn140] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, F56EFECF966613B7CF9F4E79C9426BAFAB58CE38), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:29:10.078+0000 I COMMAND [conn141] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578515, 1), signature: { hash: BinData(0, 687BE122806C404325073B712C2FE776A546E867), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:29:10.078+0000 D1 - [conn140] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:10.078+0000 D1 - [conn141] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:10.078+0000 W - [conn140] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:10.078+0000 W - [conn141] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:10.080+0000 I COMMAND [conn137] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578512, 1), signature: { hash: BinData(0, 52188A2AB5758BF914786CB9D47A9D6B7C3891AC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:29:10.080+0000 D1 - [conn137] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:10.080+0000 W - [conn137] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:10.081+0000 I COMMAND [conn105] Command on database admin timed out waiting for read concern to be satisfied.
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, F56EFECF966613B7CF9F4E79C9426BAFAB58CE38), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:10.081+0000 D1 - [conn105] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:10.081+0000 W - [conn105] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:10.112+0000 I - [conn137] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:10.112+0000 D1 COMMAND [conn137] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578512, 1), signature: { hash: BinData(0, 52188A2AB5758BF914786CB9D47A9D6B7C3891AC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:10.112+0000 D1 - [conn137] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:10.112+0000 W - [conn137] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:10.115+0000 I - [conn140] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2
511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" 
: "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] 
----- END BACKTRACE ----- 2019-09-04T06:29:10.116+0000 D1 COMMAND [conn140] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, F56EFECF966613B7CF9F4E79C9426BAFAB58CE38), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:10.116+0000 D1 - [conn140] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:10.116+0000 W - [conn140] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:10.117+0000 I NETWORK [listener] connection accepted from 10.108.2.61:37876 #167 (86 connections now open) 2019-09-04T06:29:10.117+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:10.117+0000 D2 COMMAND [conn167] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:10.117+0000 I NETWORK [conn167] received client metadata from 10.108.2.61:37876 conn167: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:10.118+0000 I COMMAND [conn167] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:10.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:10.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:10.133+0000 I COMMAND [conn142] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, F56EFECF966613B7CF9F4E79C9426BAFAB58CE38), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:10.133+0000 D1 - [conn142] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:10.133+0000 W - [conn142] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:10.134+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:10.135+0000 I - [conn105] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"}
,{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : 
"/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 
2019-09-04T06:29:10.135+0000 D1 COMMAND [conn105] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, F56EFECF966613B7CF9F4E79C9426BAFAB58CE38), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:10.135+0000 D1 - [conn105] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:10.135+0000 W - [conn105] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:10.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:10.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:10.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:10.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:10.153+0000 I - [conn142] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:10.153+0000 D1 COMMAND [conn142] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, F56EFECF966613B7CF9F4E79C9426BAFAB58CE38), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:10.153+0000 D1 - [conn142] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:10.153+0000 W - [conn142] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:10.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:10.172+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:10.176+0000 I - [conn105] 0x56174b707c81 
0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : 
"E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : 
"/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) 
[0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:10.176+0000 W COMMAND [conn105] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:29:10.176+0000 I COMMAND [conn105] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, F56EFECF966613B7CF9F4E79C9426BAFAB58CE38), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30065ms 2019-09-04T06:29:10.176+0000 D2 NETWORK [conn105] Session from 10.108.2.56:35608 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:10.176+0000 I NETWORK [conn105] end connection 10.108.2.56:35608 (85 connections now open) 2019-09-04T06:29:10.198+0000 I - [conn137] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C
"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : 
"E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:10.198+0000 W COMMAND [conn137] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:29:10.198+0000 I COMMAND [conn137] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578512, 1), signature: { hash: BinData(0, 52188A2AB5758BF914786CB9D47A9D6B7C3891AC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30042ms 2019-09-04T06:29:10.198+0000 D2 NETWORK [conn137] Session from 10.108.2.72:45662 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:10.198+0000 I NETWORK [conn137] end connection 10.108.2.72:45662 (84 connections now open) 2019-09-04T06:29:10.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:10.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:10.216+0000 I - [conn141] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:10.217+0000 D1 COMMAND [conn141] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578515, 1), signature: { hash: BinData(0, 687BE122806C404325073B712C2FE776A546E867), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:10.217+0000 D1 - [conn141] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:10.217+0000 W - [conn141] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:10.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 
6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:10.233+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:10.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:10.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:10.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405) } 2019-09-04T06:29:10.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:10.234+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:10.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:10.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:10.239+0000 I - [conn142] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, 
"somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : 
"F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) 
[0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:10.239+0000 W COMMAND [conn142] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:10.239+0000 I COMMAND [conn142] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, F56EFECF966613B7CF9F4E79C9426BAFAB58CE38), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:29:10.240+0000 D2 NETWORK [conn142] Session from 10.108.2.61:37858 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:10.240+0000 I NETWORK [conn142] end connection 10.108.2.61:37858 (83 connections now open) 2019-09-04T06:29:10.261+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52092 #168 (84 connections now open) 2019-09-04T06:29:10.261+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:10.261+0000 D2 COMMAND [conn168] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:10.261+0000 I NETWORK [conn168] received client metadata from 10.108.2.73:52092 conn168: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:10.262+0000 I COMMAND [conn168] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:10.262+0000 D2 COMMAND [conn168] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578542, 1), signature: { hash: BinData(0, 2B81C580DAB368E6A1EFD40BA8E2A6979A86E4AD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:29:10.262+0000 D1 REPL [conn168] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 
3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578548, 2), t: 1 } 2019-09-04T06:29:10.262+0000 D3 REPL [conn168] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.272+0000 2019-09-04T06:29:10.263+0000 I - [conn141] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:29:10.263+0000 W COMMAND [conn141] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:29:10.263+0000 I COMMAND [conn141] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578515, 1), signature: { hash: BinData(0, 687BE122806C404325073B712C2FE776A546E867), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30149ms
2019-09-04T06:29:10.263+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.263+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.263+0000 D2 NETWORK [conn141] Session from 10.108.2.74:51704 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:29:10.263+0000 I NETWORK [conn141] end connection 10.108.2.74:51704 (83 connections now open)
2019-09-04T06:29:10.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.265+0000 D2 COMMAND [conn166] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578541, 1), signature: { hash: BinData(0, 9C6D5E8C5F6FC23A243A48F5AE0DC2087CEF8162), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:29:10.265+0000 D1 REPL [conn166] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578548, 2), t: 1 }
2019-09-04T06:29:10.265+0000 D3 REPL [conn166] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.275+0000
2019-09-04T06:29:10.279+0000 D2 COMMAND [conn138] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, A5F366D9C352A6D5B06F4537F7BE1EE092889425), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:29:10.279+0000 D1 REPL [conn138] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578548, 2), t: 1 }
2019-09-04T06:29:10.279+0000 D3 REPL [conn138] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.289+0000
2019-09-04T06:29:10.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.283+0000 I - [conn140] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
mongod(+0xC90D34) [0x561749c18d34]
mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:29:10.283+0000 W COMMAND [conn140] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:29:10.283+0000 I COMMAND [conn140] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578511, 1), signature: { hash: BinData(0, F56EFECF966613B7CF9F4E79C9426BAFAB58CE38), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30049ms
2019-09-04T06:29:10.283+0000 D2 NETWORK [conn140] Session from 10.108.2.55:36580 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:29:10.283+0000 I NETWORK [conn140] end connection 10.108.2.55:36580 (82 connections now open)
2019-09-04T06:29:10.309+0000 D2 COMMAND [conn139] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578542, 1), signature: { hash: BinData(0, 2B81C580DAB368E6A1EFD40BA8E2A6979A86E4AD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:29:10.309+0000 D1 REPL [conn139] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578548, 2), t: 1 }
2019-09-04T06:29:10.309+0000 D3 REPL [conn139] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.319+0000
2019-09-04T06:29:10.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.334+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:10.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.417+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:10.417+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:10.417+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578548, 2)
2019-09-04T06:29:10.417+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5419
2019-09-04T06:29:10.417+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:10.417+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:10.417+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5419
2019-09-04T06:29:10.418+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5422
2019-09-04T06:29:10.419+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5422
2019-09-04T06:29:10.419+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578548, 2), t: 1 }({ ts: Timestamp(1567578548, 2), t: 1 })
2019-09-04T06:29:10.434+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:10.465+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" }
2019-09-04T06:29:10.465+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } } }
2019-09-04T06:29:10.465+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:29:10.465+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578548, 2)
2019-09-04T06:29:10.465+0000 D2 QUERY [conn71] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1
2019-09-04T06:29:10.465+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:29:10.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.489+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.532+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.532+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.534+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:10.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.634+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:10.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.721+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" }
2019-09-04T06:29:10.721+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } } }
2019-09-04T06:29:10.721+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:29:10.721+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578548, 2)
2019-09-04T06:29:10.721+0000 D2 QUERY [conn72] Collection config.settings does not exist.
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1
2019-09-04T06:29:10.721+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:29:10.722+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" }
2019-09-04T06:29:10.722+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } } }
2019-09-04T06:29:10.722+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:29:10.722+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578548, 2)
2019-09-04T06:29:10.722+0000 D2 QUERY [conn72] Collection config.settings does not exist.
Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1
2019-09-04T06:29:10.722+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:29:10.735+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:10.762+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.763+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.835+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:10.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:10.837+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 368) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:10.837+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 368 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:20.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:10.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:10.837+0000 D2 ASIO [Replication] Request 368 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) }
2019-09-04T06:29:10.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:29:10.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:29:10.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 368) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) }
2019-09-04T06:29:10.837+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:29:10.837+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:29:19.526+0000
2019-09-04T06:29:10.837+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:29:21.228+0000
2019-09-04T06:29:10.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:29:10.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:12.837Z
2019-09-04T06:29:10.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:29:10.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:10.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:10.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:10.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 369) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:10.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 369 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:20.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:10.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:10.838+0000 D2 ASIO [Replication] Request 369 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) }
2019-09-04T06:29:10.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:10.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:29:10.838+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 369) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) }
2019-09-04T06:29:10.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:29:10.838+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:12.838Z
2019-09-04T06:29:10.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:10.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.935+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:10.945+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.945+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.982+0000 D2 COMMAND [conn71] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" }
2019-09-04T06:29:10.982+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } } }
2019-09-04T06:29:10.982+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:29:10.982+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578548, 2)
2019-09-04T06:29:10.982+0000 D5 QUERY [conn71] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:10.982+0000 D5 QUERY [conn71] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:29:10.982+0000 D5 QUERY [conn71] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:29:10.982+0000 D5 QUERY [conn71] Rated tree: $and
2019-09-04T06:29:10.982+0000 D5 QUERY [conn71] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:10.982+0000 D5 QUERY [conn71] Planner: outputting a collscan: COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:10.982+0000 D2 QUERY [conn71] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:10.982+0000 D3 STORAGE [conn71] WT begin_transaction for snapshot id 5446
2019-09-04T06:29:10.982+0000 D3 STORAGE [conn71] WT rollback_transaction for snapshot id 5446
2019-09-04T06:29:10.982+0000 I COMMAND [conn71] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 2), signature: { hash: BinData(0, C95238C133F9DF6D6618CDFFA3BB8E1442687DCF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:29:10.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:10.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:10.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:11.032+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.032+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.035+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:11.040+0000 D2 COMMAND [conn72] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" }
2019-09-04T06:29:11.040+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } } }
2019-09-04T06:29:11.040+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:29:11.040+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578548, 2)
2019-09-04T06:29:11.040+0000 D5 QUERY [conn72] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:11.040+0000 D5 QUERY [conn72] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:29:11.040+0000 D5 QUERY [conn72] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:29:11.040+0000 D5 QUERY [conn72] Rated tree: $and
2019-09-04T06:29:11.040+0000 D5 QUERY [conn72] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:11.040+0000 D5 QUERY [conn72] Planner: outputting a collscan: COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:11.040+0000 D2 QUERY [conn72] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:11.040+0000 D3 STORAGE [conn72] WT begin_transaction for snapshot id 5452
2019-09-04T06:29:11.040+0000 D3 STORAGE [conn72] WT rollback_transaction for snapshot id 5452
2019-09-04T06:29:11.040+0000 I COMMAND [conn72] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:29:11.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:11.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:29:11.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:11.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:11.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405) }
2019-09-04T06:29:11.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.135+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:11.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:29:11.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:29:11.235+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:11.249+0000 D2 COMMAND [conn113] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.249+0000 I COMMAND [conn113] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.274+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.335+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:11.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.417+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:11.418+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:11.418+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578548, 2)
2019-09-04T06:29:11.418+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5472
2019-09-04T06:29:11.418+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:11.418+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:11.418+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5472
2019-09-04T06:29:11.419+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5475
2019-09-04T06:29:11.419+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5475
2019-09-04T06:29:11.419+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578548, 2), t: 1 }({ ts: Timestamp(1567578548, 2), t: 1 })
2019-09-04T06:29:11.436+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:11.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.532+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.532+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.536+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:11.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.636+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:11.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.682+0000 D2 COMMAND [conn164] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578541, 1), signature: { hash: BinData(0, 9C6D5E8C5F6FC23A243A48F5AE0DC2087CEF8162), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:29:11.682+0000 D1 REPL [conn164] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578548, 2), t: 1 }
2019-09-04T06:29:11.682+0000 D3 REPL [conn164] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.692+0000
2019-09-04T06:29:11.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:11.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:11.709+0000 I COMMAND [conn128] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 1), signature: { hash: BinData(0, 66B8FDDB4AF15B67A9013881B963975EBDEB24EF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:29:11.709+0000 D1 - [conn128] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:11.709+0000 W - [conn128] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:11.729+0000 I - [conn128] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:11.730+0000 D1 COMMAND [conn128] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 1), signature: { hash: BinData(0, 66B8FDDB4AF15B67A9013881B963975EBDEB24EF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:11.730+0000 D1 - [conn128] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:11.730+0000 W - [conn128] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:11.736+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:11.750+0000 I - [conn128] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 
0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : 
"9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" 
}, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:11.750+0000 W COMMAND [conn128] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:11.750+0000 I COMMAND [conn128] 
command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578518, 1), signature: { hash: BinData(0, 66B8FDDB4AF15B67A9013881B963975EBDEB24EF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30033ms 2019-09-04T06:29:11.750+0000 D2 NETWORK [conn128] Session from 10.108.2.54:49092 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:11.750+0000 I NETWORK [conn128] end connection 10.108.2.54:49092 (81 connections now open) 2019-09-04T06:29:11.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:11.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:11.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:11.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:11.794+0000 D2 COMMAND [conn81] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:11.794+0000 I COMMAND [conn81] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:11.795+0000 D2 COMMAND [conn81] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578535, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578535, 1), t: 1 } }, $db: "config" } 2019-09-04T06:29:11.795+0000 D1 COMMAND [conn81] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578535, 1), t: 1 } } } 2019-09-04T06:29:11.795+0000 D3 STORAGE [conn81] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:29:11.795+0000 D1 COMMAND [conn81] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578535, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578535, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578548, 2) 2019-09-04T06:29:11.795+0000 D5 QUERY [conn81] Beginning planning... 
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query: ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:11.795+0000 D5 QUERY [conn81] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:29:11.795+0000 D5 QUERY [conn81] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:29:11.795+0000 D5 QUERY [conn81] Rated tree: $and 2019-09-04T06:29:11.795+0000 D5 QUERY [conn81] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:11.795+0000 D5 QUERY [conn81] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:11.795+0000 D2 QUERY [conn81] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:11.795+0000 D3 STORAGE [conn81] WT begin_transaction for snapshot id 5492 2019-09-04T06:29:11.795+0000 D3 STORAGE [conn81] WT rollback_transaction for snapshot id 5492 2019-09-04T06:29:11.795+0000 I COMMAND [conn81] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578535, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578535, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:29:11.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:11.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:11.836+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:11.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:11.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:11.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:11.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:11.878+0000 D2 COMMAND [conn163] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578542, 1), signature: { hash: BinData(0, 2B81C580DAB368E6A1EFD40BA8E2A6979A86E4AD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
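The find on admin.system.keys issued by conn163 just above has the same shape as the one that timed out on conn128 and the one that is about to time out on conn116 below: its readConcern demands majority acknowledgement of afterOpTime { ts: Timestamp(1566459168, 1), t: 92 }, while this node's committed snapshot is { ts: Timestamp(1567578548, 2), t: 1 }. The requested optime carries term 92, apparently from an earlier incarnation of this replica set, so waitUntilOpTime can never be satisfied and each such command runs out its 30000 ms maxTimeMS budget (the 30033ms and 30030ms summary lines with errName:MaxTimeMSExpired errCode:50). Those summary lines can be harvested mechanically; the sketch below is a minimal Python example that assumes only the record layout visible in this capture -- the file path argument and the timestamp-based record splitting are illustrative, not part of the log.

```python
import re
import sys

# Records begin with an ISO-8601 timestamp like "2019-09-04T06:29:11.750+0000";
# splitting on a lookahead keeps each record intact. Newlines are flattened
# first because this capture wraps records mid-field. (Zero-width split
# requires Python 3.7+.)
TS = re.compile(r"(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}\+\d{4} )")

# Field layout of the slow-command summary records seen here, e.g.
# "... I COMMAND [conn128] command admin.$cmd ... errName:MaxTimeMSExpired errCode:50 ... 30033ms"
SLOW = re.compile(
    r"\[(?P<conn>conn\d+)\] command (?P<ns>\S+) .*"
    r"errName:(?P<err>\w+) errCode:(?P<code>\d+) .* (?P<ms>\d+)ms"
)

text = open(sys.argv[1]).read().replace("\n", " ")  # e.g. /var/log/mongodb/mongod.log
for record in TS.split(text):
    m = SLOW.search(record)
    if m and m["err"] == "MaxTimeMSExpired":
        print(f"{m['conn']} {m['ns']} hit maxTimeMS after {m['ms']}ms")
```

Run against this capture, it should report exactly the two failed system.keys finds: conn128 (30033ms) and conn116 (30030ms).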
2019-09-04T06:29:11.878+0000 D1 REPL [conn163] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578548, 2), t: 1 } 2019-09-04T06:29:11.878+0000 D3 REPL [conn163] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.888+0000 2019-09-04T06:29:11.936+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:11.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:11.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:11.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:11.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:11.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:11.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:12.032+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.032+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.036+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:12.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.137+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:12.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, 
$db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.233+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:12.233+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:12.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:12.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:12.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405) } 2019-09-04T06:29:12.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:12.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:12.237+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:12.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.337+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:12.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.382+0000 I NETWORK [listener] connection accepted from 10.108.2.62:53394 #169 (82 connections now open) 2019-09-04T06:29:12.382+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:12.382+0000 D2 COMMAND [conn169] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:12.382+0000 I NETWORK [conn169] received client metadata from 10.108.2.62:53394 conn169: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:12.382+0000 I COMMAND [conn169] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:12.392+0000 I COMMAND [conn116] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578512, 1), signature: { hash: BinData(0, 52188A2AB5758BF914786CB9D47A9D6B7C3891AC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:12.392+0000 D1 - [conn116] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:12.392+0000 W - [conn116] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:12.410+0000 I - [conn116] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:12.410+0000 D1 COMMAND [conn116] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578512, 1), signature: { hash: BinData(0, 52188A2AB5758BF914786CB9D47A9D6B7C3891AC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:12.410+0000 D1 - [conn116] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:12.410+0000 W - [conn116] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:12.418+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:12.418+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:12.418+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578548, 2) 2019-09-04T06:29:12.418+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5518 2019-09-04T06:29:12.418+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:12.418+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:12.418+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5518 2019-09-04T06:29:12.419+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5521 2019-09-04T06:29:12.419+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5521 2019-09-04T06:29:12.419+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578548, 2), t: 1 }({ ts: Timestamp(1567578548, 2), t: 1 }) 2019-09-04T06:29:12.431+0000 I - [conn116] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:12.431+0000 W COMMAND [conn116] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:12.431+0000 I COMMAND [conn116] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578512, 1), signature: { hash: BinData(0, 52188A2AB5758BF914786CB9D47A9D6B7C3891AC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:29:12.431+0000 D2 NETWORK [conn116] Session from 10.108.2.62:53360 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:12.432+0000 I NETWORK [conn116] end connection 10.108.2.62:53360 (81 connections now open) 2019-09-04T06:29:12.437+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:12.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.532+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.532+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.537+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:12.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.637+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:12.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.671+0000 I 
COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.737+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:12.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:12.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 370) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:12.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 370 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:22.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:12.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:12.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:12.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:12.837+0000 D2 ASIO [Replication] Request 370 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } 2019-09-04T06:29:12.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, 
durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:12.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:12.837+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 370) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } 2019-09-04T06:29:12.837+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:12.837+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:29:21.228+0000 2019-09-04T06:29:12.837+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:29:22.981+0000 2019-09-04T06:29:12.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:12.837+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:12.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:12.837+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:14.837Z 2019-09-04T06:29:12.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:12.837+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:12.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:12.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 371) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:12.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 371 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:22.838+0000 cmd:{ replSetHeartbeat: 
"configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:12.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:12.838+0000 D2 ASIO [Replication] Request 371 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } 2019-09-04T06:29:12.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:12.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:12.838+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 371) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } 2019-09-04T06:29:12.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:12.838+0000 D2 REPL_HB [replexec-1] 
Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:14.838Z
2019-09-04T06:29:12.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:12.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:12.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:12.938+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:12.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:12.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:12.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:12.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:12.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:12.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:13.032+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.032+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.038+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:13.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:13.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:29:13.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:13.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:13.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405) }
2019-09-04T06:29:13.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, A6F1AA634DF50C9D72D831555A919A5C89DD867E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.138+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:13.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:29:13.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:29:13.238+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:13.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.324+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52096 #170 (82 connections now open)
2019-09-04T06:29:13.324+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:29:13.324+0000 D2 COMMAND [conn170] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:29:13.324+0000 I NETWORK [conn170] received client metadata from 10.108.2.73:52096 conn170: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:29:13.325+0000 I COMMAND [conn170] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:29:13.325+0000 D2 COMMAND [conn170] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 05551CF1F69A904A3734176F5171337687942DA6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:29:13.325+0000 D1 REPL [conn170] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578548, 2), t: 1 }
2019-09-04T06:29:13.325+0000 D3 REPL [conn170] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:43.335+0000
2019-09-04T06:29:13.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.338+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:13.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.418+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:13.418+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:13.418+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578548, 2)
2019-09-04T06:29:13.418+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5560
2019-09-04T06:29:13.418+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:13.418+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:13.418+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5560
2019-09-04T06:29:13.419+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5563
2019-09-04T06:29:13.419+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5563
2019-09-04T06:29:13.419+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578548, 2), t: 1 }({ ts: Timestamp(1567578548, 2), t: 1 })
2019-09-04T06:29:13.422+0000 D2 ASIO [RS] Request 364 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpApplied: { ts: Timestamp(1567578548, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) }
2019-09-04T06:29:13.422+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpApplied: { ts: Timestamp(1567578548, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:13.422+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:13.422+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:13.423+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:22.981+0000
2019-09-04T06:29:13.423+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:24.298+0000
2019-09-04T06:29:13.423+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:13.423+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 372 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:23.423+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }
2019-09-04T06:29:13.423+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:13.423+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:38.418+0000
2019-09-04T06:29:13.430+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:13.430+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:13.430+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 373 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:43.430+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:13.430+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:38.418+0000
2019-09-04T06:29:13.430+0000 D2 ASIO [RS] Request 373 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) }
2019-09-04T06:29:13.430+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:13.430+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:13.430+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:38.418+0000
2019-09-04T06:29:13.438+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:13.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.532+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.532+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.538+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:13.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.638+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:13.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.739+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:13.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.839+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:13.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.939+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:13.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:13.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:13.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:14.032+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.032+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.039+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:14.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.139+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:14.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 0FABB2E678EF57FFDBDE2F3CC344D50E7B065174), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:14.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:29:14.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 0FABB2E678EF57FFDBDE2F3CC344D50E7B065174), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:14.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 0FABB2E678EF57FFDBDE2F3CC344D50E7B065174), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:14.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405) }
2019-09-04T06:29:14.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 0FABB2E678EF57FFDBDE2F3CC344D50E7B065174), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:29:14.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:29:14.239+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:14.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.340+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:14.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.418+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:14.419+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:14.419+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578548, 2)
2019-09-04T06:29:14.419+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5601
2019-09-04T06:29:14.419+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:14.419+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:14.419+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5601
2019-09-04T06:29:14.419+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5604
2019-09-04T06:29:14.419+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5604
2019-09-04T06:29:14.419+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578548, 2), t: 1 }({ ts: Timestamp(1567578548, 2), t: 1 })
2019-09-04T06:29:14.440+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:14.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.532+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.532+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.540+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:14.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.640+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:14.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.740+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:14.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:29:14.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 374) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:14.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 374 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:24.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:14.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:14.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.837+0000 D2 ASIO [Replication] Request 374 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) }
2019-09-04T06:29:14.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:29:14.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:29:14.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 374) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) }
2019-09-04T06:29:14.837+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary
2019-09-04T06:29:14.837+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:29:24.298+0000
2019-09-04T06:29:14.837+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:29:25.314+0000
2019-09-04T06:29:14.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:14.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:29:14.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:14.838+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:16.837Z
2019-09-04T06:29:14.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:29:14.838+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 375) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:14.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:14.838+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 375 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:24.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:14.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:14.838+0000 D2 ASIO [Replication] Request 375 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) }
2019-09-04T06:29:14.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:14.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:14.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 375) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), opTime: { ts: Timestamp(1567578548, 2), t: 1 }, wallTime: new Date(1567578548405), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578548, 2) }
2019-09-04T06:29:14.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:29:14.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:16.838Z
2019-09-04T06:29:14.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:14.840+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:14.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.931+0000 D2 ASIO [RS] Request 372 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578554, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578554929), o: { $v: 1, $set: { ping: new Date(1567578554926), up: 2455 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpApplied: { ts: Timestamp(1567578554, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578554, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578554, 1) }
2019-09-04T06:29:14.931+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578554, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578554929), o: { $v: 1, $set: { ping: new Date(1567578554926), up: 2455 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpApplied: { ts: Timestamp(1567578554, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578548, 2), $clusterTime: { clusterTime: Timestamp(1567578554, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578554, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:14.931+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:14.932+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578554, 1) and ending at ts: Timestamp(1567578554, 1)
2019-09-04T06:29:14.932+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:25.314+0000
2019-09-04T06:29:14.932+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:25.609+0000
2019-09-04T06:29:14.932+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:29:14.932+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:14.932+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578554, 1), t: 1 }
2019-09-04T06:29:14.932+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:14.932+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:14.932+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578548, 2)
2019-09-04T06:29:14.932+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5623
2019-09-04T06:29:14.932+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:14.932+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:14.932+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5623
2019-09-04T06:29:14.932+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:14.932+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:14.932+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578548, 2)
2019-09-04T06:29:14.932+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:29:14.932+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5626
2019-09-04T06:29:14.932+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:14.932+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:14.932+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578554, 1) }
2019-09-04T06:29:14.932+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5626
2019-09-04T06:29:14.932+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5605
2019-09-04T06:29:14.932+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5605
2019-09-04T06:29:14.932+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5629
2019-09-04T06:29:14.932+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5629
2019-09-04T06:29:14.932+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:14.932+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 5631
2019-09-04T06:29:14.932+0000 D4 STORAGE [repl-writer-worker-1] inserting record with timestamp Timestamp(1567578554, 1)
2019-09-04T06:29:14.932+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578554, 1)
2019-09-04T06:29:14.932+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 5631
2019-09-04T06:29:14.932+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:14.932+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:29:14.932+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5630
2019-09-04T06:29:14.932+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5630
2019-09-04T06:29:14.932+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5633
2019-09-04T06:29:14.932+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5633
2019-09-04T06:29:14.932+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578554, 1), t: 1 }({ ts: Timestamp(1567578554, 1), t: 1 })
2019-09-04T06:29:14.932+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578554, 1)
2019-09-04T06:29:14.932+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5634
2019-09-04T06:29:14.932+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578554, 1) } } ] } sort: {} projection: {}
2019-09-04T06:29:14.932+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:14.932+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:14.932+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578554, 1) Sort: {} Proj: {} =============================
2019-09-04T06:29:14.932+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:14.932+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:14.932+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:14.932+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578554, 1) || First: notFirst: full path: ts
2019-09-04T06:29:14.932+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:14.932+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578554, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:14.932+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:14.932+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:14.932+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:14.932+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:14.932+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:14.932+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:14.932+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:14.932+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:14.933+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:14.933+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578554, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:14.933+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:14.933+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:14.933+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:14.933+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578554, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:14.933+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:14.933+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578554, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:14.933+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5634
2019-09-04T06:29:14.933+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:14.933+0000 D3 STORAGE [repl-writer-worker-5] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:14.933+0000 D3 REPL [repl-writer-worker-5] applying op: { ts: Timestamp(1567578554, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578554929), o: { $v: 1, $set: { ping: new Date(1567578554926), up: 2455 } } }, oplog application mode: Secondary
2019-09-04T06:29:14.933+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578554, 1)
2019-09-04T06:29:14.933+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 5636
2019-09-04T06:29:14.933+0000 D2 QUERY [repl-writer-worker-5] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:29:14.933+0000 D4 WRITE [repl-writer-worker-5] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:14.933+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 5636
2019-09-04T06:29:14.933+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:14.933+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578554, 1), t: 1 }({ ts: Timestamp(1567578554, 1), t: 1 })
2019-09-04T06:29:14.933+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578554, 1)
2019-09-04T06:29:14.933+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5635
2019-09-04T06:29:14.933+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and Sort: {} Proj: {} =============================
2019-09-04T06:29:14.933+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:14.933+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:14.933+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:14.933+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:14.933+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:14.933+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5635
2019-09-04T06:29:14.933+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578554, 1)
2019-09-04T06:29:14.933+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:14.933+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:14.933+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 376 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:44.933+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:14.933+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:44.933+0000
2019-09-04T06:29:14.933+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5639
2019-09-04T06:29:14.933+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5639
2019-09-04T06:29:14.933+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578554, 1), t: 1 }({ ts: Timestamp(1567578554, 1), t: 1 })
2019-09-04T06:29:14.933+0000 D2 ASIO [RS] Request 376 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpVisible: { ts: Timestamp(1567578554, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578554, 1), $clusterTime: { clusterTime: Timestamp(1567578554, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578554, 1) }
2019-09-04T06:29:14.933+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpVisible: { ts: Timestamp(1567578554, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578554, 1), $clusterTime: { clusterTime: Timestamp(1567578554, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578554, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:14.933+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:14.933+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:44.933+0000
2019-09-04T06:29:14.934+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578554, 1), t: 1 }
2019-09-04T06:29:14.934+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 377 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:24.934+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578548, 2), t: 1 } }
2019-09-04T06:29:14.934+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:44.933+0000
2019-09-04T06:29:14.934+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:29:14.934+0000 D2 ASIO [RS] Request 377 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpVisible: { ts: Timestamp(1567578554, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpApplied: { ts: Timestamp(1567578554, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578554, 1), $clusterTime: { clusterTime: Timestamp(1567578554, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578554, 1) }
2019-09-04T06:29:14.934+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpVisible: { ts: Timestamp(1567578554, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpApplied: { ts: Timestamp(1567578554, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578554, 1), $clusterTime: { clusterTime: Timestamp(1567578554, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578554, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:14.934+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:14.934+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:14.934+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:14.934+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 378 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:44.934+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, durableWallTime: new Date(1567578548405), appliedOpTime: { ts: Timestamp(1567578548, 2), t: 1 }, appliedWallTime: new Date(1567578548405), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578548, 2), t: 1 }, lastCommittedWall: new Date(1567578548405), lastOpVisible: { ts: Timestamp(1567578548, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:14.934+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:14.934+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.934+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:44.934+0000
2019-09-04T06:29:14.934+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.934+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578549, 1)
2019-09-04T06:29:14.934+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:25.609+0000
2019-09-04T06:29:14.934+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:25.241+0000
2019-09-04T06:29:14.934+0000 D3 REPL [conn163] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.934+0000 D3 REPL [conn163] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.888+0000
2019-09-04T06:29:14.934+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.934+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:14.934+0000 D3 REPL [conn165] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.934+0000 D3 REPL [conn165] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:35.182+0000
2019-09-04T06:29:14.934+0000 D3 REPL [conn166] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.934+0000 D3 REPL [conn166] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.275+0000
2019-09-04T06:29:14.934+0000 D3 REPL [conn138] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.934+0000 D3 REPL [conn138] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.289+0000
2019-09-04T06:29:14.934+0000 D2 ASIO [RS] Request 378 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpVisible: { ts: Timestamp(1567578554, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578554, 1), $clusterTime: { clusterTime: Timestamp(1567578554, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578554, 1) }
2019-09-04T06:29:14.934+0000 D3 REPL [conn139] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.934+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpVisible: { ts: Timestamp(1567578554, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578554, 1), $clusterTime: { clusterTime: Timestamp(1567578554, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578554, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:14.934+0000 D3 REPL [conn139] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.319+0000
2019-09-04T06:29:14.934+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:29:14.934+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:14.934+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:14.934+0000 D3 REPL [conn146] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.934+0000 D3 REPL [conn146] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.233+0000
2019-09-04T06:29:14.934+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:44.934+0000
2019-09-04T06:29:14.934+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.934+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000
2019-09-04T06:29:14.934+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.934+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000
2019-09-04T06:29:14.934+0000 D3 REPL [conn148] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn148] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn168] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn168] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.272+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.935+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 379 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:24.935+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578554, 1), t: 1 } }
2019-09-04T06:29:14.935+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn122] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn122] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.234+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn170] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn170] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:43.335+0000
2019-09-04T06:29:14.935+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:44.934+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn118] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn118] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.224+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn149] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn149] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.239+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn164] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn164] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.692+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn129] Got notified of new snapshot: { ts: Timestamp(1567578554, 1), t: 1 }, 2019-09-04T06:29:14.929+0000
2019-09-04T06:29:14.935+0000 D3 REPL [conn129] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:15.240+0000
2019-09-04T06:29:14.941+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:14.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:14.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:14.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:15.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:15.032+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578554, 1)
2019-09-04T06:29:15.032+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:15.032+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:15.041+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:15.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578554, 1), signature: { hash: BinData(0, C9ECE6678C5583E865B73828B466E46501210270), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:15.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:29:15.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578554, 1), signature: { hash: BinData(0, C9ECE6678C5583E865B73828B466E46501210270), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:15.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578554, 1), signature: { hash: BinData(0, C9ECE6678C5583E865B73828B466E46501210270), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:15.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), opTime: { ts: Timestamp(1567578554, 1), t: 1 }, wallTime: new Date(1567578554929) }
2019-09-04T06:29:15.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578554, 1), signature: { hash: BinData(0, C9ECE6678C5583E865B73828B466E46501210270), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:15.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:15.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:15.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:15.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:15.127+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:15.127+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:15.137+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:15.137+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:15.141+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:15.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:15.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:15.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:15.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:15.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:15.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:15.224+0000 I COMMAND [conn148] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, A9EF6308D6B77986F6D15C773C57E66A2510FEA9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:29:15.224+0000 I COMMAND [conn118] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 87C50F1B8A58F46D74C3BCF7B0920458500C9D85), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:29:15.224+0000 D1 - [conn148] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:15.224+0000 D1 - [conn118] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:15.224+0000 W - [conn148] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:15.224+0000 W - [conn118] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:15.226+0000 I NETWORK [listener] connection accepted from 10.108.2.53:50656 #171 (83 connections now open)
2019-09-04T06:29:15.226+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:29:15.226+0000 D2 COMMAND [conn171] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:29:15.226+0000 I NETWORK [conn171] received client metadata from 10.108.2.53:50656 conn171: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:29:15.226+0000 I COMMAND [conn171] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:29:15.228+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46576 #172 (84 connections now open)
2019-09-04T06:29:15.228+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:29:15.228+0000 D2 COMMAND [conn172] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:29:15.228+0000 I NETWORK [conn172] received client metadata from 10.108.2.64:46576 conn172: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:29:15.228+0000 I COMMAND [conn172] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:29:15.234+0000 I COMMAND [conn146] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578520, 1), signature: { hash: BinData(0, 4AC25D5CB9A6D9101F27355D0FB7D0FF04C668B6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:29:15.234+0000 D1 - [conn146] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:15.234+0000 W - [conn146] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:15.234+0000 I COMMAND [conn122] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578522, 1), signature: { hash: BinData(0, 21A966CF5FD66B29B7A606E7014BBC74FFC0F15C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:29:15.234+0000 D1 - [conn122] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:15.234+0000 W - [conn122] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:15.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:29:15.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:29:15.235+0000 I NETWORK [listener] connection accepted from 10.108.2.47:56504 #173 (85 connections now open)
2019-09-04T06:29:15.235+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:29:15.235+0000 D2 COMMAND [conn173] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:29:15.235+0000 I NETWORK [conn173] received client metadata from 10.108.2.47:56504 conn173: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:29:15.235+0000 I COMMAND [conn173] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:29:15.239+0000 I COMMAND [conn149] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578515, 1), signature: { hash: BinData(0, 687BE122806C404325073B712C2FE776A546E867), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:29:15.239+0000 D1 - [conn149] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:15.239+0000 W - [conn149] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:15.240+0000 I COMMAND [conn129] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, A9EF6308D6B77986F6D15C773C57E66A2510FEA9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:29:15.240+0000 D1 - [conn129] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:15.240+0000 W - [conn129] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:15.241+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:15.263+0000 I - [conn148] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
mongod(+0x10FBF24) [0x56174a083f24]
mongod(+0x10FCE0E) [0x56174a084e0e]
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:29:15.263+0000 D1 COMMAND [conn148] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, A9EF6308D6B77986F6D15C773C57E66A2510FEA9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:15.263+0000 D1 - [conn148] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:15.263+0000 W - [conn148] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:15.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.280+0000 I - [conn129] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6St
atusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, 
"buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:29:15.280+0000 D1 COMMAND [conn129] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, A9EF6308D6B77986F6D15C773C57E66A2510FEA9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:15.280+0000 D1 - [conn129] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:29:15.280+0000 W - [conn129] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:15.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:15.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:15.297+0000 D1 COMMAND [conn149] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578515, 1), signature: { hash: BinData(0, 687BE122806C404325073B712C2FE776A546E867), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:15.297+0000 D1 - [conn149] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:29:15.297+0000 W - [conn149] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:15.302+0000 D1 COMMAND [conn118] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 87C50F1B8A58F46D74C3BCF7B0920458500C9D85), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:15.302+0000 D1 - [conn118] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:29:15.302+0000 W - [conn118] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:15.318+0000 D1 COMMAND [conn146] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578520, 1), signature: { hash: BinData(0, 4AC25D5CB9A6D9101F27355D0FB7D0FF04C668B6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:15.318+0000 D1 - [conn146] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:29:15.318+0000 W - [conn146] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:15.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:15.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.339+0000 I - [conn129] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1E
D7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) 
[0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:29:15.339+0000 W COMMAND [conn129] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:29:15.339+0000 I COMMAND [conn129] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, A9EF6308D6B77986F6D15C773C57E66A2510FEA9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30050ms
2019-09-04T06:29:15.339+0000 D2 NETWORK [conn129] Session from 10.108.2.47:56468 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:29:15.339+0000 I NETWORK [conn129] end connection 10.108.2.47:56468 (84 connections now open)
2019-09-04T06:29:15.341+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:15.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:15.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:15.359+0000 W COMMAND [conn148] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:29:15.359+0000 I COMMAND [conn148] command admin.$cmd command: find { find:
"system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, A9EF6308D6B77986F6D15C773C57E66A2510FEA9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30048ms 2019-09-04T06:29:15.359+0000 D2 NETWORK [conn148] Session from 10.108.2.46:40912 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:15.359+0000 I NETWORK [conn148] end connection 10.108.2.46:40912 (83 connections now open) 2019-09-04T06:29:15.370+0000 I - [conn122] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","
o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, 
{ "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) 
[0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:29:15.370+0000 D1 COMMAND [conn122] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578522, 1), signature: { hash: BinData(0, 21A966CF5FD66B29B7A606E7014BBC74FFC0F15C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:15.370+0000 D1 - [conn122] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:29:15.370+0000 W - [conn122] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:15.390+0000 W COMMAND [conn146] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:29:15.390+0000 I COMMAND [conn146] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578520, 1), signature: { hash: BinData(0, 4AC25D5CB9A6D9101F27355D0FB7D0FF04C668B6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30094ms
2019-09-04T06:29:15.390+0000 D2 NETWORK [conn146] Session from 10.108.2.49:53306 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:29:15.390+0000 I NETWORK [conn146] end connection 10.108.2.49:53306 (82 connections now open)
2019-09-04T06:29:15.411+0000 W COMMAND [conn118] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:29:15.411+0000 I COMMAND [conn118] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts:
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578519, 1), signature: { hash: BinData(0, 87C50F1B8A58F46D74C3BCF7B0920458500C9D85), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30087ms 2019-09-04T06:29:15.411+0000 D2 NETWORK [conn118] Session from 10.108.2.51:59068 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:15.411+0000 I NETWORK [conn118] end connection 10.108.2.51:59068 (81 connections now open) 2019-09-04T06:29:15.412+0000 D2 COMMAND [conn145] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578549, 1), signature: { hash: BinData(0, D8A1192E2948EF75C7130338E2090A58950ED76E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:15.412+0000 D1 REPL [conn145] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578554, 1), t: 1 } 2019-09-04T06:29:15.412+0000 D3 REPL [conn145] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:15.412+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38628 #174 (82 connections now open) 2019-09-04T06:29:15.412+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:15.412+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52084 #175 (83 connections now open) 2019-09-04T06:29:15.412+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:15.412+0000 D2 COMMAND [conn174] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:15.412+0000 I NETWORK [conn174] received client metadata from 10.108.2.44:38628 conn174: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:15.412+0000 I COMMAND [conn174] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:15.412+0000 D2 COMMAND [conn175] run command 
admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:15.412+0000 I NETWORK [conn175] received client metadata from 10.108.2.58:52084 conn175: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:15.412+0000 I COMMAND [conn175] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:15.412+0000 D2 COMMAND [conn174] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, A5F366D9C352A6D5B06F4537F7BE1EE092889425), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:15.412+0000 D1 REPL [conn174] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578554, 1), t: 1 } 2019-09-04T06:29:15.412+0000 D3 REPL [conn174] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:15.412+0000 D2 COMMAND [conn175] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578553, 1), signature: { hash: BinData(0, 0F4123AD4F6F23E0F6991DA797B74DD1D7349CDB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:15.412+0000 D1 REPL [conn175] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578554, 1), t: 1 } 2019-09-04T06:29:15.412+0000 D3 REPL [conn175] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:15.414+0000 I NETWORK [listener] connection accepted from 10.108.2.46:40932 #176 (84 connections now open) 2019-09-04T06:29:15.414+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:15.414+0000 D2 COMMAND [conn176] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux 
release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:29:15.414+0000 I NETWORK [conn176] received client metadata from 10.108.2.46:40932 conn176: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:29:15.414+0000 I COMMAND [conn176] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:29:15.414+0000 D2 COMMAND [conn176] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, BDB37965C7D794D8E96FEBC7A1DC5F3E24B76E27), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:29:15.414+0000 D1 REPL [conn176] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578554, 1), t: 1 }
2019-09-04T06:29:15.414+0000 D3 REPL [conn176] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.424+0000
2019-09-04T06:29:15.424+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:15.424+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:15.430+0000 W COMMAND [conn122] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:29:15.430+0000 I COMMAND [conn122] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts:
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578522, 1), signature: { hash: BinData(0, 21A966CF5FD66B29B7A606E7014BBC74FFC0F15C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30145ms 2019-09-04T06:29:15.430+0000 D2 NETWORK [conn122] Session from 10.108.2.53:50624 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:15.430+0000 I NETWORK [conn122] end connection 10.108.2.53:50624 (83 connections now open) 2019-09-04T06:29:15.432+0000 I NETWORK [listener] connection accepted from 10.108.2.49:53330 #177 (84 connections now open) 2019-09-04T06:29:15.432+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:15.432+0000 D2 COMMAND [conn177] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:15.432+0000 I NETWORK [conn177] received client metadata from 10.108.2.49:53330 conn177: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:15.432+0000 I COMMAND [conn177] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:15.436+0000 D2 COMMAND [conn177] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 372D11DC30DAD6D51E7FE642D8716FF825445C85), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:15.436+0000 D1 REPL [conn177] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578554, 1), t: 1 } 2019-09-04T06:29:15.436+0000 D3 REPL [conn177] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.446+0000 2019-09-04T06:29:15.441+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:15.450+0000 I - [conn149] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 
0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : 
"/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, 
"buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:15.450+0000 W COMMAND [conn149] Unable to gather storage statistics for a slow operation due to lock aquire 
timeout 2019-09-04T06:29:15.450+0000 I COMMAND [conn149] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578515, 1), signature: { hash: BinData(0, 687BE122806C404325073B712C2FE776A546E867), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30068ms 2019-09-04T06:29:15.450+0000 D2 NETWORK [conn149] Session from 10.108.2.64:46560 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:15.450+0000 I NETWORK [conn149] end connection 10.108.2.64:46560 (83 connections now open) 2019-09-04T06:29:15.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.532+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.532+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.541+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:15.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.627+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.627+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.637+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.637+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.641+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:15.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" 
} numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.672+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.741+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:15.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.842+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:15.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.924+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.924+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.932+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:15.932+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:15.932+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578554, 1) 2019-09-04T06:29:15.932+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5692 2019-09-04T06:29:15.932+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:15.932+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:15.932+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5692 2019-09-04T06:29:15.934+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5695 
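
[Editor: the conn122 and conn149 failures above, and the waitUntilOpTime stalls on conn177/conn179, are all the same pattern: a find on admin.system.keys with readConcern majority at afterOpTime { ts: Timestamp(1566459168, 1), t: 92 }, while this node's current majority snapshot is in term 1 ({ ts: Timestamp(1567578554, 1), t: 1 }), so the wait runs the full 30-second maxTimeMS and fails with MaxTimeMSExpired (code 50). A minimal mongo shell sketch of the failing command, copied from the log, is below; note this assumes you are connected to this config server, and the server may reject afterOpTime from an external client since it is normally sent only by internal clients.]

    // Re-issue the key lookup that keeps expiring. The afterOpTime below is
    // taken verbatim from the log (term 92); this node reports term 1, so
    // waitUntilOpTime blocks until maxTimeMS fires (MaxTimeMSExpired, code 50).
    db.getSiblingDB("admin").runCommand({
        find: "system.keys",
        filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } },
        sort: { expiresAt: 1 },
        readConcern: { level: "majority",
                       afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } },
        maxTimeMS: 30000
    })
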
2019-09-04T06:29:15.934+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5695 2019-09-04T06:29:15.934+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578554, 1), t: 1 }({ ts: Timestamp(1567578554, 1), t: 1 }) 2019-09-04T06:29:15.942+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:15.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:15.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:15.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:16.042+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:16.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.142+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:16.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 7C917D5FA2D689F267C53BA48903CE3CB75E55F6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:16.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:16.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 7C917D5FA2D689F267C53BA48903CE3CB75E55F6), keyId: 
6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:16.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 7C917D5FA2D689F267C53BA48903CE3CB75E55F6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:16.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), opTime: { ts: Timestamp(1567578554, 1), t: 1 }, wallTime: new Date(1567578554929) } 2019-09-04T06:29:16.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 7C917D5FA2D689F267C53BA48903CE3CB75E55F6), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:16.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:16.242+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:16.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.299+0000 D2 COMMAND [conn125] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:29:16.299+0000 I COMMAND [conn125] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.342+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:16.351+0000 I NETWORK [listener] connection accepted from 10.20.102.80:61183 #178 (84 connections now open) 2019-09-04T06:29:16.351+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:16.351+0000 D2 COMMAND [conn178] run command admin.$cmd { isMaster: 1, client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, 
$db: "admin" } 2019-09-04T06:29:16.351+0000 I NETWORK [conn178] received client metadata from 10.20.102.80:61183 conn178: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } } 2019-09-04T06:29:16.351+0000 I COMMAND [conn178] command admin.$cmd appName: "MongoDB Shell" command: isMaster { isMaster: 1, client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:29:16.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.362+0000 D2 COMMAND [conn178] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-256", payload: "xxx", $db: "admin" } 2019-09-04T06:29:16.362+0000 D1 ACCESS [conn178] Returning user dba_root@admin from cache 2019-09-04T06:29:16.362+0000 I COMMAND [conn178] command admin.$cmd appName: "MongoDB Shell" command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-256", payload: "xxx", $db: "admin" } numYields:0 reslen:410 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.373+0000 D2 COMMAND [conn178] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:29:16.373+0000 I COMMAND [conn178] command admin.$cmd appName: "MongoDB Shell" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:339 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.384+0000 D2 COMMAND [conn178] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:29:16.384+0000 D1 ACCESS [conn178] Returning user dba_root@admin from cache 2019-09-04T06:29:16.385+0000 I ACCESS [conn178] Successfully authenticated as principal dba_root on admin from client 10.20.102.80:61183 2019-09-04T06:29:16.385+0000 I COMMAND [conn178] command admin.$cmd appName: "MongoDB Shell" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:293 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.395+0000 D2 COMMAND [conn178] run command config.$cmd { ping: 1.0, lsid: { id: UUID("4ca3bc30-0f16-4335-a15f-3e7d48b5566e") }, $clusterTime: { clusterTime: Timestamp(1567578374, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:29:16.396+0000 I COMMAND [conn178] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("4ca3bc30-0f16-4335-a15f-3e7d48b5566e") }, $clusterTime: { clusterTime: Timestamp(1567578374, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.402+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34202 #179 (85 connections now open) 2019-09-04T06:29:16.402+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 
2019-09-04T06:29:16.402+0000 D2 COMMAND [conn179] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:16.402+0000 I NETWORK [conn179] received client metadata from 10.108.2.57:34202 conn179: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:16.403+0000 I COMMAND [conn179] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:16.407+0000 D2 COMMAND [conn179] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, A5F366D9C352A6D5B06F4537F7BE1EE092889425), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:16.407+0000 D1 REPL [conn179] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578554, 1), t: 1 } 2019-09-04T06:29:16.407+0000 D3 REPL [conn179] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:46.417+0000 2019-09-04T06:29:16.442+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:16.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.543+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:16.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.566+0000 I COMMAND [conn52] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.643+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:16.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.743+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:16.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:16.837+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 380) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:16.837+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 380 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:26.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:16.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:16.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.837+0000 D2 ASIO [Replication] Request 380 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), opTime: { ts: Timestamp(1567578554, 1), t: 1 }, wallTime: new Date(1567578554929), 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpVisible: { ts: Timestamp(1567578554, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578554, 1), $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578554, 1) } 2019-09-04T06:29:16.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), opTime: { ts: Timestamp(1567578554, 1), t: 1 }, wallTime: new Date(1567578554929), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpVisible: { ts: Timestamp(1567578554, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578554, 1), $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578554, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:16.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:16.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 380) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), opTime: { ts: Timestamp(1567578554, 1), t: 1 }, wallTime: new Date(1567578554929), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpVisible: { ts: Timestamp(1567578554, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578554, 1), $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578554, 1) } 2019-09-04T06:29:16.837+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:16.837+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:29:25.241+0000 2019-09-04T06:29:16.837+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:29:28.107+0000 2019-09-04T06:29:16.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:16.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:16.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:16.837+0000 D2 REPL_HB [replexec-0] 
Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:18.837Z 2019-09-04T06:29:16.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:16.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:16.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 381) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:16.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 381 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:26.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:16.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:16.838+0000 D2 ASIO [Replication] Request 381 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), opTime: { ts: Timestamp(1567578554, 1), t: 1 }, wallTime: new Date(1567578554929), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpVisible: { ts: Timestamp(1567578554, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578554, 1), $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578554, 1) } 2019-09-04T06:29:16.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), opTime: { ts: Timestamp(1567578554, 1), t: 1 }, wallTime: new Date(1567578554929), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpVisible: { ts: Timestamp(1567578554, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578554, 1), $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578554, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:16.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:16.838+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 381) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), opTime: { ts: Timestamp(1567578554, 1), t: 1 }, wallTime: new Date(1567578554929), $replData: { term: 1, 
lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpVisible: { ts: Timestamp(1567578554, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578554, 1), $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578554, 1) } 2019-09-04T06:29:16.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:16.838+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:18.838Z 2019-09-04T06:29:16.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:16.843+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:16.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.932+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:16.932+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:16.932+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578554, 1) 2019-09-04T06:29:16.932+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5735 2019-09-04T06:29:16.932+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:16.933+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:16.933+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5735 2019-09-04T06:29:16.934+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5738 2019-09-04T06:29:16.934+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5738 2019-09-04T06:29:16.934+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578554, 1), t: 1 }({ ts: Timestamp(1567578554, 1), t: 1 }) 2019-09-04T06:29:16.943+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:16.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:16.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:16.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster 
{ isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:17.043+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:17.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.061+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:17.061+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:29:16.837+0000 2019-09-04T06:29:17.061+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:29:16.838+0000 2019-09-04T06:29:17.061+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:29:16.837+0000 2019-09-04T06:29:17.061+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:29:26.837+0000 2019-09-04T06:29:17.061+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:17.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 7C917D5FA2D689F267C53BA48903CE3CB75E55F6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:17.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:17.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 7C917D5FA2D689F267C53BA48903CE3CB75E55F6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:17.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 7C917D5FA2D689F267C53BA48903CE3CB75E55F6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:17.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), opTime: { ts: Timestamp(1567578554, 1), t: 1 }, wallTime: new Date(1567578554929) } 2019-09-04T06:29:17.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 7C917D5FA2D689F267C53BA48903CE3CB75E55F6), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:29:17.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.143+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:17.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:17.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:17.244+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:17.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.344+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:17.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.444+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:17.453+0000 D2 ASIO [RS] Request 379 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578557, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578557452), o: { $v: 1, $set: { ping: new Date(1567578557451) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpVisible: { ts: 
Timestamp(1567578554, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpApplied: { ts: Timestamp(1567578557, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578554, 1), $clusterTime: { clusterTime: Timestamp(1567578557, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 1) } 2019-09-04T06:29:17.453+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578557, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578557452), o: { $v: 1, $set: { ping: new Date(1567578557451) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpVisible: { ts: Timestamp(1567578554, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpApplied: { ts: Timestamp(1567578557, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578554, 1), $clusterTime: { clusterTime: Timestamp(1567578557, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:17.453+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:17.453+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578557, 1) and ending at ts: Timestamp(1567578557, 1) 2019-09-04T06:29:17.453+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:28.107+0000 2019-09-04T06:29:17.453+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:28.634+0000 2019-09-04T06:29:17.453+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:17.453+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:17.453+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578557, 1), t: 1 } 2019-09-04T06:29:17.454+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:17.454+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:17.454+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578554, 1) 2019-09-04T06:29:17.454+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5758 2019-09-04T06:29:17.454+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", 
options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:17.454+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:17.454+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5758 2019-09-04T06:29:17.454+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:17.454+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:29:17.454+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:17.454+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578554, 1) 2019-09-04T06:29:17.454+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5761 2019-09-04T06:29:17.454+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578557, 1) } 2019-09-04T06:29:17.454+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:17.454+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:17.454+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5761 2019-09-04T06:29:17.454+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5739 2019-09-04T06:29:17.454+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5739 2019-09-04T06:29:17.454+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5764 2019-09-04T06:29:17.454+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5764 2019-09-04T06:29:17.454+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:17.454+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 5766 2019-09-04T06:29:17.454+0000 D4 STORAGE [repl-writer-worker-9] inserting record with timestamp Timestamp(1567578557, 1) 2019-09-04T06:29:17.454+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578557, 1) 2019-09-04T06:29:17.454+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 5766 2019-09-04T06:29:17.454+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:17.454+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:17.454+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5765 2019-09-04T06:29:17.454+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5765 2019-09-04T06:29:17.454+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5768 2019-09-04T06:29:17.454+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5768 2019-09-04T06:29:17.454+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578557, 1), t: 1 }({ ts: Timestamp(1567578557, 1), t: 1 }) 
2019-09-04T06:29:17.454+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578557, 1)
2019-09-04T06:29:17.454+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5769
2019-09-04T06:29:17.454+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578557, 1) } } ] } sort: {} projection: {}
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578557, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578557, 1) || First: notFirst: full path: ts
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578557, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578557, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578557, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:17.454+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578557, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:17.455+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5769
2019-09-04T06:29:17.455+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:17.455+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:17.455+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578557, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578557452), o: { $v: 1, $set: { ping: new Date(1567578557451) } } }, oplog application mode: Secondary
2019-09-04T06:29:17.455+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578557, 1)
2019-09-04T06:29:17.455+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 5771
2019-09-04T06:29:17.455+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }
2019-09-04T06:29:17.455+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:17.455+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 5771
2019-09-04T06:29:17.455+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:17.455+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578557, 1), t: 1 }({ ts: Timestamp(1567578557, 1), t: 1 })
2019-09-04T06:29:17.455+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578557, 1)
2019-09-04T06:29:17.455+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5770
2019-09-04T06:29:17.455+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:17.455+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:17.455+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:17.455+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:17.455+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:17.455+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:17.455+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5770
2019-09-04T06:29:17.455+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578557, 1)
2019-09-04T06:29:17.455+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:17.455+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5774
2019-09-04T06:29:17.455+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578557, 1), t: 1 }, appliedWallTime: new Date(1567578557452), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpVisible: { ts: Timestamp(1567578554, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:17.455+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5774
2019-09-04T06:29:17.455+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 382 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:47.455+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578557, 1), t: 1 }, appliedWallTime: new Date(1567578557452), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578554, 1), t: 1 }, lastCommittedWall: new Date(1567578554929), lastOpVisible: { ts:
Timestamp(1567578554, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:17.455+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:47.455+0000 2019-09-04T06:29:17.455+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578557, 1), t: 1 }({ ts: Timestamp(1567578557, 1), t: 1 }) 2019-09-04T06:29:17.456+0000 D2 ASIO [RS] Request 382 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 1), t: 1 }, lastCommittedWall: new Date(1567578557452), lastOpVisible: { ts: Timestamp(1567578557, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 1), $clusterTime: { clusterTime: Timestamp(1567578557, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 1) } 2019-09-04T06:29:17.456+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 1), t: 1 }, lastCommittedWall: new Date(1567578557452), lastOpVisible: { ts: Timestamp(1567578557, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 1), $clusterTime: { clusterTime: Timestamp(1567578557, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:17.456+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578557, 1), t: 1 } 2019-09-04T06:29:17.456+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:17.456+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 383 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:27.456+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578554, 1), t: 1 } } 2019-09-04T06:29:17.456+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:47.456+0000 2019-09-04T06:29:17.456+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:47.456+0000 2019-09-04T06:29:17.456+0000 D2 ASIO [RS] Request 383 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 1), t: 1 }, lastCommittedWall: new Date(1567578557452), lastOpVisible: { ts: Timestamp(1567578557, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578557, 1), t: 1 }, lastCommittedWall: new Date(1567578557452), lastOpApplied: { ts: Timestamp(1567578557, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578557, 1), $clusterTime: { clusterTime: Timestamp(1567578557, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 1) } 2019-09-04T06:29:17.456+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 1), t: 1 }, lastCommittedWall: new Date(1567578557452), lastOpVisible: { ts: Timestamp(1567578557, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578557, 1), t: 1 }, lastCommittedWall: new Date(1567578557452), lastOpApplied: { ts: Timestamp(1567578557, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 1), $clusterTime: { clusterTime: Timestamp(1567578557, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:17.456+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:17.456+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:29:17.456+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.456+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.456+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578552, 1) 2019-09-04T06:29:17.456+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:28.634+0000 2019-09-04T06:29:17.456+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:28.236+0000 2019-09-04T06:29:17.456+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:17.456+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:17.456+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 384 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:27.456+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578557, 1), t: 1 } } 2019-09-04T06:29:17.456+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:47.456+0000 2019-09-04T06:29:17.456+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.456+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:17.456+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.456+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000 2019-09-04T06:29:17.456+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: 
Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.456+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:17.456+0000 D3 REPL [conn170] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.456+0000 D3 REPL [conn170] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:43.335+0000 2019-09-04T06:29:17.456+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.456+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:29:17.456+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000 2019-09-04T06:29:17.456+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.456+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000 2019-09-04T06:29:17.456+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.456+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000 2019-09-04T06:29:17.456+0000 D3 REPL [conn177] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.456+0000 D3 REPL [conn177] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.446+0000 2019-09-04T06:29:17.456+0000 D3 REPL [conn179] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.456+0000 D3 REPL [conn179] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:46.417+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn163] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn163] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.888+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn165] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn165] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:35.182+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn166] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn166] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.275+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn138] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn138] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.289+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn139] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn139] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.319+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000 
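The getMore traffic above (RemoteCommand 383/384 against local.oplog.rs, batchSize: 13981010, maxTimeMS: 5000) is the standard tailable-cursor pattern: the fetcher holds one awaitData cursor open on the sync source, and an empty nextBatch simply means the 5-second await expired before new entries became visible. The same pattern can be sketched from a driver; PyMongo assumed, and the host, port, and resume timestamp below are illustrative only, not prescriptive:

# Sketch of the oplog tailing behind the repeated getMore requests above:
# a tailable, awaitData cursor on local.oplog.rs with ~5s server-side waits.
from pymongo import MongoClient, CursorType
from bson.timestamp import Timestamp

client = MongoClient("localhost", 27019)  # placeholder address
oplog = client["local"]["oplog.rs"]

last_ts = Timestamp(1567578557, 1)  # last optime fetched in the log above

cursor = oplog.find(
    {"ts": {"$gt": last_ts}},
    cursor_type=CursorType.TAILABLE_AWAIT,  # cursor stays open, awaits data
    oplog_replay=True,  # lets pre-4.4 servers seek on "ts" efficiently
).max_await_time_ms(5000)  # mirrors the "maxTimeMS: 5000" on each getMore

for entry in cursor:
    # e.g. Timestamp(1567578557, 2) u config.lockpings
    print(entry["ts"], entry["op"], entry["ns"])

When an entry does arrive, the loop sees exactly the documents shown in the nextBatch arrays here; on the server side, applying the batch advances the committed snapshot, which is what produces the bursts of "Got notified of new snapshot" lines on the waiting conn* threads.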
2019-09-04T06:29:17.457+0000 D3 REPL [conn168] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn168] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.272+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn164] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn164] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.692+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn145] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn145] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn174] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn174] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn175] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn175] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn176] Got notified of new snapshot: { ts: Timestamp(1567578557, 1), t: 1 }, 2019-09-04T06:29:17.452+0000 2019-09-04T06:29:17.457+0000 D3 REPL [conn176] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.424+0000 2019-09-04T06:29:17.457+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:17.457+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578557, 1), t: 1 }, 
durableWallTime: new Date(1567578557452), appliedOpTime: { ts: Timestamp(1567578557, 1), t: 1 }, appliedWallTime: new Date(1567578557452), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 1), t: 1 }, lastCommittedWall: new Date(1567578557452), lastOpVisible: { ts: Timestamp(1567578557, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:17.457+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 385 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:47.457+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578557, 1), t: 1 }, durableWallTime: new Date(1567578557452), appliedOpTime: { ts: Timestamp(1567578557, 1), t: 1 }, appliedWallTime: new Date(1567578557452), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 1), t: 1 }, lastCommittedWall: new Date(1567578557452), lastOpVisible: { ts: Timestamp(1567578557, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:17.457+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:47.456+0000 2019-09-04T06:29:17.457+0000 D2 ASIO [RS] Request 385 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 1), t: 1 }, lastCommittedWall: new Date(1567578557452), lastOpVisible: { ts: Timestamp(1567578557, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 1), $clusterTime: { clusterTime: Timestamp(1567578557, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 1) } 2019-09-04T06:29:17.457+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 1), t: 1 }, lastCommittedWall: new Date(1567578557452), lastOpVisible: { ts: Timestamp(1567578557, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 1), $clusterTime: { clusterTime: Timestamp(1567578557, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:17.457+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of 
pool replication 2019-09-04T06:29:17.457+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:47.456+0000 2019-09-04T06:29:17.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.544+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:17.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.553+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 1ms 2019-09-04T06:29:17.554+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578557, 1) 2019-09-04T06:29:17.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.644+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:17.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:17.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:17.715+0000 D2 ASIO [RS] Request 384 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578557, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578557713), o: { $v: 1, $set: { ping: new Date(1567578557707) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 1), t: 1 }, lastCommittedWall: new Date(1567578557452), lastOpVisible: { ts: Timestamp(1567578557, 1), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578557, 1), t: 1 }, lastCommittedWall: new Date(1567578557452), lastOpApplied: { ts: Timestamp(1567578557, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 1), $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 2) } 2019-09-04T06:29:17.715+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578557, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578557713), o: { $v: 1, $set: { ping: new Date(1567578557707) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 1), t: 1 }, lastCommittedWall: new Date(1567578557452), lastOpVisible: { ts: Timestamp(1567578557, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578557, 1), t: 1 }, lastCommittedWall: new Date(1567578557452), lastOpApplied: { ts: Timestamp(1567578557, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 1), $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:17.715+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:17.715+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578557, 2) and ending at ts: Timestamp(1567578557, 2) 2019-09-04T06:29:17.715+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:28.236+0000 2019-09-04T06:29:17.715+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:28.086+0000 2019-09-04T06:29:17.715+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:17.715+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:17.715+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578557, 2), t: 1 } 2019-09-04T06:29:17.715+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:17.715+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:17.715+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578557, 1) 2019-09-04T06:29:17.715+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5787 2019-09-04T06:29:17.715+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: 
true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:17.715+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:17.715+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5787 2019-09-04T06:29:17.715+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:17.715+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:29:17.715+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:17.715+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578557, 1) 2019-09-04T06:29:17.715+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578557, 2) } 2019-09-04T06:29:17.715+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5775 2019-09-04T06:29:17.715+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5790 2019-09-04T06:29:17.715+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:17.715+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:17.716+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5790 2019-09-04T06:29:17.716+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5775 2019-09-04T06:29:17.716+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5793 2019-09-04T06:29:17.716+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5793 2019-09-04T06:29:17.716+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:17.716+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 5795 2019-09-04T06:29:17.716+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578557, 2) 2019-09-04T06:29:17.716+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578557, 2) 2019-09-04T06:29:17.716+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 5795 2019-09-04T06:29:17.716+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:17.716+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:17.716+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5794 2019-09-04T06:29:17.716+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5794 2019-09-04T06:29:17.716+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5797 2019-09-04T06:29:17.716+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5797 2019-09-04T06:29:17.716+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578557, 2), t: 1 }({ ts: Timestamp(1567578557, 2), t: 1 }) 2019-09-04T06:29:17.716+0000 D3 STORAGE [rsSync-0] WT set timestamp of 
future write operations to Timestamp(1567578557, 2)
2019-09-04T06:29:17.716+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5798
2019-09-04T06:29:17.716+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578557, 2) } } ] } sort: {} projection: {}
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578557, 2)
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578557, 2) || First: notFirst: full path: ts
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578557, 2)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578557, 2)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578557, 2) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578557, 2)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:17.716+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5798
2019-09-04T06:29:17.716+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:17.716+0000 D3 STORAGE [repl-writer-worker-13] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:17.716+0000 D3 REPL [repl-writer-worker-13] applying op: { ts: Timestamp(1567578557, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578557713), o: { $v: 1, $set: { ping: new Date(1567578557707) } } }, oplog application mode: Secondary
2019-09-04T06:29:17.716+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578557, 2)
2019-09-04T06:29:17.716+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 5800
2019-09-04T06:29:17.716+0000 D2 QUERY [repl-writer-worker-13] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }
2019-09-04T06:29:17.716+0000 D4 WRITE [repl-writer-worker-13] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:17.716+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 5800
2019-09-04T06:29:17.716+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:17.716+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578557, 2), t: 1 }({ ts: Timestamp(1567578557, 2), t: 1 })
2019-09-04T06:29:17.716+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578557, 2)
2019-09-04T06:29:17.716+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5799
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:17.716+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:17.716+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:17.716+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5799
2019-09-04T06:29:17.716+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578557, 2)
2019-09-04T06:29:17.717+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5804
2019-09-04T06:29:17.717+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5804
2019-09-04T06:29:17.717+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578557, 2), t: 1 }({ ts: Timestamp(1567578557, 2), t: 1 })
2019-09-04T06:29:17.717+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:17.717+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578557, 1), t: 1 }, durableWallTime: new Date(1567578557452), appliedOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, appliedWallTime: new Date(1567578557713), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 1), t: 1 }, lastCommittedWall: new Date(1567578557452), lastOpVisible: { ts: Timestamp(1567578557, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:17.717+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 386 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:47.717+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578557, 1), t: 1 }, durableWallTime: new Date(1567578557452), appliedOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, appliedWallTime: new Date(1567578557713), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 2, cfgver: 2 } ],
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 1), t: 1 }, lastCommittedWall: new Date(1567578557452), lastOpVisible: { ts: Timestamp(1567578557, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:17.717+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:47.717+0000 2019-09-04T06:29:17.717+0000 D2 ASIO [RS] Request 386 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 2), $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 2) } 2019-09-04T06:29:17.717+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 2), $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:17.717+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:17.717+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:47.717+0000 2019-09-04T06:29:17.717+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578557, 2), t: 1 } 2019-09-04T06:29:17.717+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 387 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:27.717+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578557, 1), t: 1 } } 2019-09-04T06:29:17.717+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:47.717+0000 2019-09-04T06:29:17.718+0000 D2 ASIO [RS] Request 387 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpApplied: { ts: Timestamp(1567578557, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578557, 2), $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 2) }
2019-09-04T06:29:17.718+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpApplied: { ts: Timestamp(1567578557, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 2), $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:17.718+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:17.718+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:17.718+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578552, 2)
2019-09-04T06:29:17.718+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:28.086+0000
2019-09-04T06:29:17.718+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:28.865+0000
2019-09-04T06:29:17.718+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:17.718+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 388 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:27.718+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578557, 2), t: 1 } }
2019-09-04T06:29:17.718+0000 D3 REPL [conn170] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn170] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:43.335+0000
2019-09-04T06:29:17.718+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:47.717+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000
2019-09-04T06:29:17.718+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn139] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn139] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.319+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn177] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn177] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.446+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn176] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn176] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.424+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn179] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn179] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:46.417+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn145] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn145] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn175] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn175] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn163] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn163] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.888+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn166] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn166] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.275+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn164] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn164] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.692+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn168] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn168] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.272+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn165] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn165] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:35.182+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn138] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn138] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.289+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn174] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn174] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578557, 2), t: 1 }, 2019-09-04T06:29:17.713+0000
2019-09-04T06:29:17.718+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000
2019-09-04T06:29:17.725+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:29:17.725+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:17.725+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), appliedOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, appliedWallTime: new Date(1567578557713), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:17.725+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 389 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:47.725+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), appliedOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, appliedWallTime: new Date(1567578557713), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, durableWallTime: new Date(1567578554929), appliedOpTime: { ts: Timestamp(1567578554, 1), t: 1 }, appliedWallTime: new Date(1567578554929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:17.725+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:47.717+0000
2019-09-04T06:29:17.725+0000 D2 ASIO [RS] Request 389 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 2), $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 2) }
2019-09-04T06:29:17.725+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 2), $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:17.725+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:17.725+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:47.717+0000
2019-09-04T06:29:17.744+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:17.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:17.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:17.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:17.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:17.815+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578557, 2)
2019-09-04T06:29:17.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:17.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:17.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:17.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:17.845+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:17.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:17.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:17.945+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:17.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:17.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:17.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:17.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:18.045+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:18.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.130+0000 D2 COMMAND [conn20] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.130+0000 I COMMAND [conn20] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.145+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:18.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 3F5864037A711550FF6AEE33CA7A805B73D2D63E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:18.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:29:18.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 3F5864037A711550FF6AEE33CA7A805B73D2D63E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:18.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 3F5864037A711550FF6AEE33CA7A805B73D2D63E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:18.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), opTime: { ts: Timestamp(1567578557, 2), t: 1 }, wallTime: new Date(1567578557713) }
2019-09-04T06:29:18.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 3F5864037A711550FF6AEE33CA7A805B73D2D63E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:29:18.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:29:18.245+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:18.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.345+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:18.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.446+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:18.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.510+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:29:18.510+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:29:18.510+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:29:18.510+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:29:18.546+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:18.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.646+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:18.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.716+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:18.716+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:18.716+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578557, 2)
2019-09-04T06:29:18.716+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5840
2019-09-04T06:29:18.716+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:18.716+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:18.716+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5840
2019-09-04T06:29:18.716+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:local/collection/16--6194257481163143499 -> { numRecords: 1407, dataSize: 317222 }
2019-09-04T06:29:18.716+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/42--6194257481163143499 -> { numRecords: 2, dataSize: 308 }
2019-09-04T06:29:18.716+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/28--6194257481163143499 -> { numRecords: 22, dataSize: 1828 }
2019-09-04T06:29:18.716+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer flush took 72 µs
2019-09-04T06:29:18.717+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5843
2019-09-04T06:29:18.717+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5843
2019-09-04T06:29:18.717+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578557, 2), t: 1 }({ ts: Timestamp(1567578557, 2), t: 1 })
2019-09-04T06:29:18.752+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:18.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:29:18.837+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 390) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:18.837+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 390 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:28.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:18.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:18.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.837+0000 D2 ASIO [Replication] Request 390 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), opTime: { ts: Timestamp(1567578557, 2), t: 1 }, wallTime: new Date(1567578557713), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578557, 2), $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 2) }
2019-09-04T06:29:18.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), opTime: { ts: Timestamp(1567578557, 2), t: 1 }, wallTime: new Date(1567578557713), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578557, 2), $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:29:18.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:29:18.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 390) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), opTime: { ts: Timestamp(1567578557, 2), t: 1 }, wallTime: new Date(1567578557713), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578557, 2), $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 2) }
2019-09-04T06:29:18.837+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:29:18.837+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:29:28.865+0000
2019-09-04T06:29:18.837+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:29:29.431+0000
2019-09-04T06:29:18.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:18.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:29:18.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:18.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:20.837Z
2019-09-04T06:29:18.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:18.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:29:18.838+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 391) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:18.838+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 391 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:28.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:29:18.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:18.838+0000 D2 ASIO [Replication] Request 391 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), opTime: { ts: Timestamp(1567578557, 2), t: 1 }, wallTime: new Date(1567578557713), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 2), $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 2) }
2019-09-04T06:29:18.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), opTime: { ts: Timestamp(1567578557, 2), t: 1 }, wallTime: new Date(1567578557713), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 2), $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:18.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:18.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 391) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), opTime: { ts: Timestamp(1567578557, 2), t: 1 }, wallTime: new Date(1567578557713), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 2), $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 2) }
2019-09-04T06:29:18.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:29:18.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:20.838Z
2019-09-04T06:29:18.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:18.852+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:18.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.952+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:18.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:18.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:18.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:19.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:19.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:19.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:19.052+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:19.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 3F5864037A711550FF6AEE33CA7A805B73D2D63E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:19.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:29:19.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 3F5864037A711550FF6AEE33CA7A805B73D2D63E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:19.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 3F5864037A711550FF6AEE33CA7A805B73D2D63E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:19.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), opTime: { ts: Timestamp(1567578557, 2), t: 1 }, wallTime: new Date(1567578557713) }
2019-09-04T06:29:19.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578557, 2), signature: { hash: BinData(0, 3F5864037A711550FF6AEE33CA7A805B73D2D63E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:19.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:19.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:19.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:19.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:19.151+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:19.151+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:19.153+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:19.160+0000 D2 ASIO [RS] Request 388 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578559, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578559157), o: { $v: 1, $set: { ping: new Date(1567578559154) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpApplied: { ts: Timestamp(1567578559, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 2), $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) }
2019-09-04T06:29:19.160+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578559, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578559157), o: { $v: 1, $set: { ping: new Date(1567578559154) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpApplied: { ts: Timestamp(1567578559, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 2), $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:19.160+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:19.160+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578559, 1) and ending at ts: Timestamp(1567578559, 1)
2019-09-04T06:29:19.160+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:29.431+0000
2019-09-04T06:29:19.160+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:29.677+0000
2019-09-04T06:29:19.160+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:29:19.160+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:19.160+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578559, 1), t: 1 }
2019-09-04T06:29:19.160+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:19.160+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:19.160+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578557, 2)
2019-09-04T06:29:19.160+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5859
2019-09-04T06:29:19.160+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:19.160+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:19.160+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5859
2019-09-04T06:29:19.160+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:19.160+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:19.160+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:29:19.160+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578557, 2)
2019-09-04T06:29:19.160+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5862
2019-09-04T06:29:19.160+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:19.160+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578559, 1) }
2019-09-04T06:29:19.160+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:19.160+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5862
2019-09-04T06:29:19.160+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5844
2019-09-04T06:29:19.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:19.160+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5844
2019-09-04T06:29:19.161+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5866
2019-09-04T06:29:19.161+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5866
2019-09-04T06:29:19.161+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:19.161+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 5868
2019-09-04T06:29:19.161+0000 D4 STORAGE [repl-writer-worker-14] inserting record with timestamp Timestamp(1567578559, 1)
2019-09-04T06:29:19.161+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578559, 1)
2019-09-04T06:29:19.161+0000 D2 STORAGE [repl-writer-worker-14] WiredTigerSizeStorer::store Marking table:local/collection/16--6194257481163143499 dirty, numRecords: 1408, dataSize: 317458, use_count: 3
2019-09-04T06:29:19.161+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 5868
2019-09-04T06:29:19.161+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:19.161+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:29:19.161+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5867
2019-09-04T06:29:19.161+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5867
2019-09-04T06:29:19.161+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5870
2019-09-04T06:29:19.161+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5870
2019-09-04T06:29:19.161+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578559, 1), t: 1 }({ ts: Timestamp(1567578559, 1), t: 1 })
2019-09-04T06:29:19.161+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578559, 1)
2019-09-04T06:29:19.161+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5871
2019-09-04T06:29:19.161+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578559, 1) } } ] } sort: {} projection: {}
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578559, 1) Sort: {} Proj: {} =============================
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578559, 1) || First: notFirst: full path: ts
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578559, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578559, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578559, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
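The D5 QUERY block above shows the planner splitting the minvalid $or filter into two sub-queries; because local.replset.minvalid carries only the default _id index, every child plan falls back to a COLLSCAN, which is harmless here since the collection holds a single document. The same filter can be reconstructed and explained from a client; a sketch (the explain wrapper and read preference are standard PyMongo, only the Timestamp value and namespace are copied from the log):

    from pymongo import MongoClient, ReadPreference
    from bson import Timestamp

    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)

    # Same shape as the "Running query as sub-queries" filter in the log:
    # match an older term, or the same term with an older ts.
    filter_doc = {
        "$or": [
            {"t": {"$lt": 1}},
            {"t": 1, "ts": {"$lt": Timestamp(1567578559, 1)}},
        ]
    }

    # explain should confirm the planSummary: COLLSCAN outcome logged above.
    plan = client.local.command(
        "explain",
        {"find": "replset.minvalid", "filter": filter_doc},
        read_preference=ReadPreference.SECONDARY_PREFERRED,
    )
    print(plan["queryPlanner"]["winningPlan"])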
2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578559, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:19.161+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5871 2019-09-04T06:29:19.161+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:19.161+0000 D3 STORAGE [repl-writer-worker-3] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:19.161+0000 D3 REPL [repl-writer-worker-3] applying op: { ts: Timestamp(1567578559, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578559157), o: { $v: 1, $set: { ping: new Date(1567578559154) } } }, oplog application mode: Secondary 2019-09-04T06:29:19.161+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578559, 1) 2019-09-04T06:29:19.161+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 5873 2019-09-04T06:29:19.161+0000 D2 QUERY [repl-writer-worker-3] Using idhack: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" } 2019-09-04T06:29:19.161+0000 D2 STORAGE [repl-writer-worker-3] WiredTigerSizeStorer::store Marking table:config/collection/28--6194257481163143499 dirty, numRecords: 22, dataSize: 1828, use_count: 3 2019-09-04T06:29:19.161+0000 D4 WRITE [repl-writer-worker-3] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:29:19.161+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 5873 2019-09-04T06:29:19.161+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:19.161+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578559, 1), t: 1 }({ ts: Timestamp(1567578559, 1), t: 1 }) 2019-09-04T06:29:19.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.161+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578559, 1) 2019-09-04T06:29:19.161+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5872 2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:19.161+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:19.161+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:19.162+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 5872 2019-09-04T06:29:19.162+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578559, 1) 2019-09-04T06:29:19.162+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5876 2019-09-04T06:29:19.162+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5876 2019-09-04T06:29:19.162+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578559, 1), t: 1 }({ ts: Timestamp(1567578559, 1), t: 1 }) 2019-09-04T06:29:19.162+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:19.162+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), appliedOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, appliedWallTime: new Date(1567578557713), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), appliedOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, appliedWallTime: new Date(1567578557713), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:19.162+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 392 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:49.162+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), appliedOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, appliedWallTime: new Date(1567578557713), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), appliedOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, appliedWallTime: new Date(1567578557713), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:19.162+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:49.162+0000 2019-09-04T06:29:19.162+0000 D2 ASIO [RS] Request 392 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 2), $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } 2019-09-04T06:29:19.162+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578557, 2), t: 1 }, lastCommittedWall: new Date(1567578557713), lastOpVisible: { ts: Timestamp(1567578557, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 2), $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:19.162+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:19.162+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:49.162+0000 2019-09-04T06:29:19.162+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578559, 1), t: 1 } 2019-09-04T06:29:19.162+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 393 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:29.162+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578557, 2), t: 1 } } 2019-09-04T06:29:19.162+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:49.162+0000 2019-09-04T06:29:19.162+0000 D2 ASIO [RS] Request 393 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpApplied: { ts: Timestamp(1567578559, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } 2019-09-04T06:29:19.162+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new 
Date(1567578559157), lastOpApplied: { ts: Timestamp(1567578559, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:19.162+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:19.162+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:29:19.162+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.162+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:29:19.162+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578554, 1) 2019-09-04T06:29:19.163+0000 D3 REPL [conn177] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn177] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.446+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn127] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn127] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.644+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn154] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn154] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn145] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn145] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn151] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn151] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.129+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn158] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn158] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:24.152+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn179] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn179] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:46.417+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn165] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 
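
The long run of paired "Got notified of new snapshot" / "waitUntilOpTime" entries above and below is this node waking server threads parked on behalf of readers whose readConcern carries an afterOpTime: each advance of the stable optime (here to { ts: Timestamp(1567578559, 1), t: 1 }) prompts every waiter to re-check its target, and each keeps waiting toward its own deadline. conn150's deadline of 06:29:19.851 just below is worth noting: it lapses at 06:29:19.852 near the end of this section, producing the MaxTimeMSExpired assertion and backtrace there. A minimal client-side sketch of a read that waits this way, assuming pymongo and reusing the hostnames from this log (the in-cluster requests logged here use afterOpTime; a driver session expresses the same wait through causal consistency / afterClusterTime):

    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://cmodb803.togewa.com:27019,cmodb804.togewa.com:27019/"
        "?replicaSet=configrs&readPreference=nearest"
    )

    with client.start_session(causal_consistency=True) as session:
        # Reads in this session (after the first) carry afterClusterTime; when
        # routed to a lagging member they park in waitUntilOpTime until a new
        # enough snapshot appears, or fail with MaxTimeMSExpired once the
        # operation's time limit runs out -- conn150's fate further down.
        doc = client.config.shards.find_one({}, session=session)
        print(session.operation_time, doc)
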
2019-09-04T06:29:19.163+0000 D3 REPL [conn165] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:35.182+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn153] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn153] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn150] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn150] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:19.851+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn138] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn138] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.289+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn155] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn155] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.663+0000 2019-09-04T06:29:19.163+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:29.677+0000 2019-09-04T06:29:19.163+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:29.464+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn176] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn176] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.424+0000 2019-09-04T06:29:19.163+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:19.163+0000 D3 REPL [conn170] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn170] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:43.335+0000 2019-09-04T06:29:19.163+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn163] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn163] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.888+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn175] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn175] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn152] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn152] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:21.661+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn164] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn164] waitUntilOpTime: waiting for a new 
snapshot until 2019-09-04T06:29:41.692+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn168] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn168] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.272+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn156] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn156] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:22.595+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn174] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn174] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn166] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn166] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.275+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn139] Got notified of new snapshot: { ts: Timestamp(1567578559, 1), t: 1 }, 2019-09-04T06:29:19.157+0000 2019-09-04T06:29:19.163+0000 D3 REPL [conn139] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.319+0000 2019-09-04T06:29:19.163+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 394 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:29.163+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578559, 1), t: 1 } } 2019-09-04T06:29:19.163+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:19.163+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), appliedOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, appliedWallTime: new Date(1567578557713), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), appliedOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, appliedWallTime: new Date(1567578557713), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:19.163+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 395 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:49.163+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: 
Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), appliedOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, appliedWallTime: new Date(1567578557713), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, durableWallTime: new Date(1567578557713), appliedOpTime: { ts: Timestamp(1567578557, 2), t: 1 }, appliedWallTime: new Date(1567578557713), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:19.163+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:49.163+0000 2019-09-04T06:29:19.163+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:49.163+0000 2019-09-04T06:29:19.164+0000 D2 ASIO [RS] Request 395 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } 2019-09-04T06:29:19.164+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:19.164+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:19.164+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:49.163+0000 2019-09-04T06:29:19.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 
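
The interleaved isMaster round-trips on conn31, conn51, conn60 and the rest are topology heartbeats: drivers, mongos, and the ReplicaSetMonitor seen refreshing shard0000-shard0002 shortly below all probe members with this one command. A sketch of the same probe, assuming pymongo and a host from this log (the command was renamed "hello" in later server versions; 4.2 still logs it as isMaster):

    from pymongo import MongoClient

    # Ask one member the same question the monitors in this log keep asking;
    # the reply mirrors the reslen:907 responses recorded here.
    client = MongoClient("cmodb803.togewa.com", 27019)
    hello = client.admin.command("isMaster")
    print(hello["ismaster"], hello.get("secondary"), hello.get("setName"))
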
2019-09-04T06:29:19.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:19.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:19.253+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:19.260+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578559, 1) 2019-09-04T06:29:19.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.272+0000 D3 STORAGE [WTCheckpointThread] setting timestamp read source: 6, provided timestamp: Timestamp(1567578559, 1) 2019-09-04T06:29:19.272+0000 D3 STORAGE [WTCheckpointThread] WT begin_transaction for snapshot id 5884 2019-09-04T06:29:19.272+0000 D3 STORAGE [WTCheckpointThread] WT rollback_transaction for snapshot id 5884 2019-09-04T06:29:19.272+0000 D2 RECOVERY [WTCheckpointThread] Performing stable checkpoint. StableTimestamp: Timestamp(1567578559, 1), OplogNeededForRollback: Timestamp(1567578559, 1) 2019-09-04T06:29:19.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.317+0000 D3 STORAGE [FreeMonProcessor] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:19.320+0000 D3 INDEX [TTLMonitor] thread awake 2019-09-04T06:29:19.321+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms 2019-09-04T06:29:19.321+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms 2019-09-04T06:29:19.321+0000 D2 - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager 2019-09-04T06:29:19.321+0000 D3 COMMAND [PeriodicTaskRunner] task: UnusedLockCleaner took: 0ms 2019-09-04T06:29:19.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions. 
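
The FlowControlRefresher, WTJournalFlusher, WTCheckpointThread, TTLMonitor, and PeriodicTaskRunner entries above are routine background maintenance on a quiet member; the stable checkpoint pins Timestamp(1567578559, 1) as both StableTimestamp and OplogNeededForRollback. The more telling event comes after the shard-registry reload below (a COLLSCAN of config.shards followed by isMaster refreshes of shard0000-shard0002): replSetDistLockPinger upserts its ping into config.lockpings via findAndModify with { w: "majority" } and fails with NotMaster (code 10107, complete with a backtrace) because this member is not the configrs primary; the pinger records the failure and pings again on its next cycle. A sketch of the equivalent ping, assuming pymongo, with the ping document, write concern, and hostname taken from the log:

    import datetime
    from pymongo import MongoClient, WriteConcern, errors

    client = MongoClient("cmodb803.togewa.com", 27019)
    lockpings = client.config.get_collection(
        "lockpings",
        write_concern=WriteConcern(w="majority", wtimeout=15000),
    )
    try:
        lockpings.find_one_and_update(
            {"_id": "ConfigServer"},
            {"$set": {"ping": datetime.datetime.utcnow()}},
            upsert=True,
        )
    except errors.PyMongoError as exc:
        # Against a secondary this surfaces the same NotMaster (10107)
        # error recorded below; against the primary the upsert succeeds.
        print(exc)
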
2019-09-04T06:29:19.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry 2019-09-04T06:29:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:29:19.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } 2019-09-04T06:29:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:19.370+0000 D5 QUERY [shard-registry-reload] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:19.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:29:19.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:29:19.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and 2019-09-04T06:29:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:19.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:19.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578559, 1) 2019-09-04T06:29:19.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 5895 2019-09-04T06:29:19.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 5895 2019-09-04T06:29:19.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:646 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:29:19.370+0000 D1 SHARDING [shard-registry-reload] found 3 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578559, 1), t: 1 } 2019-09-04T06:29:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:29:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:29:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:29:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:29:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:29:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:29:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS 2019-09-04T06:29:19.382+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001 2019-09-04T06:29:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 396 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:29:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:29:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 397 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:29:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:29:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002 2019-09-04T06:29:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 398 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:29:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:29:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 399 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:29:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:29:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000 2019-09-04T06:29:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling 
remote command request: RemoteCommand 400 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:29:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:29:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 401 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:29:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:29:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 396 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578557, 1), t: 1 }, lastWriteDate: new Date(1567578557000), majorityOpTime: { ts: Timestamp(1567578557, 1), t: 1 }, majorityWriteDate: new Date(1567578557000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578559386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578557, 1), $configServerState: { opTime: { ts: Timestamp(1567578554, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578557, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 1) } 2019-09-04T06:29:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578557, 1), t: 1 }, lastWriteDate: new Date(1567578557000), majorityOpTime: { ts: Timestamp(1567578557, 1), t: 1 }, majorityWriteDate: new Date(1567578557000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578559386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578557, 1), $configServerState: { opTime: { ts: Timestamp(1567578554, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578557, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 1) } target: cmodb808.togewa.com:27018 2019-09-04T06:29:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 397 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578557, 1), t: 1 }, lastWriteDate: new Date(1567578557000), majorityOpTime: { ts: Timestamp(1567578557, 1), t: 1 }, 
majorityWriteDate: new Date(1567578557000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578559386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 1), $configServerState: { opTime: { ts: Timestamp(1567578540, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578557, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 1) } 2019-09-04T06:29:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578557, 1), t: 1 }, lastWriteDate: new Date(1567578557000), majorityOpTime: { ts: Timestamp(1567578557, 1), t: 1 }, majorityWriteDate: new Date(1567578557000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578559386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578557, 1), $configServerState: { opTime: { ts: Timestamp(1567578540, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578557, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578557, 1) } target: cmodb809.togewa.com:27018 2019-09-04T06:29:19.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms 2019-09-04T06:29:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 398 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578549, 1), t: 1 }, lastWriteDate: new Date(1567578549000), majorityOpTime: { ts: Timestamp(1567578549, 1), t: 1 }, majorityWriteDate: new Date(1567578549000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578559386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578549, 1), $configServerState: { opTime: { ts: Timestamp(1567578554, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578549, 1) } 2019-09-04T06:29:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: 
RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578549, 1), t: 1 }, lastWriteDate: new Date(1567578549000), majorityOpTime: { ts: Timestamp(1567578549, 1), t: 1 }, majorityWriteDate: new Date(1567578549000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578559386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578549, 1), $configServerState: { opTime: { ts: Timestamp(1567578554, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578549, 1) } target: cmodb810.togewa.com:27018 2019-09-04T06:29:19.386+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 400 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578553, 2), t: 1 }, lastWriteDate: new Date(1567578553000), majorityOpTime: { ts: Timestamp(1567578553, 2), t: 1 }, majorityWriteDate: new Date(1567578553000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578559385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578553, 2), $configServerState: { opTime: { ts: Timestamp(1567578554, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578553, 2) } 2019-09-04T06:29:19.386+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578553, 2), t: 1 }, lastWriteDate: new Date(1567578553000), majorityOpTime: { ts: Timestamp(1567578553, 2), t: 1 }, majorityWriteDate: new Date(1567578553000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578559385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578553, 2), $configServerState: { opTime: { ts: Timestamp(1567578554, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578553, 2) } target: cmodb806.togewa.com:27018 2019-09-04T06:29:19.386+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 401 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578553, 2), t: 1 }, lastWriteDate: new Date(1567578553000), majorityOpTime: { ts: Timestamp(1567578553, 2), t: 1 }, majorityWriteDate: new Date(1567578553000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578559386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578553, 2), $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578553, 2) } 2019-09-04T06:29:19.386+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578553, 2), t: 1 }, lastWriteDate: new Date(1567578553000), majorityOpTime: { ts: Timestamp(1567578553, 2), t: 1 }, majorityWriteDate: new Date(1567578553000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578559386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578553, 2), $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578553, 2) } target: cmodb807.togewa.com:27018 2019-09-04T06:29:19.386+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms 2019-09-04T06:29:19.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 399 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578549, 1), t: 1 }, lastWriteDate: new Date(1567578549000), majorityOpTime: { ts: Timestamp(1567578549, 1), t: 
1 }, majorityWriteDate: new Date(1567578549000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578559386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578549, 1), $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578549, 1) } 2019-09-04T06:29:19.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578549, 1), t: 1 }, lastWriteDate: new Date(1567578549000), majorityOpTime: { ts: Timestamp(1567578549, 1), t: 1 }, majorityWriteDate: new Date(1567578549000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578559386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578549, 1), $configServerState: { opTime: { ts: Timestamp(1567578548, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578549, 1) } target: cmodb811.togewa.com:27018 2019-09-04T06:29:19.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms 2019-09-04T06:29:19.416+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578559416) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } 2019-09-04T06:29:19.416+0000 D4 - [replSetDistLockPinger] Taking ticket. 
Available: 1000000000 2019-09-04T06:29:19.416+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178 2019-09-04T06:29:19.416+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:29:19.439+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 
3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : 
"7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xADB043) [0x561749a63043] mongod(+0x13B2606) [0x56174a33a606] mongod(+0x13B3A55) [0x56174a33ba55] mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894] mongod(+0x10FA899) [0x56174a082899] mongod(+0x10FBF53) [0x56174a083f53] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(+0x1FBD2EE) [0x56174af452ee] mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa] mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2] mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b] mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e] mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc] mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1] mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:19.439+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. 
OpTime: { ts: Timestamp(1567578559, 1), t: 1 }, write concern: { w: "majority", wtimeout: 15000 } 2019-09-04T06:29:19.439+0000 D4 STORAGE [replSetDistLockPinger] flushed journal 2019-09-04T06:29:19.439+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578559416) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:29:19.439+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578559416) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 22ms 2019-09-04T06:29:19.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.482+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:19.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.582+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:19.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.651+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.671+0000 
I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.682+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:19.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.782+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:19.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.834+0000 I NETWORK [listener] connection accepted from 10.108.2.51:59106 #180 (86 connections now open) 2019-09-04T06:29:19.834+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:19.834+0000 D2 COMMAND [conn180] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:19.834+0000 I NETWORK [conn180] received client metadata from 10.108.2.51:59106 conn180: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:19.834+0000 I COMMAND [conn180] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:19.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.852+0000 I COMMAND [conn150] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 46852229934D9D2582165361D79CD6C82E821B6B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:19.852+0000 D1 - [conn150] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:19.852+0000 W - [conn150] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:19.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:19.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:19.870+0000 I - [conn150] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"5617
48F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:19.870+0000 D1 COMMAND [conn150] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 46852229934D9D2582165361D79CD6C82E821B6B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:19.870+0000 D1 - [conn150] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:19.870+0000 W - [conn150] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:19.882+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:19.892+0000 I - [conn150] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23Service
ExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : 
"7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) 
[0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:29:19.892+0000 W COMMAND [conn150] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:29:19.892+0000 I COMMAND [conn150] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578529, 1), signature: { hash: BinData(0, 46852229934D9D2582165361D79CD6C82E821B6B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms
2019-09-04T06:29:19.892+0000 D2 NETWORK [conn150] Session from 10.108.2.51:59090 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:29:19.892+0000 I NETWORK [conn150] end connection 10.108.2.51:59090 (85 connections now open)
2019-09-04T06:29:19.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:19.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:19.983+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:19.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:19.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:20.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:20.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:29:20.004+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:29:20.004+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms
2019-09-04T06:29:20.013+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:29:20.013+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:29:20.017+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:29:20.017+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:29:20.017+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456
2019-09-04T06:29:20.017+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:29:20.027+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:29:20.028+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:29:20.036+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:29:20.036+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:29:20.036+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:29:20.056+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:20.056+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:20.056+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:29:20.057+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:20.057+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunksTree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:20.057+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:29:20.057+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:29:20.057+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:29:20.057+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:29:20.057+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:29:20.057+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:29:20.057+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:20.057+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:20.057+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:20.057+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578559, 1)
2019-09-04T06:29:20.057+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5922
2019-09-04T06:29:20.057+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5922
2019-09-04T06:29:20.057+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:29:20.062+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:29:20.062+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:29:20.062+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:29:20.062+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:20.062+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:29:20.062+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:29:20.062+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:29:20.062+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578559, 1)
2019-09-04T06:29:20.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5925
2019-09-04T06:29:20.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5925
2019-09-04T06:29:20.062+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:29:20.062+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:29:20.062+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:20.062+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:29:20.062+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:29:20.063+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578559, 1) 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5927 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5927 2019-09-04T06:29:20.063+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:20.063+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:29:20.063+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:29:20.063+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:29:20.063+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:20.063+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5930 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:29:20.063+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5930 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5931 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5931 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5932 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5932 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5933 2019-09-04T06:29:20.063+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5933 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5934 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 
2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5934 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5935 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5935 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5936 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5936 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5937 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5937 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5938 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5938 
2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5939 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5939 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5940 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: 
BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5940 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5941 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:29:20.064+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5941 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5942 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5942 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5943 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5943 
2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5944 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5944 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5945 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 
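The unique `ns_1_min_1` indexes that keep reappearing above (on config.chunks, config.migrations, and config.tags) are what stop two chunks, migrations, or tag ranges from claiming the same { ns, min } pair. The same constraint can be reproduced on an ordinary collection; a sketch in which the demo database and collection names are hypothetical:

from pymongo import ASCENDING, MongoClient
from pymongo.errors import DuplicateKeyError

client = MongoClient("mongodb://localhost:27017")  # assumed host
coll = client["demo"]["ranges"]                    # hypothetical namespace

# A compound unique index like ns_1_min_1: a second document with the
# same {ns, min} pair is rejected with DuplicateKeyError.
coll.create_index([("ns", ASCENDING), ("min", ASCENDING)],
                  unique=True, name="ns_1_min_1")
coll.insert_one({"ns": "test.users", "min": {"x": 0}})
try:
    coll.insert_one({"ns": "test.users", "min": {"x": 0}})
except DuplicateKeyError:
    print("duplicate {ns, min} rejected")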
2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5945 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5946 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5946 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5947 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5947 
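local.startup_log, whose metadata appears just above with `capped: true, size: 10485760`, is a fixed-size (10 MB) capped collection: once it fills, the oldest startup documents are overwritten in insertion order. Creating an equivalent collection from a driver looks like this; a sketch with an assumed host and a hypothetical collection name:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed host

# capped=True and size (in bytes) mirror local.startup_log's options;
# inserts wrap around once the 10 MB budget is exhausted.
client["demo"].create_collection("startup_log_demo",
                                 capped=True, size=10485760)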
2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5948 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5948 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5949 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5949 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5950 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking 
up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5950 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5951 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:20.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.065+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:29:20.066+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5951 2019-09-04T06:29:20.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.066+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 2ms 2019-09-04T06:29:20.066+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:20.066+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5954 2019-09-04T06:29:20.066+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5954 2019-09-04T06:29:20.066+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5955 2019-09-04T06:29:20.066+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5955 2019-09-04T06:29:20.066+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5956 2019-09-04T06:29:20.066+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5956 2019-09-04T06:29:20.066+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 
1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:20.066+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:29:20.066+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5958 2019-09-04T06:29:20.066+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5958 2019-09-04T06:29:20.066+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5959 2019-09-04T06:29:20.066+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5959 2019-09-04T06:29:20.066+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5960 2019-09-04T06:29:20.066+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5960 2019-09-04T06:29:20.066+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5961 2019-09-04T06:29:20.066+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5961 2019-09-04T06:29:20.066+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5962 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5962 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5963 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5963 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5964 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5964 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5965 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5965 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5966 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5966 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5967 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5967 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5968 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5968 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5969 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5969 2019-09-04T06:29:20.067+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:20.067+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5971 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5971 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5972 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5972 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 5973 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5973 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5974 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5974 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5975 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5975 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 5976 2019-09-04T06:29:20.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 5976 2019-09-04T06:29:20.067+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:20.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.083+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:20.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.160+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:20.161+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:20.161+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578559, 1) 2019-09-04T06:29:20.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.161+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 5980 2019-09-04T06:29:20.161+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:20.161+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:20.161+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 5980 2019-09-04T06:29:20.162+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 5983 2019-09-04T06:29:20.162+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 5983 2019-09-04T06:29:20.162+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578559, 1), t: 1 }({ ts: Timestamp(1567578559, 1), t: 1 }) 2019-09-04T06:29:20.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.183+0000 
D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:20.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, F5D95130DE0426A01509FC85D06ED91F26C1B46E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:20.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:20.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, F5D95130DE0426A01509FC85D06ED91F26C1B46E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:20.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, F5D95130DE0426A01509FC85D06ED91F26C1B46E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:20.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157) } 2019-09-04T06:29:20.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, F5D95130DE0426A01509FC85D06ED91F26C1B46E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:20.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 999999999 Now: 1000000000 2019-09-04T06:29:20.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.283+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:20.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.383+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:20.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.483+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:20.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.584+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:20.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.684+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:20.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster 
{ isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.784+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:20.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:20.837+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 402) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:20.837+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 402 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:30.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:20.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:20.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.837+0000 D2 ASIO [Replication] Request 402 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } 2019-09-04T06:29:20.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578559, 
1), $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:20.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:20.837+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 402) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } 2019-09-04T06:29:20.837+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:20.837+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:29:29.464+0000 2019-09-04T06:29:20.837+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:29:31.323+0000 2019-09-04T06:29:20.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:20.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:20.837+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:20.837+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:22.837Z 2019-09-04T06:29:20.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:20.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:20.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 403) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:20.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 403 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:30.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:20.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:20.838+0000 D2 ASIO [Replication] Request 403 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, 
lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } 2019-09-04T06:29:20.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:20.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:20.838+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 403) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } 2019-09-04T06:29:20.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:20.838+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:22.838Z 2019-09-04T06:29:20.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:20.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:20.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:20.884+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:20.984+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:21.000+0000 D3 STORAGE [ftdc] 
setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:21.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.021+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, F5D95130DE0426A01509FC85D06ED91F26C1B46E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:21.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:21.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, F5D95130DE0426A01509FC85D06ED91F26C1B46E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:21.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, F5D95130DE0426A01509FC85D06ED91F26C1B46E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:21.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157) } 2019-09-04T06:29:21.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, F5D95130DE0426A01509FC85D06ED91F26C1B46E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.084+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:21.116+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35652 #181 (86 connections now open) 2019-09-04T06:29:21.116+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 
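The steady drumbeat of isMaster commands above (one per client connection, roughly every 500 ms in this log) is driver and intra-cluster topology monitoring, and each freshly accepted connection (like #181, whose handshake follows) opens with an isMaster that also carries driver and OS metadata. The same command can be issued directly from an application; a pymongo sketch, with the hostname taken from the log and an appname of my own choosing:

from pymongo import MongoClient

# appname ends up in the client metadata the server logs for the new
# connection ("received client metadata from ...").
client = MongoClient("mongodb://cmodb803.togewa.com:27019",
                     appname="log-inspector")

# The same isMaster handshake the monitors above keep sending.
print(client.admin.command("isMaster"))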
2019-09-04T06:29:21.116+0000 D2 COMMAND [conn181] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:21.116+0000 I NETWORK [conn181] received client metadata from 10.108.2.56:35652 conn181: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:21.116+0000 I COMMAND [conn181] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:21.131+0000 I COMMAND [conn151] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578521, 1), signature: { hash: BinData(0, 085CFC83A551012D6A72779653032EB1C623A5B1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:21.131+0000 D1 - [conn151] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:21.131+0000 W - [conn151] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:21.149+0000 I - [conn151] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:21.150+0000 D1 COMMAND [conn151] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578521, 1), signature: { hash: BinData(0, 085CFC83A551012D6A72779653032EB1C623A5B1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:21.150+0000 D1 - [conn151] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:21.150+0000 W - [conn151] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:21.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.161+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:21.161+0000 D3 STORAGE [ReplBatcher] 
looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:21.161+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578559, 1) 2019-09-04T06:29:21.161+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6014 2019-09-04T06:29:21.161+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:21.161+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:21.161+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6014 2019-09-04T06:29:21.162+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6017 2019-09-04T06:29:21.162+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6017 2019-09-04T06:29:21.162+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578559, 1), t: 1 }({ ts: Timestamp(1567578559, 1), t: 1 }) 2019-09-04T06:29:21.171+0000 I - [conn151] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_
runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : 
"/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:21.171+0000 W COMMAND [conn151] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:29:21.171+0000 I COMMAND [conn151] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578521, 1), signature: { hash: BinData(0, 085CFC83A551012D6A72779653032EB1C623A5B1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:29:21.171+0000 D2 NETWORK [conn151] Session from 10.108.2.56:35634 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:21.171+0000 I NETWORK [conn151] end connection 10.108.2.56:35634 (85 connections now open) 2019-09-04T06:29:21.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.184+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:21.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:21.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:21.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.285+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:21.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.385+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:21.485+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:21.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.551+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:29:21.551+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:29:21.551+0000 D2 COMMAND [conn90] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:21.551+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:29:21.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.556+0000 D2 COMMAND [conn126] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:29:21.556+0000 I COMMAND [conn126] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.585+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:21.593+0000 I NETWORK [listener] connection accepted from 10.20.102.80:61191 #182 (86 connections now open) 2019-09-04T06:29:21.593+0000 D3 
EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:21.593+0000 D2 COMMAND [conn182] run command admin.$cmd { isMaster: 1, client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } 2019-09-04T06:29:21.593+0000 I NETWORK [conn182] received client metadata from 10.20.102.80:61191 conn182: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } } 2019-09-04T06:29:21.593+0000 I COMMAND [conn182] command admin.$cmd appName: "MongoDB Shell" command: isMaster { isMaster: 1, client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:29:21.604+0000 D2 COMMAND [conn182] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-256", payload: "xxx", $db: "admin" } 2019-09-04T06:29:21.604+0000 D1 ACCESS [conn182] Returning user dba_root@admin from cache 2019-09-04T06:29:21.605+0000 I COMMAND [conn182] command admin.$cmd appName: "MongoDB Shell" command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-256", payload: "xxx", $db: "admin" } numYields:0 reslen:410 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.615+0000 D2 COMMAND [conn182] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:29:21.615+0000 I COMMAND [conn182] command admin.$cmd appName: "MongoDB Shell" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:339 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.626+0000 D2 COMMAND [conn182] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:29:21.626+0000 D1 ACCESS [conn182] Returning user dba_root@admin from cache 2019-09-04T06:29:21.626+0000 I ACCESS [conn182] Successfully authenticated as principal dba_root on admin from client 10.20.102.80:61191 2019-09-04T06:29:21.626+0000 I COMMAND [conn182] command admin.$cmd appName: "MongoDB Shell" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:293 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.634+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.634+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.635+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38634 #183 (87 connections now open) 2019-09-04T06:29:21.635+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:21.635+0000 D2 COMMAND [conn183] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, 
hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:21.635+0000 I NETWORK [conn183] received client metadata from 10.108.2.44:38634 conn183: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:21.635+0000 I COMMAND [conn183] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:21.635+0000 D2 COMMAND [conn183] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578558, 1), signature: { hash: BinData(0, 4DFF71D198DA148E62B7DCAB7E886B063BCE4B69), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:21.635+0000 D1 REPL [conn183] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578559, 1), t: 1 } 2019-09-04T06:29:21.635+0000 D3 REPL [conn183] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.645+0000 2019-09-04T06:29:21.636+0000 D2 COMMAND [conn182] run command config.$cmd { ping: 1.0, lsid: { id: UUID("02492cc9-cb3a-4cd4-9c2e-0d7430e82ce2") }, $clusterTime: { clusterTime: Timestamp(1567578379, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:29:21.636+0000 I COMMAND [conn182] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("02492cc9-cb3a-4cd4-9c2e-0d7430e82ce2") }, $clusterTime: { clusterTime: Timestamp(1567578379, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.646+0000 I COMMAND [conn127] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578528, 1), signature: { hash: BinData(0, 56096C741DA0E1098955367B7ABC9039CDF5E4B7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:21.646+0000 D1 - [conn127] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:21.646+0000 W - [conn127] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:21.650+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42048 #184 (88 connections now open) 2019-09-04T06:29:21.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:21.650+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45696 #185 (89 connections now open) 2019-09-04T06:29:21.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:21.650+0000 D2 COMMAND [conn185] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:21.650+0000 I NETWORK [conn185] received client metadata from 10.108.2.72:45696 conn185: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:21.650+0000 I COMMAND [conn185] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:21.651+0000 D2 COMMAND [conn184] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:21.651+0000 I NETWORK [conn184] received client metadata from 10.108.2.48:42048 conn184: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:21.651+0000 I COMMAND [conn184] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) 
", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:21.651+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52092 #186 (90 connections now open) 2019-09-04T06:29:21.651+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:21.651+0000 D2 COMMAND [conn186] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:21.651+0000 I NETWORK [conn186] received client metadata from 10.108.2.58:52092 conn186: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:21.651+0000 I COMMAND [conn186] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:21.652+0000 D2 COMMAND [conn186] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578553, 1), signature: { hash: BinData(0, 0F4123AD4F6F23E0F6991DA797B74DD1D7349CDB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:21.652+0000 D1 REPL [conn186] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578559, 1), t: 1 } 2019-09-04T06:29:21.652+0000 D3 REPL [conn186] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.662+0000 2019-09-04T06:29:21.654+0000 D2 COMMAND [conn173] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, BDB37965C7D794D8E96FEBC7A1DC5F3E24B76E27), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:21.654+0000 D1 REPL [conn173] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578559, 1), t: 1 } 2019-09-04T06:29:21.654+0000 D3 REPL [conn173] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:29:51.664+0000 2019-09-04T06:29:21.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.662+0000 I COMMAND [conn154] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578528, 1), signature: { hash: BinData(0, 56096C741DA0E1098955367B7ABC9039CDF5E4B7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:21.663+0000 D1 - [conn154] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:21.663+0000 W - [conn154] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:21.663+0000 I COMMAND [conn153] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578522, 1), signature: { hash: BinData(0, 21A966CF5FD66B29B7A606E7014BBC74FFC0F15C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:21.663+0000 D1 - [conn153] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:21.663+0000 W - [conn153] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:21.663+0000 I - [conn127] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:21.663+0000 D1 COMMAND [conn127] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578528, 1), signature: { hash: BinData(0, 56096C741DA0E1098955367B7ABC9039CDF5E4B7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:21.664+0000 D1 - [conn127] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:21.664+0000 W - [conn127] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:21.664+0000 I COMMAND [conn152] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 6CBB8EC6AAAB5F918296074E946EB69F170C4DFC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:21.664+0000 D1 - [conn152] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:21.664+0000 W - [conn152] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:21.664+0000 I COMMAND [conn155] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578522, 1), signature: { hash: BinData(0, 21A966CF5FD66B29B7A606E7014BBC74FFC0F15C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:29:21.665+0000 D1 - [conn155] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:21.665+0000 W - [conn155] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:21.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.680+0000 I - [conn153] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:21.680+0000 D1 COMMAND [conn153] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578522, 1), signature: { hash: BinData(0, 21A966CF5FD66B29B7A606E7014BBC74FFC0F15C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:21.680+0000 D1 - [conn153] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:21.680+0000 W - [conn153] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:21.685+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:21.697+0000 I - [conn154] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 
2019-09-04T06:29:21.697+0000 D1 COMMAND [conn154] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578528, 1), signature: { hash: BinData(0, 56096C741DA0E1098955367B7ABC9039CDF5E4B7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:21.697+0000 D1 - [conn154] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:21.697+0000 W - [conn154] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:21.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:29:21.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:21.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:21.716+0000 I - [conn152] [duplicate stack trace omitted; identical to the conn154 waitForReadConcern backtrace above]
2019-09-04T06:29:21.716+0000 D1 COMMAND [conn152] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 6CBB8EC6AAAB5F918296074E946EB69F170C4DFC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation
exceeded time limit 2019-09-04T06:29:21.716+0000 D1 - [conn152] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:21.716+0000 W - [conn152] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:21.736+0000 I - [conn154] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:29:21.736+0000 W COMMAND [conn154] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:29:21.736+0000 I COMMAND [conn154] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578528, 1), signature: { hash: BinData(0, 56096C741DA0E1098955367B7ABC9039CDF5E4B7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30046ms
2019-09-04T06:29:21.736+0000 D2 NETWORK [conn154] Session from 10.108.2.54:49114 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:29:21.736+0000 I NETWORK [conn154] end connection 10.108.2.54:49114 (89 connections now open)
2019-09-04T06:29:21.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47128 #187 (90 connections now open)
2019-09-04T06:29:21.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:29:21.743+0000 D2 COMMAND [conn187] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:29:21.743+0000 I NETWORK [conn187] received client metadata from 10.108.2.52:47128 conn187: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:29:21.743+0000 I COMMAND [conn187] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:29:21.743+0000 D2 COMMAND [conn187] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 05551CF1F69A904A3734176F5171337687942DA6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts:
Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:29:21.744+0000 D1 REPL [conn187] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578559, 1), t: 1 }
2019-09-04T06:29:21.744+0000 D3 REPL [conn187] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.753+0000
2019-09-04T06:29:21.756+0000 I - [conn127] [duplicate stack trace omitted; identical to the conn154 Lock::GlobalLock backtrace above]
2019-09-04T06:29:21.756+0000 W COMMAND [conn127] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:29:21.756+0000 I COMMAND [conn127] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578528, 1), signature: { hash: BinData(0, 56096C741DA0E1098955367B7ABC9039CDF5E4B7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms
2019-09-04T06:29:21.756+0000 D2 NETWORK [conn127] Session from 10.108.2.44:38592 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:29:21.757+0000 I NETWORK [conn127] end connection 10.108.2.44:38592 (89 connections now open)
2019-09-04T06:29:21.757+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48296 #188 (90 connections now open)
2019-09-04T06:29:21.757+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:29:21.757+0000 D2 COMMAND [conn188] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:29:21.758+0000 I NETWORK [conn188] received client metadata from 10.108.2.59:48296 conn188: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:29:21.758+0000 I COMMAND [conn188] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:29:21.758+0000 D2 COMMAND [conn188] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578551, 1), signature: { hash: BinData(0,
71017D2CA5DD957C25D8652F338F2394E060419D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:29:21.758+0000 D1 REPL [conn188] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578559, 1), t: 1 }
2019-09-04T06:29:21.758+0000 D3 REPL [conn188] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.768+0000
2019-09-04T06:29:21.773+0000 I - [conn155] [duplicate stack trace omitted; identical to the conn154 waitForReadConcern backtrace above]
2019-09-04T06:29:21.773+0000 D1 COMMAND [conn155] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578522, 1),
signature: { hash: BinData(0, 21A966CF5FD66B29B7A606E7014BBC74FFC0F15C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:21.773+0000 D1 - [conn155] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:21.773+0000 W - [conn155] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:21.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.785+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:21.793+0000 I - [conn153] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFl
agsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 
3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:21.793+0000 W COMMAND [conn153] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:29:21.793+0000 I COMMAND [conn153] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578522, 1), signature: { hash: BinData(0, 21A966CF5FD66B29B7A606E7014BBC74FFC0F15C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:29:21.793+0000 D2 NETWORK [conn153] Session from 10.108.2.72:45670 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:21.793+0000 I NETWORK [conn153] end connection 10.108.2.72:45670 (89 connections now open) 2019-09-04T06:29:21.813+0000 I - [conn152] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:21.813+0000 W COMMAND [conn152] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:21.813+0000 I COMMAND [conn152] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 6CBB8EC6AAAB5F918296074E946EB69F170C4DFC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30065ms 2019-09-04T06:29:21.813+0000 D2 NETWORK [conn152] Session from 10.108.2.48:42028 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:21.813+0000 I NETWORK [conn152] end connection 10.108.2.48:42028 (88 connections now open) 2019-09-04T06:29:21.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.833+0000 I - [conn155] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D
","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, 
"buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:21.833+0000 W COMMAND [conn155] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:29:21.833+0000 I COMMAND [conn155] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578522, 1), signature: { hash: BinData(0, 21A966CF5FD66B29B7A606E7014BBC74FFC0F15C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30120ms 2019-09-04T06:29:21.833+0000 D2 NETWORK [conn155] Session from 10.108.2.73:52076 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:21.833+0000 I NETWORK [conn155] end connection 10.108.2.73:52076 (87 connections now open) 2019-09-04T06:29:21.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:21.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:21.885+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:21.986+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:22.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:22.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.043+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50066 #189 (88 connections now open) 2019-09-04T06:29:22.043+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:22.043+0000 D2 COMMAND [conn189] run command 
admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:22.043+0000 I NETWORK [conn189] received client metadata from 10.108.2.50:50066 conn189: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:22.043+0000 I COMMAND [conn189] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:22.044+0000 D2 COMMAND [conn189] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 08AAA06C9B64C61AD7A5B7A57074BF7F508B44CC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:22.044+0000 D1 REPL [conn189] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578559, 1), t: 1 } 2019-09-04T06:29:22.044+0000 D3 REPL [conn189] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:52.054+0000 2019-09-04T06:29:22.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.086+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:22.134+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.134+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
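The find on config.settings that conn189 has just issued above is the balancer-settings read a mongos periodically sends to this config server; every instance of it in this log burns its full 30-second maxTimeMS while waitUntilOpTime waits for a majority snapshot, then fails with MaxTimeMSExpired. A minimal client-side sketch of the same read follows (Python/pymongo; host and port are this deployment's config server, and the mongos-internal fields such as $replData, $configServerState, and the afterOpTime clause are omitted because ordinary drivers do not send them):

    # Sketch only: reproduce the logged balancer-settings read from a normal
    # client. The mongos-only fields ($replData, $configServerState,
    # afterOpTime) are omitted because drivers do not send them.
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient("cmodb803.togewa.com", 27019)
    settings = client["config"].get_collection(
        "settings", read_concern=ReadConcern("majority"))
    try:
        # maxTimeMS: 30000, matching the logged command
        doc = settings.find_one({"_id": "balancer"}, max_time_ms=30000)
        print(doc)
    except ExecutionTimeout:
        # Surfaced by the server as errName:MaxTimeMSExpired errCode:50
        print("operation exceeded time limit")

If the majority snapshot never advances past the requested afterOpTime, as in the waitUntilOpTime lines above, the server holds the read until the deadline and returns error code 50, exactly as logged for conn152, conn153, conn155, and conn156.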
2019-09-04T06:29:22.161+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:22.161+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:22.161+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578559, 1) 2019-09-04T06:29:22.161+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6068 2019-09-04T06:29:22.161+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:22.161+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:22.161+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6068 2019-09-04T06:29:22.162+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6071 2019-09-04T06:29:22.162+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6071 2019-09-04T06:29:22.162+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578559, 1), t: 1 }({ ts: Timestamp(1567578559, 1), t: 1 }) 2019-09-04T06:29:22.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.186+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:22.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 53F17561D6DF09A80BD65603CCB76377F975300F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:22.233+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:22.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 53F17561D6DF09A80BD65603CCB76377F975300F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:22.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 53F17561D6DF09A80BD65603CCB76377F975300F), keyId: 
6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:22.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157) } 2019-09-04T06:29:22.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 53F17561D6DF09A80BD65603CCB76377F975300F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:22.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:22.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.286+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:22.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.386+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:22.486+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:22.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.587+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
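Every record in this file follows the same pre-4.4 plain-text layout: ISO-8601 timestamp, severity (F/E/W/I or debug levels D1-D5), component, bracketed context (thread or connection), then the message, with command records appending their duration in milliseconds. A short triage sketch under that assumption (Python; the field names and the 100 ms cutoff, mongod's default slow-operation threshold, are illustrative):

    # Sketch only: split a 4.2-style plain-text record into its fixed fields
    # and flag slow commands by the trailing duration.
    import re

    LINE = re.compile(
        r"^(?P<ts>\S+) (?P<sev>[DIWEF]\d?) (?P<comp>\S+)\s+"
        r"\[(?P<ctx>[^\]]+)\] (?P<msg>.*)$")

    record = ("2019-09-04T06:29:21.793+0000 I COMMAND [conn153] "
              "command config.$cmd command: find ... 30029ms")

    m = LINE.match(record)
    if m:
        dur = re.search(r"(\d+)ms$", m.group("msg"))
        slow = dur is not None and int(dur.group(1)) >= 100
        print(m.group("sev"), m.group("comp"), m.group("ctx"),
              "SLOW" if slow else "ok")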
2019-09-04T06:29:22.599+0000 I COMMAND [conn156] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, A9EF6308D6B77986F6D15C773C57E66A2510FEA9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:29:22.599+0000 D1 - [conn156] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:22.599+0000 W - [conn156] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:22.617+0000 I - [conn156] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F
0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : 
"/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 
2019-09-04T06:29:22.618+0000 D1 COMMAND [conn156] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, A9EF6308D6B77986F6D15C773C57E66A2510FEA9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:22.618+0000 D1 - [conn156] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:22.618+0000 W - [conn156] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:22.639+0000 I - [conn156] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS
0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : 
"7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:22.640+0000 W COMMAND [conn156] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:22.640+0000 I COMMAND [conn156] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, A9EF6308D6B77986F6D15C773C57E66A2510FEA9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms 2019-09-04T06:29:22.640+0000 D2 NETWORK [conn156] Session from 10.108.2.74:51712 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:22.640+0000 I NETWORK [conn156] end connection 10.108.2.74:51712 (87 connections now open) 2019-09-04T06:29:22.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.687+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:22.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.787+0000 
D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:22.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:22.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 404) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:22.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 404 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:32.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:22.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:22.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.837+0000 D2 ASIO [Replication] Request 404 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } 2019-09-04T06:29:22.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:22.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:22.837+0000 D2 REPL_HB [replexec-3] 
Received response to heartbeat (requestId: 404) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } 2019-09-04T06:29:22.837+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:22.837+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:29:31.323+0000 2019-09-04T06:29:22.837+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:29:33.045+0000 2019-09-04T06:29:22.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:22.837+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:22.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:22.837+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:24.837Z 2019-09-04T06:29:22.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:22.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:22.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 405) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:22.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 405 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:32.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:22.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:22.838+0000 D2 ASIO [Replication] Request 405 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { 
clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } 2019-09-04T06:29:22.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:22.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:22.838+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 405) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } 2019-09-04T06:29:22.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:22.838+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:24.838Z 2019-09-04T06:29:22.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:22.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:22.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:22.887+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:22.987+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:23.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:23.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.052+0000 D2 COMMAND 
[conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 53F17561D6DF09A80BD65603CCB76377F975300F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:23.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:23.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 53F17561D6DF09A80BD65603CCB76377F975300F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:23.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 53F17561D6DF09A80BD65603CCB76377F975300F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:23.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157) } 2019-09-04T06:29:23.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 53F17561D6DF09A80BD65603CCB76377F975300F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.088+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:23.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.161+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:23.161+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:23.161+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot 
Timestamp(1567578559, 1) 2019-09-04T06:29:23.161+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6101 2019-09-04T06:29:23.161+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:23.161+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:23.161+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6101 2019-09-04T06:29:23.162+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6104 2019-09-04T06:29:23.162+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6104 2019-09-04T06:29:23.162+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578559, 1), t: 1 }({ ts: Timestamp(1567578559, 1), t: 1 }) 2019-09-04T06:29:23.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.188+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:23.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:23.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:23.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.288+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:23.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.388+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:23.488+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:23.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.588+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:23.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.689+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:23.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.708+0000 I COMMAND [conn60] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.789+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:23.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:23.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:23.889+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:23.989+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:24.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:24.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.089+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:24.157+0000 I COMMAND [conn158] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, A9EF6308D6B77986F6D15C773C57E66A2510FEA9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:29:24.157+0000 D1 - [conn158] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:24.157+0000 W - [conn158] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:24.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.162+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:24.162+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:24.162+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578559, 1) 2019-09-04T06:29:24.162+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6133 2019-09-04T06:29:24.162+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:24.162+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:24.162+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6133 2019-09-04T06:29:24.163+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6136 2019-09-04T06:29:24.163+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6136 2019-09-04T06:29:24.163+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578559, 1), t: 1 }({ ts: Timestamp(1567578559, 1), t: 1 }) 2019-09-04T06:29:24.163+0000 D2 ASIO [RS] Request 394 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpApplied: { ts: Timestamp(1567578559, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } 2019-09-04T06:29:24.163+0000 D3 EXECUTOR [RS] Received remote response: 
RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpApplied: { ts: Timestamp(1567578559, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:24.163+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:24.163+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:29:24.163+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:33.045+0000 2019-09-04T06:29:24.163+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:35.218+0000 2019-09-04T06:29:24.163+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 406 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:34.163+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578559, 1), t: 1 } } 2019-09-04T06:29:24.164+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:49.163+0000 2019-09-04T06:29:24.164+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:24.164+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:24.165+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:24.165+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:24.165+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 407 -- 
target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:54.165+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:24.165+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:49.163+0000 2019-09-04T06:29:24.166+0000 D2 ASIO [RS] Request 407 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } 2019-09-04T06:29:24.166+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:24.166+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:24.166+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:49.163+0000 2019-09-04T06:29:24.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.174+0000 I - [conn158] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 
0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, 
"buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : 
"9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:24.174+0000 D1 COMMAND [conn158] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, A9EF6308D6B77986F6D15C773C57E66A2510FEA9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:24.174+0000 D1 - [conn158] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:24.174+0000 W - [conn158] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:24.189+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:24.194+0000 I - [conn158] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 
0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 
3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : 
"94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:24.194+0000 W COMMAND [conn158] Unable to gather storage statistics for a slow operation due to lock aquire timeout 
2019-09-04T06:29:24.194+0000 I COMMAND [conn158] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578525, 1), signature: { hash: BinData(0, A9EF6308D6B77986F6D15C773C57E66A2510FEA9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:29:24.194+0000 D2 NETWORK [conn158] Session from 10.108.2.46:40918 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:24.194+0000 I NETWORK [conn158] end connection 10.108.2.46:40918 (86 connections now open) 2019-09-04T06:29:24.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 53F17561D6DF09A80BD65603CCB76377F975300F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:24.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:24.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 53F17561D6DF09A80BD65603CCB76377F975300F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:24.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 53F17561D6DF09A80BD65603CCB76377F975300F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:24.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157) } 2019-09-04T06:29:24.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 53F17561D6DF09A80BD65603CCB76377F975300F), keyId: 6727891476899954718 } }, 
$db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:24.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:24.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.289+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:24.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.390+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:24.490+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:24.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.590+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:24.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.690+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:24.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.790+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:24.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:24.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 408) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:24.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 408 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:24.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:24.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.837+0000 D2 ASIO [Replication] Request 408 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } 2019-09-04T06:29:24.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, 
configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:24.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:24.837+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 408) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } 2019-09-04T06:29:24.837+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:24.837+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:29:35.218+0000 2019-09-04T06:29:24.837+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:29:35.945+0000 2019-09-04T06:29:24.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:24.837+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:24.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:24.837+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:26.837Z 2019-09-04T06:29:24.837+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:24.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:24.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 409) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:24.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 409 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:34.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:24.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:24.838+0000 D2 ASIO [Replication] Request 409 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: 
{ ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } 2019-09-04T06:29:24.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:24.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:24.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 409) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), opTime: { ts: Timestamp(1567578559, 1), t: 1 }, wallTime: new Date(1567578559157), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578560, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578559, 1) } 2019-09-04T06:29:24.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:24.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:26.838Z 2019-09-04T06:29:24.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:24.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:24.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:24.890+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:24.948+0000 D2 ASIO [RS] Request 406 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578564, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578564935), o: { $v: 1, $set: { ping: new Date(1567578564932), up: 2465 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpApplied: { ts: Timestamp(1567578564, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578564, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578564, 1) } 2019-09-04T06:29:24.948+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578564, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578564935), o: { $v: 1, $set: { ping: new Date(1567578564932), up: 2465 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpApplied: { ts: Timestamp(1567578564, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578564, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578564, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:24.948+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:24.948+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578564, 1) and ending at ts: Timestamp(1567578564, 1) 2019-09-04T06:29:24.948+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:35.945+0000 2019-09-04T06:29:24.948+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:35.787+0000 2019-09-04T06:29:24.948+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:24.948+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:24.948+0000 D3 STORAGE [ReplBatcher] setting 
timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:24.948+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:24.948+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578559, 1) 2019-09-04T06:29:24.948+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6161 2019-09-04T06:29:24.948+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:24.948+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:24.948+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6161 2019-09-04T06:29:24.948+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:24.948+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:24.948+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:29:24.948+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578559, 1) 2019-09-04T06:29:24.948+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6164 2019-09-04T06:29:24.948+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:24.948+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:24.948+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6164 2019-09-04T06:29:24.948+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578564, 1), t: 1 } 2019-09-04T06:29:24.948+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578564, 1) } 2019-09-04T06:29:24.948+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6137 2019-09-04T06:29:24.949+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6137 2019-09-04T06:29:24.949+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6167 2019-09-04T06:29:24.949+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6167 2019-09-04T06:29:24.949+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:24.949+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 6169 2019-09-04T06:29:24.949+0000 D4 STORAGE [repl-writer-worker-12] inserting record with timestamp Timestamp(1567578564, 1) 2019-09-04T06:29:24.949+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578564, 1) 2019-09-04T06:29:24.949+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 6169 2019-09-04T06:29:24.949+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the 
minimum number of threads is 16 2019-09-04T06:29:24.949+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:24.949+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6168 2019-09-04T06:29:24.949+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6168 2019-09-04T06:29:24.949+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6171 2019-09-04T06:29:24.949+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6171 2019-09-04T06:29:24.949+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578564, 1), t: 1 }({ ts: Timestamp(1567578564, 1), t: 1 }) 2019-09-04T06:29:24.949+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578564, 1) 2019-09-04T06:29:24.949+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6172 2019-09-04T06:29:24.949+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578564, 1) } } ] } sort: {} projection: {} 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578564, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578564, 1) || First: notFirst: full path: ts 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578564, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
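
[Editor's note] The D5 QUERY block above is the subplanner tracing the guard query run against local.replset.minvalid when advancing minvalid: update only if the stored optime (t, ts) is older than { t: 1, ts: Timestamp(1567578564, 1) }. With only the _id index available it rates both $or branches, outputs zero indexed solutions, and settles on a COLLSCAN, which is harmless on a one-document collection. A sketch of the same predicate from a driver (a direct read of local, for illustration only; the host and port are this node's, from the log, with authorization disabled):

    # A minimal sketch, assuming pymongo is installed and
    # cmodb803.togewa.com:27019 is reachable without auth.
    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    minvalid = client.local["replset.minvalid"]

    # The same $or the subplanner canonicalizes above: is the stored
    # (t, ts) strictly older than { t: 1, ts: Timestamp(1567578564, 1) }?
    older = {"$or": [{"t": {"$lt": 1}},
                     {"t": 1, "ts": {"$lt": Timestamp(1567578564, 1)}}]}
    print(minvalid.find_one(older))
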
2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578564, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578564, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:24.949+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578564, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:24.949+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6172 2019-09-04T06:29:24.949+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:24.949+0000 D3 STORAGE [repl-writer-worker-10] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:24.949+0000 D3 REPL [repl-writer-worker-10] applying op: { ts: Timestamp(1567578564, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578564935), o: { $v: 1, $set: { ping: new Date(1567578564932), up: 2465 } } }, oplog application mode: Secondary 2019-09-04T06:29:24.949+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578564, 1) 2019-09-04T06:29:24.949+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 6174 2019-09-04T06:29:24.949+0000 D2 QUERY [repl-writer-worker-10] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:29:24.949+0000 D4 WRITE [repl-writer-worker-10] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:29:24.950+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 6174 2019-09-04T06:29:24.950+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:24.950+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578564, 1), t: 1 }({ ts: Timestamp(1567578564, 1), t: 1 }) 2019-09-04T06:29:24.950+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578564, 1) 2019-09-04T06:29:24.950+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6173 2019-09-04T06:29:24.950+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:24.950+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:24.950+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:29:24.950+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:24.950+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:24.950+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:24.950+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6173 2019-09-04T06:29:24.950+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578564, 1) 2019-09-04T06:29:24.950+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6177 2019-09-04T06:29:24.950+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6177 2019-09-04T06:29:24.950+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578564, 1), t: 1 }({ ts: Timestamp(1567578564, 1), t: 1 }) 2019-09-04T06:29:24.950+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:24.950+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578564, 1), t: 1 }, appliedWallTime: new Date(1567578564935), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:24.950+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 410 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:54.950+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578564, 1), t: 1 }, appliedWallTime: new Date(1567578564935), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 2, cfgver: 2 } ], 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:24.950+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:54.950+0000 2019-09-04T06:29:24.950+0000 D2 ASIO [RS] Request 410 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578564, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578564, 1) } 2019-09-04T06:29:24.950+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578559, 1), t: 1 }, lastCommittedWall: new Date(1567578559157), lastOpVisible: { ts: Timestamp(1567578559, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578559, 1), $clusterTime: { clusterTime: Timestamp(1567578564, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578564, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:24.950+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:24.950+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:54.950+0000 2019-09-04T06:29:24.950+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578564, 1), t: 1 } 2019-09-04T06:29:24.950+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 411 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:34.950+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578559, 1), t: 1 } } 2019-09-04T06:29:24.950+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:54.950+0000 2019-09-04T06:29:24.959+0000 D2 ASIO [RS] Request 411 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578564, 1), t: 1 }, lastCommittedWall: new Date(1567578564935), lastOpVisible: { ts: Timestamp(1567578564, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578564, 1), t: 1 }, lastCommittedWall: new Date(1567578564935), lastOpApplied: { ts: Timestamp(1567578564, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578564, 1), $clusterTime: { clusterTime: Timestamp(1567578564, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578564, 1) } 2019-09-04T06:29:24.959+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578564, 1), t: 1 }, lastCommittedWall: new Date(1567578564935), lastOpVisible: { ts: Timestamp(1567578564, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578564, 1), t: 1 }, lastCommittedWall: new Date(1567578564935), lastOpApplied: { ts: Timestamp(1567578564, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578564, 1), $clusterTime: { clusterTime: Timestamp(1567578564, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578564, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:24.959+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:24.959+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:29:24.959+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.959+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578559, 1) 2019-09-04T06:29:24.960+0000 D3 REPL [conn166] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn166] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.275+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn174] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn174] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn189] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn189] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:52.054+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn175] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn175] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn168] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn168] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.272+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000 
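
[Editor's note] Everything from Request 406 down to the snapshot notifications above is one secondary batch cycle: the oplog fetcher pulls a single update to config.mongos (a mongos ping), the batcher hands it to a repl writer worker, minvalid and the oplog-truncate-after point bracket the apply, the reporter pushes the new applied optime upstream via replSetUpdatePosition, and the stable optime advances to Timestamp(1567578564, 1). That newest entry can be read back from the local oplog; a sketch, under the same connection assumptions as above:

    # A minimal sketch: read back the newest local oplog entry, which at
    # this point is the config.mongos ping update fetched above.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    oplog = client.local["oplog.rs"]

    last = next(oplog.find().sort("$natural", -1).limit(1))
    print(last["ts"], last["op"], last["ns"])
    # e.g. Timestamp(1567578564, 1) u config.mongos
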
2019-09-04T06:29:24.960+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578564, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59c402d1a496712d71ca'), operName: "", parentOperId: "5d6f59c402d1a496712d71c7" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578564, 1), signature: { hash: BinData(0, B872B3BEFC5A843EBD494DBEF6F479344332803B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578564, 1), t: 1 } }, $db: "config" } 2019-09-04T06:29:24.960+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn177] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn177] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.446+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn145] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn145] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn165] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn165] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:35.182+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn176] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn176] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.424+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn170] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn170] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:43.335+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn163] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn163] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.888+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn164] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn164] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.692+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn183] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn183] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.645+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn187] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 
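
[Editor's note] The conn1xx records above and below are parked majority reads: each waitUntilOpTime waiter resumes once the committed snapshot catches up to its afterOpTime, which is what lets conn21's chunksize lookup proceed and finish with an EOF plan a few records further down (config.settings does not exist on this cluster). The $replData, $configServerState, and afterOpTime fields are attached internally by mongos and cannot be set from a driver, but the visible part of the read looks like this from a client:

    # A minimal client-side sketch of conn21's read: config.settings at
    # read concern "majority" with a 30s time limit.
    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    settings = client.config.get_collection(
        "settings", read_concern=ReadConcern("majority"))

    print(settings.find_one({"_id": "chunksize"}, max_time_ms=30000))
    # None: the planner answers with an EOF plan because config.settings
    # does not exist here
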
2019-09-04T06:29:24.960+0000 D3 REPL [conn187] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.753+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn138] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn138] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.289+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn179] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn179] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:46.417+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn139] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn139] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.319+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn186] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn186] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.662+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn173] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn173] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.664+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn188] Got notified of new snapshot: { ts: Timestamp(1567578564, 1), t: 1 }, 2019-09-04T06:29:24.935+0000 2019-09-04T06:29:24.960+0000 D3 REPL [conn188] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.768+0000 2019-09-04T06:29:24.960+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f59c402d1a496712d71c7|5d6f59c402d1a496712d71ca 2019-09-04T06:29:24.960+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578564, 1), t: 1 } } } 2019-09-04T06:29:24.960+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:29:24.960+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578564, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59c402d1a496712d71ca'), operName: "", parentOperId: "5d6f59c402d1a496712d71c7" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578564, 1), signature: { hash: BinData(0, B872B3BEFC5A843EBD494DBEF6F479344332803B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578564, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578564, 1) 2019-09-04T06:29:24.960+0000 D2 QUERY [conn21] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1
2019-09-04T06:29:24.960+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578564, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59c402d1a496712d71ca'), operName: "", parentOperId: "5d6f59c402d1a496712d71c7" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578564, 1), signature: { hash: BinData(0, B872B3BEFC5A843EBD494DBEF6F479344332803B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578564, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:29:24.960+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:35.787+0000
2019-09-04T06:29:24.961+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:36.146+0000
2019-09-04T06:29:24.961+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:29:24.961+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:24.961+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 412 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:34.961+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578564, 1), t: 1 } }
2019-09-04T06:29:24.961+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:54.950+0000
2019-09-04T06:29:24.962+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:29:24.962+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:24.962+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578564, 1), t: 1 }, durableWallTime: new Date(1567578564935), appliedOpTime: { ts: Timestamp(1567578564, 1), t: 1 }, appliedWallTime: new Date(1567578564935), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578564, 1), t: 1 }, lastCommittedWall: new Date(1567578564935), lastOpVisible: { ts: Timestamp(1567578564, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:24.962+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 413 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:54.962+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578564, 1), t: 1 }, durableWallTime: new Date(1567578564935), appliedOpTime: { ts: Timestamp(1567578564, 1), t: 1 }, appliedWallTime: new Date(1567578564935), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578564, 1), t: 1 }, lastCommittedWall: new Date(1567578564935), lastOpVisible: { ts: Timestamp(1567578564, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:24.962+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:54.950+0000
2019-09-04T06:29:24.962+0000 D2 ASIO [RS] Request 413 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578564, 1), t: 1 }, lastCommittedWall: new Date(1567578564935), lastOpVisible: { ts: Timestamp(1567578564, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578564, 1), $clusterTime: { clusterTime: Timestamp(1567578564, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578564, 1) }
2019-09-04T06:29:24.962+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578564, 1), t: 1 }, lastCommittedWall: new Date(1567578564935), lastOpVisible: { ts: Timestamp(1567578564, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578564, 1), $clusterTime: { clusterTime: Timestamp(1567578564, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578564, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:24.962+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:24.962+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:54.950+0000
2019-09-04T06:29:24.991+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:25.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:25.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.049+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578564, 1)
2019-09-04T06:29:25.049+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36608 #190 (87 connections now open)
2019-09-04T06:29:25.049+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:29:25.049+0000 D2 COMMAND [conn190] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:29:25.049+0000 I NETWORK [conn190] received client metadata from 10.108.2.55:36608 conn190: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:29:25.049+0000 I COMMAND [conn190] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:29:25.050+0000 D2 COMMAND [conn190] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578561, 1), signature: { hash: BinData(0, D212476059A77AEDF4B121F0DF010997DB78B838), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:29:25.050+0000 D1 REPL [conn190] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578564, 1), t: 1 }
2019-09-04T06:29:25.050+0000 D3 REPL [conn190] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:55.060+0000
2019-09-04T06:29:25.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578564, 1), signature: { hash: BinData(0, B872B3BEFC5A843EBD494DBEF6F479344332803B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:25.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:29:25.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578564, 1), signature: { hash: BinData(0, B872B3BEFC5A843EBD494DBEF6F479344332803B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:25.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578564, 1), signature: { hash: BinData(0, B872B3BEFC5A843EBD494DBEF6F479344332803B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:25.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578564, 1), t: 1 }, durableWallTime: new Date(1567578564935), opTime: { ts: Timestamp(1567578564, 1), t: 1 }, wallTime: new Date(1567578564935) }
2019-09-04T06:29:25.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578564, 1), signature: { hash: BinData(0, B872B3BEFC5A843EBD494DBEF6F479344332803B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.091+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:25.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.191+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:25.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:29:25.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:29:25.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.291+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:25.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.391+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:25.492+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:25.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.592+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:25.632+0000 D2 ASIO [RS] Request 412 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578565, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578565630), o: { $v: 1, $set: { ping: new Date(1567578565630) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578564, 1), t: 1 }, lastCommittedWall: new Date(1567578564935), lastOpVisible: { ts: Timestamp(1567578564, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578564, 1), t: 1 }, lastCommittedWall: new Date(1567578564935), lastOpApplied: { ts: Timestamp(1567578565, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578564, 1), $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) }
2019-09-04T06:29:25.632+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578565, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578565630), o: { $v: 1, $set: { ping: new Date(1567578565630) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578564, 1), t: 1 }, lastCommittedWall: new Date(1567578564935), lastOpVisible: { ts: Timestamp(1567578564, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578564, 1), t: 1 }, lastCommittedWall: new Date(1567578564935), lastOpApplied: { ts: Timestamp(1567578565, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578564, 1), $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:25.632+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:25.632+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578565, 1) and ending at ts: Timestamp(1567578565, 1)
2019-09-04T06:29:25.632+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:36.146+0000
2019-09-04T06:29:25.632+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:37.056+0000
2019-09-04T06:29:25.632+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578565, 1), t: 1 }
2019-09-04T06:29:25.632+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:25.632+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:25.633+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:25.633+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:25.633+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578564, 1)
2019-09-04T06:29:25.633+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6203
2019-09-04T06:29:25.633+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:25.633+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:25.633+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6203
2019-09-04T06:29:25.633+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:25.633+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:25.633+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:29:25.633+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578565, 1) }
2019-09-04T06:29:25.633+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578564, 1)
2019-09-04T06:29:25.633+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6206
2019-09-04T06:29:25.633+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6178
2019-09-04T06:29:25.633+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:25.633+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:25.633+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6206
2019-09-04T06:29:25.633+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6178
2019-09-04T06:29:25.633+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6209
2019-09-04T06:29:25.633+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6209
2019-09-04T06:29:25.633+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:25.633+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 6211
2019-09-04T06:29:25.633+0000 D4 STORAGE [repl-writer-worker-8] inserting record with timestamp Timestamp(1567578565, 1)
2019-09-04T06:29:25.633+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578565, 1)
2019-09-04T06:29:25.633+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 6211
2019-09-04T06:29:25.633+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:25.633+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:29:25.633+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6210
2019-09-04T06:29:25.633+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6210
2019-09-04T06:29:25.633+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6213
2019-09-04T06:29:25.633+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6213
2019-09-04T06:29:25.633+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578565, 1), t: 1 }({ ts: Timestamp(1567578565, 1), t: 1 })
2019-09-04T06:29:25.633+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578565, 1)
2019-09-04T06:29:25.633+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6214
2019-09-04T06:29:25.633+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578565, 1) } } ] } sort: {} projection: {}
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578565, 1) Sort: {} Proj: {} =============================
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578565, 1) || First: notFirst: full path: ts
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578565, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578565, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578565, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:25.633+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578565, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:25.633+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6214
2019-09-04T06:29:25.634+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:25.634+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:25.634+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578565, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578565630), o: { $v: 1, $set: { ping: new Date(1567578565630) } } }, oplog application mode: Secondary
2019-09-04T06:29:25.634+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578565, 1)
2019-09-04T06:29:25.634+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 6216
2019-09-04T06:29:25.634+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }
2019-09-04T06:29:25.634+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:25.634+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 6216
2019-09-04T06:29:25.634+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:25.634+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578565, 1), t: 1 }({ ts: Timestamp(1567578565, 1), t: 1 })
2019-09-04T06:29:25.634+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578565, 1)
2019-09-04T06:29:25.634+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6215
2019-09-04T06:29:25.634+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:29:25.634+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:25.634+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:25.634+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:25.634+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:25.634+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:25.634+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6215
2019-09-04T06:29:25.634+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578565, 1)
2019-09-04T06:29:25.634+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6219
2019-09-04T06:29:25.634+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6219
2019-09-04T06:29:25.634+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578565, 1), t: 1 }({ ts: Timestamp(1567578565, 1), t: 1 })
2019-09-04T06:29:25.634+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:25.634+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578564, 1), t: 1 }, durableWallTime: new Date(1567578564935), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578564, 1), t: 1 }, lastCommittedWall: new Date(1567578564935), lastOpVisible: { ts: Timestamp(1567578564, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:25.634+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 414 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:55.634+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578564, 1), t: 1 }, durableWallTime: new Date(1567578564935), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578564, 1), t: 1 }, lastCommittedWall: new Date(1567578564935), lastOpVisible: { ts: Timestamp(1567578564, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:25.634+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:25.635+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578565, 1), t: 1 }
2019-09-04T06:29:25.635+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 415 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:35.635+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578564, 1), t: 1 } }
2019-09-04T06:29:25.635+0000 D2 ASIO [RS] Request 414 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) }
2019-09-04T06:29:25.635+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:25.635+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:25.635+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:25.635+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:25.635+0000 D2 ASIO [RS] Request 415 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpApplied: { ts: Timestamp(1567578565, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) }
2019-09-04T06:29:25.635+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:29:25.635+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpApplied: { ts: Timestamp(1567578565, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:25.635+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:25.635+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:25.635+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.635+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.635+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578560, 1)
2019-09-04T06:29:25.635+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:25.635+0000 D3 REPL [conn139] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.635+0000 D3 REPL [conn139] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.319+0000
2019-09-04T06:29:25.635+0000 D3 REPL [conn161] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.635+0000 D3 REPL [conn161] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:28.615+0000
2019-09-04T06:29:25.635+0000 D3 REPL [conn188] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.635+0000 D3 REPL [conn188] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.768+0000
2019-09-04T06:29:25.635+0000 D3 REPL [conn189] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.635+0000 D3 REPL [conn189] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:52.054+0000
2019-09-04T06:29:25.635+0000 D3 REPL [conn176] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.635+0000 D3 REPL [conn176] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.424+0000
2019-09-04T06:29:25.635+0000 D3 REPL [conn170] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.635+0000 D3 REPL [conn170] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:43.335+0000
2019-09-04T06:29:25.635+0000 D3 REPL [conn163] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.635+0000 D3 REPL [conn163] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.888+0000
2019-09-04T06:29:25.635+0000 D3 REPL [conn164] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.635+0000 D3 REPL [conn164] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.692+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn187] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn187] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.753+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn138] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn138] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.289+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn179] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn179] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:46.417+0000
2019-09-04T06:29:25.636+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:37.056+0000
2019-09-04T06:29:25.636+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:36.377+0000
2019-09-04T06:29:25.636+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:29:25.636+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 416 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:35.636+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578565, 1), t: 1 } }
2019-09-04T06:29:25.636+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000
2019-09-04T06:29:25.636+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:55.635+0000
2019-09-04T06:29:25.636+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:25.636+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 417 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:55.636+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, durableWallTime: new Date(1567578559157), appliedOpTime: { ts: Timestamp(1567578559, 1), t: 1 }, appliedWallTime: new Date(1567578559157), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:25.636+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:55.635+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn166] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn166] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.275+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn175] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn175] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn168] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn168] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.272+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn162] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn162] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:29.750+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn186] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn186] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.662+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn173] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn173] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.664+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn165] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn165] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:35.182+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn183] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn183] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.645+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn145] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn145] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn177] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn177] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.446+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn174] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn174] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000
2019-09-04T06:29:25.636+0000 D2 ASIO [RS] Request 417 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) }
2019-09-04T06:29:25.636+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:25.636+0000 D3 REPL [conn190] Got notified of new snapshot: { ts: Timestamp(1567578565, 1), t: 1 }, 2019-09-04T06:29:25.630+0000
2019-09-04T06:29:25.636+0000 D3 REPL [conn190] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:55.060+0000
2019-09-04T06:29:25.636+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:25.636+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:55.635+0000
2019-09-04T06:29:25.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.692+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:25.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.733+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578565, 1)
2019-09-04T06:29:25.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.792+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:25.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:25.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:25.892+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:25.992+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:26.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:26.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.073+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.073+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.092+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:26.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.193+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:26.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 12A593135860780BBBEDF16DED9852A6DF16BEF8), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:26.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:29:26.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 12A593135860780BBBEDF16DED9852A6DF16BEF8), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:26.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 12A593135860780BBBEDF16DED9852A6DF16BEF8), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:26.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), opTime: { ts: Timestamp(1567578565, 1), t: 1 }, wallTime: new Date(1567578565630) }
2019-09-04T06:29:26.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 12A593135860780BBBEDF16DED9852A6DF16BEF8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:29:26.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:29:26.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.289+0000 D2 COMMAND [conn159] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578563, 1), signature: { hash: BinData(0, 2A0BBF5A114B4F2CAB0EF4784300FF5254D97052), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:29:26.289+0000 D1 REPL [conn159] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578565, 1), t: 1 }
2019-09-04T06:29:26.289+0000 D3 REPL [conn159] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:56.299+0000
2019-09-04T06:29:26.293+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:26.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.393+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:26.493+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:26.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.573+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.573+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.593+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:26.601+0000 D2 COMMAND [conn49] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578554, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578557, 1), signature: { hash: BinData(0, 3F5864037A711550FF6AEE33CA7A805B73D2D63E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578554, 1), t: 1 } }, $db: "config" }
2019-09-04T06:29:26.601+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578554, 1), t: 1 } } }
2019-09-04T06:29:26.601+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:29:26.601+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578554, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578557, 1), signature: { hash: BinData(0, 3F5864037A711550FF6AEE33CA7A805B73D2D63E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578554, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578565, 1)
2019-09-04T06:29:26.601+0000 D2 QUERY [conn49] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1
2019-09-04T06:29:26.601+0000 I COMMAND [conn49] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578554, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578557, 1), signature: { hash: BinData(0, 3F5864037A711550FF6AEE33CA7A805B73D2D63E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578554, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:29:26.602+0000 D2 COMMAND [conn49] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578565, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 12A593135860780BBBEDF16DED9852A6DF16BEF8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578565, 1), t: 1 } }, $db: "config" }
2019-09-04T06:29:26.602+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578565, 1), t: 1 } } }
2019-09-04T06:29:26.602+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:29:26.602+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578565, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 12A593135860780BBBEDF16DED9852A6DF16BEF8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578565, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578565, 1)
2019-09-04T06:29:26.602+0000 D2 QUERY [conn49] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1
2019-09-04T06:29:26.602+0000 I COMMAND [conn49] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578565, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 12A593135860780BBBEDF16DED9852A6DF16BEF8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578565, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:29:26.602+0000 D2 COMMAND [conn49] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578565, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 12A593135860780BBBEDF16DED9852A6DF16BEF8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578565, 1), t: 1 } }, $db: "config" }
2019-09-04T06:29:26.602+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578565, 1), t: 1 } } }
2019-09-04T06:29:26.602+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:29:26.602+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578565, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 12A593135860780BBBEDF16DED9852A6DF16BEF8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578565, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578565, 1)
2019-09-04T06:29:26.602+0000 D2 QUERY [conn49] Collection config.settings does not exist. Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1
2019-09-04T06:29:26.602+0000 I COMMAND [conn49] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578565, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 12A593135860780BBBEDF16DED9852A6DF16BEF8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578565, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:29:26.633+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:26.633+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:26.633+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578565, 1)
2019-09-04T06:29:26.633+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6253
2019-09-04T06:29:26.633+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:26.633+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:26.633+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6253
2019-09-04T06:29:26.634+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6256
2019-09-04T06:29:26.634+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6256
2019-09-04T06:29:26.634+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578565, 1), t: 1 }({ ts: Timestamp(1567578565, 1), t: 1 })
2019-09-04T06:29:26.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.693+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:26.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:26.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:26.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg
0ms 2019-09-04T06:29:26.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:26.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:26.794+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:26.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:26.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:26.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:26.837+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:29:25.061+0000 2019-09-04T06:29:26.837+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:29:26.233+0000 2019-09-04T06:29:26.837+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:29:25.061+0000 2019-09-04T06:29:26.837+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:29:35.061+0000 2019-09-04T06:29:26.837+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:26.837+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 418) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:26.837+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 418 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:36.837+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:26.837+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:26.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:26.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:30.837+0000 2019-09-04T06:29:26.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:26.837+0000 D2 ASIO [Replication] Request 418 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), opTime: { ts: Timestamp(1567578565, 1), t: 1 }, wallTime: new Date(1567578565630), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) } 2019-09-04T06:29:26.837+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: 
Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), opTime: { ts: Timestamp(1567578565, 1), t: 1 }, wallTime: new Date(1567578565630), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:26.837+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:26.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:26.838+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 418) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), opTime: { ts: Timestamp(1567578565, 1), t: 1 }, wallTime: new Date(1567578565630), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) } 2019-09-04T06:29:26.838+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:26.838+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:29:36.377+0000 2019-09-04T06:29:26.838+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:29:37.076+0000 2019-09-04T06:29:26.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:26.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:26.838+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:28.838Z 2019-09-04T06:29:26.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:56.838+0000 2019-09-04T06:29:26.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 419) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:26.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 419 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:36.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:26.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:56.838+0000 
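The three find commands at the top of this excerpt are the balancer-settings probes issued against config.settings for the documents _id: "balancer", _id: "chunksize", and _id: "autosplit". Each carries readConcern { level: "majority", afterOpTime: ... } plus maxTimeMS: 30000, and because config.settings does not exist on this config server yet, the planner falls back to an EOF plan and returns nreturned:0 in 0ms. The sketch below reproduces the client-visible part of that read pattern, assuming pymongo and a connection string built from this log's host, port, and replSetName; the afterOpTime, $replData, and $configServerState fields are internal intra-cluster metadata that an ordinary driver does not set.

    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    # Connection string is an assumption assembled from the log's port/replSetName.
    client = MongoClient(
        "mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs&readPreference=nearest")

    # Majority read concern, matching the logged finds.
    settings = client["config"].get_collection(
        "settings", read_concern=ReadConcern("majority"))

    for doc_id in ("balancer", "chunksize", "autosplit"):
        try:
            # max_time_ms mirrors the 30 s budget (maxTimeMS: 30000) in the log.
            # If the majority snapshot cannot satisfy the read in time, the server
            # fails the command with MaxTimeMSExpired, which pymongo surfaces as
            # ExecutionTimeout -- the conn161 failure later in this log is that
            # same server-side path.
            doc = settings.find_one({"_id": doc_id}, max_time_ms=30000)
            print(doc_id, "->", doc)  # None while config.settings is absent (EOF plan)
        except ExecutionTimeout as exc:
            print(doc_id, "timed out:", exc)

Note that a missing collection is not an error for find: the server simply plans an EOF stage, which is why these commands log planSummary: EOF and return ok results rather than a NamespaceNotFound failure.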
2019-09-04T06:29:26.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:56.838+0000 2019-09-04T06:29:26.838+0000 D2 ASIO [Replication] Request 419 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), opTime: { ts: Timestamp(1567578565, 1), t: 1 }, wallTime: new Date(1567578565630), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) } 2019-09-04T06:29:26.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), opTime: { ts: Timestamp(1567578565, 1), t: 1 }, wallTime: new Date(1567578565630), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:26.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:26.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 419) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), opTime: { ts: Timestamp(1567578565, 1), t: 1 }, wallTime: new Date(1567578565630), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) } 2019-09-04T06:29:26.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:26.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:28.838Z 
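The two exchanges above (requestId 418 to cmodb802 and 419 to cmodb804) are routine replica-set heartbeats: the primary cmodb802 answers with state: 1, cmodb804 answers with state: 2 plus syncingTo its sync source, every response carries the member's opTime/durableOpTime, and a good response from the primary postpones this node's election timeout (the ELECTION lines) before the next heartbeat is scheduled two seconds out. An operator can see the same membership view through replSetGetStatus; a minimal sketch, assuming pymongo and a direct connection to the member that wrote this log:

    from pymongo import MongoClient

    # Host and port come from the log; connecting without replicaSet= in the
    # URI talks to this one member directly.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019")

    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        # stateStr (PRIMARY/SECONDARY) and optime correspond to the state and
        # opTime fields exchanged in the heartbeat responses above; the sync
        # source field is "syncingTo" on 4.2 (later releases use "syncSourceHost").
        print(m["name"], m["stateStr"],
              m.get("optime", {}).get("ts"),
              m.get("syncingTo") or m.get("syncSourceHost", ""))

Heartbeats default to one every 2 seconds (settings.heartbeatIntervalMillis in the replica-set config), which matches the "Scheduling heartbeat ... at 2019-09-04T06:29:28.838Z" entries exactly two seconds after this 06:29:26.838 round.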
2019-09-04T06:29:26.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:56.838+0000 2019-09-04T06:29:26.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:26.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:26.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:26.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:26.894+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:26.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:26.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:26.994+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:27.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:27.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 12A593135860780BBBEDF16DED9852A6DF16BEF8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:27.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:27.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 12A593135860780BBBEDF16DED9852A6DF16BEF8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:27.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 12A593135860780BBBEDF16DED9852A6DF16BEF8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:27.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), opTime: { ts: Timestamp(1567578565, 1), t: 1 }, wallTime: new Date(1567578565630) } 2019-09-04T06:29:27.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, 
from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 12A593135860780BBBEDF16DED9852A6DF16BEF8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.094+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:27.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.194+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:27.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:27.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:27.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.295+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:27.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.395+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:27.495+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:27.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.557+0000 D2 COMMAND [conn167] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578561, 1), signature: { hash: BinData(0, D212476059A77AEDF4B121F0DF010997DB78B838), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:27.557+0000 D1 REPL [conn167] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578565, 1), t: 1 } 2019-09-04T06:29:27.557+0000 D3 REPL [conn167] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:57.567+0000 2019-09-04T06:29:27.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.595+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:27.633+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:27.633+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:27.633+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578565, 1) 2019-09-04T06:29:27.633+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6285 2019-09-04T06:29:27.633+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:27.633+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:27.633+0000 D3 
STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6285 2019-09-04T06:29:27.634+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6288 2019-09-04T06:29:27.635+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6288 2019-09-04T06:29:27.635+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578565, 1), t: 1 }({ ts: Timestamp(1567578565, 1), t: 1 }) 2019-09-04T06:29:27.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.695+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:27.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.782+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.782+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.795+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:27.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:27.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:27.896+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:27.996+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:28.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:28.020+0000 D2 COMMAND [conn50] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.020+0000 I COMMAND [conn50] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.096+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:29:28.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.196+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:28.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578567, 1), signature: { hash: BinData(0, 7F4B8408314934050621D2A8A9D1FB0440739CCE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:28.233+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:28.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578567, 1), signature: { hash: BinData(0, 7F4B8408314934050621D2A8A9D1FB0440739CCE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:28.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578567, 1), signature: { hash: BinData(0, 7F4B8408314934050621D2A8A9D1FB0440739CCE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:28.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), opTime: { ts: Timestamp(1567578565, 1), t: 1 }, wallTime: new Date(1567578565630) } 2019-09-04T06:29:28.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578567, 1), signature: { hash: BinData(0, 7F4B8408314934050621D2A8A9D1FB0440739CCE), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:28.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:28.282+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.282+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.296+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:28.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.396+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:28.497+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:28.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.597+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:28.604+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49138 #191 (88 connections now open) 2019-09-04T06:29:28.604+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:28.604+0000 D2 COMMAND [conn191] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:28.604+0000 I NETWORK [conn191] received client metadata from 10.108.2.54:49138 conn191: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:28.604+0000 I COMMAND [conn191] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:28.618+0000 I COMMAND [conn161] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578538, 1), signature: { hash: BinData(0, C6DF695CCD0A4881611104329A2D7ABCFFC191B9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:28.618+0000 D1 - [conn161] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:28.618+0000 W - [conn161] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:28.634+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:28.634+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:28.634+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578565, 1) 2019-09-04T06:29:28.634+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6313 2019-09-04T06:29:28.634+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:28.634+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:28.634+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6313 2019-09-04T06:29:28.635+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6316 2019-09-04T06:29:28.635+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6316 2019-09-04T06:29:28.635+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578565, 1), t: 1 }({ ts: Timestamp(1567578565, 1), t: 1 }) 2019-09-04T06:29:28.637+0000 I - [conn161] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:28.637+0000 D1 COMMAND [conn161] assertion while executing command 'find' on database 'config' with arguments '{ find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578538, 1), signature: { hash: BinData(0, C6DF695CCD0A4881611104329A2D7ABCFFC191B9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:28.637+0000 D1 - [conn161] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:28.637+0000 W - [conn161] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:28.658+0000 I - [conn161] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 
0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, 
"buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : 
"2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:28.658+0000 W COMMAND [conn161] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:28.658+0000 I COMMAND [conn161] command config.$cmd command: find { find: "collections", filter: { _id: 
"config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578538, 1), signature: { hash: BinData(0, C6DF695CCD0A4881611104329A2D7ABCFFC191B9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:29:28.659+0000 D2 NETWORK [conn161] Session from 10.108.2.54:49118 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:28.659+0000 I NETWORK [conn161] end connection 10.108.2.54:49118 (87 connections now open) 2019-09-04T06:29:28.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.697+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:28.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.742+0000 D2 COMMAND [conn172] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0FB04B3A73468DCEAB3E65E8617977C014F37989), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:28.742+0000 D1 REPL [conn172] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578565, 1), t: 1 } 2019-09-04T06:29:28.742+0000 D3 REPL [conn172] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:58.752+0000 2019-09-04T06:29:28.797+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:28.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:28.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:28.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:28.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:28.838+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 420) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, 
hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:28.838+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 420 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:38.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:28.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:56.838+0000 2019-09-04T06:29:28.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 421) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:28.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 421 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:38.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:28.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:56.838+0000 2019-09-04T06:29:28.838+0000 D2 ASIO [Replication] Request 420 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), opTime: { ts: Timestamp(1567578565, 1), t: 1 }, wallTime: new Date(1567578565630), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578567, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) } 2019-09-04T06:29:28.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), opTime: { ts: Timestamp(1567578565, 1), t: 1 }, wallTime: new Date(1567578565630), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578567, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:28.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:28.838+0000 D2 ASIO [Replication] Request 421 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: 
Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), opTime: { ts: Timestamp(1567578565, 1), t: 1 }, wallTime: new Date(1567578565630), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578567, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) } 2019-09-04T06:29:28.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), opTime: { ts: Timestamp(1567578565, 1), t: 1 }, wallTime: new Date(1567578565630), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578567, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:28.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:28.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 420) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), opTime: { ts: Timestamp(1567578565, 1), t: 1 }, wallTime: new Date(1567578565630), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578567, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) } 2019-09-04T06:29:28.838+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:28.838+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:29:37.076+0000 2019-09-04T06:29:28.838+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:29:39.035+0000 2019-09-04T06:29:28.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:28.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:30.838Z 
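
The exchange above is the routine replSetHeartbeat traffic among the three configrs members (cmodb802 reports state: 1, PRIMARY; cmodb804 reports state: 2, SECONDARY, syncing from cmodb802), and each good response from the primary postpones this node's election timeout. A minimal pymongo sketch, assuming the hostname/port from the log and a pymongo recent enough for directConnection (>= 3.12): replSetGetStatus is the public counterpart of the internal heartbeat exchange, which only replica-set members send to each other.

from pymongo import MongoClient

# Hostname/port copied from the log above; directConnection needs pymongo >= 3.12.
client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    # stateStr (PRIMARY/SECONDARY) and optime correspond to the state and
    # opTime/durableOpTime fields in the heartbeat responses logged above.
    print(member["name"], member["stateStr"], member["optime"])
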
2019-09-04T06:29:28.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:28.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:58.838+0000 2019-09-04T06:29:28.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:58.838+0000 2019-09-04T06:29:28.838+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 421) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), opTime: { ts: Timestamp(1567578565, 1), t: 1 }, wallTime: new Date(1567578565630), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578567, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578565, 1) } 2019-09-04T06:29:28.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:28.838+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:30.838Z 2019-09-04T06:29:28.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:58.838+0000 2019-09-04T06:29:28.897+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:28.998+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:29.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:29.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:29.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:29.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:29.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:29.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578567, 1), signature: { hash: BinData(0, 7F4B8408314934050621D2A8A9D1FB0440739CCE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:29.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:29.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578567, 1), signature: { hash: BinData(0, 7F4B8408314934050621D2A8A9D1FB0440739CCE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:29.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { 
replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578567, 1), signature: { hash: BinData(0, 7F4B8408314934050621D2A8A9D1FB0440739CCE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:29.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), opTime: { ts: Timestamp(1567578565, 1), t: 1 }, wallTime: new Date(1567578565630) } 2019-09-04T06:29:29.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578567, 1), signature: { hash: BinData(0, 7F4B8408314934050621D2A8A9D1FB0440739CCE), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:29.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:29.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:29.098+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:29.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:29.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:29.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:29.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:29.198+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:29.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:29.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:29.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:29.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:29.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:29.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:29.298+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:29.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:29.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:29.398+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:29.498+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:29.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:29.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:29.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:29.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:29.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:29.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:29.599+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:29.634+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:29.634+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:29.634+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578565, 1) 2019-09-04T06:29:29.634+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6339 2019-09-04T06:29:29.634+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:29.634+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:29.634+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6339 2019-09-04T06:29:29.635+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6342 2019-09-04T06:29:29.635+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6342 2019-09-04T06:29:29.635+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578565, 1), t: 1 }({ ts: Timestamp(1567578565, 1), t: 1 }) 2019-09-04T06:29:29.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:29.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:29.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:29.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:29.699+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:29.702+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, 
$db: "admin" } 2019-09-04T06:29:29.702+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:29.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:29.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:29.754+0000 I COMMAND [conn162] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 6CBB8EC6AAAB5F918296074E946EB69F170C4DFC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:29.754+0000 D1 - [conn162] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:29.754+0000 W - [conn162] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:29.773+0000 I - [conn162] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19S
erviceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:29.773+0000 D1 COMMAND [conn162] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 6CBB8EC6AAAB5F918296074E946EB69F170C4DFC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:29.773+0000 D1 - [conn162] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:29.773+0000 W - [conn162] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:29.795+0000 I - [conn162] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19Serv
iceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", 
"path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) 
[0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:29.795+0000 W COMMAND [conn162] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:29:29.795+0000 I COMMAND [conn162] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578530, 1), signature: { hash: BinData(0, 6CBB8EC6AAAB5F918296074E946EB69F170C4DFC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms 2019-09-04T06:29:29.795+0000 D2 NETWORK [conn162] Session from 10.108.2.49:53318 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:29.795+0000 I NETWORK [conn162] end connection 10.108.2.49:53318 (86 connections now open) 2019-09-04T06:29:29.799+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:29.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:29.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:29.899+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:29.933+0000 D2 COMMAND [conn185] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578562, 1), signature: { hash: BinData(0, D409C985E59557A0ED9DA32FA1E2AFD0B3BC04D7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:29.933+0000 D1 REPL [conn185] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578565, 1), t: 1 }
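
conn161 and conn162 above fail the same way: a find on config.shards asks for readConcern majority with afterOpTime from term 92, but this node's majority snapshots are from term 1, so the wait can never be satisfied and the command dies with MaxTimeMSExpired once its 30000 ms budget runs out. A minimal pymongo sketch of the user-visible shape of that read, assuming the hostname/port from the log; the $replData, $configServerState, and afterOpTime fields in the logged command are added internally by mongos and are not settable from an ordinary driver.

from pymongo import MongoClient, ReadPreference
from pymongo.errors import ExecutionTimeout
from pymongo.read_concern import ReadConcern

# Hostname/port as logged above; the 30 s limit mirrors maxTimeMS: 30000.
client = MongoClient("cmodb803.togewa.com", 27019)
config_db = client.get_database(
    "config",
    read_concern=ReadConcern("majority"),
    read_preference=ReadPreference.NEAREST,
)
try:
    shards = list(config_db.shards.find({}, max_time_ms=30000))
except ExecutionTimeout:
    # pymongo's mapping of the errName:MaxTimeMSExpired / errCode:50 entries above.
    print("operation exceeded time limit")
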
2019-09-04T06:29:29.933+0000 D3 REPL [conn185] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:59.943+0000 2019-09-04T06:29:29.999+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:30.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:30.005+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:29:30.005+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:29:30.005+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:29:30.017+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:29:30.017+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:29:30.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:30.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:30.034+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:29:30.034+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:29:30.034+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:29:30.034+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:29:30.041+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:30.041+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:30.056+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:30.056+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:30.056+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:29:30.056+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:29:30.056+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:30.058+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 
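
conn90 above behaves like a monitoring agent: a SCRAM-SHA-1 handshake as dba_root (the logger redacts the SASL payloads as "xxx"), then serverStatus, replSetGetStatus, and a count of chunks flagged jumbo in the sharding catalog. A sketch of the same sequence in pymongo, with a placeholder password; the saslStart/saslContinue round trips in the log are performed internally by the driver during authentication.

from pymongo import MongoClient, ReadPreference

# Placeholder password; user, mechanism and authSource as in the log above.
client = MongoClient(
    "cmodb803.togewa.com", 27019,
    username="dba_root", password="<password>",
    authSource="admin", authMechanism="SCRAM-SHA-1",
)
server_status = client.admin.command("serverStatus")
rs_status = client.admin.command("replSetGetStatus")
# The same catalog check conn90 issues, with the same read preference.
jumbo = client.config.command(
    "count", "chunks", query={"jumbo": True},
    read_preference=ReadPreference.SECONDARY_PREFERRED,
)
print(server_status["uptime"], rs_status["myState"], jumbo["n"])
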
2019-09-04T06:29:30.058+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:30.058+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunks Tree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:29:30.058+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:29:30.058+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:29:30.058+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:29:30.058+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:29:30.058+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:29:30.058+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:29:30.058+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:30.058+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:30.058+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:30.058+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578565, 1) 2019-09-04T06:29:30.059+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6358 2019-09-04T06:29:30.059+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6358 2019-09-04T06:29:30.059+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:30.059+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:30.059+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:29:30.059+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:29:30.059+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:30.059+0000 D5 QUERY [conn90] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1 Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:29:30.059+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:29:30.059+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:29:30.059+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578565, 1) 2019-09-04T06:29:30.059+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6361 2019-09-04T06:29:30.059+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6361 2019-09-04T06:29:30.059+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:30.059+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:29:30.059+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:30.059+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1 Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:29:30.059+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:29:30.059+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:29:30.059+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578565, 1) 2019-09-04T06:29:30.059+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6363 2019-09-04T06:29:30.059+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6363 2019-09-04T06:29:30.059+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:30.060+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:29:30.060+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:29:30.060+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:29:30.060+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:30.060+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6366 2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:29:30.060+0000 D3 STORAGE 
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6366
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6367
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6367
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6368
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6368
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6369
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" }
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6369
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6370
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" }
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6370
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6371
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6371
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6372
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6372
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6373
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:29:30.060+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6373
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6374
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6374
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6375
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6375
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6376
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6376
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6377
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6377
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6378
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6378
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6379
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6379
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6380
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6380
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6381
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6381
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6382
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6382
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6383
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6383
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6384
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6384
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6385
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6385
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6386
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6386
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6387
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:29:30.061+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6387
2019-09-04T06:29:30.061+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:29:30.062+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6389
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6389
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6390
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6390
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6391
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6391
2019-09-04T06:29:30.062+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:29:30.062+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6393
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6393
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6394
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6394
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6395
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6395
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6396
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6396
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6397
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6397
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6398
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6398
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6399
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6399
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6400
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6400
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6401
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6401
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6402
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6402
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6403
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6403
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6404
2019-09-04T06:29:30.062+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6404
2019-09-04T06:29:30.063+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:29:30.063+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:29:30.063+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6406
2019-09-04T06:29:30.063+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6406
2019-09-04T06:29:30.063+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6407
2019-09-04T06:29:30.063+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6407
2019-09-04T06:29:30.063+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6408
2019-09-04T06:29:30.063+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6408
2019-09-04T06:29:30.063+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6409
2019-09-04T06:29:30.063+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6409
2019-09-04T06:29:30.063+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6410
2019-09-04T06:29:30.063+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6410
2019-09-04T06:29:30.063+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6411
2019-09-04T06:29:30.063+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6411
2019-09-04T06:29:30.063+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:29:30.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:30.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:30.100+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:30.102+0000 D2 ASIO [RS] Request 416 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578570, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578570091), o: { $v: 1, $set: { ping: new Date(1567578570091) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpApplied: { ts: Timestamp(1567578570, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 1) }
2019-09-04T06:29:30.102+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578570, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578570091), o: { $v: 1, $set: { ping: new Date(1567578570091) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpApplied: { ts: Timestamp(1567578570, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:30.102+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:30.102+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578570, 1) and ending at ts: Timestamp(1567578570, 1)
2019-09-04T06:29:30.102+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:39.035+0000
2019-09-04T06:29:30.102+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:40.771+0000
2019-09-04T06:29:30.102+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:30.102+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:58.838+0000
2019-09-04T06:29:30.102+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:30.102+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:30.102+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578565, 1)
2019-09-04T06:29:30.102+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578570, 1), t: 1 }
2019-09-04T06:29:30.102+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6415
2019-09-04T06:29:30.102+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:30.102+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:30.102+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6415
2019-09-04T06:29:30.102+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:30.102+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:29:30.102+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:30.102+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578565, 1)
2019-09-04T06:29:30.102+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578570, 1) }
2019-09-04T06:29:30.102+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6418
2019-09-04T06:29:30.102+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:30.102+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:30.102+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6418
2019-09-04T06:29:30.102+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6343
2019-09-04T06:29:30.103+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6343
2019-09-04T06:29:30.103+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6421
2019-09-04T06:29:30.103+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6421
2019-09-04T06:29:30.103+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:30.103+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 6423
2019-09-04T06:29:30.103+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578570, 1)
2019-09-04T06:29:30.103+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578570, 1)
2019-09-04T06:29:30.103+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 6423
2019-09-04T06:29:30.103+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:30.103+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:29:30.103+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6422
2019-09-04T06:29:30.103+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6422
2019-09-04T06:29:30.103+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6425
2019-09-04T06:29:30.103+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6425
2019-09-04T06:29:30.103+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578570, 1), t: 1 }({ ts: Timestamp(1567578570, 1), t: 1 })
2019-09-04T06:29:30.103+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578570, 1)
2019-09-04T06:29:30.103+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6426
2019-09-04T06:29:30.103+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578570, 1) } } ] } sort: {} projection: {}
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and t $eq 1 ts $lt Timestamp(1567578570, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578570, 1) || First: notFirst: full path: ts
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and t $eq 1 ts $lt Timestamp(1567578570, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or $and t $eq 1 ts $lt Timestamp(1567578570, 1) t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578570, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:30.103+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or $and t $eq 1 ts $lt Timestamp(1567578570, 1) t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:30.103+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6426
2019-09-04T06:29:30.103+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:30.103+0000 D3 STORAGE [repl-writer-worker-2] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:30.103+0000 D3 REPL [repl-writer-worker-2] applying op: { ts: Timestamp(1567578570, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578570091), o: { $v: 1, $set: { ping: new Date(1567578570091) } } }, oplog application mode: Secondary
2019-09-04T06:29:30.103+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578570, 1)
2019-09-04T06:29:30.103+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 6428
2019-09-04T06:29:30.103+0000 D2 QUERY [repl-writer-worker-2] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }
2019-09-04T06:29:30.103+0000 D4 WRITE [repl-writer-worker-2] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:30.104+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 6428
2019-09-04T06:29:30.104+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:30.104+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578570, 1), t: 1 }({ ts: Timestamp(1567578570, 1), t: 1 })
2019-09-04T06:29:30.104+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578570, 1)
2019-09-04T06:29:30.104+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6427
2019-09-04T06:29:30.104+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:30.104+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:30.104+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:30.104+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:30.104+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:30.104+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:30.104+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6427 2019-09-04T06:29:30.104+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578570, 1) 2019-09-04T06:29:30.104+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:30.104+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6431 2019-09-04T06:29:30.104+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578570, 1), t: 1 }, appliedWallTime: new Date(1567578570091), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:30.104+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6431 2019-09-04T06:29:30.104+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 422 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:00.104+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578570, 1), t: 1 }, appliedWallTime: new Date(1567578570091), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:30.104+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578570, 1), t: 1 }({ ts: Timestamp(1567578570, 1), t: 1 }) 2019-09-04T06:29:30.104+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:00.104+0000 2019-09-04T06:29:30.104+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578570, 1), t: 1 } 2019-09-04T06:29:30.104+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 423 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:40.104+0000 cmd:{ getMore: 2779728788818727477, 
collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578565, 1), t: 1 } } 2019-09-04T06:29:30.104+0000 D2 ASIO [RS] Request 422 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 1) } 2019-09-04T06:29:30.104+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:30.104+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:30.104+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:00.104+0000 2019-09-04T06:29:30.104+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:00.104+0000 2019-09-04T06:29:30.105+0000 D2 ASIO [RS] Request 423 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578570, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578570097), o: { $v: 1, $set: { ping: new Date(1567578570097) } } }, { ts: Timestamp(1567578570, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578570097), o: { $v: 1, $set: { ping: new Date(1567578570096) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpApplied: { ts: Timestamp(1567578570, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: 
Timestamp(1567578570, 3) } 2019-09-04T06:29:30.105+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578570, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578570097), o: { $v: 1, $set: { ping: new Date(1567578570097) } } }, { ts: Timestamp(1567578570, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578570097), o: { $v: 1, $set: { ping: new Date(1567578570096) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpApplied: { ts: Timestamp(1567578570, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:30.105+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:30.105+0000 D2 REPL [replication-0] oplog fetcher read 2 operations from remote oplog starting at ts: Timestamp(1567578570, 2) and ending at ts: Timestamp(1567578570, 3) 2019-09-04T06:29:30.105+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:40.771+0000 2019-09-04T06:29:30.105+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:40.679+0000 2019-09-04T06:29:30.105+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578570, 3), t: 1 } 2019-09-04T06:29:30.105+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:30.105+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:30.106+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578570, 1) 2019-09-04T06:29:30.106+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6435 2019-09-04T06:29:30.106+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:30.106+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:30.106+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6435 2019-09-04T06:29:30.106+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 
2019-09-04T06:29:30.106+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:30.106+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578570, 1) 2019-09-04T06:29:30.106+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6438 2019-09-04T06:29:30.106+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:30.106+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:30.106+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6438 2019-09-04T06:29:30.106+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:30.106+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:58.838+0000 2019-09-04T06:29:30.106+0000 D2 REPL [rsSync-0] replication batch size is 2 2019-09-04T06:29:30.106+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578570, 2) } 2019-09-04T06:29:30.106+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6432 2019-09-04T06:29:30.106+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6432 2019-09-04T06:29:30.106+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6441 2019-09-04T06:29:30.106+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6441 2019-09-04T06:29:30.106+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:30.106+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 6443 2019-09-04T06:29:30.106+0000 D4 STORAGE [repl-writer-worker-0] inserting record with timestamp Timestamp(1567578570, 2) 2019-09-04T06:29:30.106+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578570, 2) 2019-09-04T06:29:30.106+0000 D4 STORAGE [repl-writer-worker-0] inserting record with timestamp Timestamp(1567578570, 3) 2019-09-04T06:29:30.106+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578570, 3) 2019-09-04T06:29:30.106+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 6443 2019-09-04T06:29:30.106+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:30.106+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:30.106+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6442 2019-09-04T06:29:30.106+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6442 2019-09-04T06:29:30.106+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6445 2019-09-04T06:29:30.106+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6445 2019-09-04T06:29:30.106+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578570, 3), t: 1 }({ ts: Timestamp(1567578570, 3), t: 1 }) 2019-09-04T06:29:30.106+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578570, 3) 
2019-09-04T06:29:30.106+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6446 2019-09-04T06:29:30.106+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578570, 3) } } ] } sort: {} projection: {} 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578570, 3) Sort: {} Proj: {} ============================= 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578570, 3) || First: notFirst: full path: ts 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578570, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578570, 3) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:30.106+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:30.107+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578570, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:30.107+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:30.107+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578570, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:30.107+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6446 2019-09-04T06:29:30.107+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:30.107+0000 D3 STORAGE [repl-writer-worker-15] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:30.107+0000 D3 REPL [repl-writer-worker-15] applying op: { ts: Timestamp(1567578570, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578570097), o: { $v: 1, $set: { ping: new Date(1567578570097) } } }, oplog application mode: Secondary 2019-09-04T06:29:30.107+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578570, 2) 2019-09-04T06:29:30.107+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 6448 2019-09-04T06:29:30.107+0000 D2 QUERY [repl-writer-worker-15] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" } 2019-09-04T06:29:30.107+0000 D4 WRITE [repl-writer-worker-15] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:29:30.107+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 6448 2019-09-04T06:29:30.107+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:30.107+0000 D3 STORAGE [repl-writer-worker-15] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:30.107+0000 D3 REPL [repl-writer-worker-15] applying op: { ts: Timestamp(1567578570, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578570097), o: { $v: 1, $set: { ping: new Date(1567578570096) } } }, oplog application mode: Secondary 2019-09-04T06:29:30.107+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578570, 3) 2019-09-04T06:29:30.107+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 6450 2019-09-04T06:29:30.107+0000 D2 QUERY [repl-writer-worker-15] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" } 2019-09-04T06:29:30.107+0000 
D4 WRITE [repl-writer-worker-15] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:29:30.107+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 6450 2019-09-04T06:29:30.107+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:30.107+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:30.107+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578570, 3), t: 1 }({ ts: Timestamp(1567578570, 3), t: 1 }) 2019-09-04T06:29:30.107+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578570, 3) 2019-09-04T06:29:30.107+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6447 2019-09-04T06:29:30.107+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:30.107+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:30.107+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:29:30.107+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:30.107+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:30.107+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:30.107+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6447 2019-09-04T06:29:30.107+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578570, 3) 2019-09-04T06:29:30.107+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:30.107+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6453 2019-09-04T06:29:30.107+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6453 2019-09-04T06:29:30.107+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, appliedWallTime: new Date(1567578570097), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:30.107+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: 
RemoteCommand 424 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:00.107+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, appliedWallTime: new Date(1567578570097), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:30.107+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:00.107+0000 2019-09-04T06:29:30.107+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578570, 3), t: 1 }({ ts: Timestamp(1567578570, 3), t: 1 }) 2019-09-04T06:29:30.107+0000 D2 ASIO [RS] Request 424 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } 2019-09-04T06:29:30.107+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:30.107+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:30.108+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:00.107+0000 2019-09-04T06:29:30.108+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578570, 3), t: 1 } 2019-09-04T06:29:30.108+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 425 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:40.108+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, 
lastKnownCommittedOpTime: { ts: Timestamp(1567578565, 1), t: 1 } } 2019-09-04T06:29:30.108+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:00.107+0000 2019-09-04T06:29:30.113+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:29:30.113+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:30.113+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578570, 1), t: 1 }, durableWallTime: new Date(1567578570091), appliedOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, appliedWallTime: new Date(1567578570097), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:30.113+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 426 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:00.113+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578570, 1), t: 1 }, durableWallTime: new Date(1567578570091), appliedOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, appliedWallTime: new Date(1567578570097), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:30.113+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:00.107+0000 2019-09-04T06:29:30.113+0000 D2 ASIO [RS] Request 426 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: 
Timestamp(1567578570, 3) } 2019-09-04T06:29:30.113+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578565, 1), t: 1 }, lastCommittedWall: new Date(1567578565630), lastOpVisible: { ts: Timestamp(1567578565, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578565, 1), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:30.113+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:30.113+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:00.107+0000 2019-09-04T06:29:30.114+0000 D2 ASIO [RS] Request 425 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 1), t: 1 }, lastCommittedWall: new Date(1567578570091), lastOpVisible: { ts: Timestamp(1567578570, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578570, 1), t: 1 }, lastCommittedWall: new Date(1567578570091), lastOpApplied: { ts: Timestamp(1567578570, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578570, 1), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } 2019-09-04T06:29:30.114+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 1), t: 1 }, lastCommittedWall: new Date(1567578570091), lastOpVisible: { ts: Timestamp(1567578570, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578570, 1), t: 1 }, lastCommittedWall: new Date(1567578570091), lastOpApplied: { ts: Timestamp(1567578570, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578570, 1), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:30.114+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:30.114+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:29:30.114+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.114+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: 
Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.114+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578565, 1) 2019-09-04T06:29:30.114+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:40.679+0000 2019-09-04T06:29:30.114+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:40.144+0000 2019-09-04T06:29:30.114+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 427 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:40.114+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578570, 1), t: 1 } } 2019-09-04T06:29:30.114+0000 D3 REPL [conn188] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.114+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:00.107+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn188] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.768+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn189] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn189] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:52.054+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn187] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn187] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.753+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn179] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn179] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:46.417+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn166] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn166] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.275+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn175] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn175] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn168] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn168] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.272+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn165] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn165] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:35.182+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn183] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn183] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.645+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.114+0000 D3 REPL 
[conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000 2019-09-04T06:29:30.114+0000 D3 REPL [conn174] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.114+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:30.115+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:58.838+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn174] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn190] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn190] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:55.060+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn159] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn159] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:56.299+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn172] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn172] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:58.752+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn139] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn139] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.319+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn176] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn176] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.424+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn170] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn170] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:43.335+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn163] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn163] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.888+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn164] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn164] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.692+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn138] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn138] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.289+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn186] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn186] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.662+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn173] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn173] waitUntilOpTime: waiting for a new snapshot 
until 2019-09-04T06:29:51.664+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn145] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn145] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn177] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn177] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.446+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn167] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn167] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:57.567+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn185] Got notified of new snapshot: { ts: Timestamp(1567578570, 1), t: 1 }, 2019-09-04T06:29:30.091+0000 2019-09-04T06:29:30.115+0000 D3 REPL [conn185] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:59.943+0000 2019-09-04T06:29:30.116+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:29:30.116+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:30.116+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), appliedOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, appliedWallTime: new Date(1567578570097), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 1), t: 1 }, lastCommittedWall: new Date(1567578570091), lastOpVisible: { ts: Timestamp(1567578570, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:30.116+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 428 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:00.116+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), appliedOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, appliedWallTime: new Date(1567578570097), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, durableWallTime: new Date(1567578565630), appliedOpTime: { ts: Timestamp(1567578565, 1), t: 1 }, appliedWallTime: new Date(1567578565630), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 1), t: 1 }, lastCommittedWall: new Date(1567578570091), lastOpVisible: { ts: Timestamp(1567578570, 1), t: 1 }, configVersion: 2, 
replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:30.116+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:00.107+0000 2019-09-04T06:29:30.116+0000 D2 ASIO [RS] Request 428 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 1), t: 1 }, lastCommittedWall: new Date(1567578570091), lastOpVisible: { ts: Timestamp(1567578570, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578570, 1), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } 2019-09-04T06:29:30.116+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 1), t: 1 }, lastCommittedWall: new Date(1567578570091), lastOpVisible: { ts: Timestamp(1567578570, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578570, 1), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:30.116+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:30.116+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:00.107+0000 2019-09-04T06:29:30.116+0000 D2 ASIO [RS] Request 427 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpApplied: { ts: Timestamp(1567578570, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } 2019-09-04T06:29:30.116+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpApplied: { ts: 
Timestamp(1567578570, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:30.116+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:30.116+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:29:30.116+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.116+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578565, 3) 2019-09-04T06:29:30.117+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:40.144+0000 2019-09-04T06:29:30.117+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:40.918+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn172] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn172] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:58.752+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn187] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn187] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.753+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn168] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn168] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.272+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn189] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn189] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:52.054+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn186] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn186] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.662+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn144] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn144] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:33.257+0000 2019-09-04T06:29:30.117+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:30.117+0000 D3 REPL [conn163] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:58.838+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn163] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.888+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn177] Got notified of new snapshot: { ts: 
Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn177] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.446+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn145] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn145] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn170] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn170] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:43.335+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn164] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn164] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.692+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn173] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn173] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.664+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn139] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn139] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.319+0000 2019-09-04T06:29:30.117+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 429 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:40.117+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578570, 3), t: 1 } } 2019-09-04T06:29:30.117+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:00.107+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn179] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn179] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:46.417+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn166] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn166] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.275+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn165] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn165] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:35.182+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn183] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn183] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.645+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn175] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn175] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn159] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 
2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn159] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:56.299+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn176] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn176] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.424+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn188] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn188] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.768+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn174] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn174] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn190] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn190] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:55.060+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn185] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn185] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:59.943+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn167] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn167] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:57.567+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn138] Got notified of new snapshot: { ts: Timestamp(1567578570, 3), t: 1 }, 2019-09-04T06:29:30.097+0000 2019-09-04T06:29:30.117+0000 D3 REPL [conn138] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.289+0000 2019-09-04T06:29:30.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:30.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:30.171+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:30.171+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:30.200+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:30.202+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:30.202+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:30.203+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578570, 3) 2019-09-04T06:29:30.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:30.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:30.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { 
clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, EFAE826F99BE6CD89638C3487B4B82CAA6F90456), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:30.233+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:30.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, EFAE826F99BE6CD89638C3487B4B82CAA6F90456), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:30.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, EFAE826F99BE6CD89638C3487B4B82CAA6F90456), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:30.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), opTime: { ts: Timestamp(1567578570, 3), t: 1 }, wallTime: new Date(1567578570097) } 2019-09-04T06:29:30.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, EFAE826F99BE6CD89638C3487B4B82CAA6F90456), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:30.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:30.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:30.300+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:30.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:30.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:30.400+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:30.423+0000 D2 COMMAND [conn184] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 1), signature: { hash: BinData(0, C7C317DFCB05C1E2BBC73621E403F791F3E36874), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:30.423+0000 D1 REPL [conn184] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578570, 3), t: 1 } 2019-09-04T06:29:30.423+0000 D3 REPL [conn184] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.433+0000 2019-09-04T06:29:30.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:30.433+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:30.458+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36612 #192 (87 connections now open) 2019-09-04T06:29:30.458+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:30.458+0000 D2 COMMAND [conn192] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:30.458+0000 I NETWORK [conn192] received client metadata from 10.108.2.55:36612 conn192: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:30.458+0000 I COMMAND [conn192] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:30.459+0000 D2 COMMAND [conn192] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578561, 1), signature: { hash: BinData(0, D212476059A77AEDF4B121F0DF010997DB78B838), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:30.459+0000 D1 REPL [conn192] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578570, 3), t: 1 } 2019-09-04T06:29:30.459+0000 D3 REPL [conn192] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.469+0000 2019-09-04T06:29:30.500+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:30.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:30.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:30.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:30.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:30.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:30.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:30.600+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:30.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:30.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:30.671+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:30.671+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:30.701+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:30.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:30.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:30.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47136 #193 (88 connections now open) 2019-09-04T06:29:30.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:30.743+0000 D2 COMMAND [conn193] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:30.743+0000 I NETWORK [conn193] received client metadata from 10.108.2.52:47136 conn193: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:30.743+0000 I COMMAND [conn193] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: 
[ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:30.743+0000 D2 COMMAND [conn193] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578562, 1), signature: { hash: BinData(0, D409C985E59557A0ED9DA32FA1E2AFD0B3BC04D7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:30.743+0000 D1 REPL [conn193] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578570, 3), t: 1 } 2019-09-04T06:29:30.743+0000 D3 REPL [conn193] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.753+0000 2019-09-04T06:29:30.753+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50070 #194 (89 connections now open) 2019-09-04T06:29:30.753+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:30.753+0000 D2 COMMAND [conn194] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:30.753+0000 I NETWORK [conn194] received client metadata from 10.108.2.50:50070 conn194: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:30.753+0000 I COMMAND [conn194] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:30.753+0000 D2 COMMAND [conn194] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578569, 1), signature: { hash: BinData(0, 7B33B4EC422C8F7442E7E40E2C288C538CABC8B5), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:30.753+0000 D1 REPL [conn194] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578570, 3), t: 1 } 2019-09-04T06:29:30.753+0000 D3 REPL [conn194] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.763+0000 2019-09-04T06:29:30.801+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:30.837+0000 D2 COMMAND 
[conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:30.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:30.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:30.838+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 430) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:30.838+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 430 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:40.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:30.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:30.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 431) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:30.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 431 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:40.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:30.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:58.838+0000 2019-09-04T06:29:30.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:58.838+0000 2019-09-04T06:29:30.838+0000 D2 ASIO [Replication] Request 430 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), opTime: { ts: Timestamp(1567578570, 3), t: 1 }, wallTime: new Date(1567578570097), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } 2019-09-04T06:29:30.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), opTime: { ts: Timestamp(1567578570, 3), t: 1 }, wallTime: new Date(1567578570097), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, 
lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:30.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:30.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 430) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), opTime: { ts: Timestamp(1567578570, 3), t: 1 }, wallTime: new Date(1567578570097), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } 2019-09-04T06:29:30.838+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:30.838+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:29:40.918+0000 2019-09-04T06:29:30.838+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:29:40.862+0000 2019-09-04T06:29:30.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:30.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:32.838Z 2019-09-04T06:29:30.838+0000 D2 ASIO [Replication] Request 431 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), opTime: { ts: Timestamp(1567578570, 3), t: 1 }, wallTime: new Date(1567578570097), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } 2019-09-04T06:29:30.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), opTime: { ts: Timestamp(1567578570, 3), t: 1 }, wallTime: new Date(1567578570097), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 
}, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:30.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:30.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:58.838+0000 2019-09-04T06:29:30.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:58.838+0000 2019-09-04T06:29:30.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:30.838+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 431) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), opTime: { ts: Timestamp(1567578570, 3), t: 1 }, wallTime: new Date(1567578570097), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } 2019-09-04T06:29:30.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:30.838+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:32.838Z 2019-09-04T06:29:30.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:29:58.838+0000 2019-09-04T06:29:30.886+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38638 #195 (90 connections now open) 2019-09-04T06:29:30.886+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:30.887+0000 D2 COMMAND [conn195] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:30.887+0000 I NETWORK [conn195] received client metadata from 10.108.2.44:38638 conn195: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:30.887+0000 I COMMAND [conn195] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: 
"CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:30.887+0000 D2 COMMAND [conn195] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578568, 1), signature: { hash: BinData(0, 262C353B5DFD2D2918EF11F585DC4092E62B0B4B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:30.887+0000 D1 REPL [conn195] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578570, 3), t: 1 } 2019-09-04T06:29:30.887+0000 D3 REPL [conn195] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.897+0000 2019-09-04T06:29:30.901+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:30.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:30.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:30.952+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52098 #196 (91 connections now open) 2019-09-04T06:29:30.952+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:30.952+0000 D2 COMMAND [conn196] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:30.952+0000 I NETWORK [conn196] received client metadata from 10.108.2.58:52098 conn196: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:30.952+0000 I COMMAND [conn196] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:30.952+0000 D2 COMMAND [conn196] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578563, 1), signature: { hash: BinData(0, 2A0BBF5A114B4F2CAB0EF4784300FF5254D97052), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 
} }, $db: "config" } 2019-09-04T06:29:30.952+0000 D1 REPL [conn196] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578570, 3), t: 1 } 2019-09-04T06:29:30.952+0000 D3 REPL [conn196] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.962+0000 2019-09-04T06:29:30.976+0000 I NETWORK [listener] connection accepted from 10.108.2.46:40942 #197 (92 connections now open) 2019-09-04T06:29:30.976+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:30.976+0000 D2 COMMAND [conn197] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:30.976+0000 I NETWORK [conn197] received client metadata from 10.108.2.46:40942 conn197: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:30.976+0000 I COMMAND [conn197] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:30.977+0000 D2 COMMAND [conn197] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0FB04B3A73468DCEAB3E65E8617977C014F37989), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:29:30.977+0000 D1 REPL [conn197] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578570, 3), t: 1 } 2019-09-04T06:29:30.977+0000 D3 REPL [conn197] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.987+0000 2019-09-04T06:29:31.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:31.001+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:31.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:31.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:31.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:31.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:31.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: 
"configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, EFAE826F99BE6CD89638C3487B4B82CAA6F90456), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:31.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:31.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, EFAE826F99BE6CD89638C3487B4B82CAA6F90456), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:31.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, EFAE826F99BE6CD89638C3487B4B82CAA6F90456), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:31.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), opTime: { ts: Timestamp(1567578570, 3), t: 1 }, wallTime: new Date(1567578570097) } 2019-09-04T06:29:31.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, EFAE826F99BE6CD89638C3487B4B82CAA6F90456), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:31.101+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:31.106+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:31.106+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:31.106+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578570, 3) 2019-09-04T06:29:31.106+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6489 2019-09-04T06:29:31.106+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:31.106+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:31.106+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6489 2019-09-04T06:29:31.107+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6492 2019-09-04T06:29:31.107+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6492 2019-09-04T06:29:31.107+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578570, 3), t: 1 }({ ts: Timestamp(1567578570, 3), t: 1 }) 
2019-09-04T06:29:31.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:31.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:31.201+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:31.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:31.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:31.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:31.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:31.301+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:31.337+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:31.337+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:31.402+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:31.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:31.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:31.502+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:31.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:31.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:31.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:31.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:31.602+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:31.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:31.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:31.702+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:31.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:31.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:31.802+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:31.837+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:31.837+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:31.902+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:31.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:31.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:32.000+0000 D3 STORAGE [ftdc] setting timestamp 
read source: 1, provided timestamp: none 2019-09-04T06:29:32.003+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:32.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:32.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:32.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:32.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:32.103+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:32.106+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:32.106+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:32.106+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578570, 3) 2019-09-04T06:29:32.106+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6509 2019-09-04T06:29:32.106+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:32.106+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:32.106+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6509 2019-09-04T06:29:32.108+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6512 2019-09-04T06:29:32.108+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6512 2019-09-04T06:29:32.108+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578570, 3), t: 1 }({ ts: Timestamp(1567578570, 3), t: 1 }) 2019-09-04T06:29:32.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:32.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:32.203+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:32.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:32.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:32.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, EFAE826F99BE6CD89638C3487B4B82CAA6F90456), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:32.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:32.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 
EFAE826F99BE6CD89638C3487B4B82CAA6F90456), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:32.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, EFAE826F99BE6CD89638C3487B4B82CAA6F90456), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:32.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), opTime: { ts: Timestamp(1567578570, 3), t: 1 }, wallTime: new Date(1567578570097) } 2019-09-04T06:29:32.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, EFAE826F99BE6CD89638C3487B4B82CAA6F90456), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:32.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:32.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:32.303+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:32.403+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:32.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:32.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:32.503+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:32.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:32.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:32.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:32.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:32.603+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:32.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:32.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:32.704+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:32.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:32.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:32.804+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:32.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:32.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 
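[Note: the heartbeat traffic here is the protocol's steady state: this node (cmodb803, fromId 1) exchanges replSetHeartbeat with cmodb802 (the primary, state 1) and cmodb804 (a secondary, state 2) every 2 seconds, consistent with the default heartbeatIntervalMillis of 2000, and each heartbeat received from the primary pushes the election timeout callback roughly 10 seconds into the future, consistent with the default electionTimeoutMillis of 10000 plus a small randomized offset.]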
2019-09-04T06:29:32.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 432) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:32.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 432 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:42.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:32.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:29:58.838+0000 2019-09-04T06:29:32.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 433) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:32.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 433 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:42.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:32.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:29:58.838+0000 2019-09-04T06:29:32.838+0000 D2 ASIO [Replication] Request 432 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), opTime: { ts: Timestamp(1567578570, 3), t: 1 }, wallTime: new Date(1567578570097), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } 2019-09-04T06:29:32.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), opTime: { ts: Timestamp(1567578570, 3), t: 1 }, wallTime: new Date(1567578570097), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:32.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:32.838+0000 D2 ASIO [Replication] Request 433 
finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), opTime: { ts: Timestamp(1567578570, 3), t: 1 }, wallTime: new Date(1567578570097), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } 2019-09-04T06:29:32.838+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 432) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), opTime: { ts: Timestamp(1567578570, 3), t: 1 }, wallTime: new Date(1567578570097), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } 2019-09-04T06:29:32.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), opTime: { ts: Timestamp(1567578570, 3), t: 1 }, wallTime: new Date(1567578570097), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:32.838+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:32.838+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:29:40.862+0000 2019-09-04T06:29:32.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:32.838+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:29:43.818+0000 2019-09-04T06:29:32.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member 
_id:MemberId(0) 2019-09-04T06:29:32.838+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:34.838Z 2019-09-04T06:29:32.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:32.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:02.838+0000 2019-09-04T06:29:32.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:02.838+0000 2019-09-04T06:29:32.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 433) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), opTime: { ts: Timestamp(1567578570, 3), t: 1 }, wallTime: new Date(1567578570097), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578570, 3) } 2019-09-04T06:29:32.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:32.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:34.838Z 2019-09-04T06:29:32.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:02.838+0000 2019-09-04T06:29:32.904+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:32.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:32.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:33.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:33.004+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:33.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:33.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:33.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:33.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:33.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, EFAE826F99BE6CD89638C3487B4B82CAA6F90456), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:33.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:33.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { 
replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, EFAE826F99BE6CD89638C3487B4B82CAA6F90456), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:33.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, EFAE826F99BE6CD89638C3487B4B82CAA6F90456), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:33.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), opTime: { ts: Timestamp(1567578570, 3), t: 1 }, wallTime: new Date(1567578570097) } 2019-09-04T06:29:33.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 3), signature: { hash: BinData(0, EFAE826F99BE6CD89638C3487B4B82CAA6F90456), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:33.104+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:33.106+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:33.106+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:33.106+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578570, 3) 2019-09-04T06:29:33.106+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6528 2019-09-04T06:29:33.106+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:33.106+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:33.106+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6528 2019-09-04T06:29:33.108+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6531 2019-09-04T06:29:33.108+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6531 2019-09-04T06:29:33.108+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578570, 3), t: 1 }({ ts: Timestamp(1567578570, 3), t: 1 }) 2019-09-04T06:29:33.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:33.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:33.204+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:33.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:33.208+0000 I 
COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:33.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:33.210+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:33.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:29:33.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:29:33.260+0000 I COMMAND [conn144] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578543, 1), signature: { hash: BinData(0, 39254E2B47ABC88D4706F0088450D3CFF007B444), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:29:33.260+0000 D1 - [conn144] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:33.260+0000 W - [conn144] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:33.277+0000 I - [conn144] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:33.277+0000 D1 COMMAND [conn144] assertion while executing command 'find' on database 'config' with arguments '{ find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578543, 1), signature: { hash: BinData(0, 39254E2B47ABC88D4706F0088450D3CFF007B444), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:33.277+0000 D1 - [conn144] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:33.277+0000 W - [conn144] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:33.297+0000 I - [conn144] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 
0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, 
"buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : 
"2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:33.297+0000 W COMMAND [conn144] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:33.297+0000 I COMMAND [conn144] command config.$cmd command: find { find: "collections", filter: { _id: 
"config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578543, 1), signature: { hash: BinData(0, 39254E2B47ABC88D4706F0088450D3CFF007B444), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:29:33.297+0000 D2 NETWORK [conn144] Session from 10.108.2.58:52060 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:33.297+0000 I NETWORK [conn144] end connection 10.108.2.58:52060 (91 connections now open) 2019-09-04T06:29:33.305+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:33.405+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:33.407+0000 D2 ASIO [RS] Request 429 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578573, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578573395), o: { $v: 1, $set: { ping: new Date(1567578573390) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpApplied: { ts: Timestamp(1567578573, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578573, 1) } 2019-09-04T06:29:33.407+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578573, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578573395), o: { $v: 1, $set: { ping: new Date(1567578573390) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpApplied: { ts: Timestamp(1567578573, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578573, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:33.407+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:33.407+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578573, 1) and ending at ts: Timestamp(1567578573, 1) 2019-09-04T06:29:33.407+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:43.818+0000 2019-09-04T06:29:33.407+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:44.548+0000 2019-09-04T06:29:33.407+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:33.407+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:02.838+0000 2019-09-04T06:29:33.407+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578573, 1), t: 1 } 2019-09-04T06:29:33.407+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:33.407+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:33.407+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578570, 3) 2019-09-04T06:29:33.407+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6538 2019-09-04T06:29:33.407+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:33.407+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:33.407+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6538 2019-09-04T06:29:33.407+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:33.407+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:29:33.407+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:33.407+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578570, 3) 2019-09-04T06:29:33.407+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6541 2019-09-04T06:29:33.407+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:33.407+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:33.407+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578573, 1) } 2019-09-04T06:29:33.407+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6541 2019-09-04T06:29:33.407+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6532 
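The MaxTimeMSExpired failure logged above for conn144 can be exercised by hand. Below is a minimal mongo-shell sketch (not taken from the log; the command document values are copied from the conn144 entry): with readConcern level "majority" plus afterOpTime, the server blocks until that optime is majority-committed, and maxTimeMS caps the wait. Note that the requested afterOpTime carries term 92 while every optime in these entries is term 1, so the wait cannot be satisfied and the command runs the full 30000ms before failing, exactly as logged.

  // Hedged sketch: re-issue the find that timed out on conn144, from a shell
  // connected to this config server. Values copied from the log entry.
  db.getSiblingDB("config").runCommand({
    find: "collections",
    filter: { _id: "config.system.sessions" },
    limit: 1,
    readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } },
    maxTimeMS: 30000
  })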
2019-09-04T06:29:33.407+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6532 2019-09-04T06:29:33.407+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6544 2019-09-04T06:29:33.407+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6544 2019-09-04T06:29:33.407+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:33.407+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 6546 2019-09-04T06:29:33.407+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578573, 1) 2019-09-04T06:29:33.407+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578573, 1) 2019-09-04T06:29:33.407+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 6546 2019-09-04T06:29:33.407+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:33.407+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:33.407+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6545 2019-09-04T06:29:33.408+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6545 2019-09-04T06:29:33.408+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6548 2019-09-04T06:29:33.408+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6548 2019-09-04T06:29:33.408+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578573, 1), t: 1 }({ ts: Timestamp(1567578573, 1), t: 1 }) 2019-09-04T06:29:33.408+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578573, 1) 2019-09-04T06:29:33.408+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6549 2019-09-04T06:29:33.408+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578573, 1) } } ] } sort: {} projection: {} 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578573, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578573, 1) || First: notFirst: full path: ts 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
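The rsSync-0 entries above show the batch applier's durability bookkeeping: the oplog truncate after point is raised before the batch is written and reset to Timestamp(0, 0) once it is safely applied, and minvalid records the optime the node must reach to be consistent. Both live in unreplicated collections on this node and can be inspected directly; a minimal mongo-shell sketch follows (collection names taken from the log's local.replset.* namespaces, assumed stable for 4.2):

  // Hedged sketch: inspect the applier's recovery documents on this member.
  db.getSiblingDB("local").replset.minvalid.findOne()                 // { ts, t } target optime
  db.getSiblingDB("local").replset.oplogTruncateAfterPoint.findOne()  // non-zero only mid-batch

The D5 QUERY planner trace for this minvalid write's $or predicate continues below.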
2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578573, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578573, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578573, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
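Each subplan in the trace above ends in a COLLSCAN because _id is the only index on local.replset.minvalid and the predicates are on t and ts; for a single-document bookkeeping collection that is the expected plan, not a problem. The same conclusion can be checked interactively; a minimal mongo-shell sketch (the explain() output shape is assumed per standard 4.2 behaviour):

  // Hedged sketch: confirm the winning plan for the $or predicate seen above.
  db.getSiblingDB("local").replset.minvalid
    .find({ $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578573, 1) } } ] })
    .explain()
    .queryPlanner.winningPlan   // expect COLLSCAN (under a SUBPLAN stage for a rooted $or)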
2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578573, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:33.408+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6549 2019-09-04T06:29:33.408+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:33.408+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:33.408+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578573, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578573395), o: { $v: 1, $set: { ping: new Date(1567578573390) } } }, oplog application mode: Secondary 2019-09-04T06:29:33.408+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578573, 1) 2019-09-04T06:29:33.408+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 6551 2019-09-04T06:29:33.408+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" } 2019-09-04T06:29:33.408+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:29:33.408+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 6551 2019-09-04T06:29:33.408+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:33.408+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578573, 1), t: 1 }({ ts: Timestamp(1567578573, 1), t: 1 }) 2019-09-04T06:29:33.408+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578573, 1) 2019-09-04T06:29:33.408+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6550 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:33.408+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:33.408+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:33.408+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6550 2019-09-04T06:29:33.408+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578573, 1) 2019-09-04T06:29:33.408+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:33.408+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6554 2019-09-04T06:29:33.408+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), appliedOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, appliedWallTime: new Date(1567578570097), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), appliedOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, appliedWallTime: new Date(1567578573395), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), appliedOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, appliedWallTime: new Date(1567578570097), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:33.408+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 434 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:03.408+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), appliedOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, appliedWallTime: new Date(1567578570097), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), appliedOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, appliedWallTime: new Date(1567578573395), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), appliedOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, appliedWallTime: new Date(1567578570097), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:33.408+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:03.408+0000 2019-09-04T06:29:33.408+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6554 2019-09-04T06:29:33.409+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578573, 1), t: 1 }({ ts: Timestamp(1567578573, 1), t: 1 }) 2019-09-04T06:29:33.409+0000 D2 ASIO [RS] Request 434 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578573, 1) } 2019-09-04T06:29:33.409+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578570, 3), t: 1 }, lastCommittedWall: new Date(1567578570097), lastOpVisible: { ts: Timestamp(1567578570, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578570, 3), $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578573, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:33.409+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:33.409+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:03.409+0000 2019-09-04T06:29:33.409+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578573, 1), t: 1 } 2019-09-04T06:29:33.409+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 435 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:43.409+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578570, 3), t: 1 } } 2019-09-04T06:29:33.409+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:03.409+0000 2019-09-04T06:29:33.416+0000 D2 ASIO [RS] Request 435 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpApplied: { ts: Timestamp(1567578573, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578573, 1), $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578573, 1) } 2019-09-04T06:29:33.416+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new 
Date(1567578573395), lastOpApplied: { ts: Timestamp(1567578573, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578573, 1), $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578573, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:33.416+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:33.416+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:29:33.416+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.416+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.416+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578568, 1) 2019-09-04T06:29:33.416+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:44.548+0000 2019-09-04T06:29:33.416+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:44.893+0000 2019-09-04T06:29:33.416+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:33.416+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:02.838+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn187] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn187] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.753+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn168] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn168] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.272+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn189] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn189] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:52.054+0000 2019-09-04T06:29:33.417+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 436 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:43.416+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578573, 1), t: 1 } } 2019-09-04T06:29:33.417+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:03.409+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn176] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn176] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.424+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn179] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn179] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:46.417+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn183] Got notified of new snapshot: { ts: 
Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn183] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.645+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn165] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn165] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:35.182+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn166] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn166] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.275+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn139] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn139] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.319+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn173] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn173] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.664+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn164] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn164] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.692+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn193] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn193] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.753+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn194] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn194] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.763+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn195] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn195] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.897+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn172] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn172] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:58.752+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn163] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn163] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.888+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn177] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn177] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.446+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn145] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn145] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn170] Got notified of new snapshot: { ts: 
Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn170] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:43.335+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn175] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn175] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn159] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn159] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:56.299+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn186] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn186] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.662+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn188] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn188] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.768+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn174] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn174] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn190] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn190] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:55.060+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn185] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn185] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:59.943+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn167] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn167] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:57.567+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn138] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn138] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.289+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn184] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn184] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.433+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn192] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn192] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.469+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn196] Got notified of new snapshot: { ts: Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn196] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.962+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn197] Got notified of new snapshot: { ts: 
Timestamp(1567578573, 1), t: 1 }, 2019-09-04T06:29:33.395+0000 2019-09-04T06:29:33.417+0000 D3 REPL [conn197] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.987+0000 2019-09-04T06:29:33.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:33.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:33.440+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:29:33.440+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:33.440+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), appliedOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, appliedWallTime: new Date(1567578570097), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), appliedOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, appliedWallTime: new Date(1567578573395), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), appliedOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, appliedWallTime: new Date(1567578570097), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:33.440+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 437 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:03.440+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), appliedOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, appliedWallTime: new Date(1567578570097), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), appliedOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, appliedWallTime: new Date(1567578573395), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, durableWallTime: new Date(1567578570097), appliedOpTime: { ts: Timestamp(1567578570, 3), t: 1 }, appliedWallTime: new Date(1567578570097), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:33.441+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:03.409+0000 2019-09-04T06:29:33.441+0000 D2 ASIO [RS] Request 437 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578573, 1), $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578573, 1) } 2019-09-04T06:29:33.441+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578573, 1), $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578573, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:33.441+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:33.441+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:03.409+0000 2019-09-04T06:29:33.472+0000 I NETWORK [listener] connection accepted from 10.108.2.63:36258 #198 (92 connections now open) 2019-09-04T06:29:33.472+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:33.472+0000 D2 COMMAND [conn198] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:33.472+0000 I NETWORK [conn198] received client metadata from 10.108.2.63:36258 conn198: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:33.472+0000 I COMMAND [conn198] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:33.475+0000 D2 COMMAND [conn198] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578572, 1), signature: { hash: BinData(0, BEAA603D37F22B007C54F93405BE7B4D709F86E2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:33.475+0000 D1 REPL [conn198] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578573, 1), t: 1 } 2019-09-04T06:29:33.475+0000 
D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000 2019-09-04T06:29:33.505+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:33.507+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578573, 1) 2019-09-04T06:29:33.510+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:29:33.510+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:29:33.510+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:33.511+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:29:33.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:33.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:33.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:33.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:33.605+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:33.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:33.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:33.705+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:33.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:33.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:33.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:33.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:33.805+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:33.906+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:33.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:33.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:34.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:34.006+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:34.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:34.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:34.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:34.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:29:34.106+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:34.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:34.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:34.206+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:34.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:34.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:34.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:34.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:34.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, A8C7A77B71DA3A257FE233C1A19F3747615CB4CD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:34.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:34.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, A8C7A77B71DA3A257FE233C1A19F3747615CB4CD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:34.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, A8C7A77B71DA3A257FE233C1A19F3747615CB4CD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:34.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), opTime: { ts: Timestamp(1567578573, 1), t: 1 }, wallTime: new Date(1567578573395) } 2019-09-04T06:29:34.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, A8C7A77B71DA3A257FE233C1A19F3747615CB4CD), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:34.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:34.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:34.306+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:34.406+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:34.407+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:34.407+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:34.407+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578573, 1) 2019-09-04T06:29:34.407+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6577 2019-09-04T06:29:34.407+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:34.407+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:34.407+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6577 2019-09-04T06:29:34.409+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6580 2019-09-04T06:29:34.409+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6580 2019-09-04T06:29:34.409+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578573, 1), t: 1 }({ ts: Timestamp(1567578573, 1), t: 1 }) 2019-09-04T06:29:34.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:34.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:34.506+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:34.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:34.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:34.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:34.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:34.607+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:34.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:34.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:34.707+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:34.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:34.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:34.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:34.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:34.807+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
2019-09-04T06:29:34.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:34.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:34.838+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 438) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:34.838+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 438 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:44.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:34.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:02.838+0000 2019-09-04T06:29:34.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 439) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:34.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 439 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:44.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:34.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:02.838+0000 2019-09-04T06:29:34.838+0000 D2 ASIO [Replication] Request 438 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), opTime: { ts: Timestamp(1567578573, 1), t: 1 }, wallTime: new Date(1567578573395), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578573, 1), $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578573, 1) } 2019-09-04T06:29:34.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), opTime: { ts: Timestamp(1567578573, 1), t: 1 }, wallTime: new Date(1567578573395), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578573, 1), $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578573, 1) } 
target: cmodb802.togewa.com:27019 2019-09-04T06:29:34.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:34.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 438) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), opTime: { ts: Timestamp(1567578573, 1), t: 1 }, wallTime: new Date(1567578573395), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578573, 1), $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578573, 1) } 2019-09-04T06:29:34.838+0000 D2 ASIO [Replication] Request 439 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), opTime: { ts: Timestamp(1567578573, 1), t: 1 }, wallTime: new Date(1567578573395), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578573, 1), $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578573, 1) } 2019-09-04T06:29:34.838+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:34.838+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:29:44.893+0000 2019-09-04T06:29:34.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), opTime: { ts: Timestamp(1567578573, 1), t: 1 }, wallTime: new Date(1567578573395), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578573, 1), $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578573, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:34.838+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:29:46.150+0000 
2019-09-04T06:29:34.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:34.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:34.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:36.838Z 2019-09-04T06:29:34.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:34.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:04.838+0000 2019-09-04T06:29:34.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:04.838+0000 2019-09-04T06:29:34.838+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 439) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), opTime: { ts: Timestamp(1567578573, 1), t: 1 }, wallTime: new Date(1567578573395), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578573, 1), $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578573, 1) } 2019-09-04T06:29:34.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:34.838+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:36.838Z 2019-09-04T06:29:34.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:04.838+0000 2019-09-04T06:29:34.907+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:34.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:34.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:34.977+0000 D2 ASIO [RS] Request 436 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578574, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578574963), o: { $v: 1, $set: { ping: new Date(1567578574959), up: 2475 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpApplied: { ts: Timestamp(1567578574, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578573, 1), 
$clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578574, 1) } 2019-09-04T06:29:34.977+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578574, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578574963), o: { $v: 1, $set: { ping: new Date(1567578574959), up: 2475 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpApplied: { ts: Timestamp(1567578574, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578573, 1), $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578574, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:34.977+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:34.977+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578574, 1) and ending at ts: Timestamp(1567578574, 1) 2019-09-04T06:29:34.977+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:46.150+0000 2019-09-04T06:29:34.977+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:46.475+0000 2019-09-04T06:29:34.977+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:34.977+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578574, 1), t: 1 } 2019-09-04T06:29:34.977+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:34.977+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:34.977+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578573, 1) 2019-09-04T06:29:34.977+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6590 2019-09-04T06:29:34.978+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:34.978+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:34.978+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6590 2019-09-04T06:29:34.978+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:29:34.978+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided 
timestamp: none 2019-09-04T06:29:34.978+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:34.978+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578574, 1) } 2019-09-04T06:29:34.978+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578573, 1) 2019-09-04T06:29:34.978+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6593 2019-09-04T06:29:34.978+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:34.978+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:34.978+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6593 2019-09-04T06:29:34.978+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6581 2019-09-04T06:29:34.977+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:04.838+0000 2019-09-04T06:29:34.978+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6581 2019-09-04T06:29:34.978+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6596 2019-09-04T06:29:34.978+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6596 2019-09-04T06:29:34.978+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:34.978+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 6598 2019-09-04T06:29:34.978+0000 D4 STORAGE [repl-writer-worker-7] inserting record with timestamp Timestamp(1567578574, 1) 2019-09-04T06:29:34.978+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578574, 1) 2019-09-04T06:29:34.978+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 6598 2019-09-04T06:29:34.978+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:34.978+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:34.978+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6597 2019-09-04T06:29:34.978+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6597 2019-09-04T06:29:34.978+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6600 2019-09-04T06:29:34.978+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6600 2019-09-04T06:29:34.978+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578574, 1), t: 1 }({ ts: Timestamp(1567578574, 1), t: 1 }) 2019-09-04T06:29:34.978+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578574, 1) 2019-09-04T06:29:34.978+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6601 2019-09-04T06:29:34.978+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578574, 1) } } ] } sort: {} projection: {} 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 
}, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578574, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578574, 1) || First: notFirst: full path: ts 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578574, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578574, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578574, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:29:34.978+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578574, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:34.978+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6601 2019-09-04T06:29:34.978+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:34.978+0000 D3 STORAGE [repl-writer-worker-11] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:34.978+0000 D3 REPL [repl-writer-worker-11] applying op: { ts: Timestamp(1567578574, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578574963), o: { $v: 1, $set: { ping: new Date(1567578574959), up: 2475 } } }, oplog application mode: Secondary 2019-09-04T06:29:34.978+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578574, 1) 2019-09-04T06:29:34.978+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 6603 2019-09-04T06:29:34.978+0000 D2 QUERY [repl-writer-worker-11] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:29:34.979+0000 D4 WRITE [repl-writer-worker-11] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:29:34.979+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 6603 2019-09-04T06:29:34.979+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:34.979+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578574, 1), t: 1 }({ ts: Timestamp(1567578574, 1), t: 1 }) 2019-09-04T06:29:34.979+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578574, 1) 2019-09-04T06:29:34.979+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6602 2019-09-04T06:29:34.979+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:34.979+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:34.979+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:29:34.979+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:34.979+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:34.979+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:34.979+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6602 2019-09-04T06:29:34.979+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578574, 1) 2019-09-04T06:29:34.979+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6606 2019-09-04T06:29:34.979+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6606 2019-09-04T06:29:34.979+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578574, 1), t: 1 }({ ts: Timestamp(1567578574, 1), t: 1 }) 2019-09-04T06:29:34.979+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:34.979+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), appliedOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, appliedWallTime: new Date(1567578573395), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), appliedOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, appliedWallTime: new Date(1567578573395), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:34.979+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 440 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:04.979+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), appliedOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, appliedWallTime: new Date(1567578573395), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), appliedOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, appliedWallTime: new Date(1567578573395), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:34.979+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:04.979+0000 2019-09-04T06:29:34.979+0000 D2 ASIO [RS] Request 440 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578573, 1), $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578574, 1) } 2019-09-04T06:29:34.979+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578573, 1), t: 1 }, lastCommittedWall: new Date(1567578573395), lastOpVisible: { ts: Timestamp(1567578573, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578573, 1), $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578574, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:34.979+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:34.979+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:04.979+0000 2019-09-04T06:29:34.979+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578574, 1), t: 1 } 2019-09-04T06:29:34.979+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 441 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:44.979+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578573, 1), t: 1 } } 2019-09-04T06:29:34.980+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:04.979+0000 2019-09-04T06:29:34.980+0000 D2 ASIO [RS] Request 441 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpVisible: { ts: Timestamp(1567578574, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpApplied: { ts: Timestamp(1567578574, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578574, 1), $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578574, 1) } 2019-09-04T06:29:34.980+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpVisible: { ts: Timestamp(1567578574, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new 
Date(1567578574963), lastOpApplied: { ts: Timestamp(1567578574, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578574, 1), $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578574, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:34.980+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:34.980+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:29:34.980+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.980+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.980+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578569, 1) 2019-09-04T06:29:34.980+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:46.475+0000 2019-09-04T06:29:34.980+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:45.263+0000 2019-09-04T06:29:34.980+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 442 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:44.980+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578574, 1), t: 1 } } 2019-09-04T06:29:34.980+0000 D3 REPL [conn168] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.980+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:04.979+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn168] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.272+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn166] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn166] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.275+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn173] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn173] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.664+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn193] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn193] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.753+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn195] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn195] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.897+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn163] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.980+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { 
ts: Timestamp(1567578574, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59ce02d1a496712d71ce'), operName: "", parentOperId: "5d6f59ce02d1a496712d71cc" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578574, 1), t: 1 } }, $db: "config" } 2019-09-04T06:29:34.980+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:34.980+0000 D3 REPL [conn163] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.888+0000 2019-09-04T06:29:34.980+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:04.838+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn165] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn165] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:35.182+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn139] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn139] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.319+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn164] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn164] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.692+0000 2019-09-04T06:29:34.980+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:29:34.980+0000 D3 REPL [conn194] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn194] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.763+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn172] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn172] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:58.752+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn177] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn177] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.446+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn170] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn170] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:43.335+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn159] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.980+0000 D3 REPL [conn159] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:56.299+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn188] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn188] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.768+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn190] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 
2019-09-04T06:29:34.981+0000 D3 REPL [conn190] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:55.060+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn167] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn167] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:57.567+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn184] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn184] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.433+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn196] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn196] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.962+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn187] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn187] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.753+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn176] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn176] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.424+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn183] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn183] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.645+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn189] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn189] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:52.054+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn179] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn179] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:46.417+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn145] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn145] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn175] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn175] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn186] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn186] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.662+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn174] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn174] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn185] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 
2019-09-04T06:29:34.981+0000 D3 REPL [conn185] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:59.943+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn138] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn138] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.289+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn192] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn192] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.469+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn197] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.981+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f59ce02d1a496712d71cc|5d6f59ce02d1a496712d71ce 2019-09-04T06:29:34.981+0000 D3 REPL [conn197] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.987+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578574, 1), t: 1 }, 2019-09-04T06:29:34.963+0000 2019-09-04T06:29:34.981+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000 2019-09-04T06:29:34.981+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578574, 1), t: 1 } } } 2019-09-04T06:29:34.981+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:34.981+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:29:34.981+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), appliedOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, appliedWallTime: new Date(1567578573395), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), appliedOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, appliedWallTime: new Date(1567578573395), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpVisible: { ts: Timestamp(1567578574, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:34.981+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578574, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59ce02d1a496712d71ce'), operName: "", parentOperId: "5d6f59ce02d1a496712d71cc" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578574, 1), t: 1 } }, $db: "config" } with 
readTs: Timestamp(1567578574, 1) 2019-09-04T06:29:34.981+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 443 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:04.981+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), appliedOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, appliedWallTime: new Date(1567578573395), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, durableWallTime: new Date(1567578573395), appliedOpTime: { ts: Timestamp(1567578573, 1), t: 1 }, appliedWallTime: new Date(1567578573395), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpVisible: { ts: Timestamp(1567578574, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:34.981+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:04.979+0000 2019-09-04T06:29:34.981+0000 D2 QUERY [conn21] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:29:34.981+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578574, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59ce02d1a496712d71ce'), operName: "", parentOperId: "5d6f59ce02d1a496712d71cc" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578574, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:29:34.981+0000 D2 ASIO [RS] Request 443 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpVisible: { ts: Timestamp(1567578574, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578574, 1), $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578574, 1) } 2019-09-04T06:29:34.981+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpVisible: { ts: Timestamp(1567578574, 1), t: 1 }, configVersion: 2, 
replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578574, 1), $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578574, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:34.981+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:34.981+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:04.979+0000 2019-09-04T06:29:35.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:35.007+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:35.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:35.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:35.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:35.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:35.061+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:35.061+0000 D3 REPL [replexec-1] memberData lastupdate is: 2019-09-04T06:29:34.838+0000 2019-09-04T06:29:35.061+0000 D3 REPL [replexec-1] memberData lastupdate is: 2019-09-04T06:29:34.838+0000 2019-09-04T06:29:35.061+0000 D3 REPL [replexec-1] stalest member MemberId(0) date: 2019-09-04T06:29:34.838+0000 2019-09-04T06:29:35.061+0000 D3 REPL [replexec-1] scheduling next check at 2019-09-04T06:29:44.838+0000 2019-09-04T06:29:35.061+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:04.838+0000 2019-09-04T06:29:35.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:35.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:35.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:35.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:35.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 
0, durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), opTime: { ts: Timestamp(1567578574, 1), t: 1 }, wallTime: new Date(1567578574963) } 2019-09-04T06:29:35.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:35.078+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578574, 1) 2019-09-04T06:29:35.107+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:35.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:35.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:35.171+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51742 #199 (93 connections now open) 2019-09-04T06:29:35.171+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:35.172+0000 D2 COMMAND [conn199] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:35.172+0000 I NETWORK [conn199] received client metadata from 10.108.2.74:51742 conn199: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:35.172+0000 I COMMAND [conn199] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:35.183+0000 I COMMAND [conn165] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, C98F7F1FBA3C8F1EFEDAE23E1990CC12FB8D9F3E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:29:35.183+0000 D1 - [conn165] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:35.183+0000 W - [conn165] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:35.200+0000 I - [conn165] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:35.200+0000 D1 COMMAND [conn165] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, C98F7F1FBA3C8F1EFEDAE23E1990CC12FB8D9F3E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:35.200+0000 D1 - [conn165] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:35.200+0000 W - [conn165] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:35.207+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:35.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:35.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:35.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:35.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:35.220+0000 I - [conn165] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:35.220+0000 W COMMAND [conn165] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:35.220+0000 I COMMAND [conn165] command config.$cmd command: find { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 
30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578535, 1), signature: { hash: BinData(0, C98F7F1FBA3C8F1EFEDAE23E1990CC12FB8D9F3E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:29:35.220+0000 D2 NETWORK [conn165] Session from 10.108.2.74:51722 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:35.220+0000 I NETWORK [conn165] end connection 10.108.2.74:51722 (92 connections now open) 2019-09-04T06:29:35.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:35.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:35.308+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:35.408+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:35.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:35.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:35.508+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:35.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:35.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:35.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:35.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:35.608+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:35.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:35.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:35.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:35.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:35.708+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:35.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:35.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:35.808+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:35.909+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:35.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:35.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:35.978+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:35.978+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 
2019-09-04T06:29:35.978+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578574, 1) 2019-09-04T06:29:35.978+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6627 2019-09-04T06:29:35.978+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:35.978+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:35.978+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6627 2019-09-04T06:29:35.979+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6630 2019-09-04T06:29:35.979+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6630 2019-09-04T06:29:35.979+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578574, 1), t: 1 }({ ts: Timestamp(1567578574, 1), t: 1 }) 2019-09-04T06:29:36.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:36.009+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:36.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:36.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:36.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:36.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:36.109+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:36.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:36.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:36.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:36.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:36.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:36.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:36.209+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:36.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:36.233+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:36.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, 
term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:36.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:36.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), opTime: { ts: Timestamp(1567578574, 1), t: 1 }, wallTime: new Date(1567578574963) } 2019-09-04T06:29:36.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:36.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:36.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:36.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:36.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:36.309+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:36.409+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:36.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:36.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:36.509+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:36.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:36.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:36.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:36.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:36.610+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:36.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:36.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:36.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:36.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:29:36.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:36.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:36.710+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:36.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:36.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:36.810+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:36.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:36.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:36.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 444) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:36.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 444 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:46.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:36.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:04.838+0000 2019-09-04T06:29:36.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 445) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:36.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 445 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:46.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:36.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:04.838+0000 2019-09-04T06:29:36.838+0000 D2 ASIO [Replication] Request 444 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), opTime: { ts: Timestamp(1567578574, 1), t: 1 }, wallTime: new Date(1567578574963), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpVisible: { ts: Timestamp(1567578574, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578574, 1), $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578574, 1) } 2019-09-04T06:29:36.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new 
Date(1567578574963), opTime: { ts: Timestamp(1567578574, 1), t: 1 }, wallTime: new Date(1567578574963), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpVisible: { ts: Timestamp(1567578574, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578574, 1), $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578574, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:36.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:36.838+0000 D2 ASIO [Replication] Request 445 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), opTime: { ts: Timestamp(1567578574, 1), t: 1 }, wallTime: new Date(1567578574963), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpVisible: { ts: Timestamp(1567578574, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578574, 1), $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578574, 1) } 2019-09-04T06:29:36.838+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 444) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), opTime: { ts: Timestamp(1567578574, 1), t: 1 }, wallTime: new Date(1567578574963), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpVisible: { ts: Timestamp(1567578574, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578574, 1), $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578574, 1) } 2019-09-04T06:29:36.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), opTime: { ts: Timestamp(1567578574, 1), t: 1 }, wallTime: new Date(1567578574963), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpVisible: { ts: Timestamp(1567578574, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, 
$gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578574, 1), $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578574, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:36.838+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary
2019-09-04T06:29:36.838+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:29:45.263+0000
2019-09-04T06:29:36.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:36.838+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:29:47.850+0000
2019-09-04T06:29:36.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:29:36.838+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:38.838Z
2019-09-04T06:29:36.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:29:36.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:29:36.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:29:36.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 445) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), opTime: { ts: Timestamp(1567578574, 1), t: 1 }, wallTime: new Date(1567578574963), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpVisible: { ts: Timestamp(1567578574, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578574, 1), $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578574, 1) }
2019-09-04T06:29:36.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:29:36.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:38.838Z
2019-09-04T06:29:36.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:29:36.910+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:36.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:36.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:36.978+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:36.978+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:36.978+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578574, 1)
2019-09-04T06:29:36.978+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6649
2019-09-04T06:29:36.978+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:36.978+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:36.978+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6649
2019-09-04T06:29:36.979+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6652
2019-09-04T06:29:36.979+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6652
2019-09-04T06:29:36.979+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578574, 1), t: 1 }({ ts: Timestamp(1567578574, 1), t: 1 })
2019-09-04T06:29:37.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:37.010+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:37.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:37.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:37.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:37.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:37.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:37.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:29:37.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:37.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:37.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), opTime: { ts: Timestamp(1567578574, 1), t: 1 }, wallTime: new Date(1567578574963) }
2019-09-04T06:29:37.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:37.110+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:37.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:37.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:37.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:37.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:37.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:37.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:37.210+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:37.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:29:37.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:29:37.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:37.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:37.311+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:37.356+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:37.356+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:37.411+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:37.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:37.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:37.511+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:37.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:37.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:37.611+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:37.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:37.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:37.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:37.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:37.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:37.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:37.711+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:37.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:37.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:37.811+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:37.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:37.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:37.911+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:37.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:37.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:37.978+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:37.979+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:37.979+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578574, 1)
2019-09-04T06:29:37.979+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6672
2019-09-04T06:29:37.979+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:37.979+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:37.979+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6672
2019-09-04T06:29:37.979+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6675
2019-09-04T06:29:37.979+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6675
2019-09-04T06:29:37.979+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578574, 1), t: 1 }({ ts: Timestamp(1567578574, 1), t: 1 })
2019-09-04T06:29:38.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:38.012+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:38.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:38.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:38.112+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:38.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:38.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:38.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:38.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:38.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:38.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:38.212+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:38.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:38.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:29:38.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:38.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:38.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), opTime: { ts: Timestamp(1567578574, 1), t: 1 }, wallTime: new Date(1567578574963) }
2019-09-04T06:29:38.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:38.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:29:38.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:29:38.281+0000 D2 ASIO [RS] Request 442 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578578, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578578277), o: { $v: 1, $set: { ping: new Date(1567578578276) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpVisible: { ts: Timestamp(1567578574, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpApplied: { ts: Timestamp(1567578578, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578574, 1), $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 1) }
2019-09-04T06:29:38.281+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578578, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578578277), o: { $v: 1, $set: { ping: new Date(1567578578276) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpVisible: { ts: Timestamp(1567578574, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpApplied: { ts: Timestamp(1567578578, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578574, 1), $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:38.281+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:38.281+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578578, 1) and ending at ts: Timestamp(1567578578, 1)
2019-09-04T06:29:38.281+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:47.850+0000
2019-09-04T06:29:38.281+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:48.595+0000
2019-09-04T06:29:38.281+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:29:38.281+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:29:38.281+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578578, 1), t: 1 }
2019-09-04T06:29:38.281+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:38.281+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:38.281+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578574, 1)
2019-09-04T06:29:38.281+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6685
2019-09-04T06:29:38.281+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:38.281+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:38.281+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6685
2019-09-04T06:29:38.281+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:38.281+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:29:38.281+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:38.281+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578574, 1)
2019-09-04T06:29:38.281+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578578, 1) }
2019-09-04T06:29:38.281+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6688
2019-09-04T06:29:38.281+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:38.281+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:38.281+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6688
2019-09-04T06:29:38.281+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6676
2019-09-04T06:29:38.281+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6676
2019-09-04T06:29:38.281+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6691
2019-09-04T06:29:38.281+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6691
2019-09-04T06:29:38.281+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:38.281+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 6693
2019-09-04T06:29:38.281+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578578, 1)
2019-09-04T06:29:38.281+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578578, 1)
2019-09-04T06:29:38.281+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 6693
2019-09-04T06:29:38.281+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:38.281+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:29:38.281+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6692
2019-09-04T06:29:38.281+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6692
2019-09-04T06:29:38.281+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6695
2019-09-04T06:29:38.281+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6695
2019-09-04T06:29:38.281+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578578, 1), t: 1 }({ ts: Timestamp(1567578578, 1), t: 1 })
2019-09-04T06:29:38.281+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578578, 1)
2019-09-04T06:29:38.281+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6696
2019-09-04T06:29:38.281+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578578, 1) } } ] } sort: {} projection: {}
2019-09-04T06:29:38.281+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:38.281+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:38.281+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578578, 1) Sort: {} Proj: {} =============================
2019-09-04T06:29:38.281+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:38.281+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:38.281+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:38.281+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578578, 1) || First: notFirst: full path: ts
2019-09-04T06:29:38.281+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578578, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578578, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578578, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578578, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:38.282+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6696
2019-09-04T06:29:38.282+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:38.282+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:38.282+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578578, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578578277), o: { $v: 1, $set: { ping: new Date(1567578578276) } } }, oplog application mode: Secondary
2019-09-04T06:29:38.282+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578578, 1)
2019-09-04T06:29:38.282+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 6698
2019-09-04T06:29:38.282+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "ConfigServer" }
2019-09-04T06:29:38.282+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:38.282+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 6698
2019-09-04T06:29:38.282+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:38.282+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578578, 1), t: 1 }({ ts: Timestamp(1567578578, 1), t: 1 })
2019-09-04T06:29:38.282+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578578, 1)
2019-09-04T06:29:38.282+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6697
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:38.282+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:38.282+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:38.282+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6697
2019-09-04T06:29:38.282+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578578, 1)
2019-09-04T06:29:38.282+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6701
2019-09-04T06:29:38.282+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6701
2019-09-04T06:29:38.282+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578578, 1), t: 1 }({ ts: Timestamp(1567578578, 1), t: 1 })
2019-09-04T06:29:38.282+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:38.282+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578578, 1), t: 1 }, appliedWallTime: new Date(1567578578277), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpVisible: { ts: Timestamp(1567578574, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:38.282+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 446 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:08.282+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578578, 1), t: 1 }, appliedWallTime: new Date(1567578578277), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578574, 1), t: 1 }, lastCommittedWall: new Date(1567578574963), lastOpVisible: { ts: Timestamp(1567578574, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:38.282+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:08.282+0000
2019-09-04T06:29:38.283+0000 D2 ASIO [RS] Request 446 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpVisible: { ts: Timestamp(1567578578, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 1), $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 1) }
2019-09-04T06:29:38.283+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpVisible: { ts: Timestamp(1567578578, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 1), $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:38.283+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:38.283+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:08.283+0000
2019-09-04T06:29:38.283+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578578, 1), t: 1 }
2019-09-04T06:29:38.283+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 447 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:48.283+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578574, 1), t: 1 } }
2019-09-04T06:29:38.283+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:08.283+0000
2019-09-04T06:29:38.283+0000 D2 ASIO [RS] Request 447 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpVisible: { ts: Timestamp(1567578578, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpApplied: { ts: Timestamp(1567578578, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 1), $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 1) }
2019-09-04T06:29:38.283+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:29:38.283+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpVisible: { ts: Timestamp(1567578578, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpApplied: { ts: Timestamp(1567578578, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 1), $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:38.283+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:38.283+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:38.283+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.283+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.283+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578573, 1)
2019-09-04T06:29:38.283+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:48.595+0000
2019-09-04T06:29:38.283+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:48.803+0000
2019-09-04T06:29:38.283+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:29:38.283+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:29:38.283+0000 D3 REPL [conn163] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.283+0000 D3 REPL [conn163] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.888+0000
2019-09-04T06:29:38.283+0000 D3 REPL [conn195] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.283+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 448 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:48.283+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578578, 1), t: 1 } }
2019-09-04T06:29:38.283+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:38.283+0000 D3 REPL [conn195] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.897+0000
2019-09-04T06:29:38.283+0000 D3 REPL [conn145] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.283+0000 D3 REPL [conn145] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000
2019-09-04T06:29:38.283+0000 D3 REPL [conn186] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.283+0000 D3 REPL [conn186] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.662+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn159] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn159] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:56.299+0000
2019-09-04T06:29:38.284+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:08.283+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn190] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn190] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:55.060+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn184] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn184] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.433+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn187] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn187] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.753+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn183] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn183] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.645+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn179] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn179] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:46.417+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn175] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn175] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn174] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn174] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn138] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn138] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.289+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn168] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn168] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.272+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn166] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn166] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.275+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn173] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn173] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.664+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn193] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn193] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.753+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn139] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn139] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.319+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn164] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn164] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.692+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn172] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn172] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:58.752+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn170] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn170] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:43.335+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn188] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn188] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.768+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn167] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn167] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:57.567+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn196] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn196] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.962+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn189] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn189] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:52.054+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn194] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn194] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.763+0000
2019-09-04T06:29:38.284+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578578, 1), t: 1 }, durableWallTime: new Date(1567578578277), appliedOpTime: { ts: Timestamp(1567578578, 1), t: 1 }, appliedWallTime: new Date(1567578578277), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpVisible: { ts: Timestamp(1567578578, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:38.284+0000 D3 REPL [conn177] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn177] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.446+0000
2019-09-04T06:29:38.284+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 449 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:08.284+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578578, 1), t: 1 }, durableWallTime: new Date(1567578578277), appliedOpTime: { ts: Timestamp(1567578578, 1), t: 1 }, appliedWallTime: new Date(1567578578277), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpVisible: { ts: Timestamp(1567578578, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:38.284+0000 D3 REPL [conn185] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn185] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:59.943+0000
2019-09-04T06:29:38.284+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:08.283+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn192] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn192] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.469+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn197] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn197] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.987+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn176] Got notified of new snapshot: { ts: Timestamp(1567578578, 1), t: 1 }, 2019-09-04T06:29:38.277+0000
2019-09-04T06:29:38.284+0000 D3 REPL [conn176] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.424+0000
2019-09-04T06:29:38.284+0000 D2 ASIO [RS] Request 449 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpVisible: { ts: Timestamp(1567578578, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 1), $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 1) }
2019-09-04T06:29:38.284+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpVisible: { ts: Timestamp(1567578578, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 1), $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:38.284+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:38.284+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:08.283+0000
2019-09-04T06:29:38.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:38.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:38.312+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:38.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:38.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:38.381+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578578, 1)
2019-09-04T06:29:38.412+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:38.427+0000 D2 ASIO [RS] Request 448 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578578, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578578423), o: { $v: 1, $set: { ping: new Date(1567578578423) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpVisible: { ts: Timestamp(1567578578, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpApplied: { ts: Timestamp(1567578578, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 1), $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) }
2019-09-04T06:29:38.427+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578578, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578578423), o: { $v: 1, $set: { ping: new Date(1567578578423) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpVisible: { ts: Timestamp(1567578578, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpApplied: { ts: Timestamp(1567578578, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 1), $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:38.427+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:38.427+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578578, 2) and ending at ts: Timestamp(1567578578, 2)
2019-09-04T06:29:38.427+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:48.803+0000
2019-09-04T06:29:38.427+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:48.878+0000
2019-09-04T06:29:38.427+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:38.427+0000 D2 REPL [replication-0] oplog buffer has 0 bytes
2019-09-04T06:29:38.427+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:29:38.427+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:38.427+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:38.427+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578578, 1)
2019-09-04T06:29:38.427+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6707
2019-09-04T06:29:38.427+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:38.427+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:38.427+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6707
2019-09-04T06:29:38.427+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:38.427+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:29:38.427+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:38.427+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578578, 2) }
2019-09-04T06:29:38.427+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578578, 1)
2019-09-04T06:29:38.427+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6710
2019-09-04T06:29:38.427+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:38.427+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6702
2019-09-04T06:29:38.427+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:38.427+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6710
2019-09-04T06:29:38.427+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578578, 2), t: 1 }
2019-09-04T06:29:38.427+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6702
2019-09-04T06:29:38.427+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6713
2019-09-04T06:29:38.427+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6713
2019-09-04T06:29:38.428+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:38.428+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 6715
2019-09-04T06:29:38.428+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578578, 2)
2019-09-04T06:29:38.428+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578578, 2)
2019-09-04T06:29:38.428+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 6715
2019-09-04T06:29:38.428+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:38.428+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:29:38.428+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6714
2019-09-04T06:29:38.428+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6714
2019-09-04T06:29:38.428+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6717
2019-09-04T06:29:38.428+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6717
2019-09-04T06:29:38.428+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578578, 2), t: 1 }({ ts: Timestamp(1567578578, 2), t: 1 })
2019-09-04T06:29:38.428+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578578, 2)
2019-09-04T06:29:38.428+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6718
2019-09-04T06:29:38.428+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578578, 2) } } ] } sort: {} projection: {}
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578578, 2) Sort: {} Proj: {} =============================
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578578, 2) || First: notFirst: full path: ts
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578578, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578578, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578578, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578578, 2)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:38.428+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6718
2019-09-04T06:29:38.428+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:38.428+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:38.428+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578578, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578578423), o: { $v: 1, $set: { ping: new Date(1567578578423) } } }, oplog application mode: Secondary
2019-09-04T06:29:38.428+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578578, 2)
2019-09-04T06:29:38.428+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 6720
2019-09-04T06:29:38.428+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }
2019-09-04T06:29:38.428+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:38.428+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 6720
2019-09-04T06:29:38.428+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:38.428+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578578, 2), t: 1 }({ ts: Timestamp(1567578578, 2), t: 1 })
2019-09-04T06:29:38.428+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578578, 2)
2019-09-04T06:29:38.428+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6719
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:38.428+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:38.428+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:38.428+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6719 2019-09-04T06:29:38.428+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578578, 2) 2019-09-04T06:29:38.429+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:38.429+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6723 2019-09-04T06:29:38.429+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578578, 1), t: 1 }, durableWallTime: new Date(1567578578277), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpVisible: { ts: Timestamp(1567578578, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:38.429+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6723 2019-09-04T06:29:38.429+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578578, 2), t: 1 }({ ts: Timestamp(1567578578, 2), t: 1 }) 2019-09-04T06:29:38.429+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 450 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:08.429+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578578, 1), t: 1 }, durableWallTime: new Date(1567578578277), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpVisible: { ts: Timestamp(1567578578, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:38.429+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:08.429+0000 2019-09-04T06:29:38.429+0000 D2 ASIO [RS] Request 450 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpVisible: { ts: Timestamp(1567578578, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 1), $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:38.429+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 1), t: 1 }, lastCommittedWall: new Date(1567578578277), lastOpVisible: { ts: Timestamp(1567578578, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 1), $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:38.429+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:38.429+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:08.429+0000 2019-09-04T06:29:38.429+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578578, 2), t: 1 } 2019-09-04T06:29:38.429+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 451 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:48.429+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578578, 1), t: 1 } } 2019-09-04T06:29:38.430+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:08.429+0000 2019-09-04T06:29:38.430+0000 D2 ASIO [RS] Request 451 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpApplied: { ts: Timestamp(1567578578, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:38.430+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new 
Date(1567578578423), lastOpApplied: { ts: Timestamp(1567578578, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:38.430+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:38.430+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:29:38.430+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.430+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.430+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578573, 2) 2019-09-04T06:29:38.430+0000 D3 REPL [conn166] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn166] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.275+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn145] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn145] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn176] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn176] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.424+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn173] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn173] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.664+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn174] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn174] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn139] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn139] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.319+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn192] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn192] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.469+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn170] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn170] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:43.335+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn193] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn193] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:30:00.753+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn194] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn194] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.763+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn195] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn195] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.897+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn167] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn167] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:57.567+0000 2019-09-04T06:29:38.430+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:48.878+0000 2019-09-04T06:29:38.430+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:49.492+0000 2019-09-04T06:29:38.430+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:38.430+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 452 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:48.430+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578578, 2), t: 1 } } 2019-09-04T06:29:38.430+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:38.430+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:08.429+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn197] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn197] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.987+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn185] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn185] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:59.943+0000 2019-09-04T06:29:38.430+0000 D3 REPL [conn196] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn196] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.962+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn190] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn190] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:55.060+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn179] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn179] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:46.417+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn188] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn188] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.768+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn172] Got notified of new snapshot: { ts: 
Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn172] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:58.752+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn187] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn187] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.753+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn159] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn159] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:56.299+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn163] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn163] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.888+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn175] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn175] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn138] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn138] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.289+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn189] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn189] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:52.054+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn183] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn183] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.645+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn184] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn184] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.433+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn164] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn164] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:41.692+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn177] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn177] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.446+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn168] Got notified of new snapshot: { ts: Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn168] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:40.272+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn186] Got notified of new snapshot: { ts: 
Timestamp(1567578578, 2), t: 1 }, 2019-09-04T06:29:38.423+0000 2019-09-04T06:29:38.431+0000 D3 REPL [conn186] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.662+0000 2019-09-04T06:29:38.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:38.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:38.433+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:29:38.433+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:38.433+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:38.433+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 453 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:08.433+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, durableWallTime: new Date(1567578574963), appliedOpTime: { ts: Timestamp(1567578574, 1), t: 1 }, appliedWallTime: new Date(1567578574963), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:38.433+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:08.429+0000 2019-09-04T06:29:38.434+0000 D2 ASIO [RS] Request 453 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:38.434+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:38.434+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:38.434+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:08.429+0000 2019-09-04T06:29:38.512+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:38.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:38.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:38.527+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578578, 2) 2019-09-04T06:29:38.612+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:38.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:38.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:38.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:38.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:38.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:38.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:38.713+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:38.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:38.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:38.813+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:38.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:38.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:38.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 454) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:38.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command 
request: RemoteCommand 454 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:48.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:38.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:38.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 455) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:38.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 455 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:48.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:38.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:38.838+0000 D2 ASIO [Replication] Request 455 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:38.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:38.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:38.838+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 455) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, 
wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:38.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:38.838+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:40.838Z 2019-09-04T06:29:38.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:38.838+0000 D2 ASIO [Replication] Request 454 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:38.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:38.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:38.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 454) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, 
wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:38.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:38.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:29:49.492+0000 2019-09-04T06:29:38.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:29:48.974+0000 2019-09-04T06:29:38.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:38.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:38.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:38.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:40.839Z 2019-09-04T06:29:38.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:38.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:38.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:38.913+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:38.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:38.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:39.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:39.013+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:39.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:39.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:39.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 7DB8410F7DF0ADC5791AD2C7DC7D4E021E7B3358), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:39.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:39.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 7DB8410F7DF0ADC5791AD2C7DC7D4E021E7B3358), keyId: 6727891476899954718 } }, $db: "admin" } 
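The Canceling/Scheduling pairs above reflect the replica set's failure-detection cadence: members heartbeat each other every heartbeatIntervalMillis (2000 ms by default, matching the reschedule from 06:29:38.838 to 06:29:40.838), and each accepted heartbeat from the primary cancels the pending election-timeout callback and re-arms it roughly electionTimeoutMillis (10000 ms by default) in the future; a small randomized offset is added on each re-arm, which is why the new deadline 06:29:48.974 differs from the canceled 06:29:49.492. A shell sketch for inspecting these knobs (the values cited are 4.2 defaults, assumed rather than read from this cluster's config):

    var cfg = rs.conf()
    cfg.settings.heartbeatIntervalMillis   // 2000 by default -> heartbeats rescheduled +2s
    cfg.settings.electionTimeoutMillis     // 10000 by default -> election timer re-armed ~+10s
    // last heartbeat exchanged with each member, per replSetGetStatus:
    rs.status().members.map(function (m) {
        return [ m.name, m.lastHeartbeat, m.lastHeartbeatRecv ]
    })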
2019-09-04T06:29:39.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 7DB8410F7DF0ADC5791AD2C7DC7D4E021E7B3358), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:39.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423) } 2019-09-04T06:29:39.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 7DB8410F7DF0ADC5791AD2C7DC7D4E021E7B3358), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:39.113+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:39.208+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:39.208+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:39.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:39.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:39.213+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:39.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:39.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:39.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:39.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:39.314+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:39.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:39.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:39.414+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:39.428+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:39.428+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:39.428+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578578, 2) 2019-09-04T06:29:39.428+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6744 2019-09-04T06:29:39.428+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:39.428+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:39.428+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6744 2019-09-04T06:29:39.429+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6747 2019-09-04T06:29:39.429+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6747 2019-09-04T06:29:39.429+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578578, 2), t: 1 }({ ts: Timestamp(1567578578, 2), t: 1 }) 2019-09-04T06:29:39.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:39.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:39.514+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:39.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:39.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:39.537+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578578, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 7DB8410F7DF0ADC5791AD2C7DC7D4E021E7B3358), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578578, 2), t: 1 } }, $db: "config" } 2019-09-04T06:29:39.537+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578578, 2), t: 1 } } } 2019-09-04T06:29:39.537+0000 D3 
STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:29:39.537+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578578, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 7DB8410F7DF0ADC5791AD2C7DC7D4E021E7B3358), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578578, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578578, 2) 2019-09-04T06:29:39.537+0000 D2 QUERY [conn61] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:29:39.537+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578578, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 7DB8410F7DF0ADC5791AD2C7DC7D4E021E7B3358), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578578, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:29:39.614+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:39.708+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:39.708+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:39.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:39.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:39.714+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:39.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:39.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:39.814+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:39.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:39.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:39.915+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:39.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:39.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:40.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:40.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: 
"SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:29:40.003+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:29:40.004+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:29:40.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:29:40.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:29:40.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:29:40.011+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:29:40.011+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:29:40.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:29:40.011+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:40.012+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:40.013+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:40.013+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:29:40.013+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:29:40.014+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:29:40.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:40.014+0000 D5 QUERY [conn90] Beginning planning... 
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:40.014+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:29:40.014+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:29:40.014+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:29:40.014+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:29:40.014+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:29:40.014+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:29:40.014+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:40.014+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:40.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:40.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578578, 2)
2019-09-04T06:29:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6763
2019-09-04T06:29:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6763
2019-09-04T06:29:40.015+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:29:40.015+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:29:40.015+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:29:40.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:29:40.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:40.015+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:29:40.015+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:29:40.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:29:40.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578578, 2)
2019-09-04T06:29:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6766
2019-09-04T06:29:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6766
2019-09-04T06:29:40.015+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:29:40.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:29:40.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:40.015+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:29:40.015+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:29:40.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:29:40.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578578, 2) 2019-09-04T06:29:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6768 2019-09-04T06:29:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6768 2019-09-04T06:29:40.015+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:40.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:29:40.015+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:29:40.015+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:29:40.016+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:40.016+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6771 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:29:40.016+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6771 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6772 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6772 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6773 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6773 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6774 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6774 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6775 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 
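
The two COLLSCAN finds on local.oplog.rs above, sorted { $natural: 1 } and then { $natural: -1 } with limit 1, are the standard client probe for the oldest and newest oplog entries; the follow-up find on local.oplog.$main (the old master-slave oplog namespace) gets an EOF plan because that collection does not exist here. The listDatabases that conn90 issues next drives the repeating pattern that follows: for each collection, mongod looks up the catalog entry, fetches the CCE (collection catalog entry) metadata inside a short-lived WiredTiger snapshot, and rolls the snapshot back once the metadata has been copied out. A minimal client-side sketch of the same oplog probe, assuming pymongo and a hypothetical connection string (secondaryPreferred matches the $readPreference in the logged commands):

    from pymongo import MongoClient

    # Hypothetical URI; point it at the server whose log this is.
    client = MongoClient(
        "mongodb://localhost:27019/?readPreference=secondaryPreferred")

    oplog = client.local["oplog.rs"]
    # Oldest and newest oplog entries, matching the two logged COLLSCAN finds.
    first = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
    last = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])
    print(first["ts"], last["ts"])
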
2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6775 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6776 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6776 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6777 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6777 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6778 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6778 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6779 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6779 
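
In each "returning metadata" document above, the indexes array carries the same specs that listIndexes reports, while idxIdent maps every index name to the WiredTiger ident (the on-disk table) backing it; config.locks, for example, shows ts_1, state_1_process_1, and _id_. A sketch cross-checking those logged specs against a live server, under the same hypothetical connection:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")  # hypothetical URI

    # index_information() surfaces the key patterns logged in md.indexes.
    for name, spec in client.config.locks.index_information().items():
        print(name, spec["key"])
    # From the log one would expect: ts_1 [('ts', 1)],
    # state_1_process_1 [('state', 1), ('process', 1)], _id_ [('_id', 1)]
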
2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6780 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6780 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6781 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: 
BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6781 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6782 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6782 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6783 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6783 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6784 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6784 
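
The idents themselves are relative paths: the separate collection/ and index/ subdirectories per database (e.g. config/collection/82--… vs config/index/83--… for config.shards) show that this node splits databases and their indexes into their own directories, so each ident resolves to a single .wt file under the dbpath. A minimal helper illustrating that mapping, assuming the conventional <dbpath>/<ident>.wt layout and MongoDB's default /data/db dbpath:

    import os

    def ident_to_file(dbpath: str, ident: str) -> str:
        # WiredTiger keeps each table in "<ident>.wt" relative to the dbpath.
        return os.path.join(dbpath, ident + ".wt")

    print(ident_to_file("/data/db", "config/index/83--6194257481163143499"))
    # -> /data/db/config/index/83--6194257481163143499.wt
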
2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6785 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6785 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6786 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 
2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6786 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6787 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6787 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6788 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6788 
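
At this point the walk has left the sharding catalog (config.*) and is enumerating the node's own replication bookkeeping in local: replset.election (the last vote), system.rollback.id, the capped startup_log and, continuing below, system.replset (the replica-set config document), the 1 GiB capped oplog.rs, replset.minvalid, and replset.oplogTruncateAfterPoint. These are ordinary collections and can be inspected directly; a sketch, again with a hypothetical client:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")  # hypothetical URI

    # local holds per-node replication state; list what the walk enumerated.
    print(sorted(client.local.list_collection_names()))

    # The oplog's create options match the logged metadata
    # (capped: true, size: 1073741824, i.e. 1 GiB).
    print(client.local["oplog.rs"].options())
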
2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6789 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6789 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6790 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6790 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6791 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking 
up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6791 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6792 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:29:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6792 2019-09-04T06:29:40.017+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:29:40.018+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6794 2019-09-04T06:29:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6794 2019-09-04T06:29:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6795 2019-09-04T06:29:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6795 2019-09-04T06:29:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6796 2019-09-04T06:29:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6796 2019-09-04T06:29:40.018+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:40.018+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 
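
listDatabases completes in 1 ms (reslen:459), and the same connection immediately follows up with one dbStats per database: admin above, then config and local below, each again opening and rolling back a WiredTiger snapshot per collection it sizes. listDatabases followed by a per-database dbStats sweep is the typical fingerprint of a monitoring agent or an interactive shell gathering sizes; a sketch of the equivalent client loop:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")  # hypothetical URI

    # One dbStats round-trip per database, mirroring the logged sequence.
    for name in client.list_database_names():
        stats = client[name].command("dbStats")
        print(name, stats["collections"], stats["dataSize"])
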
2019-09-04T06:29:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6798 2019-09-04T06:29:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6798 2019-09-04T06:29:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6799 2019-09-04T06:29:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6799 2019-09-04T06:29:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6800 2019-09-04T06:29:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6800 2019-09-04T06:29:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6801 2019-09-04T06:29:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6801 2019-09-04T06:29:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6802 2019-09-04T06:29:40.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6802 2019-09-04T06:29:40.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6803 2019-09-04T06:29:40.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6803 2019-09-04T06:29:40.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6804 2019-09-04T06:29:40.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6804 2019-09-04T06:29:40.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6805 2019-09-04T06:29:40.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6805 2019-09-04T06:29:40.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6806 2019-09-04T06:29:40.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6806 2019-09-04T06:29:40.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6807 2019-09-04T06:29:40.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6807 2019-09-04T06:29:40.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6808 2019-09-04T06:29:40.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6808 2019-09-04T06:29:40.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6809 2019-09-04T06:29:40.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6809 2019-09-04T06:29:40.019+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:40.024+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:40.030+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:40.030+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:40.031+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:29:40.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6812 2019-09-04T06:29:40.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6812 2019-09-04T06:29:40.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6813 2019-09-04T06:29:40.031+0000 D3 STORAGE [conn90] WT rollback_transaction for 
snapshot id 6813 2019-09-04T06:29:40.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6814 2019-09-04T06:29:40.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6814 2019-09-04T06:29:40.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6815 2019-09-04T06:29:40.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6815 2019-09-04T06:29:40.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6816 2019-09-04T06:29:40.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6816 2019-09-04T06:29:40.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 6817 2019-09-04T06:29:40.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 6817 2019-09-04T06:29:40.031+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:40.042+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:40.042+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:40.055+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:40.055+0000 I COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:40.125+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:40.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:40.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:40.225+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:40.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 7DB8410F7DF0ADC5791AD2C7DC7D4E021E7B3358), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:40.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:40.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 7DB8410F7DF0ADC5791AD2C7DC7D4E021E7B3358), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:40.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 7DB8410F7DF0ADC5791AD2C7DC7D4E021E7B3358), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:40.232+0000 D2 REPL_HB [conn28] 
Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423) } 2019-09-04T06:29:40.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 2), signature: { hash: BinData(0, 7DB8410F7DF0ADC5791AD2C7DC7D4E021E7B3358), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:40.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:40.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:40.274+0000 I COMMAND [conn168] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578542, 1), signature: { hash: BinData(0, 2B81C580DAB368E6A1EFD40BA8E2A6979A86E4AD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:29:40.274+0000 D1 - [conn168] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:40.274+0000 W - [conn168] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:40.277+0000 I COMMAND [conn166] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578541, 1), signature: { hash: BinData(0, 9C6D5E8C5F6FC23A243A48F5AE0DC2087CEF8162), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:40.277+0000 D1 - [conn166] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:40.277+0000 W - [conn166] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:40.291+0000 I COMMAND [conn138] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, A5F366D9C352A6D5B06F4537F7BE1EE092889425), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:40.291+0000 D1 - [conn138] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:40.291+0000 W - [conn138] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:40.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:40.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:40.309+0000 I - [conn138] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19
ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : 
"45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] 
mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:40.309+0000 D1 COMMAND [conn138] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, A5F366D9C352A6D5B06F4537F7BE1EE092889425), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:40.309+0000 D1 - [conn138] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:40.309+0000 W - [conn138] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:40.310+0000 I - [conn168] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mon
go19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { 
"b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:40.310+0000 D1 COMMAND [conn168] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578542, 1), signature: { hash: BinData(0, 2B81C580DAB368E6A1EFD40BA8E2A6979A86E4AD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:40.310+0000 D1 - [conn168] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:40.310+0000 W - [conn168] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:40.321+0000 I COMMAND [conn139] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578542, 1), signature: { hash: BinData(0, 2B81C580DAB368E6A1EFD40BA8E2A6979A86E4AD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:40.321+0000 D1 - [conn139] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:40.321+0000 W - [conn139] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:40.325+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:40.327+0000 I - [conn166] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:40.327+0000 D1 COMMAND [conn166] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578541, 1), signature: { hash: BinData(0, 9C6D5E8C5F6FC23A243A48F5AE0DC2087CEF8162), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:40.327+0000 D1 - [conn166] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:40.327+0000 W - [conn166] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:40.346+0000 I - [conn138] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:40.347+0000 W COMMAND [conn138] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:40.347+0000 I COMMAND [conn138] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, A5F366D9C352A6D5B06F4537F7BE1EE092889425), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:29:40.347+0000 D2 NETWORK [conn138] Session from 10.108.2.57:34182 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:40.347+0000 I NETWORK [conn138] end connection 10.108.2.57:34182 (91 connections now open) 2019-09-04T06:29:40.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:40.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:40.367+0000 I - [conn168] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":
"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", 
"path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) 
[0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:40.368+0000 W COMMAND [conn168] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:40.368+0000 I COMMAND [conn168] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578542, 1), signature: { hash: BinData(0, 2B81C580DAB368E6A1EFD40BA8E2A6979A86E4AD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30048ms 2019-09-04T06:29:40.368+0000 D2 NETWORK [conn168] Session from 10.108.2.73:52092 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:40.368+0000 I NETWORK [conn168] end connection 10.108.2.73:52092 (90 connections now open) 2019-09-04T06:29:40.384+0000 I - [conn139] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:40.384+0000 D1 COMMAND [conn139] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578542, 1), signature: { hash: BinData(0, 2B81C580DAB368E6A1EFD40BA8E2A6979A86E4AD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:40.384+0000 D1 - [conn139] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:40.384+0000 W - [conn139] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:40.403+0000 I - [conn166] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:40.403+0000 W COMMAND [conn166] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:40.403+0000 I COMMAND [conn166] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
2019-09-04T06:29:40.403+0000 I COMMAND [conn166] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578541, 1), signature: { hash: BinData(0, 9C6D5E8C5F6FC23A243A48F5AE0DC2087CEF8162), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30061ms
2019-09-04T06:29:40.404+0000 D2 NETWORK [conn166] Session from 10.108.2.55:36598 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:29:40.404+0000 I NETWORK [conn166] end connection 10.108.2.55:36598 (89 connections now open)
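[Annotation, not part of the original log: each BEGIN/END BACKTRACE block in this log pairs a JSON "backtrace" array with a symbolized frame list. A frame's absolute address is the shared-object base "b" plus the offset "o", both hex; the sketch below, using only the Python standard library, verifies one frame from the conn166 backtrace above (pipe the mangled "s" name through c++filt to demangle it):]

    import json

    # One frame copied verbatim from the JSON backtrace above.
    frame = json.loads(
        '{"b":"561748F88000","o":"2658452",'
        '"s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextE'
        'NS_10ResourceIdENS_8LockModeENS_6Date_tE"}')

    # base + offset reproduces the bracketed address in the frame list:
    # 0x561748F88000 + 0x2658452 = 0x56174b5e0452 for LockerImpl::lock.
    addr = int(frame["b"], 16) + int(frame["o"], 16)
    print(hex(addr), frame.get("s", "<unnamed>"))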
"/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
2019-09-04T06:29:40.423+0000 W COMMAND [conn139] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:29:40.423+0000 I COMMAND [conn139] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578542, 1), signature: { hash: BinData(0, 2B81C580DAB368E6A1EFD40BA8E2A6979A86E4AD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30075ms
2019-09-04T06:29:40.423+0000 D2 NETWORK [conn139] Session from 10.108.2.63:36222 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:29:40.423+0000 I NETWORK [conn139] end connection 10.108.2.63:36222 (88 connections now open)
2019-09-04T06:29:40.425+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:40.428+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:40.428+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:40.428+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578578, 2)
2019-09-04T06:29:40.428+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6826
2019-09-04T06:29:40.428+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:40.428+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:40.428+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6826
2019-09-04T06:29:40.429+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6829
2019-09-04T06:29:40.429+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6829
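[Annotation, not part of the original log: the conn200/conn201 entries that follow block in waitUntilOpTime because the requested afterOpTime (term 92) is ahead of what this node's majority snapshot (term 1) can satisfy. afterOpTime is an internal field; the public analogue that drivers use is afterClusterTime via causally consistent sessions. A sketch under that assumption, with the seed list taken from the host names in this log but otherwise illustrative only:]

    from pymongo import MongoClient, ReadPreference
    from pymongo.read_concern import ReadConcern

    # Hosts from this log's configrs replica set; direct driver access to a
    # config server is for illustration only.
    client = MongoClient(
        "mongodb://cmodb802.togewa.com:27019,cmodb803.togewa.com:27019,"
        "cmodb804.togewa.com:27019/?replicaSet=configrs")

    with client.start_session(causal_consistency=True) as session:
        coll = client.admin.get_collection(
            "system.keys",
            read_concern=ReadConcern("majority"),
            read_preference=ReadPreference.NEAREST)
        # Reads in the session carry afterClusterTime, so the server waits,
        # as in the waitUntilOpTime entries below, until its committed
        # snapshot catches up before answering.
        doc = coll.find_one({"purpose": "HMAC"}, session=session)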
2019-09-04T06:29:40.429+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578578, 2), t: 1 }({ ts: Timestamp(1567578578, 2), t: 1 }) 2019-09-04T06:29:40.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:40.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:40.464+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42060 #200 (89 connections now open) 2019-09-04T06:29:40.465+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:40.465+0000 D2 COMMAND [conn200] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:40.465+0000 I NETWORK [conn200] received client metadata from 10.108.2.48:42060 conn200: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:40.465+0000 I COMMAND [conn200] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:40.465+0000 D2 COMMAND [conn200] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, 86E87D4DBA38E94854814CCDC22E7B802B4418C3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:40.465+0000 D1 REPL [conn200] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578578, 2), t: 1 } 2019-09-04T06:29:40.465+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000 2019-09-04T06:29:40.465+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:40.465+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:40.466+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36618 #201 (90 connections now open) 2019-09-04T06:29:40.466+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:40.466+0000 D2 COMMAND [conn201] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: 
{ type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:40.466+0000 I NETWORK [conn201] received client metadata from 10.108.2.55:36618 conn201: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:40.466+0000 I COMMAND [conn201] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:40.466+0000 D2 COMMAND [conn201] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578571, 1), signature: { hash: BinData(0, 472AE666FE64617C815B4B64B8CFDCED57BE7FFA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:40.466+0000 D1 REPL [conn201] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578578, 2), t: 1 } 2019-09-04T06:29:40.466+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:29:40.466+0000 D2 COMMAND [conn199] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578575, 1), signature: { hash: BinData(0, A1616ED3E044436BEC7C98EB05F697E1909A54E7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:29:40.467+0000 D1 REPL [conn199] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578578, 2), t: 1 } 2019-09-04T06:29:40.467+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:29:40.468+0000 D2 COMMAND [conn181] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578571, 1), signature: { hash: BinData(0, 472AE666FE64617C815B4B64B8CFDCED57BE7FFA), keyId: 6690867815131381761 } }, 
$configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:40.468+0000 D1 REPL [conn181] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578578, 2), t: 1 } 2019-09-04T06:29:40.468+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000 2019-09-04T06:29:40.469+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45708 #202 (91 connections now open) 2019-09-04T06:29:40.469+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:40.470+0000 D2 COMMAND [conn202] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:40.470+0000 I NETWORK [conn202] received client metadata from 10.108.2.72:45708 conn202: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:40.470+0000 I COMMAND [conn202] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:40.470+0000 D2 COMMAND [conn202] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578572, 1), signature: { hash: BinData(0, BEAA603D37F22B007C54F93405BE7B4D709F86E2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:40.470+0000 D1 REPL [conn202] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578578, 2), t: 1 } 2019-09-04T06:29:40.470+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000 2019-09-04T06:29:40.484+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34218 #203 (92 connections now open) 2019-09-04T06:29:40.484+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:40.484+0000 D2 COMMAND [conn203] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, 
saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:40.484+0000 I NETWORK [conn203] received client metadata from 10.108.2.57:34218 conn203: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:40.484+0000 I COMMAND [conn203] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:40.489+0000 D2 COMMAND [conn203] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, C1354BF9089533EC9529632B626A8C0C97EF30BD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:40.489+0000 D1 REPL [conn203] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578578, 2), t: 1 } 2019-09-04T06:29:40.489+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000 2019-09-04T06:29:40.500+0000 I NETWORK [listener] connection accepted from 10.108.2.60:44826 #204 (93 connections now open) 2019-09-04T06:29:40.501+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:40.501+0000 D2 COMMAND [conn204] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:40.501+0000 I NETWORK [conn204] received client metadata from 10.108.2.60:44826 conn204: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:40.501+0000 I COMMAND [conn204] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:40.504+0000 D2 COMMAND [conn204] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) 
} }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 63C6058835D1DE93F5F1A44E095F1DBE683122D6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:40.504+0000 D1 REPL [conn204] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578578, 2), t: 1 } 2019-09-04T06:29:40.504+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000 2019-09-04T06:29:40.510+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:40.510+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:40.517+0000 I NETWORK [listener] connection accepted from 10.108.2.61:37896 #205 (94 connections now open) 2019-09-04T06:29:40.517+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:40.517+0000 D2 COMMAND [conn205] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:40.517+0000 I NETWORK [conn205] received client metadata from 10.108.2.61:37896 conn205: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:40.517+0000 I COMMAND [conn205] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:40.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:40.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:40.520+0000 D2 COMMAND [conn205] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578571, 1), signature: { hash: BinData(0, 472AE666FE64617C815B4B64B8CFDCED57BE7FFA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:40.520+0000 D1 REPL [conn205] waitUntilOpTime: waiting for optime:{ ts: 
Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578578, 2), t: 1 } 2019-09-04T06:29:40.520+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000 2019-09-04T06:29:40.525+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:40.625+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:40.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:40.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:40.721+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578574, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578574, 1), t: 1 } }, $db: "config" } 2019-09-04T06:29:40.721+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578574, 1), t: 1 } } } 2019-09-04T06:29:40.721+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:29:40.721+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578574, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578574, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578578, 2) 2019-09-04T06:29:40.721+0000 D2 QUERY [conn72] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:29:40.721+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578574, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578574, 1), signature: { hash: BinData(0, 712017CC287EFBA578BBBB6D0C4404C004D4650B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578574, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:29:40.725+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:40.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:40.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:40.826+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:40.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:40.838+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 456) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:40.838+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 456 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:50.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:40.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:40.838+0000 D2 ASIO [Replication] Request 456 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:40.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 
1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:40.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:40.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 456) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:40.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:40.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:42.838Z 2019-09-04T06:29:40.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:40.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:40.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 457) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:40.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 457 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:50.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:40.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:40.839+0000 D2 ASIO [Replication] Request 457 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, 
syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:40.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:40.839+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:40.839+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 457) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:40.839+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:40.839+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:29:48.974+0000 2019-09-04T06:29:40.839+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:29:51.341+0000 2019-09-04T06:29:40.839+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:40.839+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:42.839Z 2019-09-04T06:29:40.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:40.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:40.839+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:40.855+0000 D2 COMMAND [conn6] run 
command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:40.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:40.926+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:40.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:40.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:40.945+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:40.945+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:40.965+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:40.965+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:40.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:40.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:41.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:41.009+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:41.009+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:41.020+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:41.020+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:41.026+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:41.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, E861A5D82AE94BD0F9528E855B56580DF62F730C), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:41.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:29:41.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, E861A5D82AE94BD0F9528E855B56580DF62F730C), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:41.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, E861A5D82AE94BD0F9528E855B56580DF62F730C), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:41.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423) }
2019-09-04T06:29:41.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, E861A5D82AE94BD0F9528E855B56580DF62F730C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:41.126+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:41.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:41.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:41.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:41.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:41.226+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:41.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:29:41.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:29:41.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:41.274+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:41.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:41.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:41.326+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:41.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:41.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:41.426+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:41.428+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:41.428+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:41.428+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578578, 2)
2019-09-04T06:29:41.428+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6867
2019-09-04T06:29:41.428+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:41.428+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:41.428+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6867
2019-09-04T06:29:41.429+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6870
2019-09-04T06:29:41.429+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6870
2019-09-04T06:29:41.429+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578578, 2), t: 1 }({ ts: Timestamp(1567578578, 2), t: 1 })
2019-09-04T06:29:41.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:41.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:41.520+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:41.520+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:41.526+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:41.627+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:41.695+0000 I COMMAND [conn164] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578541, 1), signature: { hash: BinData(0, 9C6D5E8C5F6FC23A243A48F5AE0DC2087CEF8162), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:29:41.695+0000 D1 - [conn164] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:41.695+0000 W - [conn164] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:41.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:41.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:41.712+0000 I - [conn164] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:41.712+0000 D1 COMMAND [conn164] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578541, 1), signature: { hash: BinData(0, 9C6D5E8C5F6FC23A243A48F5AE0DC2087CEF8162), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:41.712+0000 D1 - [conn164] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:41.712+0000 W - [conn164] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:41.727+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:41.733+0000 I - [conn164] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 
0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : 
"9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" 
}, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:41.733+0000 W COMMAND [conn164] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:41.733+0000 I COMMAND [conn164] 
command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578541, 1), signature: { hash: BinData(0, 9C6D5E8C5F6FC23A243A48F5AE0DC2087CEF8162), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:29:41.733+0000 D2 NETWORK [conn164] Session from 10.108.2.59:48282 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:41.733+0000 I NETWORK [conn164] end connection 10.108.2.59:48282 (93 connections now open) 2019-09-04T06:29:41.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:41.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:41.827+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:41.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:41.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:41.891+0000 I COMMAND [conn163] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578542, 1), signature: { hash: BinData(0, 2B81C580DAB368E6A1EFD40BA8E2A6979A86E4AD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:41.891+0000 D1 - [conn163] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:41.891+0000 W - [conn163] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:41.911+0000 I - [conn163] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:41.911+0000 D1 COMMAND [conn163] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578542, 1), signature: { hash: BinData(0, 2B81C580DAB368E6A1EFD40BA8E2A6979A86E4AD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:41.911+0000 D1 - [conn163] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:41.911+0000 W - [conn163] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:41.927+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:41.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:41.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:29:41.933+0000 I - [conn163] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" 
: "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : 
"F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) 
[0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:41.934+0000 W COMMAND [conn163] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:41.934+0000 I COMMAND [conn163] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578542, 1), signature: { hash: BinData(0, 2B81C580DAB368E6A1EFD40BA8E2A6979A86E4AD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms 2019-09-04T06:29:41.934+0000 D2 NETWORK [conn163] Session from 10.108.2.52:47116 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:41.934+0000 I NETWORK [conn163] end connection 10.108.2.52:47116 (92 connections now open) 2019-09-04T06:29:42.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:42.027+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:42.078+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:42.078+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:42.096+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:42.096+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:42.097+0000 D2 COMMAND [conn191] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, C1354BF9089533EC9529632B626A8C0C97EF30BD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:42.097+0000 D1 REPL [conn191] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578578, 2), t: 1 } 2019-09-04T06:29:42.097+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000 2019-09-04T06:29:42.127+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:42.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:42.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:42.228+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:42.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, E861A5D82AE94BD0F9528E855B56580DF62F730C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:42.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:42.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, E861A5D82AE94BD0F9528E855B56580DF62F730C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:42.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, E861A5D82AE94BD0F9528E855B56580DF62F730C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:42.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423) } 2019-09-04T06:29:42.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, E861A5D82AE94BD0F9528E855B56580DF62F730C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:42.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:42.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:42.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:42.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:42.328+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:42.337+0000 D2 COMMAND [conn143] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:29:42.337+0000 I COMMAND [conn143] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:42.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:42.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:42.387+0000 I NETWORK [listener] connection accepted from 10.20.102.80:61218 #206 (93 connections now open) 2019-09-04T06:29:42.387+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:42.397+0000 D2 COMMAND [conn206] run command admin.$cmd { isMaster: 1, client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } 2019-09-04T06:29:42.397+0000 I NETWORK [conn206] received client metadata from 10.20.102.80:61218 conn206: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } } 2019-09-04T06:29:42.397+0000 I COMMAND [conn206] command admin.$cmd appName: "MongoDB Shell" command: isMaster { isMaster: 1, client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:29:42.408+0000 D2 COMMAND [conn206] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-256", payload: "xxx", $db: "admin" } 2019-09-04T06:29:42.408+0000 D1 ACCESS [conn206] Returning user dba_root@admin from cache 2019-09-04T06:29:42.408+0000 I COMMAND [conn206] command admin.$cmd appName: "MongoDB Shell" command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-256", payload: "xxx", $db: "admin" } numYields:0 reslen:410 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:42.418+0000 D2 COMMAND [conn206] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:29:42.418+0000 I COMMAND [conn206] command admin.$cmd appName: "MongoDB Shell" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:339 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:42.428+0000 D2 COMMAND [conn206] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:29:42.428+0000 D1 ACCESS [conn206] Returning user dba_root@admin from cache 2019-09-04T06:29:42.428+0000 I ACCESS [conn206] Successfully authenticated as principal dba_root on admin from client 10.20.102.80:61218 2019-09-04T06:29:42.428+0000 I COMMAND [conn206] command 
admin.$cmd appName: "MongoDB Shell" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:293 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:42.428+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:42.428+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:42.429+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:42.429+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578578, 2) 2019-09-04T06:29:42.429+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6892 2019-09-04T06:29:42.429+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:42.429+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:42.429+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6892 2019-09-04T06:29:42.429+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6895 2019-09-04T06:29:42.429+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6895 2019-09-04T06:29:42.429+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578578, 2), t: 1 }({ ts: Timestamp(1567578578, 2), t: 1 }) 2019-09-04T06:29:42.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:42.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:42.437+0000 D2 COMMAND [conn206] run command config.$cmd { ping: 1.0, lsid: { id: UUID("2fef7d2a-ea06-44d7-a315-b0e911b7f5bf") }, $clusterTime: { clusterTime: Timestamp(1567578399, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:29:42.437+0000 I COMMAND [conn206] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("2fef7d2a-ea06-44d7-a315-b0e911b7f5bf") }, $clusterTime: { clusterTime: Timestamp(1567578399, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:42.528+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:42.577+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:42.578+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:42.596+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:42.596+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:42.628+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:42.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 
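
The failed reads in this capture share one shape: conn163 above (and conn191 above and conn169 below, both still waiting) each run a find on admin.system.keys or config metadata carrying readConcern { level: "majority", afterOpTime: { ts: ..., t: 92 } }, while this config replica set is in term 1. Optimes compare by term first, so an afterOpTime from term 92 can never be satisfied by a term-1 snapshot, and each command dies at its 30000 ms budget with MaxTimeMSExpired (code 50); the stale $configServerState opTime usually indicates routers still caching an opTime from a previous incarnation of the config server replica set. A minimal shell sketch of the same read pattern, assuming a direct mongo shell connection to this node and that the server parses the normally-internal afterOpTime field from an external client, as 4.2 does:

    // Sketch only: hand-issuing the stalled keys-cache read seen on conn163/conn191.
    // The afterOpTime term (92) is ahead of the set's current term (1), so the
    // server blocks in waitUntilOpTime until maxTimeMS expires.
    db.getSiblingDB("admin").runCommand({
        find: "system.keys",
        filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } },
        sort: { expiresAt: 1 },
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1566459168, 1), t: NumberLong(92) }
        },
        maxTimeMS: 30000
    })
    // Expected after ~30 s: { ok: 0, errmsg: "operation exceeded time limit",
    //                         code: 50, codeName: "MaxTimeMSExpired" }
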
2019-09-04T06:29:42.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:42.728+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:42.779+0000 D2 COMMAND [conn169] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, CBAA63ADEE2EDC53BD93E84FCB4CEC8CB94AD092), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:42.779+0000 D1 REPL [conn169] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578578, 2), t: 1 } 2019-09-04T06:29:42.779+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000 2019-09-04T06:29:42.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:42.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:42.829+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:42.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:42.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 458) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:42.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 458 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:52.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:42.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:42.838+0000 D2 ASIO [Replication] Request 458 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:42.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, 
durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:42.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:42.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 458) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:42.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:42.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:44.838Z 2019-09-04T06:29:42.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:42.839+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:42.839+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 459) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:42.839+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 459 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:52.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:42.839+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:42.839+0000 D2 ASIO [Replication] Request 459 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new 
Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:42.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:42.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:42.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 459) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:42.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:42.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:29:51.341+0000 2019-09-04T06:29:42.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:29:54.090+0000 2019-09-04T06:29:42.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:42.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:44.839Z 2019-09-04T06:29:42.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:42.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:42.839+0000 
D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:42.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:42.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:42.929+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:42.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:42.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:43.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:43.029+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:43.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, BE15F8118DB42F211D0AB013CD468EF6A77F7847), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:43.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:43.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, BE15F8118DB42F211D0AB013CD468EF6A77F7847), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:43.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, BE15F8118DB42F211D0AB013CD468EF6A77F7847), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:43.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423) } 2019-09-04T06:29:43.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, BE15F8118DB42F211D0AB013CD468EF6A77F7847), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:43.129+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:43.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:43.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:43.229+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:43.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. 
Num: 0 2019-09-04T06:29:43.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:43.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:43.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:43.324+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52114 #207 (94 connections now open) 2019-09-04T06:29:43.324+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:43.324+0000 D2 COMMAND [conn207] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:43.324+0000 I NETWORK [conn207] received client metadata from 10.108.2.73:52114 conn207: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:43.324+0000 I COMMAND [conn207] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:43.329+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:43.340+0000 I COMMAND [conn170] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 05551CF1F69A904A3734176F5171337687942DA6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:29:43.340+0000 D1 - [conn170] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:43.340+0000 W - [conn170] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:43.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:43.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:43.357+0000 I - [conn170] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11Thre
adGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" 
: "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] 
mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:43.357+0000 D1 COMMAND [conn170] assertion while executing command 'find' on database 'config' with arguments '{ find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 05551CF1F69A904A3734176F5171337687942DA6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:43.357+0000 D1 - [conn170] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:43.357+0000 W - [conn170] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:43.377+0000 I - [conn170] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15
ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : 
"084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:43.377+0000 W COMMAND [conn170] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:29:43.377+0000 I COMMAND [conn170] command config.$cmd command: find { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 05551CF1F69A904A3734176F5171337687942DA6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms 2019-09-04T06:29:43.377+0000 D2 NETWORK [conn170] Session from 10.108.2.73:52096 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:43.377+0000 I NETWORK [conn170] end connection 10.108.2.73:52096 (93 connections now open) 2019-09-04T06:29:43.399+0000 D2 COMMAND [conn147] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, C1354BF9089533EC9529632B626A8C0C97EF30BD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:43.399+0000 D1 REPL [conn147] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578578, 2), t: 1 } 2019-09-04T06:29:43.399+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000 2019-09-04T06:29:43.429+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:43.429+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:43.429+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578578, 2) 2019-09-04T06:29:43.429+0000 D3 STORAGE
[ReplBatcher] WT begin_transaction for snapshot id 6915 2019-09-04T06:29:43.429+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:43.429+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:43.429+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6915 2019-09-04T06:29:43.429+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:43.429+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6918 2019-09-04T06:29:43.429+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6918 2019-09-04T06:29:43.429+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578578, 2), t: 1 }({ ts: Timestamp(1567578578, 2), t: 1 }) 2019-09-04T06:29:43.430+0000 D2 ASIO [RS] Request 452 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpApplied: { ts: Timestamp(1567578578, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:43.430+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpApplied: { ts: Timestamp(1567578578, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:43.430+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:43.430+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:29:43.430+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:54.090+0000 2019-09-04T06:29:43.430+0000 D4 
ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:54.083+0000 2019-09-04T06:29:43.430+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:43.430+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 460 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:53.430+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578578, 2), t: 1 } } 2019-09-04T06:29:43.430+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:43.430+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:08.429+0000 2019-09-04T06:29:43.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:43.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:43.434+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:43.434+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:43.434+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 461 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:13.434+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 
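
Requests 460 and 461 above are the steady-state replication loop of a healthy secondary: the oplog fetcher holds an awaitable getMore (maxTimeMS: 5000) open against the sync source cmodb804, and the reporter pushes this member's applied and durable optimes upstream with replSetUpdatePosition. The same positions are visible from a shell; a small sketch, assuming the field names replSetGetStatus reports in 4.2:

    // Sketch: the per-member positions that replSetUpdatePosition carries.
    var status = rs.status();
    status.members.forEach(function (m) {
        print(m.name, m.stateStr,
              "applied:", tojson(m.optime),
              "durable:", tojson(m.optimeDurable));
    });
    // Manual equivalent of the fetcher's tail, newest entry first:
    db.getSiblingDB("local").getCollection("oplog.rs")
        .find().sort({ $natural: -1 }).limit(1)
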
2019-09-04T06:29:43.434+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:08.429+0000 2019-09-04T06:29:43.434+0000 D2 ASIO [RS] Request 461 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:43.434+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:43.434+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:43.434+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:08.429+0000 2019-09-04T06:29:43.529+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:43.630+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:43.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:43.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:43.730+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:43.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:43.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:43.830+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:43.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:43.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:43.930+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:43.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:43.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:44.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:44.030+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:44.130+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
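
The isMaster chatter from conn5, conn6, conn13, and conn26 (one call per connection roughly every 500 ms, each answered in 0ms with reslen:907) is ordinary driver topology monitoring rather than application load, and the WTJournalFlusher line every ~100 ms matches the default storage.journal.commitIntervalMs of 100. The reply those pollers consume can be fetched by hand; a sketch:

    // Sketch: the topology document the monitoring connections poll for.
    var im = db.getSiblingDB("admin").runCommand({ isMaster: 1 });
    print("setName:", im.setName,
          "ismaster:", im.ismaster,
          "secondary:", im.secondary,
          "primary:", im.primary,
          "me:", im.me);
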
2019-09-04T06:29:44.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:44.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:44.231+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:44.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, BE15F8118DB42F211D0AB013CD468EF6A77F7847), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:44.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:44.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, BE15F8118DB42F211D0AB013CD468EF6A77F7847), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:44.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, BE15F8118DB42F211D0AB013CD468EF6A77F7847), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:44.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423) } 2019-09-04T06:29:44.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, BE15F8118DB42F211D0AB013CD468EF6A77F7847), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:44.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:44.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:44.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:44.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:44.331+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:44.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:44.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:44.429+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:44.429+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:44.429+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578578, 2) 2019-09-04T06:29:44.429+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6932 2019-09-04T06:29:44.429+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:44.429+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:44.429+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6932 2019-09-04T06:29:44.430+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6935 2019-09-04T06:29:44.430+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6935 2019-09-04T06:29:44.430+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578578, 2), t: 1 }({ ts: Timestamp(1567578578, 2), t: 1 }) 2019-09-04T06:29:44.431+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:44.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:44.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:44.531+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:44.631+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:44.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:44.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:44.731+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:44.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:44.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:44.832+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:44.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:44.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:44.838+0000 D3 REPL [replexec-3] memberData lastupdate is: 
2019-09-04T06:29:43.061+0000 2019-09-04T06:29:44.838+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:29:44.233+0000 2019-09-04T06:29:44.838+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:29:43.061+0000 2019-09-04T06:29:44.838+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:29:53.061+0000 2019-09-04T06:29:44.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:44.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 462) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:44.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 462 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:54.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:44.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:44.838+0000 D2 ASIO [Replication] Request 462 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:44.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:44.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:44.838+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 462) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: 
Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:44.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:44.838+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:46.838Z 2019-09-04T06:29:44.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:44.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:44.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 463) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:44.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 463 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:54.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:44.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:44.839+0000 D2 ASIO [Replication] Request 463 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:44.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:44.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:44.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 463) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), opTime: { ts: Timestamp(1567578578, 2), t: 1 }, wallTime: new Date(1567578578423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578578, 2) } 2019-09-04T06:29:44.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:44.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:29:54.083+0000 2019-09-04T06:29:44.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:29:55.693+0000 2019-09-04T06:29:44.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:44.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:46.839Z 2019-09-04T06:29:44.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:44.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:44.839+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:44.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:44.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:44.932+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:44.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:44.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:44.986+0000 D2 ASIO [RS] Request 460 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578584, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578584984), o: { $v: 1, $set: { ping: new Date(1567578584980), up: 2485 } } } ], id: 2779728788818727477, ns: 
"local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpApplied: { ts: Timestamp(1567578584, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578584, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578584, 1) } 2019-09-04T06:29:44.986+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578584, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578584984), o: { $v: 1, $set: { ping: new Date(1567578584980), up: 2485 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpApplied: { ts: Timestamp(1567578584, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578584, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578584, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:44.986+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:44.986+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578584, 1) and ending at ts: Timestamp(1567578584, 1) 2019-09-04T06:29:44.986+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:55.693+0000 2019-09-04T06:29:44.986+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:55.221+0000 2019-09-04T06:29:44.986+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:44.986+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:44.986+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578584, 1), t: 1 } 2019-09-04T06:29:44.986+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:44.986+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:44.986+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578578, 2) 2019-09-04T06:29:44.986+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6943 2019-09-04T06:29:44.986+0000 D3 
STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:44.986+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:44.986+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6943 2019-09-04T06:29:44.986+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:44.986+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:29:44.986+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:44.986+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578578, 2) 2019-09-04T06:29:44.986+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578584, 1) } 2019-09-04T06:29:44.986+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6946 2019-09-04T06:29:44.986+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:44.987+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:44.987+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6946 2019-09-04T06:29:44.987+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6936 2019-09-04T06:29:44.987+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6936 2019-09-04T06:29:44.987+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6949 2019-09-04T06:29:44.987+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6949 2019-09-04T06:29:44.987+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:44.987+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 6951 2019-09-04T06:29:44.987+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578584, 1) 2019-09-04T06:29:44.987+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578584, 1) 2019-09-04T06:29:44.987+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 6951 2019-09-04T06:29:44.987+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:44.987+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:44.987+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6950 2019-09-04T06:29:44.987+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6950 2019-09-04T06:29:44.987+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6953 2019-09-04T06:29:44.987+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6953 
2019-09-04T06:29:44.987+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578584, 1), t: 1 }({ ts: Timestamp(1567578584, 1), t: 1 })
2019-09-04T06:29:44.987+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578584, 1)
2019-09-04T06:29:44.987+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6954
2019-09-04T06:29:44.987+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578584, 1) } } ] } sort: {} projection: {}
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578584, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578584, 1) || First: notFirst: full path: ts
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and t $eq 1 ts $lt Timestamp(1567578584, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578584, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578584, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:44.987+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or $and t $eq 1 ts $lt Timestamp(1567578584, 1) t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:44.987+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6954
2019-09-04T06:29:44.987+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:44.987+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:44.987+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578584, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578584984), o: { $v: 1, $set: { ping: new Date(1567578584980), up: 2485 } } }, oplog application mode: Secondary
2019-09-04T06:29:44.987+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578584, 1)
2019-09-04T06:29:44.987+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 6956
2019-09-04T06:29:44.987+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:29:44.987+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:44.987+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 6956
2019-09-04T06:29:44.987+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:44.987+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578584, 1), t: 1 }({ ts: Timestamp(1567578584, 1), t: 1 })
2019-09-04T06:29:44.988+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578584, 1)
2019-09-04T06:29:44.988+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6955
2019-09-04T06:29:44.988+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:29:44.988+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:44.988+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:44.988+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:44.988+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:29:44.988+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:44.988+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 6955
2019-09-04T06:29:44.988+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578584, 1)
2019-09-04T06:29:44.988+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6960
2019-09-04T06:29:44.988+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6960
2019-09-04T06:29:44.988+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578584, 1), t: 1 }({ ts: Timestamp(1567578584, 1), t: 1 })
2019-09-04T06:29:44.988+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:44.988+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:44.988+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 464 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:14.988+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:44.988+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:14.988+0000
2019-09-04T06:29:44.988+0000 D2 ASIO [RS] Request 464 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578584, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578584, 1) }
2019-09-04T06:29:44.988+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578578, 2), t: 1 }, lastCommittedWall: new Date(1567578578423), lastOpVisible: { ts: Timestamp(1567578578, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578578, 2), $clusterTime: { clusterTime: Timestamp(1567578584, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578584, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:44.988+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:44.988+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:14.988+0000
2019-09-04T06:29:44.988+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578584, 1), t: 1 }
2019-09-04T06:29:44.988+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 465 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:54.988+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578578, 2), t: 1 } }
2019-09-04T06:29:44.988+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:14.988+0000
2019-09-04T06:29:44.989+0000 D2 ASIO [RS] Request 465 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpApplied: { ts: Timestamp(1567578584, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578584, 1), $clusterTime: { clusterTime: Timestamp(1567578584, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578584, 1) }
2019-09-04T06:29:44.989+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpApplied: { ts: Timestamp(1567578584, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578584, 1), $clusterTime: { clusterTime: Timestamp(1567578584, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578584, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:44.989+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578584, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59d802d1a496712d71d3'), operName: "", parentOperId: "5d6f59d802d1a496712d71d1" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578584, 1), signature: { hash: BinData(0, EDD18B36478B93BA7C916C5AF525A5D6B3D6A040), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578584, 1), t: 1 } }, $db: "config" }
2019-09-04T06:29:44.989+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:44.989+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:44.989+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f59d802d1a496712d71d1|5d6f59d802d1a496712d71d3
2019-09-04T06:29:44.989+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000
2019-09-04T06:29:44.989+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000
2019-09-04T06:29:44.989+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578579, 1)
2019-09-04T06:29:44.989+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000
2019-09-04T06:29:44.989+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000
2019-09-04T06:29:44.989+0000 D3 REPL [conn194] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000
2019-09-04T06:29:44.989+0000 D3 REPL [conn194] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.763+0000
2019-09-04T06:29:44.989+0000 D3 REPL [conn173] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000
2019-09-04T06:29:44.989+0000 D3 REPL [conn173] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.664+0000
2019-09-04T06:29:44.989+0000 D3 REPL [conn172] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000
2019-09-04T06:29:44.989+0000 D3 REPL [conn172] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:58.752+0000
2019-09-04T06:29:44.989+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000
2019-09-04T06:29:44.989+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000
2019-09-04T06:29:44.989+0000 D3 REPL [conn184] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn184] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.433+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn197] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn197] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.987+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn190] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn190] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:55.060+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn188] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn188] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.768+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn187] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn187] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.753+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn183] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn183] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.645+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn177] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn177] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.446+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn186] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn186] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.662+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000 2019-09-04T06:29:44.990+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:55.221+0000 2019-09-04T06:29:44.990+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:56.471+0000 2019-09-04T06:29:44.990+0000 D3 EXECUTOR 
[replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:44.990+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:44.990+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 466 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:54.990+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578584, 1), t: 1 } } 2019-09-04T06:29:44.990+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:14.988+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn174] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn174] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn145] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn145] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:44.990+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578584, 1), t: 1 } } } 2019-09-04T06:29:44.990+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000 2019-09-04T06:29:44.990+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:29:44.990+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578584, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59d802d1a496712d71d3'), operName: "", parentOperId: "5d6f59d802d1a496712d71d1" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578584, 1), signature: { hash: BinData(0, EDD18B36478B93BA7C916C5AF525A5D6B3D6A040), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578584, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578584, 1) 2019-09-04T06:29:44.990+0000 D2 QUERY [conn21] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:29:44.990+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578584, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59d802d1a496712d71d3'), operName: "", parentOperId: "5d6f59d802d1a496712d71d1" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578584, 1), signature: { hash: BinData(0, EDD18B36478B93BA7C916C5AF525A5D6B3D6A040), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578584, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:29:44.990+0000 D3 REPL [conn167] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn167] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:57.567+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn196] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn196] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.962+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn179] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn179] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:46.417+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn192] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn192] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.469+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn159] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn159] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:56.299+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn175] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn175] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.422+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn189] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn189] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:52.054+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000 
2019-09-04T06:29:44.990+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn195] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.990+0000 D3 REPL [conn195] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.897+0000 2019-09-04T06:29:44.991+0000 D3 REPL [conn185] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.991+0000 D3 REPL [conn185] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:59.943+0000 2019-09-04T06:29:44.991+0000 D3 REPL [conn193] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.991+0000 D3 REPL [conn193] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.753+0000 2019-09-04T06:29:44.991+0000 D3 REPL [conn176] Got notified of new snapshot: { ts: Timestamp(1567578584, 1), t: 1 }, 2019-09-04T06:29:44.984+0000 2019-09-04T06:29:44.991+0000 D3 REPL [conn176] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:45.424+0000 2019-09-04T06:29:44.998+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:29:44.999+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:44.999+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:44.999+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 467 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:14.999+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new 
Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, durableWallTime: new Date(1567578578423), appliedOpTime: { ts: Timestamp(1567578578, 2), t: 1 }, appliedWallTime: new Date(1567578578423), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:44.999+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:14.988+0000 2019-09-04T06:29:44.999+0000 D2 ASIO [RS] Request 467 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578584, 1), $clusterTime: { clusterTime: Timestamp(1567578584, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578584, 1) } 2019-09-04T06:29:44.999+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578584, 1), $clusterTime: { clusterTime: Timestamp(1567578584, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578584, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:44.999+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:44.999+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:14.988+0000 2019-09-04T06:29:45.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:45.032+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:45.038+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:45.038+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:45.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578584, 1), signature: { hash: BinData(0, EDD18B36478B93BA7C916C5AF525A5D6B3D6A040), keyId: 6727891476899954718 } }, $db: "admin" } 
2019-09-04T06:29:45.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:29:45.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578584, 1), signature: { hash: BinData(0, EDD18B36478B93BA7C916C5AF525A5D6B3D6A040), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:45.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578584, 1), signature: { hash: BinData(0, EDD18B36478B93BA7C916C5AF525A5D6B3D6A040), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:45.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), opTime: { ts: Timestamp(1567578584, 1), t: 1 }, wallTime: new Date(1567578584984) }
2019-09-04T06:29:45.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578584, 1), signature: { hash: BinData(0, EDD18B36478B93BA7C916C5AF525A5D6B3D6A040), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:45.086+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578584, 1)
2019-09-04T06:29:45.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:45.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:45.132+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:45.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:45.143+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:45.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:45.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:45.232+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:45.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:29:45.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:29:45.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:45.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:45.332+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:45.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:45.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:45.422+0000 I COMMAND [conn174] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, A5F366D9C352A6D5B06F4537F7BE1EE092889425), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:29:45.422+0000 I COMMAND [conn175] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578553, 1), signature: { hash: BinData(0, 0F4123AD4F6F23E0F6991DA797B74DD1D7349CDB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:29:45.422+0000 D1 - [conn174] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:45.423+0000 D1 - [conn175] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:45.423+0000 W - [conn174] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:45.423+0000 W - [conn175] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:45.423+0000 I COMMAND [conn145] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578549, 1), signature: { hash: BinData(0, D8A1192E2948EF75C7130338E2090A58950ED76E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:29:45.423+0000 D1 - [conn145] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:45.423+0000 W - [conn145] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:45.424+0000 I COMMAND [conn176] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, BDB37965C7D794D8E96FEBC7A1DC5F3E24B76E27), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:29:45.424+0000 D1 - [conn176] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:45.424+0000 W - [conn176] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:45.431+0000 I NETWORK [listener] connection accepted from 10.108.2.49:53344 #208 (94 connections now open)
2019-09-04T06:29:45.431+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:29:45.431+0000 D2 COMMAND [conn208] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:29:45.431+0000 I NETWORK [conn208] received client metadata from 10.108.2.49:53344 conn208: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:29:45.431+0000 I COMMAND [conn208] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:29:45.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:45.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0
reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:45.432+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:45.447+0000 I COMMAND [conn177] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 372D11DC30DAD6D51E7FE642D8716FF825445C85), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:45.447+0000 D1 - [conn177] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:45.447+0000 W - [conn177] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:45.452+0000 I - [conn145] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNe
xtInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : 
"7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) 
[0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:45.452+0000 D1 COMMAND [conn145] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578549, 1), signature: { hash: BinData(0, D8A1192E2948EF75C7130338E2090A58950ED76E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:45.452+0000 D1 - [conn145] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:45.452+0000 W - [conn145] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:45.472+0000 I - [conn177] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGua
rdE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", 
"elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:45.472+0000 D1 COMMAND [conn177] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 372D11DC30DAD6D51E7FE642D8716FF825445C85), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:45.472+0000 D1 - [conn177] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:45.472+0000 W - [conn177] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:45.479+0000 I - [conn175] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19
ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:45.480+0000 D1 COMMAND [conn175] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578553, 1), signature: { hash: BinData(0, 0F4123AD4F6F23E0F6991DA797B74DD1D7349CDB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:45.480+0000 D1 - [conn175] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:45.480+0000 W - [conn175] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:45.502+0000 I - [conn177] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:45.502+0000 W COMMAND [conn177] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:45.502+0000 I COMMAND [conn177] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578550, 1), signature: { hash: BinData(0, 372D11DC30DAD6D51E7FE642D8716FF825445C85), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30035ms 2019-09-04T06:29:45.502+0000 D2 NETWORK [conn177] Session from 10.108.2.49:53330 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:45.502+0000 I NETWORK [conn177] end connection 10.108.2.49:53330 (93 connections now open) 2019-09-04T06:29:45.524+0000 I - [conn175] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_Z
N5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:29:45.524+0000 W COMMAND [conn175] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:29:45.524+0000 I COMMAND [conn175] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578553, 1), signature: { hash: BinData(0, 0F4123AD4F6F23E0F6991DA797B74DD1D7349CDB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30067ms
2019-09-04T06:29:45.524+0000 D2 NETWORK [conn175] Session from 10.108.2.58:52084 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:29:45.524+0000 I NETWORK [conn175] end connection 10.108.2.58:52084 (92 connections now open)
2019-09-04T06:29:45.533+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:45.537+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:45.537+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:45.543+0000 I - [conn176] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:45.543+0000 D1 COMMAND [conn176] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, BDB37965C7D794D8E96FEBC7A1DC5F3E24B76E27), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:45.543+0000 D1 - [conn176] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:45.543+0000 W - [conn176] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:45.581+0000 I - [conn176] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:45.581+0000 W COMMAND [conn176] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:45.581+0000 I COMMAND [conn176] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, BDB37965C7D794D8E96FEBC7A1DC5F3E24B76E27), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30128ms 2019-09-04T06:29:45.581+0000 D2 NETWORK [conn176] Session from 10.108.2.46:40932 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:45.581+0000 I NETWORK [conn176] end connection 10.108.2.46:40932 (91 connections now open) 2019-09-04T06:29:45.581+0000 I - [conn174] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD
5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : 
"/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 
2019-09-04T06:29:45.584+0000 D1 COMMAND [conn174] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, A5F366D9C352A6D5B06F4537F7BE1EE092889425), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:45.584+0000 D1 - [conn174] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:29:45.584+0000 W - [conn174] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:45.611+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:45.611+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:45.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:45.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:45.612+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52106 #209 (92 connections now open)
2019-09-04T06:29:45.612+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:29:45.612+0000 D2 COMMAND [conn209] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:29:45.612+0000 I NETWORK [conn209] received client metadata from 10.108.2.58:52106 conn209: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:29:45.612+0000 I COMMAND [conn209] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:29:45.612+0000 D2 COMMAND [conn209] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578583, 1), signature: { hash: BinData(0, F158BC25720340B443747D4FFA61B3F2D0B5D09D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:29:45.612+0000 D1 REPL [conn209] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578584, 1), t: 1 }
2019-09-04T06:29:45.612+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000
2019-09-04T06:29:45.613+0000 D2 COMMAND [conn180] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578579, 1), signature: { hash: BinData(0, 4E0EB43FB9673465B07DACBBA684379C1C10ABEB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:29:45.613+0000 D1 REPL [conn180] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578584, 1), t: 1 }
2019-09-04T06:29:45.613+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000
2019-09-04T06:29:45.621+0000 I - [conn174] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:45.621+0000 W COMMAND [conn174] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:45.621+0000 I COMMAND [conn174] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, A5F366D9C352A6D5B06F4537F7BE1EE092889425), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30171ms 2019-09-04T06:29:45.621+0000 D2 NETWORK [conn174] Session from 10.108.2.44:38628 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:45.621+0000 I NETWORK [conn174] end connection 10.108.2.44:38628 (91 connections now open) 2019-09-04T06:29:45.624+0000 D2 COMMAND [conn171] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, CBAA63ADEE2EDC53BD93E84FCB4CEC8CB94AD092), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:45.624+0000 D1 REPL [conn171] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578584, 1), t: 1 } 2019-09-04T06:29:45.624+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000 2019-09-04T06:29:45.624+0000 I - [conn145] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:45.624+0000 W COMMAND [conn145] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:45.624+0000 I COMMAND [conn145] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578549, 1), signature: { hash: BinData(0, D8A1192E2948EF75C7130338E2090A58950ED76E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30040ms 2019-09-04T06:29:45.624+0000 D2 NETWORK [conn145] Session from 10.108.2.50:50038 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:45.624+0000 I NETWORK [conn145] end connection 10.108.2.50:50038 (90 connections now open) 2019-09-04T06:29:45.627+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46594 #210 (91 connections now open) 2019-09-04T06:29:45.627+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:45.627+0000 D2 COMMAND [conn210] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:45.627+0000 I NETWORK [conn210] received client metadata from 10.108.2.64:46594 conn210: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:45.627+0000 I COMMAND [conn210] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:45.630+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:45.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:45.631+0000 D2 COMMAND [conn210] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 7ED407E7D8FC2F48E792B1C41AEA50FC38ED9E10), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:45.631+0000 D1 REPL [conn210] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578584, 1), t: 1 } 2019-09-04T06:29:45.631+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000 
2019-09-04T06:29:45.633+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:45.633+0000 I NETWORK [listener] connection accepted from 10.108.2.47:56522 #211 (92 connections now open) 2019-09-04T06:29:45.633+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:45.633+0000 D2 COMMAND [conn211] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:45.633+0000 I NETWORK [conn211] received client metadata from 10.108.2.47:56522 conn211: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:45.633+0000 I COMMAND [conn211] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:45.635+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36514 #212 (93 connections now open) 2019-09-04T06:29:45.635+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:45.636+0000 D2 COMMAND [conn212] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:45.636+0000 I NETWORK [conn212] received client metadata from 10.108.2.45:36514 conn212: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:45.636+0000 I COMMAND [conn212] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:45.636+0000 D2 COMMAND [conn211] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { 
clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 7ED407E7D8FC2F48E792B1C41AEA50FC38ED9E10), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:45.636+0000 D1 REPL [conn211] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578584, 1), t: 1 } 2019-09-04T06:29:45.636+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000 2019-09-04T06:29:45.640+0000 D2 COMMAND [conn212] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, C1354BF9089533EC9529632B626A8C0C97EF30BD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:29:45.640+0000 D1 REPL [conn212] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578584, 1), t: 1 } 2019-09-04T06:29:45.640+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000 2019-09-04T06:29:45.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:45.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:45.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:45.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:45.733+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:45.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:45.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:45.833+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:45.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:45.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:45.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:45.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:45.933+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:45.987+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:45.987+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:45.987+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578584, 1) 2019-09-04T06:29:45.987+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 6993 2019-09-04T06:29:45.987+0000 D3 STORAGE [ReplBatcher] fetched 
CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:45.987+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:45.987+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 6993 2019-09-04T06:29:45.988+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 6996 2019-09-04T06:29:45.988+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 6996 2019-09-04T06:29:45.988+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578584, 1), t: 1 }({ ts: Timestamp(1567578584, 1), t: 1 }) 2019-09-04T06:29:46.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:46.033+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:46.037+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:46.037+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:46.110+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:46.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:46.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:46.111+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:46.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:46.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:46.133+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:46.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:46.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:46.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:46.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:46.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, D1B41DB073D2CC0A75156618C8E3C76B24C7D093), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:46.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:46.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), 
signature: { hash: BinData(0, D1B41DB073D2CC0A75156618C8E3C76B24C7D093), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:46.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, D1B41DB073D2CC0A75156618C8E3C76B24C7D093), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:46.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), opTime: { ts: Timestamp(1567578584, 1), t: 1 }, wallTime: new Date(1567578584984) } 2019-09-04T06:29:46.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, D1B41DB073D2CC0A75156618C8E3C76B24C7D093), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:46.234+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:46.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:46.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:46.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:46.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:46.334+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:46.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:46.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:46.419+0000 I COMMAND [conn179] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, A5F366D9C352A6D5B06F4537F7BE1EE092889425), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:46.419+0000 D1 - [conn179] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:46.419+0000 W - [conn179] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:46.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:46.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:46.434+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:46.436+0000 I - [conn179] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNex
tInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : 
"7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) 
[0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:46.436+0000 D1 COMMAND [conn179] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, A5F366D9C352A6D5B06F4537F7BE1EE092889425), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:46.436+0000 D1 - [conn179] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:46.436+0000 W - [conn179] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:46.456+0000 I - [conn179] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23Serv
iceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : 
"7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) 
[0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:46.456+0000 W COMMAND [conn179] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:29:46.456+0000 I COMMAND [conn179] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578548, 1), signature: { hash: BinData(0, A5F366D9C352A6D5B06F4537F7BE1EE092889425), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:29:46.456+0000 D2 NETWORK [conn179] Session from 10.108.2.57:34202 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:46.456+0000 I NETWORK [conn179] end connection 10.108.2.57:34202 (92 connections now open) 2019-09-04T06:29:46.534+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:46.537+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:46.537+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:46.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:46.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:46.634+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:46.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:46.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:46.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:46.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:46.734+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:46.791+0000 D2 COMMAND [conn26] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:29:46.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:46.834+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:46.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:46.838+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 468) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:46.838+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 468 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:56.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:46.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:46.838+0000 D2 ASIO [Replication] Request 468 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), opTime: { ts: Timestamp(1567578584, 1), t: 1 }, wallTime: new Date(1567578584984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578584, 1), $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578584, 1) } 2019-09-04T06:29:46.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), opTime: { ts: Timestamp(1567578584, 1), t: 1 }, wallTime: new Date(1567578584984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578584, 1), $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578584, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:46.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:46.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 468) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), opTime: { ts: Timestamp(1567578584, 1), t: 1 }, 
wallTime: new Date(1567578584984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578584, 1), $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578584, 1) } 2019-09-04T06:29:46.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:46.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:48.838Z 2019-09-04T06:29:46.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:46.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:46.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 469) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:46.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 469 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:56.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:46.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:46.839+0000 D2 ASIO [Replication] Request 469 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), opTime: { ts: Timestamp(1567578584, 1), t: 1 }, wallTime: new Date(1567578584984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578584, 1), $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578584, 1) } 2019-09-04T06:29:46.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), opTime: { ts: Timestamp(1567578584, 1), t: 1 }, wallTime: new Date(1567578584984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578584, 1), $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578584, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:46.839+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:46.839+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 469) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), opTime: { ts: Timestamp(1567578584, 1), t: 1 }, wallTime: new Date(1567578584984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578584, 1), $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578584, 1) } 2019-09-04T06:29:46.839+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:46.839+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:29:56.471+0000 2019-09-04T06:29:46.839+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:29:56.856+0000 2019-09-04T06:29:46.839+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:46.839+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:48.839Z 2019-09-04T06:29:46.839+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:46.839+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:46.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:46.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:46.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:46.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:46.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:46.934+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:46.987+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:46.987+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:46.987+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578584, 1) 2019-09-04T06:29:46.987+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7017 2019-09-04T06:29:46.987+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", 
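The two heartbeat round-trips above (requestId 468 to cmodb804 and 469 to cmodb802) are what keep this node's view of the set current: cmodb804 answers with state: 2 (SECONDARY, syncing from cmodb802) and cmodb802 with state: 1 (PRIMARY), which is why the election timeout is postponed rather than fired. The same per-member picture these heartbeats maintain can be read on demand with replSetGetStatus; a minimal mongo shell sketch, assuming a shell connected to this node:

    // Summarize what the REPL_HB traffic above is maintaining: member states and optimes.
    var s = db.adminCommand({ replSetGetStatus: 1 });  // rs.status() is a wrapper around this command
    s.members.forEach(function (m) {
        // stateStr is PRIMARY/SECONDARY; optime.ts matches the opTime/durableOpTime pairs in the heartbeats
        print(m.name + "  " + m.stateStr + "  " + tojson(m.optime.ts));
    });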
ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:46.987+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:46.987+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7017 2019-09-04T06:29:46.988+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7020 2019-09-04T06:29:46.988+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7020 2019-09-04T06:29:46.988+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578584, 1), t: 1 }({ ts: Timestamp(1567578584, 1), t: 1 }) 2019-09-04T06:29:46.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:46.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:47.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:47.035+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:47.037+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:47.037+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:47.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, D1B41DB073D2CC0A75156618C8E3C76B24C7D093), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:47.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:47.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, D1B41DB073D2CC0A75156618C8E3C76B24C7D093), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:47.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, D1B41DB073D2CC0A75156618C8E3C76B24C7D093), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:47.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), opTime: { ts: Timestamp(1567578584, 1), t: 1 }, wallTime: new Date(1567578584984) } 2019-09-04T06:29:47.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: 
Timestamp(1567578585, 1), signature: { hash: BinData(0, D1B41DB073D2CC0A75156618C8E3C76B24C7D093), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:47.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:47.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:47.135+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:47.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:47.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:47.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:47.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:47.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:47.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:47.235+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:47.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:47.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:47.335+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:47.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:47.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:47.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:47.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:47.435+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:47.467+0000 D2 ASIO [RS] Request 466 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578587, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578587456), o: { $v: 1, $set: { ping: new Date(1567578587455) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpApplied: { ts: Timestamp(1567578587, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578584, 1), $clusterTime: { clusterTime: Timestamp(1567578587, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } 
}, operationTime: Timestamp(1567578587, 1) } 2019-09-04T06:29:47.467+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578587, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578587456), o: { $v: 1, $set: { ping: new Date(1567578587455) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpApplied: { ts: Timestamp(1567578587, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578584, 1), $clusterTime: { clusterTime: Timestamp(1567578587, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:47.467+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:47.468+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578587, 1) and ending at ts: Timestamp(1567578587, 1) 2019-09-04T06:29:47.468+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:56.856+0000 2019-09-04T06:29:47.468+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:58.514+0000 2019-09-04T06:29:47.468+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578587, 1), t: 1 } 2019-09-04T06:29:47.468+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:47.468+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578584, 1) 2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7035 2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7035 2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:47.468+0000 D2 REPL 
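Request 466's getMore batch above returned a single oplog entry, an update to config.lockpings, which the fetcher hands to the batcher before resetting _lastOpTimeFetched. The entries this fetcher consumes live in the capped local.oplog.rs collection and can be inspected directly; a minimal mongo shell sketch, assuming a shell connected to a set member:

    // Peek at the newest entry in the capped oplog collection the fetcher reads from.
    db.getSiblingDB("local").oplog.rs.find().sort({ $natural: -1 }).limit(1).pretty()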
2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578584, 1)
2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7035
2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7035
2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:47.468+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578584, 1)
2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7038
2019-09-04T06:29:47.468+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578587, 1) }
2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:47.468+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7038
2019-09-04T06:29:47.468+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7021
2019-09-04T06:29:47.468+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7021
2019-09-04T06:29:47.468+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7041
2019-09-04T06:29:47.468+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7041
2019-09-04T06:29:47.468+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:47.468+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 7043
2019-09-04T06:29:47.468+0000 D4 STORAGE [repl-writer-worker-4] inserting record with timestamp Timestamp(1567578587, 1)
2019-09-04T06:29:47.468+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578587, 1)
2019-09-04T06:29:47.468+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 7043
2019-09-04T06:29:47.468+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:47.468+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:29:47.468+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7042
2019-09-04T06:29:47.468+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7042
2019-09-04T06:29:47.468+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7045
2019-09-04T06:29:47.468+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7045
2019-09-04T06:29:47.468+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578587, 1), t: 1 }({ ts: Timestamp(1567578587, 1), t: 1 })
2019-09-04T06:29:47.468+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578587, 1)
2019-09-04T06:29:47.468+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7046
2019-09-04T06:29:47.468+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578587, 1) } } ] } sort: {} projection: {}
2019-09-04T06:29:47.468+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:47.468+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:47.468+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578587, 1) Sort: {} Proj: {} =============================
2019-09-04T06:29:47.468+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:47.468+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:47.468+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:47.468+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578587, 1) || First: notFirst: full path: ts
2019-09-04T06:29:47.468+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:47.468+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578587, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:47.468+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:47.468+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:47.468+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:47.468+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:47.468+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:47.469+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:47.469+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:47.469+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:47.469+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:47.469+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578587, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:47.469+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:47.469+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:47.469+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:47.469+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578587, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:47.469+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:47.469+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578587, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:47.469+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7046
2019-09-04T06:29:47.469+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:47.469+0000 D3 STORAGE [repl-writer-worker-6] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:47.469+0000 D3 REPL [repl-writer-worker-6] applying op: { ts: Timestamp(1567578587, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578587456), o: { $v: 1, $set: { ping: new Date(1567578587455) } } }, oplog application mode: Secondary
2019-09-04T06:29:47.469+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578587, 1)
2019-09-04T06:29:47.469+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 7048
2019-09-04T06:29:47.469+0000 D2 QUERY [repl-writer-worker-6] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }
2019-09-04T06:29:47.469+0000 D4 WRITE [repl-writer-worker-6] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:47.469+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 7048
2019-09-04T06:29:47.469+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:47.469+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578587, 1), t: 1 }({ ts: Timestamp(1567578587, 1), t: 1 })
2019-09-04T06:29:47.469+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578587, 1)
2019-09-04T06:29:47.469+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7047
2019-09-04T06:29:47.469+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:29:47.469+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:47.469+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:47.469+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:47.469+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
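The D2/D5 QUERY lines above trace the batch applier planning its consistency-marker reads against local.replset.minvalid: the $or query is split into two subplans, neither child can use the only available index (_id_), and each falls back to a COLLSCAN. The same planning outcome can be reproduced interactively with explain(); a minimal mongo shell sketch, assuming a shell connected to this node:

    // Re-run the subplanned $or query from the log and show the winning plan (a COLLSCAN).
    db.getSiblingDB("local").getCollection("replset.minvalid").find({
      $or: [
        { t: { $lt: 1 } },
        { t: 1, ts: { $lt: Timestamp(1567578587, 1) } }  // Timestamp() is the shell's BSON timestamp constructor
      ]
    }).explain("queryPlanner")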
2019-09-04T06:29:47.469+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:47.469+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7047
2019-09-04T06:29:47.469+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578587, 1)
2019-09-04T06:29:47.469+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7051
2019-09-04T06:29:47.469+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7051
2019-09-04T06:29:47.469+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578587, 1), t: 1 }({ ts: Timestamp(1567578587, 1), t: 1 })
2019-09-04T06:29:47.469+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:47.469+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578587, 1), t: 1 }, appliedWallTime: new Date(1567578587456), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:47.469+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 470 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:17.469+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578587, 1), t: 1 }, appliedWallTime: new Date(1567578587456), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:47.469+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:17.469+0000
2019-09-04T06:29:47.470+0000 D2 ASIO [RS] Request 470 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578584, 1), $clusterTime: { clusterTime: Timestamp(1567578587, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 1) }
2019-09-04T06:29:47.470+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578584, 1), $clusterTime: { clusterTime: Timestamp(1567578587, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:47.470+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:47.470+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:17.470+0000
2019-09-04T06:29:47.470+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578587, 1), t: 1 }
2019-09-04T06:29:47.470+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 471 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:57.470+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578584, 1), t: 1 } }
2019-09-04T06:29:47.470+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:17.470+0000
2019-09-04T06:29:47.475+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:29:47.475+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:47.475+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578587, 1), t: 1 }, durableWallTime: new Date(1567578587456), appliedOpTime: { ts: Timestamp(1567578587, 1), t: 1 }, appliedWallTime: new Date(1567578587456), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:47.475+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 472 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:17.475+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578587, 1), t: 1 }, durableWallTime: new Date(1567578587456), appliedOpTime: { ts: Timestamp(1567578587, 1), t: 1 }, appliedWallTime: new Date(1567578587456), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:47.475+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:17.470+0000
2019-09-04T06:29:47.475+0000 D2 ASIO [RS] Request 472 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578584, 1), $clusterTime: { clusterTime: Timestamp(1567578587, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 1) }
2019-09-04T06:29:47.475+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578584, 1), t: 1 }, lastCommittedWall: new Date(1567578584984), lastOpVisible: { ts: Timestamp(1567578584, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578584, 1), $clusterTime: { clusterTime: Timestamp(1567578587, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:47.475+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:47.475+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:17.470+0000
2019-09-04T06:29:47.476+0000 D2 ASIO [RS] Request 471 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 1), t: 1 }, lastCommittedWall: new Date(1567578587456), lastOpVisible: { ts: Timestamp(1567578587, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578587, 1), t: 1 }, lastCommittedWall: new Date(1567578587456), lastOpApplied: { ts: Timestamp(1567578587, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 1), $clusterTime: { clusterTime: Timestamp(1567578587, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 1) }
2019-09-04T06:29:47.476+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 1), t: 1 }, lastCommittedWall: new Date(1567578587456), lastOpVisible: { ts: Timestamp(1567578587, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578587, 1), t: 1 }, lastCommittedWall: new Date(1567578587456), lastOpApplied: { ts: Timestamp(1567578587, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 1), $clusterTime: { clusterTime: Timestamp(1567578587, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:47.476+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:47.476+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:47.476+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578582, 1)
2019-09-04T06:29:47.476+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:58.514+0000
2019-09-04T06:29:47.476+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:57.886+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn173] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn173] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.664+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn188] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn188] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.768+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn189] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn189] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:52.054+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn193] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn193] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.753+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn186] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:29:47.476+0000 D3 REPL [conn186] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.662+0000
2019-09-04T06:29:47.476+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn196] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn196] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.962+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn192] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn192] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.469+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn185] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn185] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:59.943+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn194] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn194] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.763+0000
2019-09-04T06:29:47.476+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 473 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:57.476+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578587, 1), t: 1 } }
2019-09-04T06:29:47.476+0000 D3 REPL [conn172] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn172] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:58.752+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn184] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn184] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.433+0000
2019-09-04T06:29:47.476+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:17.470+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn197] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.476+0000 D3 REPL [conn197] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.987+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn183] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn183] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.645+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn167] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn167] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:57.567+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn159] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn159] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:56.299+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn195] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn195] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.897+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn187] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn187] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.753+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn190] Got notified of new snapshot: { ts: Timestamp(1567578587, 1), t: 1 }, 2019-09-04T06:29:47.456+0000
2019-09-04T06:29:47.477+0000 D3 REPL [conn190] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:55.060+0000
2019-09-04T06:29:47.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:47.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:47.535+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:47.537+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:47.537+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:47.568+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578587, 1)
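Everything tagged D1 through D5 in this log exists only because component verbosity has been raised; the severity column (I vs. D1-D5) maps directly to those debug levels. Verbosity can be changed at runtime without a restart; a minimal mongo shell sketch, assuming a shell connected to this node:

    // Quiet the log back down, or keep extra detail for a single component.
    db.setLogLevel(0)                 // default: informational (I) messages only
    db.setLogLevel(2, "replication")  // e.g. keep D2 output for the replication component
    db.adminCommand({ getParameter: 1, logComponentVerbosity: 1 })  // inspect current per-component levels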
I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:47.728+0000 D2 ASIO [RS] Request 473 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578587, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578587722), o: { $v: 1, $set: { ping: new Date(1567578587715) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 1), t: 1 }, lastCommittedWall: new Date(1567578587456), lastOpVisible: { ts: Timestamp(1567578587, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578587, 1), t: 1 }, lastCommittedWall: new Date(1567578587456), lastOpApplied: { ts: Timestamp(1567578587, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 1), $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 2) } 2019-09-04T06:29:47.728+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578587, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578587722), o: { $v: 1, $set: { ping: new Date(1567578587715) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 1), t: 1 }, lastCommittedWall: new Date(1567578587456), lastOpVisible: { ts: Timestamp(1567578587, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578587, 1), t: 1 }, lastCommittedWall: new Date(1567578587456), lastOpApplied: { ts: Timestamp(1567578587, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 1), $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:47.728+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:47.728+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578587, 2) and ending at ts: Timestamp(1567578587, 2) 2019-09-04T06:29:47.728+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:57.886+0000 2019-09-04T06:29:47.728+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:58.454+0000 2019-09-04T06:29:47.728+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578587, 2), t: 1 } 2019-09-04T06:29:47.728+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 
2019-09-04T06:29:47.728+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:47.728+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:47.728+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:47.728+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578587, 1) 2019-09-04T06:29:47.728+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7060 2019-09-04T06:29:47.728+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:47.728+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:47.728+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7060 2019-09-04T06:29:47.728+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:47.728+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:29:47.728+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:47.729+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578587, 1) 2019-09-04T06:29:47.729+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7063 2019-09-04T06:29:47.729+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578587, 2) } 2019-09-04T06:29:47.729+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:47.729+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:47.729+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7063 2019-09-04T06:29:47.729+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7053 2019-09-04T06:29:47.729+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7053 2019-09-04T06:29:47.729+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7066 2019-09-04T06:29:47.729+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7066 2019-09-04T06:29:47.729+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:47.729+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 7068 2019-09-04T06:29:47.729+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578587, 2) 2019-09-04T06:29:47.729+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578587, 2) 2019-09-04T06:29:47.729+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 7068 2019-09-04T06:29:47.729+0000 D3 EXECUTOR 
[repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:47.729+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:47.729+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7067 2019-09-04T06:29:47.729+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7067 2019-09-04T06:29:47.729+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7070 2019-09-04T06:29:47.729+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7070 2019-09-04T06:29:47.729+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578587, 2), t: 1 }({ ts: Timestamp(1567578587, 2), t: 1 }) 2019-09-04T06:29:47.729+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578587, 2) 2019-09-04T06:29:47.729+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7071 2019-09-04T06:29:47.729+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578587, 2) } } ] } sort: {} projection: {} 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578587, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578587, 2) || First: notFirst: full path: ts 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578587, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
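The D5 QUERY block above is the subplanner decomposing the minvalid predicate { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578587, 2) } } ] } into its two $or branches. Since local.replset.minvalid carries only the _id index, each branch rates zero indexed solutions and falls back to a COLLSCAN, which is harmless here because the collection holds a single document. A sketch that replays the same filter through explain, again assuming a direct connection to this member:

# Sketch only: reproduce the logged minvalid query and inspect its plan.
from pymongo import MongoClient
from bson.timestamp import Timestamp

client = MongoClient("cmodb803.togewa.com", 27019)  # this node, per the log

plan = client.local.command(
    "explain",
    {
        "find": "replset.minvalid",
        # filter copied from "Running query as sub-queries" above
        "filter": {
            "$or": [
                {"t": {"$lt": 1}},
                {"t": 1, "ts": {"$lt": Timestamp(1567578587, 2)}},
            ]
        },
    },
)
# With only the _id index available this should show a collection scan
# (COLLSCAN, possibly under a SUBPLAN stage), matching
# "Planner: outputting a collscan" above.
print(plan["queryPlanner"]["winningPlan"])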
2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578587, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578587, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:47.729+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578587, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:47.729+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7071 2019-09-04T06:29:47.729+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:47.729+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:47.729+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578587, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578587722), o: { $v: 1, $set: { ping: new Date(1567578587715) } } }, oplog application mode: Secondary 2019-09-04T06:29:47.729+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578587, 2) 2019-09-04T06:29:47.729+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 7073 2019-09-04T06:29:47.729+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" } 2019-09-04T06:29:47.730+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:29:47.730+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 7073 2019-09-04T06:29:47.730+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:47.730+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578587, 2), t: 1 }({ ts: Timestamp(1567578587, 2), t: 1 }) 2019-09-04T06:29:47.730+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578587, 2) 2019-09-04T06:29:47.730+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7072 2019-09-04T06:29:47.730+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:47.730+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:47.730+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:29:47.730+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:47.730+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:47.730+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:47.730+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7072 2019-09-04T06:29:47.730+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578587, 2) 2019-09-04T06:29:47.730+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7076 2019-09-04T06:29:47.730+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7076 2019-09-04T06:29:47.730+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578587, 2), t: 1 }({ ts: Timestamp(1567578587, 2), t: 1 }) 2019-09-04T06:29:47.730+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:47.730+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578587, 1), t: 1 }, durableWallTime: new Date(1567578587456), appliedOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, appliedWallTime: new Date(1567578587722), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 1), t: 1 }, lastCommittedWall: new Date(1567578587456), lastOpVisible: { ts: Timestamp(1567578587, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:47.730+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 474 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:17.730+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578587, 1), t: 1 }, durableWallTime: new Date(1567578587456), appliedOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, appliedWallTime: new Date(1567578587722), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 2, cfgver: 2 } ], 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 1), t: 1 }, lastCommittedWall: new Date(1567578587456), lastOpVisible: { ts: Timestamp(1567578587, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:47.730+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:17.730+0000 2019-09-04T06:29:47.730+0000 D2 ASIO [RS] Request 474 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 1), t: 1 }, lastCommittedWall: new Date(1567578587456), lastOpVisible: { ts: Timestamp(1567578587, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 1), $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 2) } 2019-09-04T06:29:47.730+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 1), t: 1 }, lastCommittedWall: new Date(1567578587456), lastOpVisible: { ts: Timestamp(1567578587, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 1), $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:47.730+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:47.730+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:17.730+0000 2019-09-04T06:29:47.730+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578587, 2), t: 1 } 2019-09-04T06:29:47.730+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 475 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:57.730+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578587, 1), t: 1 } } 2019-09-04T06:29:47.730+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:17.730+0000 2019-09-04T06:29:47.732+0000 D2 ASIO [RS] Request 475 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpApplied: { ts: Timestamp(1567578587, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578587, 2), $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 2) } 2019-09-04T06:29:47.732+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpApplied: { ts: Timestamp(1567578587, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 2), $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:47.733+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:47.733+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:29:47.733+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578582, 2) 2019-09-04T06:29:47.733+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn197] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn197] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.987+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000 
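The burst of "Got notified of new snapshot" entries surrounding this point (it continues below) is the wake-up fan-out: once the stable optime advances to { ts: Timestamp(1567578587, 2), t: 1 }, every connection parked in waitUntilOpTime (conn147 through conn212) re-checks whether the optime it is waiting for is now majority-committed. These waiters are typically afterClusterTime reads from mongos or shard nodes against this config server. A speculative client-side sketch of one such waiter, assuming a causally consistent session against the configrs set; the connection string is illustrative, only the host names come from the log:

# Sketch only: a causally consistent secondary read that can park in
# waitUntilOpTime exactly like the connections above.
from pymongo import MongoClient, ReadPreference

client = MongoClient(
    "mongodb://cmodb802.togewa.com:27019,cmodb803.togewa.com:27019,"
    "cmodb804.togewa.com:27019/?replicaSet=configrs"
)

with client.start_session(causal_consistency=True) as session:
    lockpings = client.config.get_collection(
        "lockpings", read_preference=ReadPreference.SECONDARY
    )
    # The first read pins the session's operationTime; subsequent reads send
    # afterClusterTime, and a lagging secondary blocks them until its
    # majority snapshot reaches that cluster time.
    doc = lockpings.find_one({}, session=session)
    print(session.operation_time, doc)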
2019-09-04T06:29:47.733+0000 D3 REPL [conn196] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn196] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.962+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn183] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn183] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.645+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn167] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn167] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:57.567+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn187] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn187] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.753+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000 2019-09-04T06:29:47.733+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:58.454+0000 2019-09-04T06:29:47.733+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:58.337+0000 2019-09-04T06:29:47.733+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:47.733+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:47.733+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 476 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:57.733+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578587, 2), t: 1 } } 2019-09-04T06:29:47.733+0000 D3 REPL [conn189] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn189] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:29:52.054+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn185] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn185] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:59.943+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn173] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn173] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.664+0000 2019-09-04T06:29:47.733+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:17.730+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn192] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn192] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.469+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn186] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn186] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.662+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn184] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn184] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.433+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn193] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn193] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.753+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn194] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn194] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.763+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn159] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn159] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:56.299+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 
2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000 2019-09-04T06:29:47.733+0000 D3 REPL [conn190] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.734+0000 D3 REPL [conn190] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:55.060+0000 2019-09-04T06:29:47.734+0000 D3 REPL [conn172] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.734+0000 D3 REPL [conn172] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:58.752+0000 2019-09-04T06:29:47.734+0000 D3 REPL [conn195] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.734+0000 D3 REPL [conn195] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.897+0000 2019-09-04T06:29:47.734+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.734+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000 2019-09-04T06:29:47.734+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.734+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000 2019-09-04T06:29:47.734+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.734+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000 2019-09-04T06:29:47.734+0000 D3 REPL [conn188] Got notified of new snapshot: { ts: Timestamp(1567578587, 2), t: 1 }, 2019-09-04T06:29:47.722+0000 2019-09-04T06:29:47.734+0000 D3 REPL [conn188] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.768+0000 2019-09-04T06:29:47.734+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:29:47.734+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:47.734+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), appliedOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, appliedWallTime: new Date(1567578587722), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:47.734+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 477 -- target:[cmodb804.togewa.com:27019] db:admin 
expDate:2019-09-04T06:30:17.734+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), appliedOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, appliedWallTime: new Date(1567578587722), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, durableWallTime: new Date(1567578584984), appliedOpTime: { ts: Timestamp(1567578584, 1), t: 1 }, appliedWallTime: new Date(1567578584984), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:47.734+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:17.730+0000 2019-09-04T06:29:47.734+0000 D2 ASIO [RS] Request 477 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 2), $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 2) } 2019-09-04T06:29:47.734+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 2), $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:47.734+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:47.734+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:17.730+0000 2019-09-04T06:29:47.735+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:47.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:47.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:47.829+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578587, 2) 2019-09-04T06:29:47.835+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:47.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:47.855+0000 I 
COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:47.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:47.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:47.936+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:47.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:47.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:48.036+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:48.037+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.037+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.130+0000 D2 COMMAND [conn20] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.130+0000 I COMMAND [conn20] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.136+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:48.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, A351DDC34AA3DF23BCA23C7C247B350401C0DD36), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:48.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:48.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, A351DDC34AA3DF23BCA23C7C247B350401C0DD36), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:48.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, 
A351DDC34AA3DF23BCA23C7C247B350401C0DD36), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:48.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), opTime: { ts: Timestamp(1567578587, 2), t: 1 }, wallTime: new Date(1567578587722) } 2019-09-04T06:29:48.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, A351DDC34AA3DF23BCA23C7C247B350401C0DD36), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:48.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:48.236+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:48.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.336+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:48.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.436+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:48.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.511+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:29:48.511+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:29:48.511+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:48.511+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:29:48.536+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:48.537+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.537+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.631+0000 I COMMAND [conn19] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.636+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:48.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.729+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:48.729+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:48.729+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578587, 2) 2019-09-04T06:29:48.729+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7101 2019-09-04T06:29:48.729+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:48.729+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:48.729+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7101 2019-09-04T06:29:48.730+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7104 2019-09-04T06:29:48.730+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7104 2019-09-04T06:29:48.730+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578587, 2), t: 1 }({ ts: Timestamp(1567578587, 2), t: 1 }) 2019-09-04T06:29:48.737+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:48.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.837+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:48.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:48.838+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 478) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:48.838+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 478 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:29:58.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:48.838+0000 D3 EXECUTOR [replexec-1] Not 
reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:48.838+0000 D2 ASIO [Replication] Request 478 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), opTime: { ts: Timestamp(1567578587, 2), t: 1 }, wallTime: new Date(1567578587722), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 2), $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 2) } 2019-09-04T06:29:48.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), opTime: { ts: Timestamp(1567578587, 2), t: 1 }, wallTime: new Date(1567578587722), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 2), $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:48.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:48.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 478) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), opTime: { ts: Timestamp(1567578587, 2), t: 1 }, wallTime: new Date(1567578587722), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 2), $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 2) } 2019-09-04T06:29:48.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:48.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:50.838Z 2019-09-04T06:29:48.838+0000 D3 EXECUTOR [replexec-3] Not reaping because 
the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:48.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:48.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 479) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:48.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 479 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:29:58.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:48.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:48.839+0000 D2 ASIO [Replication] Request 479 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), opTime: { ts: Timestamp(1567578587, 2), t: 1 }, wallTime: new Date(1567578587722), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578587, 2), $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 2) } 2019-09-04T06:29:48.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), opTime: { ts: Timestamp(1567578587, 2), t: 1 }, wallTime: new Date(1567578587722), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578587, 2), $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:48.839+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:48.839+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 479) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), opTime: { ts: Timestamp(1567578587, 2), t: 1 }, wallTime: new Date(1567578587722), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: 
Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578587, 2), $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 2) } 2019-09-04T06:29:48.839+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:48.839+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:29:58.337+0000 2019-09-04T06:29:48.839+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:29:59.023+0000 2019-09-04T06:29:48.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:48.839+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:48.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:48.839+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:50.839Z 2019-09-04T06:29:48.839+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:48.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:48.937+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:48.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:48.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:49.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:49.037+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:49.037+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:49.037+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:49.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, A351DDC34AA3DF23BCA23C7C247B350401C0DD36), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:49.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:49.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, 
A351DDC34AA3DF23BCA23C7C247B350401C0DD36), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:49.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, A351DDC34AA3DF23BCA23C7C247B350401C0DD36), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:49.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), opTime: { ts: Timestamp(1567578587, 2), t: 1 }, wallTime: new Date(1567578587722) } 2019-09-04T06:29:49.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578587, 2), signature: { hash: BinData(0, A351DDC34AA3DF23BCA23C7C247B350401C0DD36), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:49.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:49.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:49.137+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:49.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:49.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:49.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:49.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:49.196+0000 D2 ASIO [RS] Request 476 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578589, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578589164), o: { $v: 1, $set: { ping: new Date(1567578589161) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpApplied: { ts: Timestamp(1567578589, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 2), $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } 2019-09-04T06:29:49.196+0000 D3 
EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578589, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578589164), o: { $v: 1, $set: { ping: new Date(1567578589161) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpApplied: { ts: Timestamp(1567578589, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 2), $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:49.196+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:49.196+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578589, 1) and ending at ts: Timestamp(1567578589, 1) 2019-09-04T06:29:49.196+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:29:59.023+0000 2019-09-04T06:29:49.196+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:29:59.772+0000 2019-09-04T06:29:49.196+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:49.196+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:49.196+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578589, 1), t: 1 } 2019-09-04T06:29:49.196+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:49.196+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:49.196+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578587, 2) 2019-09-04T06:29:49.196+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7118 2019-09-04T06:29:49.196+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:49.196+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:49.196+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7118 2019-09-04T06:29:49.196+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:49.196+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 
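The cycle then repeats verbatim for the next lockpings ping at Timestamp(1567578589, 1): fetch, batch, write to the local oplog, apply, advance minvalid. When mining a capture like this one, where the wrapping puts several entries on a single physical line, it helps to re-split on the leading ISO-8601 timestamp; a small sketch of that, written against the 4.2-era plain-text log format used throughout this file:

# Sketch only: split wrapped lines back into individual log entries.
# An entry starts with a timestamp like 2019-09-04T06:29:49.196+0000,
# then severity (I, W, E, F, or D1-D5), component, and [context].
import re

ENTRY_START = re.compile(r"(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}\+\d{4} )")
HEADER = re.compile(
    r"^(?P<ts>\S+)\s+(?P<severity>[IWEF]|D\d?)\s+(?P<component>\S+)\s+"
    r"\[(?P<context>[^\]]+)\]\s*"
)

def split_entries(raw_line):
    """Yield (timestamp, severity, component, context, message) tuples."""
    for chunk in filter(None, ENTRY_START.split(raw_line)):
        m = HEADER.match(chunk)
        if m:
            yield (*m.group("ts", "severity", "component", "context"),
                   chunk[m.end():].rstrip())

line = ("2019-09-04T06:29:49.196+0000 D2 REPL [replication-1] oplog fetcher "
        "read 1 operations from remote oplog")
print(list(split_entries(line)))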
2019-09-04T06:29:49.196+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:29:49.196+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578589, 1) }
2019-09-04T06:29:49.196+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7105
2019-09-04T06:29:49.196+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578587, 2)
2019-09-04T06:29:49.196+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7121
2019-09-04T06:29:49.196+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:49.196+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:49.196+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7105
2019-09-04T06:29:49.196+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7121
2019-09-04T06:29:49.196+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7122
2019-09-04T06:29:49.196+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7122
2019-09-04T06:29:49.196+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:49.196+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 7126
2019-09-04T06:29:49.196+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578589, 1)
2019-09-04T06:29:49.196+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578589, 1)
2019-09-04T06:29:49.196+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 7126
2019-09-04T06:29:49.196+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:49.196+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:29:49.196+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7125
2019-09-04T06:29:49.196+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7125
2019-09-04T06:29:49.196+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7128
2019-09-04T06:29:49.196+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7128
2019-09-04T06:29:49.196+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578589, 1), t: 1 }({ ts: Timestamp(1567578589, 1), t: 1 })
2019-09-04T06:29:49.197+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578589, 1)
2019-09-04T06:29:49.197+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7129
2019-09-04T06:29:49.197+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578589, 1) } } ] } sort: {} projection: {}
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578589, 1) Sort: {} Proj: {} =============================
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578589, 1) || First: notFirst: full path: ts
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578589, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578589, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578589, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578589, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:49.197+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7129
2019-09-04T06:29:49.197+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:29:49.197+0000 D3 STORAGE [repl-writer-worker-1] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:49.197+0000 D3 REPL [repl-writer-worker-1] applying op: { ts: Timestamp(1567578589, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578589164), o: { $v: 1, $set: { ping: new Date(1567578589161) } } }, oplog application mode: Secondary
2019-09-04T06:29:49.197+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578589, 1)
2019-09-04T06:29:49.197+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 7131
2019-09-04T06:29:49.197+0000 D2 QUERY [repl-writer-worker-1] Using idhack: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }
2019-09-04T06:29:49.197+0000 D4 WRITE [repl-writer-worker-1] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:29:49.197+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 7131
2019-09-04T06:29:49.197+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:29:49.197+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578589, 1), t: 1 }({ ts: Timestamp(1567578589, 1), t: 1 })
2019-09-04T06:29:49.197+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578589, 1)
2019-09-04T06:29:49.197+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7130
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:49.197+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:49.197+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:49.197+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7130
2019-09-04T06:29:49.197+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578589, 1)
2019-09-04T06:29:49.197+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7134
2019-09-04T06:29:49.197+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7134
2019-09-04T06:29:49.197+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578589, 1), t: 1 }({ ts: Timestamp(1567578589, 1), t: 1 })
2019-09-04T06:29:49.197+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:49.197+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), appliedOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, appliedWallTime: new Date(1567578587722), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), appliedOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, appliedWallTime: new Date(1567578587722), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:49.197+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 480 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:19.197+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), appliedOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, appliedWallTime: new Date(1567578587722), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), appliedOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, appliedWallTime: new Date(1567578587722), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:49.197+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:19.197+0000
2019-09-04T06:29:49.198+0000 D2 ASIO [RS] Request 480 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 2), $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) }
2019-09-04T06:29:49.198+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578587, 2), t: 1 }, lastCommittedWall: new Date(1567578587722), lastOpVisible: { ts: Timestamp(1567578587, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 2), $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:49.198+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:49.198+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:19.198+0000
2019-09-04T06:29:49.198+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578589, 1), t: 1 }
2019-09-04T06:29:49.198+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 481 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:59.198+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578587, 2), t: 1 } }
2019-09-04T06:29:49.198+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:19.198+0000
2019-09-04T06:29:49.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:49.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:49.223+0000 D2 ASIO [RS] Request 481 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpApplied: { ts: Timestamp(1567578589, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) }
2019-09-04T06:29:49.223+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpApplied: { ts: Timestamp(1567578589, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:49.223+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:49.223+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:49.223+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.223+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.223+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578584, 1)
2019-09-04T06:29:49.223+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn194] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn194] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.763+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn167] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn167] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:57.567+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn187] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn187] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.753+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn183] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn183] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.645+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn173] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn173] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.664+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn185] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.223+0000 D3 REPL [conn185] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:59.943+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn189] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn189] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:52.054+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn184] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn184] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.433+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn159] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn159] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:56.299+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn192] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn192] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.469+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn172] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn172] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:58.752+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn188] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn188] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.768+0000
2019-09-04T06:29:49.224+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:29:59.772+0000
2019-09-04T06:29:49.224+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:29:59.608+0000
2019-09-04T06:29:49.224+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:29:49.224+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000
2019-09-04T06:29:49.224+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 482 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:29:59.224+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578589, 1), t: 1 } }
2019-09-04T06:29:49.224+0000 D3 REPL [conn197] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn197] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.987+0000
2019-09-04T06:29:49.224+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:19.198+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn195] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn195] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.897+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn190] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn190] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:55.060+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn193] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn193] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.753+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn196] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn196] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.962+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000
2019-09-04T06:29:49.224+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.225+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000
2019-09-04T06:29:49.225+0000 D3 REPL [conn186] Got notified of new snapshot: { ts: Timestamp(1567578589, 1), t: 1 }, 2019-09-04T06:29:49.164+0000
2019-09-04T06:29:49.225+0000 D3 REPL [conn186] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:51.662+0000
2019-09-04T06:29:49.233+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:29:49.233+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:49.233+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), appliedOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, appliedWallTime: new Date(1567578587722), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), appliedOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, appliedWallTime: new Date(1567578587722), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:49.233+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 483 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:19.233+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), appliedOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, appliedWallTime: new Date(1567578587722), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, durableWallTime: new Date(1567578587722), appliedOpTime: { ts: Timestamp(1567578587, 2), t: 1 }, appliedWallTime: new Date(1567578587722), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:29:49.233+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:19.198+0000
2019-09-04T06:29:49.233+0000 D2 ASIO [RS] Request 483 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) }
2019-09-04T06:29:49.233+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:49.233+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:29:49.233+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:19.198+0000
2019-09-04T06:29:49.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:29:49.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:29:49.237+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:49.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:49.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:49.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:49.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:49.296+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578589, 1)
2019-09-04T06:29:49.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions.
2019-09-04T06:29:49.337+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:49.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:49.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:49.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry 2019-09-04T06:29:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:29:49.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } 2019-09-04T06:29:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:49.370+0000 D5 QUERY [shard-registry-reload] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:49.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:29:49.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:29:49.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and 2019-09-04T06:29:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:49.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:49.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578589, 1) 2019-09-04T06:29:49.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 7144 2019-09-04T06:29:49.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 7144 2019-09-04T06:29:49.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:646 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:29:49.370+0000 D1 SHARDING [shard-registry-reload] found 3 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578589, 1), t: 1 } 2019-09-04T06:29:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:29:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:29:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:29:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:29:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:29:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:29:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS 2019-09-04T06:29:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002 2019-09-04T06:29:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 484 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:29:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:29:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 485 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:29:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:29:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000 2019-09-04T06:29:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 486 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:29:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:29:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 487 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:29:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:29:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001 2019-09-04T06:29:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 488 -- 
target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:29:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:29:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 489 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:29:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:29:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 484 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578579, 1), t: 1 }, lastWriteDate: new Date(1567578579000), majorityOpTime: { ts: Timestamp(1567578579, 1), t: 1 }, majorityWriteDate: new Date(1567578579000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578589386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578579, 1), $configServerState: { opTime: { ts: Timestamp(1567578584, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578579, 1) } 2019-09-04T06:29:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578579, 1), t: 1 }, lastWriteDate: new Date(1567578579000), majorityOpTime: { ts: Timestamp(1567578579, 1), t: 1 }, majorityWriteDate: new Date(1567578579000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578589386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578579, 1), $configServerState: { opTime: { ts: Timestamp(1567578584, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578579, 1) } target: cmodb810.togewa.com:27018 2019-09-04T06:29:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 489 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578587, 1), t: 1 }, lastWriteDate: new Date(1567578587000), majorityOpTime: { ts: Timestamp(1567578587, 1), t: 1 }, majorityWriteDate: new Date(1567578587000) }, 
maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578589386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 1), $configServerState: { opTime: { ts: Timestamp(1567578570, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578587, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 1) } 2019-09-04T06:29:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578587, 1), t: 1 }, lastWriteDate: new Date(1567578587000), majorityOpTime: { ts: Timestamp(1567578587, 1), t: 1 }, majorityWriteDate: new Date(1567578587000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578589386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578587, 1), $configServerState: { opTime: { ts: Timestamp(1567578570, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578587, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 1) } target: cmodb809.togewa.com:27018 2019-09-04T06:29:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 487 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578583, 1), t: 1 }, lastWriteDate: new Date(1567578583000), majorityOpTime: { ts: Timestamp(1567578583, 1), t: 1 }, majorityWriteDate: new Date(1567578583000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578589386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578583, 1), $configServerState: { opTime: { ts: Timestamp(1567578578, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578583, 1) } 2019-09-04T06:29:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: 
"cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578583, 1), t: 1 }, lastWriteDate: new Date(1567578583000), majorityOpTime: { ts: Timestamp(1567578583, 1), t: 1 }, majorityWriteDate: new Date(1567578583000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578589386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578583, 1), $configServerState: { opTime: { ts: Timestamp(1567578578, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578583, 1) } target: cmodb807.togewa.com:27018 2019-09-04T06:29:49.386+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 486 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578583, 1), t: 1 }, lastWriteDate: new Date(1567578583000), majorityOpTime: { ts: Timestamp(1567578583, 1), t: 1 }, majorityWriteDate: new Date(1567578583000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578589385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578583, 1), $configServerState: { opTime: { ts: Timestamp(1567578584, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578583, 1) } 2019-09-04T06:29:49.386+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578583, 1), t: 1 }, lastWriteDate: new Date(1567578583000), majorityOpTime: { ts: Timestamp(1567578583, 1), t: 1 }, majorityWriteDate: new Date(1567578583000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578589385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578583, 1), $configServerState: { opTime: { ts: Timestamp(1567578584, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, 
operationTime: Timestamp(1567578583, 1) } target: cmodb806.togewa.com:27018 2019-09-04T06:29:49.386+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms 2019-09-04T06:29:49.386+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 488 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578587, 1), t: 1 }, lastWriteDate: new Date(1567578587000), majorityOpTime: { ts: Timestamp(1567578587, 1), t: 1 }, majorityWriteDate: new Date(1567578587000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578589386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578587, 1), $configServerState: { opTime: { ts: Timestamp(1567578584, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578587, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 1) } 2019-09-04T06:29:49.386+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578587, 1), t: 1 }, lastWriteDate: new Date(1567578587000), majorityOpTime: { ts: Timestamp(1567578587, 1), t: 1 }, majorityWriteDate: new Date(1567578587000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578589386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578587, 1), $configServerState: { opTime: { ts: Timestamp(1567578584, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578587, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578587, 1) } target: cmodb808.togewa.com:27018 2019-09-04T06:29:49.386+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms 2019-09-04T06:29:49.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 485 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578579, 1), t: 1 }, lastWriteDate: new Date(1567578579000), majorityOpTime: { ts: Timestamp(1567578579, 1), t: 1 }, majorityWriteDate: new Date(1567578579000) }, maxBsonObjectSize: 16777216, 
maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578589386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578579, 1), $configServerState: { opTime: { ts: Timestamp(1567578578, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578579, 1) } 2019-09-04T06:29:49.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578579, 1), t: 1 }, lastWriteDate: new Date(1567578579000), majorityOpTime: { ts: Timestamp(1567578579, 1), t: 1 }, majorityWriteDate: new Date(1567578579000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578589386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578579, 1), $configServerState: { opTime: { ts: Timestamp(1567578578, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578579, 1) } target: cmodb811.togewa.com:27018 2019-09-04T06:29:49.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms 2019-09-04T06:29:49.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:49.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:49.438+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:49.439+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578589439) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } 2019-09-04T06:29:49.439+0000 D4 - [replSetDistLockPinger] Taking ticket. 
Available: 1000000000 2019-09-04T06:29:49.439+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178 2019-09-04T06:29:49.439+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:29:49.461+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 
3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : 
"7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xADB043) [0x561749a63043] mongod(+0x13B2606) [0x56174a33a606] mongod(+0x13B3A55) [0x56174a33ba55] mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894] mongod(+0x10FA899) [0x56174a082899] mongod(+0x10FBF53) [0x56174a083f53] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(+0x1FBD2EE) [0x56174af452ee] mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa] mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2] mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b] mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e] mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc] mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1] mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:49.461+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. 
2019-09-04T06:29:49.461+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. OpTime: { ts: Timestamp(1567578589, 1), t: 1 }, write concern: { w: "majority", wtimeout: 15000 }
2019-09-04T06:29:49.461+0000 D4 STORAGE [replSetDistLockPinger] flushed journal
2019-09-04T06:29:49.461+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578589439) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:29:49.461+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578589439) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 21ms
2019-09-04T06:29:49.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:49.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:49.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:49.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:49.537+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:49.537+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:49.538+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:49.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:49.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:49.638+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:49.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:49.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:49.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:49.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:49.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:49.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:49.738+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:49.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:49.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:49.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:49.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:49.838+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:49.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:49.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:49.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:49.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:49.938+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:49.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:49.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:49.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:49.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:49.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:49.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:50.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:50.004+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:29:50.004+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:29:50.004+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms
2019-09-04T06:29:50.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:29:50.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:29:50.012+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:29:50.012+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:29:50.012+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456
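The exchange above (one saslStart followed by two saslContinue rounds) is a complete SCRAM-SHA-1 conversation; mongod redacts the payloads to "xxx" in the log. A driver runs this handshake itself when given credentials; a sketch with pymongo, where the password is a placeholder rather than anything recoverable from the log:

    from pymongo import MongoClient

    client = MongoClient(
        "cmodb803.togewa.com", 27019,
        username="dba_root",
        password="<password>",        # placeholder; the log never shows it
        authSource="admin",           # the principal lives in admin, as logged
        authMechanism="SCRAM-SHA-1",  # forces the mechanism seen above
    )
    client.admin.command("ping")      # triggers the saslStart/saslContinue exchange
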
2019-09-04T06:29:50.012+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:29:50.012+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:29:50.013+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:29:50.014+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:29:50.014+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:29:50.015+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:29:50.015+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:29:50.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:50.015+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} =============================
2019-09-04T06:29:50.015+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:29:50.015+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:29:50.015+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:29:50.015+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:29:50.015+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:29:50.015+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:29:50.015+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:29:50.015+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:29:50.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:29:50.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578589, 1)
2019-09-04T06:29:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7168
2019-09-04T06:29:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7168
2019-09-04T06:29:50.015+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:29:50.015+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:29:50.015+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:29:50.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:50.016+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} =============================
2019-09-04T06:29:50.016+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
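The config.chunks count a few entries up is instructive at this verbosity: the D5 planner lines enumerate all four indexes on config.chunks, find none that covers the jumbo field, and fall back to a COLLSCAN (cheap here, with docsExamined:1). The same check from a client, sketched with pymongo and mirroring the secondaryPreferred read preference seen in the command:

    from pymongo import MongoClient, ReadPreference

    client = MongoClient("cmodb803.togewa.com", 27019)
    chunks = client.config.get_collection(
        "chunks", read_preference=ReadPreference.SECONDARY_PREFERRED
    )
    # No index covers "jumbo", so this runs as the COLLSCAN chosen above.
    print("jumbo chunks:", chunks.count_documents({"jumbo": True}))
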
2019-09-04T06:29:50.016+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578589, 1)
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7171
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7171
2019-09-04T06:29:50.016+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:29:50.016+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:50.016+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} =============================
2019-09-04T06:29:50.016+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:29:50.016+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578589, 1)
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7173
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7173
2019-09-04T06:29:50.016+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:29:50.016+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
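The pair of oplog.rs finds above is the usual way monitoring tools measure the oplog window: fetch the oldest and newest entries with forward and reverse $natural sorts (each is a forced table scan, but limit:1 keeps it cheap) and subtract the ts values. The follow-up find on oplog.$main, shown just below, merely probes for the legacy master-slave oplog and gets an EOF plan because that collection does not exist. A sketch of the window arithmetic, assuming pymongo:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    oplog = client.local["oplog.rs"]

    # First and last oplog entries via forward/reverse natural order,
    # mirroring the two find commands in the log.
    first = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
    last = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])

    # ts is a BSON Timestamp; .time is its seconds component.
    print("oplog window:", last["ts"].time - first["ts"].time, "seconds")
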
2019-09-04T06:29:50.016+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:29:50.016+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2019-09-04T06:29:50.016+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:29:50.016+0000 D2 COMMAND [conn90] command: listDatabases
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7176
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7176
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7177
2019-09-04T06:29:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns:
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7177 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7178 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7178 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7179 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7179 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7180 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7180 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7181 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7181 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7182 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7182 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7183 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, 
prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7183 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7184 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7184 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7185 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7185 2019-09-04T06:29:50.017+0000 D3 
STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7186 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:29:50.017+0000 D3 STORAGE 
[conn90] WT rollback_transaction for snapshot id 7186 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7187 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7187 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7188 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:29:50.017+0000 
D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7188 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7189 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7189 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7190 2019-09-04T06:29:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] returning metadata: 
md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7190 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7191 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7191 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7192 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:29:50.018+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7192 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7193 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7193 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7194 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7194 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] looking up metadata 
for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7195 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7195 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7196 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7196 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7197 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
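The begin_transaction/rollback_transaction pairs above are the read-only catalog walk behind the listDatabases command that conn90 completes just below: each collection's metadata (the "fetched CCE metadata" entries) is read under a short-lived WiredTiger snapshot, and the snapshot is rolled back rather than committed because nothing was written. A minimal shell sketch that drives the same scan, assuming only the host and port this node reports elsewhere in the log (cmodb803.togewa.com:27019):

    // Mirror the monitoring client's read preference, then issue the same
    // listDatabases seen on conn90. At D3 STORAGE verbosity, every collection
    // the catalog walk touches appears as a begin/rollback pair like the above.
    var conn = new Mongo("cmodb803.togewa.com:27019");
    conn.setReadPref("secondaryPreferred");
    printjson(conn.getDB("admin").runCommand({ listDatabases: 1 }));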
2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:29:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7197 2019-09-04T06:29:50.018+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:29:50.031+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:29:50.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7199 2019-09-04T06:29:50.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7199 2019-09-04T06:29:50.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7200 2019-09-04T06:29:50.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7200 2019-09-04T06:29:50.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7201 2019-09-04T06:29:50.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7201 2019-09-04T06:29:50.031+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:50.031+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:29:50.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7203 2019-09-04T06:29:50.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7203 2019-09-04T06:29:50.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7204 2019-09-04T06:29:50.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7204 2019-09-04T06:29:50.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7205 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7205 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7206 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7206 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7207 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7207 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7208 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7208 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7209 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7209 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7210 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for 
snapshot id 7210 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7211 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7211 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7212 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7212 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7213 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7213 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7214 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7214 2019-09-04T06:29:50.032+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:50.032+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7216 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7216 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7217 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7217 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7218 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7218 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7219 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7219 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7220 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7220 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7221 2019-09-04T06:29:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7221 2019-09-04T06:29:50.032+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:29:50.046+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.046+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.046+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:50.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { 
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.146+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:50.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.196+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:50.196+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:50.197+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578589, 1) 2019-09-04T06:29:50.197+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7227 2019-09-04T06:29:50.197+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:50.197+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:50.197+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7227 2019-09-04T06:29:50.197+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7230 2019-09-04T06:29:50.197+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7230 2019-09-04T06:29:50.197+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578589, 1), t: 1 }({ ts: Timestamp(1567578589, 1), t: 1 }) 2019-09-04T06:29:50.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, F362EC3CFE2EEA23C32BF9ED6A7ED7B1CF884841), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:50.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:50.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, F362EC3CFE2EEA23C32BF9ED6A7ED7B1CF884841), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:50.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578589, 1), 
signature: { hash: BinData(0, F362EC3CFE2EEA23C32BF9ED6A7ED7B1CF884841), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:50.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164) } 2019-09-04T06:29:50.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, F362EC3CFE2EEA23C32BF9ED6A7ED7B1CF884841), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:50.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999999 Now: 1000000000 2019-09-04T06:29:50.246+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:50.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.346+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:50.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.432+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.432+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.446+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:50.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.537+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.537+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
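Two client patterns dominate this window: one dbStats per database (admin, config, local) from the same monitoring connection, and the roughly once-per-second isMaster ping each pooled connection (conn5 through conn29) sends to track topology. A sketch reproducing both from a mongo shell attached to this node; the database names are the three enumerated above:

    // One dbStats per database, as conn90 issues above; the reslen 491/492
    // in those log lines is the size of each reply.
    ["admin", "config", "local"].forEach(function (name) {
      printjson(db.getSiblingDB(name).runCommand({ dbStats: 1 }));
    });
    // The steady isMaster traffic is this single command, repeated by every
    // monitoring and pooled connection; reslen 907 matches the replies above.
    printjson(db.adminCommand({ isMaster: 1 }));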
2019-09-04T06:29:50.546+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:50.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.646+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:50.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.747+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:50.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:50.838+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 490) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:50.838+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 490 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:00.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:50.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:50.838+0000 D2 ASIO [Replication] Request 490 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, 
operationTime: Timestamp(1567578589, 1) } 2019-09-04T06:29:50.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:50.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:50.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 490) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } 2019-09-04T06:29:50.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:50.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:52.838Z 2019-09-04T06:29:50.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:50.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:50.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 491) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:50.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 491 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:00.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:50.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:50.839+0000 D2 ASIO [Replication] Request 491 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, 
primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } 2019-09-04T06:29:50.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:50.839+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:50.839+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 491) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } 2019-09-04T06:29:50.839+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:50.839+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:29:59.608+0000 2019-09-04T06:29:50.839+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:30:01.009+0000 2019-09-04T06:29:50.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:50.839+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 
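The request-490/491 exchange above is one complete heartbeat round: this node (fromId: 1, cmodb803) polls both peers, learns that cmodb804 is a secondary (state: 2) syncing from cmodb802, and that cmodb802 is the primary (state: 1), whose reply also postpones the local election timeout. replSetGetStatus aggregates exactly these responses; a small sketch for inspecting the same view by hand:

    // state 1 = PRIMARY (cmodb802), state 2 = SECONDARY (cmodb803, cmodb804),
    // matching the heartbeat responses logged above.
    var s = db.adminCommand({ replSetGetStatus: 1 });
    s.members.forEach(function (m) {
      print(m.name + "  " + m.stateStr + "  optime=" + tojson(m.optimeDate));
    });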
2019-09-04T06:29:50.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:50.839+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:52.839Z 2019-09-04T06:29:50.839+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:50.847+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:50.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.932+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.932+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.947+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:50.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:50.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:50.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:51.037+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.037+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.047+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:51.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, F362EC3CFE2EEA23C32BF9ED6A7ED7B1CF884841), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:51.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:51.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, F362EC3CFE2EEA23C32BF9ED6A7ED7B1CF884841), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:51.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, 
F362EC3CFE2EEA23C32BF9ED6A7ED7B1CF884841), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:51.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164) } 2019-09-04T06:29:51.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, F362EC3CFE2EEA23C32BF9ED6A7ED7B1CF884841), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.147+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:51.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.197+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:51.197+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:51.197+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578589, 1) 2019-09-04T06:29:51.197+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7260 2019-09-04T06:29:51.197+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:51.197+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:51.197+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7260 2019-09-04T06:29:51.198+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7263 2019-09-04T06:29:51.198+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7263 2019-09-04T06:29:51.198+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578589, 1), t: 1 }({ ts: Timestamp(1567578589, 1), t: 1 }) 2019-09-04T06:29:51.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.235+0000 D4 
STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:51.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:51.247+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:51.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.347+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:51.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.447+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:51.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.537+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.537+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.548+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:51.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.648+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:51.648+0000 I COMMAND [conn183] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578558, 1), signature: { hash: BinData(0, 4DFF71D198DA148E62B7DCAB7E886B063BCE4B69), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:29:51.648+0000 D1 - [conn183] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:51.648+0000 W - [conn183] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42068 #213 (93 connections now open)
2019-09-04T06:29:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:29:51.650+0000 D2 COMMAND [conn213] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:29:51.650+0000 I NETWORK [conn213] received client metadata from 10.108.2.48:42068 conn213: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:29:51.650+0000 I COMMAND [conn213] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:29:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49160 #214 (94 connections now open)
2019-09-04T06:29:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:29:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45716 #215 (95 connections now open)
2019-09-04T06:29:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:29:51.650+0000 D2 COMMAND [conn215] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:29:51.650+0000 I NETWORK [conn215] received client metadata from 10.108.2.72:45716 conn215: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
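The conn183 timeout above is not load: the router-supplied gate afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } names term 92, while every snapshot this set produces carries t: 1. Replica-set optimes compare by term before timestamp, so waitUntilOpTime (logged just below for conn213-conn215) can never find the requested optime in a snapshot and simply exhausts its 30-second maxTimeMS budget, the 06:30:21.661 deadlines below. One plausible way to get here is a mongos holding a stale $configServerState after this config replica set was re-initialized at term 1. The BEGIN/END BACKTRACE blocks further below are not crashes, either: they are DBException::traceIfNeeded printing the same user assertion (the mangled frame ...ErrorCodes5ErrorE50... encodes error code 50, MaxTimeMSExpired) once for conn183 and once for conn186. The public part of the stalled command, as a hedged sketch with the mongos-internal fields ($replData, $configServerState, and the afterOpTime gate) removed:

    // Without the unsatisfiable afterOpTime gate this returns promptly;
    // afterClusterTime is the supported client-side analogue of that gate.
    printjson(db.getSiblingDB("config").runCommand({
      find: "settings",
      filter: { _id: "balancer" },
      limit: 1,
      maxTimeMS: 30000,
      readConcern: { level: "majority" }
    }));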
2019-09-04T06:29:51.650+0000 I COMMAND [conn215] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:51.651+0000 D2 COMMAND [conn213] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, A7E9BE56119A00BA3C8E3F60B38676D3DC7FF217), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:51.651+0000 D1 REPL [conn213] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578589, 1), t: 1 } 2019-09-04T06:29:51.651+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:29:51.651+0000 D2 COMMAND [conn214] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:51.651+0000 I NETWORK [conn214] received client metadata from 10.108.2.54:49160 conn214: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:51.651+0000 I COMMAND [conn214] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:51.651+0000 D2 COMMAND [conn214] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578588, 1), signature: { hash: BinData(0, 66E892E731879211782E008B61DB2B292F6E252E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:51.651+0000 D1 REPL [conn214] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578589, 1), t: 1 } 2019-09-04T06:29:51.651+0000 D3 REPL [conn214] waitUntilOpTime: 
waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:29:51.651+0000 D2 COMMAND [conn215] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, CBAA63ADEE2EDC53BD93E84FCB4CEC8CB94AD092), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:51.652+0000 D1 REPL [conn215] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578589, 1), t: 1 } 2019-09-04T06:29:51.652+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:29:51.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.652+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.652+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.664+0000 I COMMAND [conn186] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578553, 1), signature: { hash: BinData(0, 0F4123AD4F6F23E0F6991DA797B74DD1D7349CDB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:51.664+0000 D1 - [conn186] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:51.664+0000 W - [conn186] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:51.665+0000 I - [conn183] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:51.665+0000 D1 COMMAND [conn183] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578558, 1), signature: { hash: BinData(0, 4DFF71D198DA148E62B7DCAB7E886B063BCE4B69), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:51.665+0000 D1 - [conn183] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:51.665+0000 W - [conn183] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:51.667+0000 I COMMAND [conn173] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, BDB37965C7D794D8E96FEBC7A1DC5F3E24B76E27), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:51.667+0000 D1 - [conn173] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:51.667+0000 W - [conn173] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:51.682+0000 I - [conn186] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:51.682+0000 D1 COMMAND [conn186] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { 
level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578553, 1), signature: { hash: BinData(0, 0F4123AD4F6F23E0F6991DA797B74DD1D7349CDB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:51.682+0000 D1 - [conn186] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:51.682+0000 W - [conn186] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:51.702+0000 I - [conn183] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCall
backENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" 
: 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:51.702+0000 W COMMAND [conn183] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:51.702+0000 I COMMAND [conn183] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578558, 1), signature: { hash: BinData(0, 4DFF71D198DA148E62B7DCAB7E886B063BCE4B69), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:29:51.702+0000 D2 NETWORK [conn183] Session from 10.108.2.44:38634 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:51.702+0000 I NETWORK [conn183] end connection 10.108.2.44:38634 (94 connections now open) 2019-09-04T06:29:51.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.719+0000 I - [conn173] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
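The JSON object that follows is the machine-readable form of the trace: each frame records the module base "b", the offset "o" inside that module and, where the symbol table allows, the mangled name "s". The bracketed addresses in the human-readable frame list further down are simply b + o, and the "s" strings can be piped through c++filt to recover readable C++ names. A short Python sketch, with both frame values copied from this dump:

# Each backtrace frame gives a module base and an offset; the absolute
# address printed in the symbolized list is their sum.
frames = [
    {"b": "561748F88000", "o": "277FC81", "s": "_ZN5mongo15printStackTraceERSo"},
    {"b": "7F0ED8233000", "o": "FE02D", "s": "clone"},
]

for f in frames:
    absolute = int(f["b"], 16) + int(f["o"], 16)
    # 0x561748F88000 + 0x277FC81 = 0x56174B707C81, matching the frame
    # "mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]".
    print(hex(absolute), f.get("s", "<no symbol>"))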
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:51.719+0000 D1 COMMAND [conn173] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, BDB37965C7D794D8E96FEBC7A1DC5F3E24B76E27), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:51.719+0000 D1 - [conn173] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:51.719+0000 W - [conn173] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:51.739+0000 I - [conn186] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 
0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { 
"b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : 
"/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:51.739+0000 W COMMAND [conn186] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:51.739+0000 I COMMAND [conn186] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, 
limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578553, 1), signature: { hash: BinData(0, 0F4123AD4F6F23E0F6991DA797B74DD1D7349CDB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:29:51.739+0000 D2 NETWORK [conn186] Session from 10.108.2.58:52092 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:51.739+0000 I NETWORK [conn186] end connection 10.108.2.58:52092 (93 connections now open) 2019-09-04T06:29:51.748+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:51.756+0000 I COMMAND [conn187] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 05551CF1F69A904A3734176F5171337687942DA6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:51.756+0000 D1 - [conn187] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:51.756+0000 W - [conn187] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:51.756+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48316 #216 (94 connections now open) 2019-09-04T06:29:51.756+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:51.756+0000 D2 COMMAND [conn216] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:51.756+0000 I NETWORK [conn216] received client metadata from 10.108.2.59:48316 conn216: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:51.756+0000 I COMMAND [conn216] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:51.759+0000 I - [conn173] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 
0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : 
"9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" 
}, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:51.759+0000 W COMMAND [conn173] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:51.759+0000 I COMMAND [conn173] 
2019-09-04T06:29:51.759+0000 I COMMAND [conn173] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, BDB37965C7D794D8E96FEBC7A1DC5F3E24B76E27), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30065ms
2019-09-04T06:29:51.759+0000 D2 NETWORK [conn173] Session from 10.108.2.47:56504 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:29:51.759+0000 I NETWORK [conn173] end connection 10.108.2.47:56504 (93 connections now open)
2019-09-04T06:29:51.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:51.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:51.771+0000 I COMMAND [conn188] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578551, 1), signature: { hash: BinData(0, 71017D2CA5DD957C25D8652F338F2394E060419D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:29:51.771+0000 D1 - [conn188] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:51.771+0000 W - [conn188] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:51.776+0000 I - [conn187] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:51.776+0000 D1 COMMAND [conn187] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 05551CF1F69A904A3734176F5171337687942DA6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:51.776+0000 D1 - [conn187] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:51.776+0000 W - [conn187] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:51.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:51.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:51.793+0000 I - [conn188] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 
[... remainder of the conn188 address list and full backtrace omitted: byte-for-byte identical to the conn187 read-concern backtrace above ...] ----- END BACKTRACE -----
2019-09-04T06:29:51.793+0000 D1 COMMAND [conn188] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578551, 1), signature: { hash: BinData(0, 71017D2CA5DD957C25D8652F338F2394E060419D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:51.793+0000 D1 - [conn188] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:29:51.793+0000 W - [conn188] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:51.813+0000 I - [conn187] 0x56174b707c81
0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : 
"E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : 
"/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) 
[0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:29:51.813+0000 W COMMAND [conn187] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:29:51.813+0000 I COMMAND [conn187] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578552, 1), signature: { hash: BinData(0, 05551CF1F69A904A3734176F5171337687942DA6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms
2019-09-04T06:29:51.813+0000 D2 NETWORK [conn187] Session from 10.108.2.52:47128 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:29:51.813+0000 I NETWORK [conn187] end connection 10.108.2.52:47128 (92 connections now open)
2019-09-04T06:29:51.833+0000 I - [conn188] [... address list and full backtrace omitted: byte-for-byte identical to the conn187 lock-acquisition backtrace above ...] ----- END BACKTRACE -----
2019-09-04T06:29:51.833+0000 W COMMAND [conn188] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:29:51.833+0000 I COMMAND [conn188] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578551, 1), signature: { hash: BinData(0, 71017D2CA5DD957C25D8652F338F2394E060419D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30035ms
2019-09-04T06:29:51.833+0000 D2 NETWORK [conn188] Session from 10.108.2.59:48296 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:29:51.833+0000 I NETWORK [conn188] end connection 10.108.2.59:48296 (91 connections now open)
2019-09-04T06:29:51.848+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:51.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:51.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:51.948+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:51.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:51.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:51.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:51.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:51.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:51.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
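
In the lines above, isMaster heartbeats keep answering in 0ms while every config read that waits on the majority commit point dies at roughly 30 s, so pulling out just the I COMMAND summary lines gives a quick inventory of what is stuck. A stdlib-only sketch matched to the exact 4.2 line shape seen here (the script name is hypothetical, and other server versions may need a different pattern):

    # slow_ops.py: summarize failed command lines from a mongod 4.2 text log.
    import re
    import sys
    from collections import Counter

    # Matches "<ts> I COMMAND [connN] command <ns> command: <name> { ... }
    # ... errName:<err> errCode:<n> ... <duration>ms", i.e. only the summary
    # lines for operations that ended in an error.
    SLOW_OP = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2}T\S+)\s+I\s+COMMAND\s+\[(?P<conn>conn\d+)\]\s+"
        r"command\s+(?P<ns>\S+)\s+command:\s+(?P<cmd>\w+)\b.*"
        r"errName:(?P<err>\w+)\s+errCode:(?P<code>\d+).*?\s(?P<ms>\d+)ms\s*$")

    def main(path):
        totals = Counter()
        with open(path, errors="replace") as log:
            for line in log:
                m = SLOW_OP.search(line)
                if not m:
                    continue
                totals[m["err"]] += 1
                print(m["ts"], m["conn"], m["ns"], m["cmd"],
                      m["err"] + "(" + m["code"] + ")", m["ms"] + "ms")
        print("error totals:", dict(totals), file=sys.stderr)

    if __name__ == "__main__":
        main(sys.argv[1])

Run as python3 slow_ops.py mongod.log; on this excerpt every match is MaxTimeMSExpired(50) against config.$cmd at just over 30000 ms.
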
2019-09-04T06:29:52.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:52.037+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:52.037+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:52.048+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:52.052+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:29:52.052+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:29:52.052+0000 D2 COMMAND [conn90] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:29:52.052+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms
2019-09-04T06:29:52.055+0000 I COMMAND [conn189] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 08AAA06C9B64C61AD7A5B7A57074BF7F508B44CC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:29:52.056+0000 D1 - [conn189] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:52.056+0000 W - [conn189] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:52.072+0000 I - [conn189] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:52.072+0000 D1 COMMAND [conn189] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 08AAA06C9B64C61AD7A5B7A57074BF7F508B44CC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:52.072+0000 D1 - [conn189] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:52.072+0000 W - [conn189] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:52.092+0000 I - [conn189] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:52.092+0000 W COMMAND [conn189] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:52.093+0000 I COMMAND [conn189] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 08AAA06C9B64C61AD7A5B7A57074BF7F508B44CC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:29:52.093+0000 D2 NETWORK [conn189] Session from 10.108.2.50:50066 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:52.093+0000 I NETWORK [conn189] end connection 10.108.2.50:50066 (90 connections now open) 2019-09-04T06:29:52.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.149+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:52.151+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.151+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.197+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:52.197+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:52.197+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578589, 1) 2019-09-04T06:29:52.197+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7301 2019-09-04T06:29:52.197+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:52.197+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:52.197+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7301 2019-09-04T06:29:52.198+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7304 2019-09-04T06:29:52.198+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7304 2019-09-04T06:29:52.198+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578589, 1), t: 1 }({ ts: Timestamp(1567578589, 1), t: 1 }) 2019-09-04T06:29:52.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.209+0000 I COMMAND [conn5] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, F362EC3CFE2EEA23C32BF9ED6A7ED7B1CF884841), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:52.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:52.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, F362EC3CFE2EEA23C32BF9ED6A7ED7B1CF884841), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:52.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, F362EC3CFE2EEA23C32BF9ED6A7ED7B1CF884841), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:52.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164) } 2019-09-04T06:29:52.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578589, 1), signature: { hash: BinData(0, F362EC3CFE2EEA23C32BF9ED6A7ED7B1CF884841), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:52.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:52.249+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:52.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.349+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:52.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.449+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:52.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.537+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.537+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.549+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:52.584+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51752 #217 (91 connections now open) 2019-09-04T06:29:52.585+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:52.585+0000 D2 COMMAND [conn217] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:52.585+0000 I NETWORK [conn217] received client metadata from 10.108.2.74:51752 conn217: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:52.585+0000 I COMMAND [conn217] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ 
"snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:52.585+0000 D2 COMMAND [conn217] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 7ED407E7D8FC2F48E792B1C41AEA50FC38ED9E10), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:29:52.585+0000 D1 REPL [conn217] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578589, 1), t: 1 } 2019-09-04T06:29:52.585+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000 2019-09-04T06:29:52.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.649+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:52.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.749+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:52.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:52.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 492) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:52.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 492 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:02.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:52.838+0000 D3 EXECUTOR [replexec-3] Not 
reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:52.838+0000 D2 ASIO [Replication] Request 492 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } 2019-09-04T06:29:52.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:52.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:52.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 492) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } 2019-09-04T06:29:52.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:52.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:54.838Z 2019-09-04T06:29:52.838+0000 D3 EXECUTOR [replexec-0] Not reaping because 
the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:52.839+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:52.839+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 493) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:52.839+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 493 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:02.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:52.839+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:52.839+0000 D2 ASIO [Replication] Request 493 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } 2019-09-04T06:29:52.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:52.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:52.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 493) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: 
Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } 2019-09-04T06:29:52.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:52.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:30:01.009+0000 2019-09-04T06:29:52.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:30:03.761+0000 2019-09-04T06:29:52.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:52.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:54.839Z 2019-09-04T06:29:52.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:52.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:52.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:52.850+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:52.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.950+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:52.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:52.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:52.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:53.037+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.037+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.050+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:53.061+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:53.061+0000 D3 REPL [replexec-1] memberData lastupdate is: 2019-09-04T06:29:52.839+0000 2019-09-04T06:29:53.061+0000 D3 REPL [replexec-1] memberData lastupdate is: 2019-09-04T06:29:52.838+0000 2019-09-04T06:29:53.061+0000 D3 REPL [replexec-1] stalest member MemberId(2) date: 2019-09-04T06:29:52.838+0000 
2019-09-04T06:29:53.061+0000 D3 REPL [replexec-1] scheduling next check at 2019-09-04T06:30:02.838+0000 2019-09-04T06:29:53.061+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:53.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 8D224982D8FAD4C011D51FB98F094EA4A84385D8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:53.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:53.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 8D224982D8FAD4C011D51FB98F094EA4A84385D8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:53.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 8D224982D8FAD4C011D51FB98F094EA4A84385D8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:53.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164) } 2019-09-04T06:29:53.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 8D224982D8FAD4C011D51FB98F094EA4A84385D8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.150+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:53.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.197+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:53.197+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:53.197+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot 
Timestamp(1567578589, 1) 2019-09-04T06:29:53.197+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7334 2019-09-04T06:29:53.198+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:53.198+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:53.198+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7334 2019-09-04T06:29:53.198+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7337 2019-09-04T06:29:53.198+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7337 2019-09-04T06:29:53.198+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578589, 1), t: 1 }({ ts: Timestamp(1567578589, 1), t: 1 }) 2019-09-04T06:29:53.209+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.209+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:53.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:53.242+0000 D2 COMMAND [conn157] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:29:53.242+0000 I COMMAND [conn157] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.250+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:53.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.269+0000 D2 COMMAND [conn101] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.269+0000 I COMMAND [conn101] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.297+0000 I NETWORK [listener] connection accepted from 10.20.102.80:61234 #218 (92 connections now open) 2019-09-04T06:29:53.297+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:53.297+0000 D2 COMMAND [conn218] run command admin.$cmd { isMaster: 1, client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } 2019-09-04T06:29:53.297+0000 I NETWORK [conn218] received client metadata from 10.20.102.80:61234 conn218: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, 
os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } } 2019-09-04T06:29:53.297+0000 I COMMAND [conn218] command admin.$cmd appName: "MongoDB Shell" command: isMaster { isMaster: 1, client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:29:53.307+0000 D2 COMMAND [conn218] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-256", payload: "xxx", $db: "admin" } 2019-09-04T06:29:53.307+0000 D1 ACCESS [conn218] Returning user dba_root@admin from cache 2019-09-04T06:29:53.307+0000 I COMMAND [conn218] command admin.$cmd appName: "MongoDB Shell" command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-256", payload: "xxx", $db: "admin" } numYields:0 reslen:410 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.317+0000 D2 COMMAND [conn218] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:29:53.317+0000 I COMMAND [conn218] command admin.$cmd appName: "MongoDB Shell" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:339 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.327+0000 D2 COMMAND [conn218] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:29:53.327+0000 D1 ACCESS [conn218] Returning user dba_root@admin from cache 2019-09-04T06:29:53.327+0000 I ACCESS [conn218] Successfully authenticated as principal dba_root on admin from client 10.20.102.80:61234 2019-09-04T06:29:53.327+0000 I COMMAND [conn218] command admin.$cmd appName: "MongoDB Shell" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:293 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.336+0000 D2 COMMAND [conn218] run command admin.$cmd { ping: 1.0, lsid: { id: UUID("ac8e303f-4e60-4a79-b9a4-f7cba7354076") }, $clusterTime: { clusterTime: Timestamp(1567578410, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } 2019-09-04T06:29:53.336+0000 I COMMAND [conn218] command admin.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("ac8e303f-4e60-4a79-b9a4-f7cba7354076") }, $clusterTime: { clusterTime: Timestamp(1567578410, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.350+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:53.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.450+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:53.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.489+0000 I COMMAND [conn14] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.537+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.537+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.551+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:53.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.651+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:53.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.709+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.709+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.751+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:53.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.851+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:53.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.951+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:53.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:53.989+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:53.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 
2019-09-04T06:29:53.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:54.037+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.037+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.051+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:54.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.141+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.141+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.142+0000 I NETWORK [listener] connection accepted from 10.108.2.46:40956 #219 (93 connections now open) 2019-09-04T06:29:54.142+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:54.142+0000 D2 COMMAND [conn219] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:29:54.142+0000 I NETWORK [conn219] received client metadata from 10.108.2.46:40956 conn219: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:29:54.142+0000 I COMMAND [conn219] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:29:54.143+0000 D2 COMMAND [conn219] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 7ED407E7D8FC2F48E792B1C41AEA50FC38ED9E10), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:29:54.143+0000 D1 
REPL [conn219] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578589, 1), t: 1 } 2019-09-04T06:29:54.143+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000 2019-09-04T06:29:54.151+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:54.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.198+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:54.198+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:54.198+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578589, 1) 2019-09-04T06:29:54.198+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7373 2019-09-04T06:29:54.198+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:54.198+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:54.198+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7373 2019-09-04T06:29:54.198+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7376 2019-09-04T06:29:54.198+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7376 2019-09-04T06:29:54.198+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578589, 1), t: 1 }({ ts: Timestamp(1567578589, 1), t: 1 }) 2019-09-04T06:29:54.224+0000 D2 ASIO [RS] Request 482 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpApplied: { ts: Timestamp(1567578589, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } 2019-09-04T06:29:54.224+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, 
$oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpApplied: { ts: Timestamp(1567578589, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:54.224+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:54.225+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:29:54.225+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:03.761+0000 2019-09-04T06:29:54.225+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:04.405+0000 2019-09-04T06:29:54.225+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:54.225+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:54.225+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 494 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:04.225+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578589, 1), t: 1 } } 2019-09-04T06:29:54.225+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:19.198+0000 2019-09-04T06:29:54.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 8D224982D8FAD4C011D51FB98F094EA4A84385D8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:54.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:54.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 8D224982D8FAD4C011D51FB98F094EA4A84385D8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:54.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 8D224982D8FAD4C011D51FB98F094EA4A84385D8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:54.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164) } 2019-09-04T06:29:54.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: 
"configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 8D224982D8FAD4C011D51FB98F094EA4A84385D8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.233+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:54.233+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:54.233+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 495 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:24.233+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:54.233+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:19.198+0000 2019-09-04T06:29:54.233+0000 D2 ASIO [RS] Request 495 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 
0 } }, operationTime: Timestamp(1567578589, 1) } 2019-09-04T06:29:54.233+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:54.233+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:54.233+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:19.198+0000 2019-09-04T06:29:54.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:54.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:54.252+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:54.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.352+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:54.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.452+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:54.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.537+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.537+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.552+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:54.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 
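[Annotation] The steady stream of "run command admin.$cmd { isMaster: 1 }" entries above is routine topology monitoring: each connected client (including other cluster members, e.g. conn219's NetworkInterfaceTL handshake) re-issues isMaster, and the timestamps show each connection repeating it at roughly 500 ms intervals. Below is a minimal sketch that issues the same command with PyMongo; the host and port are taken from the log, everything else (PyMongo itself, the client options) is an illustrative assumption.

from pymongo import MongoClient

# directConnection pins the client to this one member instead of
# discovering the rest of the "configrs" replica set.
client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                     directConnection=True)
reply = client.admin.command("isMaster")
# The reply corresponds to the reslen:907 responses logged above:
# setName, primary/secondary flags, hosts, wire-version bounds.
print(reply.get("setName"), reply.get("ismaster"), reply.get("secondary"))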
2019-09-04T06:29:54.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.641+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.641+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.652+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:54.752+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:54.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:54.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 496) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:54.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 496 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:04.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:54.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:54.838+0000 D2 ASIO [Replication] Request 496 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } 2019-09-04T06:29:54.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 
2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:54.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:54.838+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 496) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } 2019-09-04T06:29:54.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:54.838+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:56.838Z 2019-09-04T06:29:54.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:54.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:54.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 497) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:54.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 497 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:04.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:54.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:54.839+0000 D2 ASIO [Replication] Request 497 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new 
Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } 2019-09-04T06:29:54.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:54.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:54.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 497) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), opTime: { ts: Timestamp(1567578589, 1), t: 1 }, wallTime: new Date(1567578589164), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578589, 1) } 2019-09-04T06:29:54.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:54.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:30:04.405+0000 2019-09-04T06:29:54.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:30:04.972+0000 2019-09-04T06:29:54.839+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:54.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:54.839+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:54.839+0000 D2 
REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:56.839Z 2019-09-04T06:29:54.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:54.852+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:54.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.952+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:54.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.989+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.995+0000 D2 ASIO [RS] Request 494 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578594, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578594992), o: { $v: 1, $set: { ping: new Date(1567578594989), up: 2495 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpApplied: { ts: Timestamp(1567578594, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578594, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578594, 1) } 2019-09-04T06:29:54.996+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578594, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578594992), o: { $v: 1, $set: { ping: new Date(1567578594989), up: 2495 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpApplied: { ts: Timestamp(1567578594, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: 
Timestamp(1567578589, 1), $clusterTime: { clusterTime: Timestamp(1567578594, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578594, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:54.996+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:54.996+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578594, 1) and ending at ts: Timestamp(1567578594, 1) 2019-09-04T06:29:54.996+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:04.972+0000 2019-09-04T06:29:54.996+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:05.449+0000 2019-09-04T06:29:54.996+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:54.996+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:54.996+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578594, 1), t: 1 } 2019-09-04T06:29:54.996+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:54.996+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:54.996+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578589, 1) 2019-09-04T06:29:54.996+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7398 2019-09-04T06:29:54.996+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:54.996+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:54.996+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7398 2019-09-04T06:29:54.996+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:54.996+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:29:54.996+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:54.996+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578589, 1) 2019-09-04T06:29:54.996+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7401 2019-09-04T06:29:54.996+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578594, 1) } 2019-09-04T06:29:54.996+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:54.996+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:54.996+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7401 
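[Annotation] Requests 494 and 499 above are the oplog fetcher at work: getMore calls on local.oplog.rs against the sync source cmodb804.togewa.com:27019, one of which returns a single update op on config.mongos (the mongos ping document) that the ReplBatcher then hands to the applier. The sketch below is a rough client-side analogue of that tailing read, assuming PyMongo and direct access to the sync source; the real fetcher additionally negotiates term, batchSize, and lastKnownCommittedOpTime as logged.

from pymongo import MongoClient
from pymongo.cursor import CursorType
from bson.timestamp import Timestamp

client = MongoClient("mongodb://cmodb804.togewa.com:27019/",
                     directConnection=True)
# Resume from the last fetched optime shown in the log.
last_fetched = Timestamp(1567578589, 1)
cursor = client.local["oplog.rs"].find(
    {"ts": {"$gt": last_fetched}},
    cursor_type=CursorType.TAILABLE_AWAIT)
for entry in cursor:
    # The first entry here would be the op: "u" on config.mongos seen above.
    print(entry["ts"], entry.get("op"), entry.get("ns"))
    break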
2019-09-04T06:29:54.996+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7377 2019-09-04T06:29:54.996+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7377 2019-09-04T06:29:54.996+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7404 2019-09-04T06:29:54.996+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7404 2019-09-04T06:29:54.996+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:54.996+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 7406 2019-09-04T06:29:54.996+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578594, 1) 2019-09-04T06:29:54.996+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578594, 1) 2019-09-04T06:29:54.996+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 7406 2019-09-04T06:29:54.996+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:54.996+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:54.996+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7405 2019-09-04T06:29:54.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:54.996+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7405 2019-09-04T06:29:54.996+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7409 2019-09-04T06:29:54.996+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7409 2019-09-04T06:29:54.996+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578594, 1), t: 1 }({ ts: Timestamp(1567578594, 1), t: 1 }) 2019-09-04T06:29:54.996+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578594, 1) 2019-09-04T06:29:54.996+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7410 2019-09-04T06:29:54.996+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578594, 1) } } ] } sort: {} projection: {} 2019-09-04T06:29:54.996+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:54.996+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:29:54.996+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578594, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:29:54.996+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:54.996+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:54.996+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:54.996+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578594, 1) || First: notFirst: full path: ts 2019-09-04T06:29:54.996+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
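[Annotation] The rsSync entries above persist replication progress markers: the oplog truncate-after point is raised to Timestamp(1567578594, 1) and then cleared back to Timestamp(0, 0) once the batch is durable, and minvalid is advanced to { ts: Timestamp(1567578594, 1), t: 1 } in local.replset.minvalid. That document can be inspected directly if needed; a read-only sketch, assuming PyMongo and direct access to the member:

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                     directConnection=True)
# Internal bookkeeping collection written by rsSync-0 above; peek only,
# never write to it by hand.
print(client.local["replset.minvalid"].find_one())
# Expected shape, per the log: { ..., ts: Timestamp(1567578594, 1), t: 1 }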
2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578594, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578594, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578594, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
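[Annotation] The verbose D5 QUERY trace around this point shows the subplanner handling the rooted $or over local.replset.minvalid: each branch is planned separately, only the _id index exists, so every branch yields zero indexed solutions and falls back to a collection scan. The same conclusion can be reproduced from a client with the explain command; the filter mirrors the logged query, while the host, port, and PyMongo usage are assumptions.

from pymongo import MongoClient
from bson.timestamp import Timestamp

client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                     directConnection=True)
plan = client.local.command(
    "explain",
    {"find": "replset.minvalid",
     "filter": {"$or": [{"t": 1, "ts": {"$lt": Timestamp(1567578594, 1)}},
                        {"t": {"$lt": 1}}]}},
    verbosity="queryPlanner")
# Expect a collection-scan-based winning plan, matching the trace above.
print(plan["queryPlanner"]["winningPlan"])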
2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578594, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:54.997+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7410 2019-09-04T06:29:54.997+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:54.997+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:54.997+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578594, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578594992), o: { $v: 1, $set: { ping: new Date(1567578594989), up: 2495 } } }, oplog application mode: Secondary 2019-09-04T06:29:54.997+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578594, 1) 2019-09-04T06:29:54.997+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 7412 2019-09-04T06:29:54.997+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:29:54.997+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:29:54.997+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 7412 2019-09-04T06:29:54.997+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:54.997+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578594, 1), t: 1 }({ ts: Timestamp(1567578594, 1), t: 1 }) 2019-09-04T06:29:54.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:54.997+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578594, 1) 2019-09-04T06:29:54.997+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7411 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:54.997+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:54.997+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:54.997+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7411 2019-09-04T06:29:54.997+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578594, 1) 2019-09-04T06:29:54.997+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7416 2019-09-04T06:29:54.997+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7416 2019-09-04T06:29:54.997+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578594, 1), t: 1 }({ ts: Timestamp(1567578594, 1), t: 1 }) 2019-09-04T06:29:54.997+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:54.997+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578594, 1), t: 1 }, appliedWallTime: new Date(1567578594992), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:54.997+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 498 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:24.997+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578594, 1), t: 1 }, appliedWallTime: new Date(1567578594992), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578589, 1), t: 1 }, lastCommittedWall: new Date(1567578589164), lastOpVisible: { ts: Timestamp(1567578589, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:54.997+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:24.997+0000 2019-09-04T06:29:54.997+0000 D2 ASIO [RS] Request 498 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578594, 1), $clusterTime: { clusterTime: Timestamp(1567578594, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578594, 1) } 2019-09-04T06:29:54.997+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578594, 1), $clusterTime: { clusterTime: Timestamp(1567578594, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578594, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:54.998+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:54.998+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:24.997+0000 2019-09-04T06:29:54.998+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578594, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59e202d1a496712d71e0'), operName: "", parentOperId: "5d6f59e202d1a496712d71dc" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578594, 1), signature: { hash: BinData(0, 1DA4BA94B808460750D9AC6F94730105FC64EAA1), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578594, 1), t: 1 } }, $db: "config" } 2019-09-04T06:29:54.998+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f59e202d1a496712d71dc|5d6f59e202d1a496712d71e0 2019-09-04T06:29:54.998+0000 D1 REPL [conn21] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1567578594, 1), t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578589, 1), t: 1 } 2019-09-04T06:29:54.998+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578594, 1), t: 1 } 2019-09-04T06:29:54.998+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 499 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:04.998+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578589, 1), t: 1 } } 2019-09-04T06:29:54.998+0000 D3 REPL [conn21] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:25.008+0000 2019-09-04T06:29:54.998+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:24.997+0000 2019-09-04T06:29:54.998+0000 D2 ASIO [RS] Request 499 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { 
lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpApplied: { ts: Timestamp(1567578594, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578594, 1), $clusterTime: { clusterTime: Timestamp(1567578594, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578594, 1) } 2019-09-04T06:29:54.998+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpApplied: { ts: Timestamp(1567578594, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578594, 1), $clusterTime: { clusterTime: Timestamp(1567578594, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578594, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:54.998+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:54.998+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:29:54.998+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.998+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.998+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578589, 1) 2019-09-04T06:29:54.998+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:05.449+0000 2019-09-04T06:29:54.998+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:05.685+0000 2019-09-04T06:29:54.998+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:54.998+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:54.998+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 500 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:04.998+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578594, 1), t: 1 } } 2019-09-04T06:29:54.998+0000 D3 REPL [conn194] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.998+0000 D3 REPL [conn194] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.763+0000 2019-09-04T06:29:54.998+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.998+0000 D3 REPL [conn202] waitUntilOpTime: 
waiting for a new snapshot until 2019-09-04T06:30:10.480+0000 2019-09-04T06:29:54.998+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:24.997+0000 2019-09-04T06:29:54.998+0000 D3 REPL [conn167] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.998+0000 D3 REPL [conn167] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:57.567+0000 2019-09-04T06:29:54.998+0000 D3 REPL [conn159] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.998+0000 D3 REPL [conn159] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:56.299+0000 2019-09-04T06:29:54.998+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.998+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn195] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn195] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.897+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn193] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn193] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.753+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: 
Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn185] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn185] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:59.943+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn184] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn184] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.433+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn172] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn172] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:58.752+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn192] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn192] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.469+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn197] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn197] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.987+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn190] Got notified of new snapshot: { ts: 
Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn190] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:55.060+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn196] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn196] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.962+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:29:54.999+0000 D3 REPL [conn21] Got notified of new snapshot: { ts: Timestamp(1567578594, 1), t: 1 }, 2019-09-04T06:29:54.992+0000 2019-09-04T06:29:54.999+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578594, 1), t: 1 } } } 2019-09-04T06:29:54.999+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:29:54.999+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578594, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59e202d1a496712d71e0'), operName: "", parentOperId: "5d6f59e202d1a496712d71dc" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578594, 1), signature: { hash: BinData(0, 1DA4BA94B808460750D9AC6F94730105FC64EAA1), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578594, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578594, 1) 2019-09-04T06:29:54.999+0000 D2 QUERY [conn21] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:29:54.999+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578594, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59e202d1a496712d71e0'), operName: "", parentOperId: "5d6f59e202d1a496712d71dc" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578594, 1), signature: { hash: BinData(0, 1DA4BA94B808460750D9AC6F94730105FC64EAA1), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578594, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 1ms 2019-09-04T06:29:55.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:55.000+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:29:55.000+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:55.000+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578594, 1), t: 1 }, durableWallTime: new Date(1567578594992), appliedOpTime: { ts: Timestamp(1567578594, 1), t: 1 }, appliedWallTime: new Date(1567578594992), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:55.000+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 501 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:25.000+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578594, 1), t: 1 }, durableWallTime: new Date(1567578594992), appliedOpTime: { ts: Timestamp(1567578594, 1), t: 1 }, appliedWallTime: new Date(1567578594992), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), 
lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:55.000+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:24.997+0000 2019-09-04T06:29:55.001+0000 D2 ASIO [RS] Request 501 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578594, 1), $clusterTime: { clusterTime: Timestamp(1567578594, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578594, 1) } 2019-09-04T06:29:55.001+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578594, 1), $clusterTime: { clusterTime: Timestamp(1567578594, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578594, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:55.001+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:55.001+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:24.997+0000 2019-09-04T06:29:55.037+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:55.037+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:55.053+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:55.060+0000 I COMMAND [conn190] Command on database config timed out waiting for read concern to be satisfied. 
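This timeout is the central event of the excerpt, and the log already contains its explanation. The offending find (quoted in full below) carries readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, yet this replica set is operating in term 1. OpTimes compare by term before timestamp, so an opTime from term 92 is ahead of anything a term-1 set can ever commit; the read-concern wait is unsatisfiable and can only end when the 30000ms maxTimeMS budget runs out. The mismatched signing key on the incoming $clusterTime (keyId 6690867815131381761, versus 6727891476899954718 on times this set signs) points the same way: the requester, apparently a mongos given the $configServerState field, still carries opTime state from an earlier incarnation of the config server replica set. The first backtrace below is simply this MaxTimeMSExpired being raised inside waitForReadConcern. As a client would observe it, the failure is an ExecutionTimeout; a minimal PyMongo sketch under the same assumptions as above (afterOpTime itself is internal to sharding traffic and not settable from a driver):

    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")
    settings = client["config"].get_collection(
        "settings", read_concern=ReadConcern("majority"))
    try:
        print(list(settings.find({"_id": "balancer"}, limit=1, max_time_ms=30000)))
    except ExecutionTimeout as exc:
        # PyMongo raises ExecutionTimeout for server error code 50
        # (MaxTimeMSExpired), matching errCode:50 in the completion line below.
        print("timed out:", exc)
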
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578561, 1), signature: { hash: BinData(0, D212476059A77AEDF4B121F0DF010997DB78B838), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:55.060+0000 D1 - [conn190] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:55.060+0000 W - [conn190] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:55.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578594, 1), signature: { hash: BinData(0, 1DA4BA94B808460750D9AC6F94730105FC64EAA1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:55.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:55.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578594, 1), signature: { hash: BinData(0, 1DA4BA94B808460750D9AC6F94730105FC64EAA1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:55.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578594, 1), signature: { hash: BinData(0, 1DA4BA94B808460750D9AC6F94730105FC64EAA1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:55.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578594, 1), t: 1 }, durableWallTime: new Date(1567578594992), opTime: { ts: Timestamp(1567578594, 1), t: 1 }, wallTime: new Date(1567578594992) } 2019-09-04T06:29:55.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578594, 1), signature: { hash: BinData(0, 1DA4BA94B808460750D9AC6F94730105FC64EAA1), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:55.077+0000 I - [conn190] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:55.077+0000 D1 COMMAND [conn190] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578561, 1), signature: { hash: BinData(0, D212476059A77AEDF4B121F0DF010997DB78B838), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:55.077+0000 D1 - [conn190] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:55.077+0000 W - [conn190] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:55.096+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578594, 1) 2019-09-04T06:29:55.097+0000 I - [conn190] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 
0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : 
"9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" 
}, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:55.097+0000 W COMMAND [conn190] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:55.097+0000 I COMMAND [conn190] 
command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578561, 1), signature: { hash: BinData(0, D212476059A77AEDF4B121F0DF010997DB78B838), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms
2019-09-04T06:29:55.097+0000 D2 NETWORK [conn190] Session from 10.108.2.55:36608 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:29:55.097+0000 I NETWORK [conn190] end connection 10.108.2.55:36608 (92 connections now open)
2019-09-04T06:29:55.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:55.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:55.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:55.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:55.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:55.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:55.153+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:55.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:29:55.235+0000 D4 - [FlowControlRefresher] Refreshing tickets.
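With that, the incident closes out: the find completes with ok:0 and errName:MaxTimeMSExpired after 30027ms, and the peer at 10.108.2.55 drops the connection. The pair of backtraces above records both throw sites: first inside waitForReadConcern, then inside CurOp::completeAndLogOperation, where the slow-operation logger itself could not take the global lock within its deadline, hence the "Unable to gather storage statistics ... lock aquire timeout" warning (the spelling is the server's own). Slow completions in a 4.2-era plain-text log like this one are easy to mine mechanically; a small sketch, assuming one log entry per line as mongod originally writes them (this document has been re-wrapped) and that the trailing "<n>ms" token is the total duration:

    import re

    # Matches 4.2-style COMMAND completion lines that end in
    # "... protocol:op_msg 30027ms".
    SLOW_RE = re.compile(
        r"^(?P<ts>\S+)\s+I\s+COMMAND\s+\[(?P<conn>[^\]]+)\] command (?P<ns>\S+) "
        r"command: (?P<name>\w+) .*protocol:\S+ (?P<ms>\d+)ms\s*$")

    def slow_commands(lines, threshold_ms=1000):
        """Yield (timestamp, namespace, command, millis) for slow completions."""
        for line in lines:
            m = SLOW_RE.match(line)
            if m and int(m.group("ms")) >= threshold_ms:
                yield (m.group("ts"), m.group("ns"),
                       m.group("name"), int(m.group("ms")))

    # Applied to this log, the find above yields:
    # ("2019-09-04T06:29:55.097+0000", "config.$cmd", "find", 30027)
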
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:55.253+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:55.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:55.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:55.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:55.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:55.353+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:55.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:55.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:55.453+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:55.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:55.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:55.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:55.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:55.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:55.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:55.537+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:55.537+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:55.553+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:55.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:55.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:55.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:55.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:55.646+0000 D2 ASIO [RS] Request 500 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578595, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578595634), o: { $v: 1, $set: { ping: new Date(1567578595634) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: 
Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpApplied: { ts: Timestamp(1567578595, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578594, 1), $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } 2019-09-04T06:29:55.646+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578595, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578595634), o: { $v: 1, $set: { ping: new Date(1567578595634) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpApplied: { ts: Timestamp(1567578595, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578594, 1), $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:55.646+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:55.646+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578595, 1) and ending at ts: Timestamp(1567578595, 1) 2019-09-04T06:29:55.646+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:05.685+0000 2019-09-04T06:29:55.646+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:06.000+0000 2019-09-04T06:29:55.646+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:55.646+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578595, 1), t: 1 } 2019-09-04T06:29:55.646+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:55.646+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:55.646+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578594, 1) 2019-09-04T06:29:55.646+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7437 2019-09-04T06:29:55.646+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:55.646+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, 
size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:55.646+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7437 2019-09-04T06:29:55.646+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:55.646+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:29:55.646+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:55.646+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:55.646+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578595, 1) } 2019-09-04T06:29:55.646+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578594, 1) 2019-09-04T06:29:55.646+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7440 2019-09-04T06:29:55.646+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:55.646+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7417 2019-09-04T06:29:55.646+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:55.646+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7440 2019-09-04T06:29:55.646+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7417 2019-09-04T06:29:55.646+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7443 2019-09-04T06:29:55.646+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7443 2019-09-04T06:29:55.646+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:55.646+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 7445 2019-09-04T06:29:55.646+0000 D4 STORAGE [repl-writer-worker-7] inserting record with timestamp Timestamp(1567578595, 1) 2019-09-04T06:29:55.646+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578595, 1) 2019-09-04T06:29:55.646+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 7445 2019-09-04T06:29:55.646+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:55.646+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:29:55.646+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7444 2019-09-04T06:29:55.646+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7444 2019-09-04T06:29:55.646+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7447 2019-09-04T06:29:55.646+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7447 2019-09-04T06:29:55.646+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578595, 1), t: 1 }({ ts: Timestamp(1567578595, 1), t: 1 }) 2019-09-04T06:29:55.646+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578595, 1) 2019-09-04T06:29:55.646+0000 D3 STORAGE [rsSync-0] WT begin_transaction for 
snapshot id 7448 2019-09-04T06:29:55.646+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578595, 1) } } ] } sort: {} projection: {} 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578595, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578595, 1) || First: notFirst: full path: ts 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578595, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578595, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578595, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578595, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:55.647+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7448 2019-09-04T06:29:55.647+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:29:55.647+0000 D3 STORAGE [repl-writer-worker-11] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:55.647+0000 D3 REPL [repl-writer-worker-11] applying op: { ts: Timestamp(1567578595, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578595634), o: { $v: 1, $set: { ping: new Date(1567578595634) } } }, oplog application mode: Secondary 2019-09-04T06:29:55.647+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578595, 1) 2019-09-04T06:29:55.647+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 7450 2019-09-04T06:29:55.647+0000 D2 QUERY [repl-writer-worker-11] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" } 2019-09-04T06:29:55.647+0000 D4 WRITE [repl-writer-worker-11] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:29:55.647+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 7450 2019-09-04T06:29:55.647+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:29:55.647+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578595, 1), t: 1 }({ ts: Timestamp(1567578595, 1), t: 1 }) 2019-09-04T06:29:55.647+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578595, 1) 2019-09-04T06:29:55.647+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7449 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:55.647+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:55.647+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:55.647+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7449 2019-09-04T06:29:55.647+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578595, 1) 2019-09-04T06:29:55.647+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7453 2019-09-04T06:29:55.647+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7453 2019-09-04T06:29:55.647+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578595, 1), t: 1 }({ ts: Timestamp(1567578595, 1), t: 1 }) 2019-09-04T06:29:55.647+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:55.647+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578594, 1), t: 1 }, durableWallTime: new Date(1567578594992), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:55.647+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 502 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:25.647+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578594, 1), t: 1 }, durableWallTime: new Date(1567578594992), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 2, cfgver: 2 } ], 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:55.647+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:25.647+0000 2019-09-04T06:29:55.648+0000 D2 ASIO [RS] Request 502 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578594, 1), $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } 2019-09-04T06:29:55.648+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578594, 1), $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:55.648+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:55.648+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:25.648+0000 2019-09-04T06:29:55.648+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578595, 1), t: 1 } 2019-09-04T06:29:55.648+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 503 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:05.648+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578594, 1), t: 1 } } 2019-09-04T06:29:55.648+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:25.648+0000 2019-09-04T06:29:55.651+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:29:55.651+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:29:55.651+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new 
Date(1567578595634), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:55.651+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 504 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:25.651+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, durableWallTime: new Date(1567578589164), appliedOpTime: { ts: Timestamp(1567578589, 1), t: 1 }, appliedWallTime: new Date(1567578589164), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:29:55.651+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:25.648+0000 2019-09-04T06:29:55.651+0000 D2 ASIO [RS] Request 504 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578594, 1), $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } 2019-09-04T06:29:55.651+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578594, 1), t: 1 }, lastCommittedWall: new Date(1567578594992), lastOpVisible: { ts: Timestamp(1567578594, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578594, 1), $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:55.651+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:29:55.651+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement 
2019-09-04T06:29:55.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:55.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:55.652+0000 D2 ASIO [RS] Request 503 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpApplied: { ts: Timestamp(1567578595, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578595, 1), $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) }
2019-09-04T06:29:55.652+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpApplied: { ts: Timestamp(1567578595, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578595, 1), $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:29:55.652+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:29:55.652+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:29:55.652+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578590, 1)
2019-09-04T06:29:55.652+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:06.000+0000
2019-09-04T06:29:55.652+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:06.884+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn159] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn159] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:56.299+0000
2019-09-04T06:29:55.652+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 505 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:05.652+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578595, 1), t: 1 } }
2019-09-04T06:29:55.652+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:25.648+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn193] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn193] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.753+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn196] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn196] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.962+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:29:55.652+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000
2019-09-04T06:29:55.652+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn184] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn184] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.433+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn172] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn172] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:58.752+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn192] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn192] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.469+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000
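
The getMore requests above (cursor 2779728788818727477 on local.oplog.rs, batchSize 13981010, maxTimeMS 5000) are the oplog fetcher tailing its sync source. A rough driver-side equivalent of that tailing read, sketched with pymongo under the assumption that the sync source at cmodb804 is reachable; it illustrates the cursor options involved, not the server's internal OplogFetcher:

# Sketch: tail the sync source's oplog the way the fetcher's getMore does.
from pymongo import CursorType, MongoClient

client = MongoClient("mongodb://cmodb804.togewa.com:27019/")
oplog = client.local["oplog.rs"]

cursor = oplog.find(
    {},
    cursor_type=CursorType.TAILABLE_AWAIT,  # empty batches block server-side
    oplog_replay=True,
).max_await_time_ms(5000)  # mirrors the maxTimeMS: 5000 on the getMores above

for entry in cursor:
    print(entry["ts"], entry["op"], entry.get("ns"))
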
2019-09-04T06:29:55.652+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn194] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn194] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.763+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn167] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn167] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:57.567+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn195] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.652+0000 D3 REPL [conn195] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.897+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn197] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn197] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.987+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn185] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn185] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:29:59.943+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578595, 1), t: 1 }, 2019-09-04T06:29:55.634+0000
2019-09-04T06:29:55.653+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000
2019-09-04T06:29:55.653+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:55.746+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578595, 1)
2019-09-04T06:29:55.753+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:55.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:55.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:55.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:55.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:55.854+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:55.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:55.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:55.954+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:55.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:55.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:55.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:55.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:55.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:55.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:29:56.037+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.037+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.054+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:56.079+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.079+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.154+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:56.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, F896433053214DB1247333EC620E9247ACC41471), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:56.233+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:29:56.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, F896433053214DB1247333EC620E9247ACC41471), keyId: 6727891476899954718 } }, $db: "admin" }
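
The steady drumbeat of isMaster commands on conn6, conn14-29, conn45 and friends above is ordinary driver and mongos topology monitoring, one round per monitored connection per heartbeat interval. Issuing the same command by hand is a one-liner; a minimal sketch with pymongo, host again reused from this log:

# Sketch: run the same isMaster command the monitors above keep sending.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
reply = client.admin.command("isMaster")
# On this secondary the reply would presumably show ismaster: False,
# secondary: True, setName: "configrs".
print(reply["ismaster"], reply.get("secondary"), reply.get("setName"))
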
2019-09-04T06:29:56.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, F896433053214DB1247333EC620E9247ACC41471), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:29:56.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), opTime: { ts: Timestamp(1567578595, 1), t: 1 }, wallTime: new Date(1567578595634) }
2019-09-04T06:29:56.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, F896433053214DB1247333EC620E9247ACC41471), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:29:56.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:29:56.254+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:56.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.289+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.289+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.291+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.291+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.300+0000 I COMMAND [conn159] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578563, 1), signature: { hash: BinData(0, 2A0BBF5A114B4F2CAB0EF4784300FF5254D97052), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:29:56.300+0000 D1 - [conn159] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:29:56.300+0000 W - [conn159] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:56.317+0000 I - [conn159] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
 mongod(+0x10FBF24) [0x56174a083f24]
 mongod(+0x10FCE0E) [0x56174a084e0e]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:29:56.317+0000 D1 COMMAND [conn159] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578563, 1), signature: { hash: BinData(0, 2A0BBF5A114B4F2CAB0EF4784300FF5254D97052), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:56.317+0000 D1 - [conn159] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:29:56.317+0000 W - [conn159] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:56.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.338+0000 I - [conn159] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(+0xC90D34) [0x561749c18d34]
 mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
 mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
 mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
 mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
 mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:29:56.338+0000 W COMMAND [conn159] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:29:56.338+0000 I COMMAND [conn159] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578563, 1), signature: { hash: BinData(0, 2A0BBF5A114B4F2CAB0EF4784300FF5254D97052), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms
2019-09-04T06:29:56.338+0000 D2 NETWORK [conn159] Session from 10.108.2.60:44800 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:29:56.338+0000 I NETWORK [conn159] end connection 10.108.2.60:44800 (91 connections now open)
2019-09-04T06:29:56.355+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:56.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.360+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.360+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.455+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:56.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
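
conn159's find on config.shards above waited the full 30 seconds for a majority snapshot at or past the requested afterOpTime (an optime from term 92, roughly two weeks older than the current cluster time and unreachable for this term-1 set), so the server killed it with MaxTimeMSExpired after 30028 ms and the mongos closed the connection. The shape of that request can be reproduced from a driver; a hedged pymongo sketch (the afterOpTime and $configServerState fields are internal, attached by mongos, and have no public driver API, so only the read concern level and time limit are shown):

# Sketch: a majority read of config.shards with the same 30s time limit.
from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout
from pymongo.read_concern import ReadConcern

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
shards = client.get_database("config", read_concern=ReadConcern("majority")).shards

try:
    docs = list(shards.find({}).max_time_ms(30000))  # maxTimeMS: 30000
except ExecutionTimeout:
    # pymongo raises ExecutionTimeout for the server's MaxTimeMSExpired,
    # the same error code 50 logged for conn159 above.
    print("read concern wait exceeded maxTimeMS")
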
2019-09-04T06:29:56.537+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.537+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.555+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:56.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.646+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:29:56.646+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:29:56.646+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578595, 1)
2019-09-04T06:29:56.646+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7484
2019-09-04T06:29:56.646+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:29:56.646+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:29:56.646+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7484
2019-09-04T06:29:56.647+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7487
2019-09-04T06:29:56.647+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7487
2019-09-04T06:29:56.647+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578595, 1), t: 1 }({ ts: Timestamp(1567578595, 1), t: 1 })
2019-09-04T06:29:56.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:56.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:56.655+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:56.694+0000 D2 COMMAND [conn49] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578595, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, F896433053214DB1247333EC620E9247ACC41471), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578595, 1), t: 1 } }, $db: "config" }
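
conn49 now repeats the same query with a fresh afterOpTime and, as the following lines show, is satisfied almost immediately from the committed snapshot. At the driver level the analogous "wait until the node has caught up to a point in time" behavior is expressed through causally consistent sessions (afterClusterTime) rather than the raw afterOpTime form mongos uses internally; a sketch under the same reachability assumptions as above:

# Sketch: causal consistency makes the second read wait for a snapshot at
# least as new as the first operation, akin to conn49's afterOpTime gate.
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
coll = client.get_database("config", read_concern=ReadConcern("majority")).shards

with client.start_session(causal_consistency=True) as session:
    first = coll.find_one({}, session=session)   # records operationTime
    again = coll.find_one({}, session=session)   # sent with afterClusterTime
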
2019-09-04T06:29:56.694+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578595, 1), t: 1 } } } 2019-09-04T06:29:56.694+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:29:56.694+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578595, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, F896433053214DB1247333EC620E9247ACC41471), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578595, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578595, 1) 2019-09-04T06:29:56.694+0000 D5 QUERY [conn49] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:56.694+0000 D5 QUERY [conn49] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:29:56.694+0000 D5 QUERY [conn49] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:29:56.694+0000 D5 QUERY [conn49] Rated tree: $and 2019-09-04T06:29:56.694+0000 D5 QUERY [conn49] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:56.694+0000 D5 QUERY [conn49] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:56.694+0000 D2 QUERY [conn49] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:56.694+0000 D3 STORAGE [conn49] WT begin_transaction for snapshot id 7490 2019-09-04T06:29:56.694+0000 D3 STORAGE [conn49] WT rollback_transaction for snapshot id 7490 2019-09-04T06:29:56.694+0000 I COMMAND [conn49] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578595, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, F896433053214DB1247333EC620E9247ACC41471), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578595, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:29:56.755+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:56.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:56.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:56.789+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:56.789+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:56.791+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:56.791+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:56.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:56.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:56.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:56.838+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 506) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:56.838+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 506 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:06.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:56.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:56.838+0000 D2 ASIO [Replication] Request 506 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), opTime: { ts: Timestamp(1567578595, 1), t: 1 }, wallTime: new Date(1567578595634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: 
Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578595, 1), $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } 2019-09-04T06:29:56.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), opTime: { ts: Timestamp(1567578595, 1), t: 1 }, wallTime: new Date(1567578595634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578595, 1), $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:56.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:56.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 506) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), opTime: { ts: Timestamp(1567578595, 1), t: 1 }, wallTime: new Date(1567578595634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578595, 1), $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } 2019-09-04T06:29:56.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:56.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:29:58.838Z 2019-09-04T06:29:56.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:56.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:56.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 507) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:56.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 507 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:06.839+0000 cmd:{ 
replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:56.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:56.839+0000 D2 ASIO [Replication] Request 507 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), opTime: { ts: Timestamp(1567578595, 1), t: 1 }, wallTime: new Date(1567578595634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578595, 1), $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } 2019-09-04T06:29:56.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), opTime: { ts: Timestamp(1567578595, 1), t: 1 }, wallTime: new Date(1567578595634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578595, 1), $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:56.839+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:56.839+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 507) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), opTime: { ts: Timestamp(1567578595, 1), t: 1 }, wallTime: new Date(1567578595634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578595, 1), $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } 2019-09-04T06:29:56.839+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 
2019-09-04T06:29:56.839+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:30:06.884+0000 2019-09-04T06:29:56.839+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:30:07.990+0000 2019-09-04T06:29:56.839+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:56.839+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:29:58.839Z 2019-09-04T06:29:56.839+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:56.839+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:56.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:56.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:56.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:56.855+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:56.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:56.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:56.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:56.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:56.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:56.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:56.955+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:56.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:56.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:56.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:56.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:56.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:56.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:57.037+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.037+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.055+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:57.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { 
clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, F896433053214DB1247333EC620E9247ACC41471), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:57.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:57.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, F896433053214DB1247333EC620E9247ACC41471), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:57.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, F896433053214DB1247333EC620E9247ACC41471), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:57.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), opTime: { ts: Timestamp(1567578595, 1), t: 1 }, wallTime: new Date(1567578595634) } 2019-09-04T06:29:57.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, F896433053214DB1247333EC620E9247ACC41471), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.156+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:57.204+0000 D2 COMMAND [conn160] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:29:57.204+0000 I COMMAND [conn160] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:57.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:29:57.256+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:57.258+0000 I NETWORK [listener] connection accepted from 10.20.102.80:61254 #220 (92 connections now open) 2019-09-04T06:29:57.258+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:29:57.258+0000 D2 COMMAND [conn220] run command admin.$cmd { isMaster: 1, client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } 2019-09-04T06:29:57.258+0000 I NETWORK [conn220] received client metadata from 10.20.102.80:61254 conn220: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } } 2019-09-04T06:29:57.258+0000 I COMMAND [conn220] command admin.$cmd appName: "MongoDB Shell" command: isMaster { isMaster: 1, client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5-17-gd808df2233" }, os: { type: "Windows", name: "Microsoft Windows 7", architecture: "x86_64", version: "6.1 SP1 (build 7601)" } }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:29:57.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.268+0000 D2 COMMAND [conn220] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-256", payload: "xxx", $db: "admin" } 2019-09-04T06:29:57.268+0000 D1 ACCESS [conn220] Returning user dba_root@admin from cache 2019-09-04T06:29:57.268+0000 I COMMAND [conn220] command admin.$cmd appName: "MongoDB Shell" command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-256", payload: "xxx", $db: "admin" } numYields:0 reslen:410 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.277+0000 D2 COMMAND [conn220] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:29:57.277+0000 I COMMAND [conn220] command admin.$cmd appName: "MongoDB Shell" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:339 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.287+0000 D2 COMMAND [conn220] run command admin.$cmd { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } 2019-09-04T06:29:57.287+0000 D1 ACCESS [conn220] Returning user dba_root@admin from cache 2019-09-04T06:29:57.287+0000 I ACCESS [conn220] Successfully authenticated as principal dba_root on admin from client 10.20.102.80:61254 2019-09-04T06:29:57.287+0000 I COMMAND [conn220] command admin.$cmd appName: "MongoDB Shell" command: saslContinue { saslContinue: 1, payload: "xxx", conversationId: 1, $db: "admin" } numYields:0 reslen:293 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.296+0000 D2 COMMAND [conn220] run command admin.$cmd { ping: 1.0, lsid: { id: UUID("23af97f8-66f0-4a27-b5f1-59167651ca5f") }, $clusterTime: { clusterTime: Timestamp(1567578415, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } 2019-09-04T06:29:57.296+0000 I COMMAND [conn220] command admin.$cmd 
appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("23af97f8-66f0-4a27-b5f1-59167651ca5f") }, $clusterTime: { clusterTime: Timestamp(1567578415, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.356+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:57.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.456+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:57.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.537+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.537+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.556+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:57.569+0000 I COMMAND [conn167] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578561, 1), signature: { hash: BinData(0, D212476059A77AEDF4B121F0DF010997DB78B838), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:57.570+0000 D1 - [conn167] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:57.570+0000 W - [conn167] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:57.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.586+0000 I - [conn167] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"5617
48F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:57.586+0000 D1 COMMAND [conn167] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578561, 1), signature: { hash: BinData(0, D212476059A77AEDF4B121F0DF010997DB78B838), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:57.586+0000 D1 - [conn167] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:57.586+0000 W - [conn167] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:57.607+0000 I - [conn167] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19Servi
ceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : 
"799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:57.607+0000 W COMMAND [conn167] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:29:57.607+0000 I COMMAND [conn167] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578561, 1), signature: { hash: BinData(0, D212476059A77AEDF4B121F0DF010997DB78B838), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:29:57.607+0000 D2 NETWORK [conn167] Session from 10.108.2.61:37876 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:57.607+0000 I NETWORK [conn167] end connection 10.108.2.61:37876 (91 connections now open) 2019-09-04T06:29:57.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.646+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:57.647+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:57.647+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578595, 1) 2019-09-04T06:29:57.647+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7528 2019-09-04T06:29:57.647+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:57.647+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns:
"local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:57.647+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7528 2019-09-04T06:29:57.648+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7531 2019-09-04T06:29:57.648+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7531 2019-09-04T06:29:57.648+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578595, 1), t: 1 }({ ts: Timestamp(1567578595, 1), t: 1 }) 2019-09-04T06:29:57.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.656+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:57.756+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:57.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.857+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:57.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.957+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:57.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:57.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:57.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:58.021+0000 D2 COMMAND [conn50] run command config.$cmd { find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1567578570, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578597, 1), signature: { hash: BinData(0, 19078AE682571CF884C1A9C5A414BD93F0CFA994), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578570, 3), t: 1 } }, $db: "config" } 2019-09-04T06:29:58.021+0000 D1 COMMAND [conn50] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578570, 3), t: 1 } } } 2019-09-04T06:29:58.021+0000 D3 STORAGE [conn50] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:29:58.021+0000 D1 COMMAND [conn50] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578570, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578597, 1), signature: { hash: BinData(0, 19078AE682571CF884C1A9C5A414BD93F0CFA994), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578570, 3), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578595, 1) 2019-09-04T06:29:58.021+0000 D5 QUERY [conn50] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shards Tree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:29:58.021+0000 D5 QUERY [conn50] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:29:58.021+0000 D5 QUERY [conn50] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:29:58.021+0000 D5 QUERY [conn50] Rated tree: $and 2019-09-04T06:29:58.021+0000 D5 QUERY [conn50] Planner: outputted 0 indexed solutions. 2019-09-04T06:29:58.021+0000 D5 QUERY [conn50] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:29:58.021+0000 D2 QUERY [conn50] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:29:58.021+0000 D3 STORAGE [conn50] WT begin_transaction for snapshot id 7543 2019-09-04T06:29:58.021+0000 D3 STORAGE [conn50] WT rollback_transaction for snapshot id 7543 2019-09-04T06:29:58.021+0000 I COMMAND [conn50] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578570, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578597, 1), signature: { hash: BinData(0, 19078AE682571CF884C1A9C5A414BD93F0CFA994), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578570, 3), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:29:58.037+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.037+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.057+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:58.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.157+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:58.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, F896433053214DB1247333EC620E9247ACC41471), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:58.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:29:58.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, F896433053214DB1247333EC620E9247ACC41471), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:58.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from 
cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, F896433053214DB1247333EC620E9247ACC41471), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:58.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), opTime: { ts: Timestamp(1567578595, 1), t: 1 }, wallTime: new Date(1567578595634) } 2019-09-04T06:29:58.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, F896433053214DB1247333EC620E9247ACC41471), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:29:58.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:29:58.257+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:58.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.357+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:58.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.457+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:58.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.537+0000 D2 COMMAND [conn17] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.537+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.558+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:58.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.647+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:58.647+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:58.647+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578595, 1) 2019-09-04T06:29:58.647+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7563 2019-09-04T06:29:58.647+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:58.647+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:58.647+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7563 2019-09-04T06:29:58.648+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7566 2019-09-04T06:29:58.648+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7566 2019-09-04T06:29:58.648+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578595, 1), t: 1 }({ ts: Timestamp(1567578595, 1), t: 1 }) 2019-09-04T06:29:58.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.658+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:58.755+0000 I COMMAND [conn172] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0FB04B3A73468DCEAB3E65E8617977C014F37989), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:58.756+0000 D1 - [conn172] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:58.756+0000 W - [conn172] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:58.758+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:58.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.773+0000 I - [conn172] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNex
tInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : 
"7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) 
[0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:29:58.773+0000 D1 COMMAND [conn172] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0FB04B3A73468DCEAB3E65E8617977C014F37989), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:58.773+0000 D1 - [conn172] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:29:58.773+0000 W - [conn172] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:29:58.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:58.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
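conn172's find on config.shards is waiting on readConcern: majority with afterOpTime { ts: Timestamp(1566459168, 1), t: 92 }, an optime from term 92, while every heartbeat in this log shows the set committing in term 1 at far newer timestamps; that afterOpTime can never be reached, so the 30-second maxTimeMS fires as MaxTimeMSExpired. The afterOpTime comes from the mongos-side $configServerState and likely predates a re-initialization of this config replica set. The wait itself is not driver-settable, but the query shape can be approximated; a sketch under those assumptions:

    # hypothetical re-issue of conn172's query shape; afterOpTime is internal
    # to mongos and cannot be set from a driver
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
    shards = client.get_database(
        "config", read_concern=ReadConcern("majority")).get_collection("shards")
    try:
        print(list(shards.find({}, max_time_ms=30000)))
    except ExecutionTimeout:
        # MaxTimeMSExpired (errCode:50) surfaces as ExecutionTimeout in pymongo
        print("majority read point not reached within 30s")

2019-09-04T06:29:58.794+0000 I - [conn172] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----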
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:58.794+0000 W COMMAND [conn172] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:29:58.794+0000 I COMMAND [conn172] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0FB04B3A73468DCEAB3E65E8617977C014F37989), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:29:58.794+0000 D2 NETWORK [conn172] Session from 10.108.2.64:46576 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:29:58.794+0000 I NETWORK [conn172] end connection 10.108.2.64:46576 (90 connections now open) 2019-09-04T06:29:58.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:58.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 508) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:58.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 508 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:08.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:58.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:58.838+0000 D2 ASIO [Replication] Request 508 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), opTime: { ts: Timestamp(1567578595, 1), t: 1 }, wallTime: new Date(1567578595634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578595, 1), $clusterTime: { clusterTime: Timestamp(1567578597, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } 2019-09-04T06:29:58.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), opTime: { ts: Timestamp(1567578595, 1), t: 1 }, wallTime: new Date(1567578595634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578595, 1), $clusterTime: { clusterTime: 
Timestamp(1567578597, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:29:58.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:58.838+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 508) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), opTime: { ts: Timestamp(1567578595, 1), t: 1 }, wallTime: new Date(1567578595634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578595, 1), $clusterTime: { clusterTime: Timestamp(1567578597, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } 2019-09-04T06:29:58.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:29:58.838+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:00.838Z 2019-09-04T06:29:58.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:58.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:29:58.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 509) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:58.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 509 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:08.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:29:58.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:58.839+0000 D2 ASIO [Replication] Request 509 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), opTime: { ts: Timestamp(1567578595, 1), t: 1 }, wallTime: new Date(1567578595634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578595, 1), $clusterTime: { clusterTime: Timestamp(1567578597, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } 2019-09-04T06:29:58.839+0000 D3 EXECUTOR [Replication] Received remote response: 
RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), opTime: { ts: Timestamp(1567578595, 1), t: 1 }, wallTime: new Date(1567578595634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578595, 1), $clusterTime: { clusterTime: Timestamp(1567578597, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:29:58.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:29:58.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 509) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), opTime: { ts: Timestamp(1567578595, 1), t: 1 }, wallTime: new Date(1567578595634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578595, 1), $clusterTime: { clusterTime: Timestamp(1567578597, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578595, 1) } 2019-09-04T06:29:58.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:29:58.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:30:07.990+0000 2019-09-04T06:29:58.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:30:09.530+0000 2019-09-04T06:29:58.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:29:58.839+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:29:58.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:00.839Z 2019-09-04T06:29:58.839+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:58.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:29:58.858+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:58.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.958+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:58.976+0000 D2 COMMAND [conn29] run command 
admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:58.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:58.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:59.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:29:59.037+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:59.037+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:59.058+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:59.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578597, 1), signature: { hash: BinData(0, 19078AE682571CF884C1A9C5A414BD93F0CFA994), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:59.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:29:59.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578597, 1), signature: { hash: BinData(0, 19078AE682571CF884C1A9C5A414BD93F0CFA994), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:59.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578597, 1), signature: { hash: BinData(0, 19078AE682571CF884C1A9C5A414BD93F0CFA994), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:29:59.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), opTime: { ts: Timestamp(1567578595, 1), t: 1 }, wallTime: new Date(1567578595634) } 2019-09-04T06:29:59.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578597, 1), signature: { hash: BinData(0, 19078AE682571CF884C1A9C5A414BD93F0CFA994), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:59.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:59.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms
2019-09-04T06:29:59.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:59.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:59.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:59.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:59.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:59.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:59.158+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:59.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:29:59.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:29:59.258+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:59.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:59.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:59.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:59.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:59.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:59.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:59.359+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:59.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:59.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:59.459+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:29:59.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:59.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:59.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:59.489+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:59.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:59.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:59.537+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:59.537+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
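The once-per-second FlowControlRefresher entries show flow control idling: with no majority-commit lag, it re-issues the maximum 1000000000 tickets on every refresh. The same state is exposed through serverStatus; a minimal sketch, assuming the 4.2 field names:

    # hypothetical peek at the flow-control state behind the
    # "Refreshing tickets" lines above
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
    fc = client.admin.command("serverStatus")["flowControl"]
    # expect enabled=True and isLagged=False while tickets stay at the maximum
    print(fc["enabled"], fc["targetRateLimit"], fc["isLagged"])

2019-09-04T06:29:59.559+0000 D4 STORAGE [WTJournalFlusher] flushed 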
journal 2019-09-04T06:29:59.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:59.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:59.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:59.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:59.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:59.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:59.647+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:29:59.647+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:29:59.647+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578595, 1) 2019-09-04T06:29:59.647+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7597 2019-09-04T06:29:59.647+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:29:59.647+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:29:59.647+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7597 2019-09-04T06:29:59.648+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7600 2019-09-04T06:29:59.648+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7600 2019-09-04T06:29:59.648+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578595, 1), t: 1 }({ ts: Timestamp(1567578595, 1), t: 1 }) 2019-09-04T06:29:59.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:59.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:59.659+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:59.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:59.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:59.759+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:59.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:59.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:59.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:59.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:59.823+0000 D2 COMMAND [conn42] run command admin.$cmd 
{ isMaster: 1, $db: "admin" } 2019-09-04T06:29:59.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:59.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:59.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:59.859+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:59.947+0000 I COMMAND [conn185] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578562, 1), signature: { hash: BinData(0, D409C985E59557A0ED9DA32FA1E2AFD0B3BC04D7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:29:59.947+0000 D1 - [conn185] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:29:59.947+0000 W - [conn185] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:59.959+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:29:59.964+0000 I - [conn185] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:59.964+0000 D1 COMMAND [conn185] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578562, 1), signature: { hash: BinData(0, D409C985E59557A0ED9DA32FA1E2AFD0B3BC04D7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:59.964+0000 D1 - [conn185] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:29:59.964+0000 W - [conn185] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:29:59.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:29:59.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:29:59.984+0000 I - [conn185] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 
0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : 
"88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" 
: "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:29:59.984+0000 W COMMAND [conn185] Unable to 
gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:29:59.984+0000 I COMMAND [conn185] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578562, 1), signature: { hash: BinData(0, D409C985E59557A0ED9DA32FA1E2AFD0B3BC04D7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms
2019-09-04T06:29:59.984+0000 D2 NETWORK [conn185] Session from 10.108.2.72:45696 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:29:59.984+0000 I NETWORK [conn185] end connection 10.108.2.72:45696 (89 connections now open)
2019-09-04T06:29:59.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:59.989+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:29:59.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:29:59.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:00.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:00.004+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:30:00.004+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:30:00.005+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms
2019-09-04T06:30:00.016+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:30:00.016+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:30:00.018+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:30:00.018+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:30:00.018+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456
2019-09-04T06:30:00.018+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:30:00.019+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:30:00.020+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:00.033+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:30:00.033+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:30:00.034+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:30:00.034+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:30:00.034+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:00.034+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:00.034+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:30:00.034+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:30:00.034+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:30:00.034+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:30:00.034+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:30:00.034+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:30:00.034+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:00.034+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:00.034+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:00.034+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578595, 1)
2019-09-04T06:30:00.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7617
2019-09-04T06:30:00.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7617
2019-09-04T06:30:00.034+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:00.034+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:30:00.034+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:30:00.034+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:30:00.034+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:00.034+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:30:00.034+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:30:00.034+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:30:00.034+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578595, 1)
2019-09-04T06:30:00.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7620
2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7620
2019-09-04T06:30:00.035+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:00.035+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:00.035+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:30:00.035+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:30:00.035+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578595, 1)
2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7622
2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7622
2019-09-04T06:30:00.035+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:00.035+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:30:00.035+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist.
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:30:00.035+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:30:00.035+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:00.035+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7625 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7625 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7626 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7626 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7627 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:30:00.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7627 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7628 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7628 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7629 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7629 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7630 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
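
The conn90 sequence above, a SCRAM-SHA-1 login as dba_root followed by serverStatus, replSetGetStatus, a count of jumbo chunks, and the listDatabases call that drives the per-collection catalog walk logged around this point, is consistent with a monitoring agent polling the config server; the find against config.shards at 06:29:59 is the same style of probe, bounded by maxTimeMS: 30000 and failing with MaxTimeMSExpired after 30031ms. Below is a minimal pymongo sketch of such a probe; the password is a placeholder and the driver-side details are assumptions, not anything recorded in this log.

    from pymongo import MongoClient, ReadPreference
    from pymongo.errors import ExecutionTimeout

    # Assumed connection details; only the host, port, and user name appear in the log.
    client = MongoClient(
        host="cmodb803.togewa.com",
        port=27019,
        username="dba_root",
        password="...",  # placeholder
        authSource="admin",
        authMechanism="SCRAM-SHA-1",  # the mechanism negotiated via saslStart/saslContinue above
    )

    config_db = client.get_database("config", read_preference=ReadPreference.SECONDARY_PREFERRED)
    admin_db = client.get_database("admin", read_preference=ReadPreference.SECONDARY_PREFERRED)

    try:
        # Same shape as the failing find: a 30 s server-side bound, like maxTimeMS: 30000 above.
        shards = list(config_db.shards.find({}).max_time_ms(30000))
    except ExecutionTimeout:
        # pymongo's surface for MaxTimeMSExpired (errCode 50 in the log).
        shards = []

    # The jumbo-chunk check; it collection-scans because no index on config.chunks
    # leads with 'jumbo' (the planner above outputted 0 indexed solutions).
    jumbo_chunks = config_db.chunks.count_documents({"jumbo": True})

    # The listDatabases call behind the catalog metadata walk in this section.
    databases = admin_db.command("listDatabases")
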
2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7630 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7631 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7631 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7632 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, 
prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7632 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7633 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7633 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7634 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7634 2019-09-04T06:30:00.036+0000 D3 
STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7635 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:00.036+0000 D3 STORAGE 
[conn90] WT rollback_transaction for snapshot id 7635 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7636 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7636 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7637 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:30:00.036+0000 
D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7637 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7638 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7638 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7639 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] returning metadata: 
md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:30:00.036+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7639 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7640 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7640 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7641 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:30:00.037+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7641 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7642 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7642 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7643 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7643 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] looking up metadata 
for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7644 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7644 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7645 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7645 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7646 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
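
[Annotation] The burst of conn90 activity above — a metadata lookup for each local.* collection, each wrapped in a WT begin_transaction/rollback_transaction pair — is the normal storage-engine pattern for a read-only stats sweep: reads take a WiredTiger snapshot and release it with a rollback, so rollback_transaction here does not indicate an error. The listDatabases and per-database dbStats commands driving the sweep appear in the entries that follow, tagged with $readPreference: { mode: "secondaryPreferred" }, consistent with a monitoring agent polling this node. As a rough sketch (not taken from this log; the address is an assumption for illustration), the same probe could be reproduced from the legacy mongo shell:

    // Hypothetical reproduction of the conn90 stats sweep.
    var conn = new Mongo("localhost:27019");      // assumed address of this config server
    conn.setReadPref("secondaryPreferred");       // matches $readPreference in the log
    printjson(conn.getDB("admin").runCommand({ listDatabases: 1 }));
    ["admin", "config", "local"].forEach(function (name) {
        printjson(conn.getDB(name).runCommand({ dbStats: 1 }));
    });
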
2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7646 2019-09-04T06:30:00.037+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:30:00.037+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7648 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7648 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7649 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7649 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7650 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7650 2019-09-04T06:30:00.037+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:00.037+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7652 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7652 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7653 2019-09-04T06:30:00.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7653 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7654 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7654 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7655 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7655 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7656 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7656 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7657 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7657 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7658 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7658 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7659 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT rollback_transaction for 
snapshot id 7659 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7660 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7660 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7661 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7661 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7662 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7662 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7663 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7663 2019-09-04T06:30:00.038+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:00.038+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7665 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7665 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7666 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7666 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7667 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7667 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7668 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7668 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7669 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7669 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 7670 2019-09-04T06:30:00.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 7670 2019-09-04T06:30:00.038+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:00.038+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.039+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.059+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:00.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { 
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.117+0000 D2 ASIO [RS] Request 505 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578600, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578600115), o: { $v: 1, $set: { ping: new Date(1567578600115) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpApplied: { ts: Timestamp(1567578600, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578595, 1), $clusterTime: { clusterTime: Timestamp(1567578600, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 1) } 2019-09-04T06:30:00.117+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578600, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578600115), o: { $v: 1, $set: { ping: new Date(1567578600115) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpApplied: { ts: Timestamp(1567578600, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578595, 1), $clusterTime: { clusterTime: Timestamp(1567578600, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:00.117+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:00.117+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578600, 1) and ending at ts: Timestamp(1567578600, 1) 2019-09-04T06:30:00.117+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:09.530+0000 2019-09-04T06:30:00.117+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:10.621+0000 2019-09-04T06:30:00.117+0000 
D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:00.117+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:00.117+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578600, 1), t: 1 } 2019-09-04T06:30:00.117+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:00.117+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:00.117+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578595, 1) 2019-09-04T06:30:00.117+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7676 2019-09-04T06:30:00.117+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:00.117+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:00.117+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7676 2019-09-04T06:30:00.117+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:00.117+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:30:00.117+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:00.117+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578595, 1) 2019-09-04T06:30:00.117+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7679 2019-09-04T06:30:00.117+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578600, 1) } 2019-09-04T06:30:00.117+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:00.117+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:00.117+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7679 2019-09-04T06:30:00.117+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7601 2019-09-04T06:30:00.118+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7601 2019-09-04T06:30:00.118+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7682 2019-09-04T06:30:00.118+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7682 2019-09-04T06:30:00.118+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:00.118+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 7684 2019-09-04T06:30:00.118+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578600, 1) 2019-09-04T06:30:00.118+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future 
write operations to Timestamp(1567578600, 1) 2019-09-04T06:30:00.118+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 7684 2019-09-04T06:30:00.118+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:00.118+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:30:00.118+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7683 2019-09-04T06:30:00.118+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7683 2019-09-04T06:30:00.118+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7686 2019-09-04T06:30:00.118+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7686 2019-09-04T06:30:00.118+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578600, 1), t: 1 }({ ts: Timestamp(1567578600, 1), t: 1 }) 2019-09-04T06:30:00.118+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578600, 1) 2019-09-04T06:30:00.118+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7687 2019-09-04T06:30:00.118+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578600, 1) } } ] } sort: {} projection: {} 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578600, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578600, 1) || First: notFirst: full path: ts
2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578600, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578600, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578600, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578600, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:00.118+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7687 2019-09-04T06:30:00.118+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:00.118+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:00.118+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578600, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578600115), o: { $v: 1, $set: { ping: new Date(1567578600115) } } }, oplog application mode: Secondary 2019-09-04T06:30:00.118+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578600, 1) 2019-09-04T06:30:00.118+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 7689 2019-09-04T06:30:00.118+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" } 2019-09-04T06:30:00.118+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:30:00.118+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 7689 2019-09-04T06:30:00.118+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:00.118+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578600, 1), t: 1 }({ ts: Timestamp(1567578600, 1), t: 1 }) 2019-09-04T06:30:00.118+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578600, 1) 2019-09-04T06:30:00.118+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7688 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:00.118+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:00.118+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:00.118+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7688 2019-09-04T06:30:00.118+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578600, 1) 2019-09-04T06:30:00.118+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:00.118+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7692 2019-09-04T06:30:00.118+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578600, 1), t: 1 }, appliedWallTime: new Date(1567578600115), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:00.118+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 510 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:30.118+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578600, 1), t: 1 }, appliedWallTime: new Date(1567578600115), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:00.118+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:30.118+0000 2019-09-04T06:30:00.118+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7692 2019-09-04T06:30:00.119+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578600, 1), t: 1 }({ ts: Timestamp(1567578600, 1), t: 1 }) 2019-09-04T06:30:00.119+0000 D2 ASIO [RS] Request 510 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpVisible: { ts: Timestamp(1567578600, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 1), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 1) } 2019-09-04T06:30:00.119+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpVisible: { ts: Timestamp(1567578600, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 1), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:00.119+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:00.119+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:30.119+0000 2019-09-04T06:30:00.119+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578600, 1), t: 1 } 2019-09-04T06:30:00.119+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 511 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:10.119+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578595, 1), t: 1 } } 2019-09-04T06:30:00.119+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:30:00.119+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:30.119+0000 2019-09-04T06:30:00.119+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:00.119+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 1), t: 1 }, durableWallTime: new Date(1567578600115), appliedOpTime: { ts: Timestamp(1567578600, 1), t: 1 }, appliedWallTime: new Date(1567578600115), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:00.119+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 512 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:30.119+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: 
Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 1), t: 1 }, durableWallTime: new Date(1567578600115), appliedOpTime: { ts: Timestamp(1567578600, 1), t: 1 }, appliedWallTime: new Date(1567578600115), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578595, 1), t: 1 }, lastCommittedWall: new Date(1567578595634), lastOpVisible: { ts: Timestamp(1567578595, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:00.119+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:30.119+0000 2019-09-04T06:30:00.120+0000 D2 ASIO [RS] Request 511 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpVisible: { ts: Timestamp(1567578600, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpApplied: { ts: Timestamp(1567578600, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 1), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 1) } 2019-09-04T06:30:00.120+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpVisible: { ts: Timestamp(1567578600, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpApplied: { ts: Timestamp(1567578600, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 1), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:00.120+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:00.120+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:30:00.120+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D2 REPL [replication-0] 
Setting replication's stable optime to { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D2 ASIO [RS] Request 512 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpVisible: { ts: Timestamp(1567578600, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 1), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 1) } 2019-09-04T06:30:00.120+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpVisible: { ts: Timestamp(1567578600, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 1), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:00.120+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578595, 1) 2019-09-04T06:30:00.120+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn196] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn196] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.962+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn197] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn197] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.987+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn192] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn192] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.469+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:30:22.595+0000 2019-09-04T06:30:00.120+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:00.120+0000 D3 REPL [conn194] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:30.120+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn194] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.763+0000 2019-09-04T06:30:00.120+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:10.621+0000 2019-09-04T06:30:00.120+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:10.442+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:30:00.120+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:00.120+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 513 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:10.120+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578600, 1), t: 1 } } 2019-09-04T06:30:00.120+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000 2019-09-04T06:30:00.120+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:30.120+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000 2019-09-04T06:30:00.120+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 
2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn195] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn195] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.897+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn184] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn184] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.433+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 
2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn193] Got notified of new snapshot: { ts: Timestamp(1567578600, 1), t: 1 }, 2019-09-04T06:30:00.115+0000 2019-09-04T06:30:00.120+0000 D3 REPL [conn193] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.753+0000 2019-09-04T06:30:00.121+0000 D2 ASIO [RS] Request 513 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578600, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578600117), o: { $v: 1, $set: { ping: new Date(1567578600116) } } }, { ts: Timestamp(1567578600, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578600117), o: { $v: 1, $set: { ping: new Date(1567578600117) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpVisible: { ts: Timestamp(1567578600, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpApplied: { ts: Timestamp(1567578600, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 1), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) } 2019-09-04T06:30:00.121+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578600, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578600117), o: { $v: 1, $set: { ping: new Date(1567578600116) } } }, { ts: Timestamp(1567578600, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578600117), o: { $v: 1, $set: { ping: new Date(1567578600117) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpVisible: { ts: Timestamp(1567578600, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpApplied: { ts: Timestamp(1567578600, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 1), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:00.121+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:00.121+0000 D2 REPL [replication-1] oplog fetcher read 2 operations from remote oplog starting at ts: Timestamp(1567578600, 2) and ending at ts: Timestamp(1567578600, 3) 2019-09-04T06:30:00.121+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:10.442+0000 2019-09-04T06:30:00.121+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:10.892+0000 2019-09-04T06:30:00.121+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:00.121+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:00.121+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578600, 3), t: 1 } 2019-09-04T06:30:00.121+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:00.121+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:00.121+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578600, 1) 2019-09-04T06:30:00.121+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7696 2019-09-04T06:30:00.121+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:00.121+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:00.121+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7696 2019-09-04T06:30:00.121+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:00.121+0000 D2 REPL [rsSync-0] replication batch size is 2 2019-09-04T06:30:00.121+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:00.121+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578600, 2) } 2019-09-04T06:30:00.121+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578600, 1) 2019-09-04T06:30:00.121+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7699 2019-09-04T06:30:00.121+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7694 2019-09-04T06:30:00.121+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:00.121+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:00.121+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7699 
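
[Annotation] At this point the oplog fetcher has read a two-operation batch (the config.lockpings pings from cmodb806 and cmodb809), the oplogTruncateAfterPoint has been advanced to the timestamp of the first operation in the batch, and the batch is being handed to the writer pool; the truncate point is reset to Timestamp(0, 0) once the batch is durably in the oplog, so a crash mid-batch never leaves a partially written suffix behind. A sketch (assuming a direct shell connection to this secondary) of how to inspect the same bookkeeping collections named in these entries:

    // Read the replication bookkeeping collections referenced in the log.
    rs.slaveOk();                                 // permit reads while in SECONDARY state
    var local = db.getSiblingDB("local");
    local.getCollection("oplog.rs").find().sort({ $natural: -1 }).limit(2).pretty();  // the batch just fetched
    printjson(local.getCollection("replset.oplogTruncateAfterPoint").findOne());      // Timestamp(0, 0) between batches
    printjson(local.getCollection("replset.minvalid").findOne());                     // minvalid / appliedThrough markers
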
2019-09-04T06:30:00.121+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7694 2019-09-04T06:30:00.121+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7702 2019-09-04T06:30:00.121+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7702 2019-09-04T06:30:00.121+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:00.121+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 7704 2019-09-04T06:30:00.121+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578600, 2) 2019-09-04T06:30:00.121+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578600, 2) 2019-09-04T06:30:00.121+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578600, 3) 2019-09-04T06:30:00.121+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578600, 3) 2019-09-04T06:30:00.121+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 7704 2019-09-04T06:30:00.121+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:00.121+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:30:00.121+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7703 2019-09-04T06:30:00.121+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7703 2019-09-04T06:30:00.121+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7706 2019-09-04T06:30:00.122+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7706 2019-09-04T06:30:00.122+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578600, 3), t: 1 }({ ts: Timestamp(1567578600, 3), t: 1 }) 2019-09-04T06:30:00.122+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578600, 3) 2019-09-04T06:30:00.122+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7707 2019-09-04T06:30:00.122+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578600, 3) } } ] } sort: {} projection: {} 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578600, 3)
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578600, 3) || First: notFirst: full path: ts
2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578600, 3)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578600, 3)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578600, 3) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578600, 3)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:00.122+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7707 2019-09-04T06:30:00.122+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:00.122+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:00.122+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:00.122+0000 D3 STORAGE [repl-writer-worker-10] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:00.122+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578600, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578600117), o: { $v: 1, $set: { ping: new Date(1567578600117) } } }, oplog application mode: Secondary 2019-09-04T06:30:00.122+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578600, 3) 2019-09-04T06:30:00.122+0000 D3 REPL [repl-writer-worker-10] applying op: { ts: Timestamp(1567578600, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578600117), o: { $v: 1, $set: { ping: new Date(1567578600116) } } }, oplog application mode: Secondary 2019-09-04T06:30:00.122+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 7709 2019-09-04T06:30:00.122+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578600, 2) 2019-09-04T06:30:00.122+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" } 2019-09-04T06:30:00.122+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 7710 2019-09-04T06:30:00.122+0000 D2 QUERY [repl-writer-worker-10] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" } 2019-09-04T06:30:00.122+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:30:00.122+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 7709 2019-09-04T06:30:00.122+0000 D4 WRITE [repl-writer-worker-10] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:30:00.122+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:00.122+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 7710 2019-09-04T06:30:00.122+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:00.122+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578600, 3), t: 1 }({ ts: Timestamp(1567578600, 3), t: 1 }) 2019-09-04T06:30:00.122+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578600, 3) 2019-09-04T06:30:00.122+0000 D3 STORAGE [rsSync-0] WT 
begin_transaction for snapshot id 7708 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:00.122+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:00.122+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:00.122+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7708 2019-09-04T06:30:00.122+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578600, 3) 2019-09-04T06:30:00.122+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:00.122+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7714 2019-09-04T06:30:00.122+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 1), t: 1 }, durableWallTime: new Date(1567578600115), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpVisible: { ts: Timestamp(1567578600, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:00.122+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 514 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:30.122+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 1), t: 1 }, durableWallTime: new Date(1567578600115), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), 
lastOpVisible: { ts: Timestamp(1567578600, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:00.122+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:30.122+0000 2019-09-04T06:30:00.122+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7714 2019-09-04T06:30:00.122+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578600, 3), t: 1 }({ ts: Timestamp(1567578600, 3), t: 1 }) 2019-09-04T06:30:00.122+0000 D2 ASIO [RS] Request 514 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpVisible: { ts: Timestamp(1567578600, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 1), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) } 2019-09-04T06:30:00.122+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpVisible: { ts: Timestamp(1567578600, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 1), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:00.122+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:00.122+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:30.122+0000 2019-09-04T06:30:00.123+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578600, 3), t: 1 } 2019-09-04T06:30:00.123+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 515 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:10.123+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578600, 1), t: 1 } } 2019-09-04T06:30:00.123+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:30.122+0000 2019-09-04T06:30:00.124+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:30:00.124+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:00.124+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, 
durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpVisible: { ts: Timestamp(1567578600, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:00.124+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 516 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:30.124+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, durableWallTime: new Date(1567578595634), appliedOpTime: { ts: Timestamp(1567578595, 1), t: 1 }, appliedWallTime: new Date(1567578595634), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpVisible: { ts: Timestamp(1567578600, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:00.124+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:30.122+0000 2019-09-04T06:30:00.125+0000 D2 ASIO [RS] Request 516 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpVisible: { ts: Timestamp(1567578600, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 1), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) } 2019-09-04T06:30:00.125+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 1), t: 1 }, lastCommittedWall: new Date(1567578600115), lastOpVisible: { ts: Timestamp(1567578600, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 1), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:00.125+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of 
pool replication 2019-09-04T06:30:00.125+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:30.122+0000 2019-09-04T06:30:00.125+0000 D2 ASIO [RS] Request 515 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpApplied: { ts: Timestamp(1567578600, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) } 2019-09-04T06:30:00.125+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpApplied: { ts: Timestamp(1567578600, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:00.125+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:00.125+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:30:00.125+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.125+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.125+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578595, 3) 2019-09-04T06:30:00.125+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:10.892+0000 2019-09-04T06:30:00.125+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:10.770+0000 2019-09-04T06:30:00.125+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 517 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:10.125+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578600, 3), t: 1 } } 2019-09-04T06:30:00.125+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: 
Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn197] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn197] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.987+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn192] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.125+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:30.122+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn192] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.469+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.125+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:00.125+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn195] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn195] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.897+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000 2019-09-04T06:30:00.125+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn194] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn194] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.763+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.125+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000 
2019-09-04T06:30:00.125+0000 D3 REPL [conn196] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn196] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.962+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn184] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn184] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.433+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000 
2019-09-04T06:30:00.126+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn193] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn193] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:00.753+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578600, 3), t: 1 }, 2019-09-04T06:30:00.117+0000 2019-09-04T06:30:00.126+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000 2019-09-04T06:30:00.130+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.160+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:00.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.217+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578600, 3) 2019-09-04T06:30:00.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, EC799971AA75E5E45E080BC0A28BCCC7919B4E15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:00.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:30:00.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, EC799971AA75E5E45E080BC0A28BCCC7919B4E15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:00.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, EC799971AA75E5E45E080BC0A28BCCC7919B4E15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:00.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), opTime: { ts: Timestamp(1567578600, 3), t: 1 }, wallTime: new Date(1567578600117) } 2019-09-04T06:30:00.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, EC799971AA75E5E45E080BC0A28BCCC7919B4E15), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:00.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:00.260+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:00.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.360+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:00.433+0000 I COMMAND [conn184] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 1), signature: { hash: BinData(0, C7C317DFCB05C1E2BBC73621E403F791F3E36874), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:00.433+0000 D1 - [conn184] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:00.433+0000 W - [conn184] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:00.450+0000 I - [conn184] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:00.450+0000 D1 COMMAND [conn184] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578570, 1), signature: { hash: BinData(0, C7C317DFCB05C1E2BBC73621E403F791F3E36874), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:00.450+0000 D1 - [conn184] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:00.450+0000 W - [conn184] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:00.460+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:00.470+0000 I COMMAND [conn192] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578561, 1), signature: { hash: BinData(0, D212476059A77AEDF4B121F0DF010997DB78B838), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:00.470+0000 D1 - [conn192] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:00.470+0000 W - [conn192] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:00.470+0000 I - [conn184] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:00.470+0000 W COMMAND [conn184] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:00.470+0000 I COMMAND [conn184] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578570, 1), signature: { hash: BinData(0, C7C317DFCB05C1E2BBC73621E403F791F3E36874), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:30:00.470+0000 D2 NETWORK [conn184] Session from 10.108.2.48:42048 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:00.470+0000 I NETWORK [conn184] end connection 10.108.2.48:42048 (88 connections now open) 2019-09-04T06:30:00.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.487+0000 I - [conn192] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"
2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { 
"b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] 
----- END BACKTRACE ----- 2019-09-04T06:30:00.487+0000 D1 COMMAND [conn192] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578561, 1), signature: { hash: BinData(0, D212476059A77AEDF4B121F0DF010997DB78B838), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:00.487+0000 D1 - [conn192] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:00.487+0000 W - [conn192] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:00.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.489+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.506+0000 I - [conn192] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:00.506+0000 W COMMAND [conn192] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:00.507+0000 I COMMAND [conn192] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578561, 1), signature: { hash: BinData(0, D212476059A77AEDF4B121F0DF010997DB78B838), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:30:00.507+0000 D2 NETWORK [conn192] Session from 10.108.2.55:36612 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:00.507+0000 I NETWORK [conn192] end connection 10.108.2.55:36612 (87 connections now open) 2019-09-04T06:30:00.538+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.538+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.560+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:00.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.660+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:00.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47160 #221 (88 connections now open) 2019-09-04T06:30:00.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:00.743+0000 D2 COMMAND [conn221] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { 
minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:00.743+0000 I NETWORK [conn221] received client metadata from 10.108.2.52:47160 conn221: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:00.743+0000 I COMMAND [conn221] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:00.753+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50094 #222 (89 connections now open) 2019-09-04T06:30:00.753+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:00.753+0000 D2 COMMAND [conn222] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:00.753+0000 I NETWORK [conn222] received client metadata from 10.108.2.50:50094 conn222: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:00.753+0000 I COMMAND [conn222] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:00.753+0000 I COMMAND [conn193] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578562, 1), signature: { hash: BinData(0, D409C985E59557A0ED9DA32FA1E2AFD0B3BC04D7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:00.754+0000 D1 - [conn193] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:00.754+0000 W - [conn193] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:00.760+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:00.764+0000 I COMMAND [conn194] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578569, 1), signature: { hash: BinData(0, 7B33B4EC422C8F7442E7E40E2C288C538CABC8B5), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:00.764+0000 D1 - [conn194] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:00.764+0000 W - [conn194] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:00.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.770+0000 I - [conn193] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:00.770+0000 D1 COMMAND [conn193] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578562, 1), signature: { hash: BinData(0, D409C985E59557A0ED9DA32FA1E2AFD0B3BC04D7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:00.770+0000 D1 - [conn193] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:00.770+0000 W - [conn193] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:00.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.789+0000 I - [conn194] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 
0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" 
: "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : 
"/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:00.789+0000 D1 COMMAND [conn194] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578569, 1), signature: { hash: BinData(0, 7B33B4EC422C8F7442E7E40E2C288C538CABC8B5), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:00.789+0000 D1 - [conn194] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:00.790+0000 W - [conn194] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:00.809+0000 I - [conn193] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 
0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : 
"7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : 
"/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:00.810+0000 W COMMAND [conn193] Unable to gather storage statistics 
for a slow operation due to lock acquire timeout
2019-09-04T06:30:00.810+0000 I COMMAND [conn193] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578562, 1), signature: { hash: BinData(0, D409C985E59557A0ED9DA32FA1E2AFD0B3BC04D7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms
2019-09-04T06:30:00.810+0000 D2 NETWORK [conn193] Session from 10.108.2.52:47136 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:30:00.810+0000 I NETWORK [conn193] end connection 10.108.2.52:47136 (88 connections now open)
2019-09-04T06:30:00.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:00.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:00.829+0000 I - [conn194] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, 
"buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:30:00.829+0000 W COMMAND [conn194] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:30:00.829+0000 I COMMAND [conn194] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578569, 1), signature: { hash: BinData(0, 7B33B4EC422C8F7442E7E40E2C288C538CABC8B5), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30036ms
2019-09-04T06:30:00.830+0000 D2 NETWORK [conn194] Session from 10.108.2.50:50070 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:30:00.830+0000 I NETWORK [conn194] end connection 10.108.2.50:50070 (87 connections now open)
2019-09-04T06:30:00.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:30:00.838+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 518) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:00.838+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 518 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:10.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:00.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:30:00.838+0000 D2 ASIO [Replication] Request 518 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), opTime: { ts: Timestamp(1567578600, 3), t: 1 }, wallTime: new 
Date(1567578600117), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) } 2019-09-04T06:30:00.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), opTime: { ts: Timestamp(1567578600, 3), t: 1 }, wallTime: new Date(1567578600117), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:00.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:00.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 518) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), opTime: { ts: Timestamp(1567578600, 3), t: 1 }, wallTime: new Date(1567578600117), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) } 2019-09-04T06:30:00.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:00.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:02.838Z 2019-09-04T06:30:00.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:00.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:00.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 519) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:00.839+0000 D3 EXECUTOR 
[replexec-3] Scheduling remote command request: RemoteCommand 519 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:10.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:00.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:00.839+0000 D2 ASIO [Replication] Request 519 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), opTime: { ts: Timestamp(1567578600, 3), t: 1 }, wallTime: new Date(1567578600117), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) } 2019-09-04T06:30:00.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), opTime: { ts: Timestamp(1567578600, 3), t: 1 }, wallTime: new Date(1567578600117), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:00.839+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:30:00.839+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 519) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), opTime: { ts: Timestamp(1567578600, 3), t: 1 }, wallTime: new Date(1567578600117), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: 
Timestamp(1567578600, 3) } 2019-09-04T06:30:00.839+0000 D4 ELECTION [replexec-1] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:00.839+0000 D4 REPL [replexec-1] Canceling election timeout callback at 2019-09-04T06:30:10.770+0000 2019-09-04T06:30:00.839+0000 D4 ELECTION [replexec-1] Scheduling election timeout callback at 2019-09-04T06:30:12.176+0000 2019-09-04T06:30:00.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:00.839+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:00.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:00.839+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:02.839Z 2019-09-04T06:30:00.839+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:00.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.860+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:00.886+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38654 #223 (88 connections now open) 2019-09-04T06:30:00.886+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:00.887+0000 D2 COMMAND [conn223] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:00.887+0000 I NETWORK [conn223] received client metadata from 10.108.2.44:38654 conn223: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:00.887+0000 I COMMAND [conn223] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:00.898+0000 I COMMAND [conn195] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578568, 1), signature: { hash: BinData(0, 262C353B5DFD2D2918EF11F585DC4092E62B0B4B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:00.898+0000 D1 - [conn195] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:00.898+0000 W - [conn195] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:00.915+0000 I - [conn195] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:00.915+0000 D1 COMMAND [conn195] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578568, 1), signature: { hash: BinData(0, 262C353B5DFD2D2918EF11F585DC4092E62B0B4B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:00.915+0000 D1 - [conn195] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:30:00.915+0000 W - [conn195] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:00.935+0000 I - [conn195] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : 
"BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:30:00.935+0000 W COMMAND [conn195] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:30:00.935+0000 I COMMAND [conn195] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578568, 1), signature: { hash: BinData(0, 262C353B5DFD2D2918EF11F585DC4092E62B0B4B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms
2019-09-04T06:30:00.935+0000 D2 NETWORK [conn195] Session from 10.108.2.44:38638 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:30:00.935+0000 I NETWORK [conn195] end connection 10.108.2.44:38638 (87 connections now open)
2019-09-04T06:30:00.960+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:00.962+0000 I COMMAND [conn196] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578563, 1), signature: { hash: BinData(0, 2A0BBF5A114B4F2CAB0EF4784300FF5254D97052), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:00.962+0000 D1 - [conn196] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:00.962+0000 W - [conn196] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:00.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.979+0000 I - [conn196] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"5617
48F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:00.979+0000 D1 COMMAND [conn196] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578563, 1), signature: { hash: BinData(0, 2A0BBF5A114B4F2CAB0EF4784300FF5254D97052), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:00.979+0000 D1 - [conn196] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:00.979+0000 W - [conn196] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:00.988+0000 I COMMAND [conn197] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0FB04B3A73468DCEAB3E65E8617977C014F37989), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:30:00.988+0000 D1 - [conn197] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:00.988+0000 W - [conn197] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:00.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:00.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:00.999+0000 I - [conn196] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:00.999+0000 W COMMAND [conn196] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:00.999+0000 I COMMAND [conn196] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578563, 1), signature: { hash: BinData(0, 2A0BBF5A114B4F2CAB0EF4784300FF5254D97052), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:30:01.000+0000 D2 NETWORK [conn196] Session from 10.108.2.58:52098 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:01.000+0000 I NETWORK [conn196] end connection 10.108.2.58:52098 (86 connections now open) 2019-09-04T06:30:01.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:01.016+0000 I - [conn197] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:01.016+0000 D1 COMMAND [conn197] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0FB04B3A73468DCEAB3E65E8617977C014F37989), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:01.016+0000 D1 - [conn197] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:01.016+0000 W - [conn197] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:01.036+0000 I - [conn197] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5
mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:01.036+0000 W COMMAND [conn197] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:01.036+0000 I COMMAND [conn197] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578565, 1), signature: { hash: BinData(0, 0FB04B3A73468DCEAB3E65E8617977C014F37989), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30039ms 2019-09-04T06:30:01.036+0000 D2 NETWORK [conn197] Session from 10.108.2.46:40942 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:01.036+0000 I NETWORK [conn197] end connection 10.108.2.46:40942 (85 connections now open) 2019-09-04T06:30:01.038+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.038+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.060+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:01.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, EC799971AA75E5E45E080BC0A28BCCC7919B4E15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:01.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:01.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, EC799971AA75E5E45E080BC0A28BCCC7919B4E15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:01.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, EC799971AA75E5E45E080BC0A28BCCC7919B4E15), keyId: 6727891476899954718 } }, $db: "admin" } 
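
Each "----- BEGIN BACKTRACE -----" block above encodes a frame as a module base address ("b") and an offset into that module ("o"); the bare pointers printed alongside (for example 0x56174b707c81) are simply base + offset. A minimal Python sketch that verifies this against the first frame of the traces above; the addr2line hint is an assumption about a position-independent mongod binary with debug symbols and is not taken from this log:

    # Reconstruct a raw frame address from a mongod backtrace entry.
    base = int("561748F88000", 16)   # "b": load address of the mongod image
    offset = int("277FC81", 16)      # "o": offset of mongo::printStackTrace(std::ostream&)
    print(hex(base + offset))        # 0x56174b707c81, matching mongod(_ZN5mongo15printStackTraceERSo+0x41)

    # Frames that carry no "s" (symbol) field, such as {"o": "10F5E9C"}, can be
    # resolved offline against a symbolized binary, e.g.:
    #   addr2line -e /usr/bin/mongod -f -C 0x10F5E9C

The offsets stay stable across restarts even though ASLR moves the base, which is why the JSON form records the two values separately.
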
2019-09-04T06:30:01.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), opTime: { ts: Timestamp(1567578600, 3), t: 1 }, wallTime: new Date(1567578600117) } 2019-09-04T06:30:01.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, EC799971AA75E5E45E080BC0A28BCCC7919B4E15), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.121+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:01.121+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:01.121+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578600, 3) 2019-09-04T06:30:01.121+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7754 2019-09-04T06:30:01.121+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:01.122+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:01.122+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7754 2019-09-04T06:30:01.122+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7757 2019-09-04T06:30:01.122+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7757 2019-09-04T06:30:01.122+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578600, 3), t: 1 }({ ts: Timestamp(1567578600, 3), t: 1 }) 2019-09-04T06:30:01.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { 
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.161+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:01.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:01.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:01.261+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:01.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.342+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.342+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.361+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:01.461+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:01.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.538+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.538+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
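
The MaxTimeMSExpired entries earlier in this log come from a find on config.shards sent with readConcern { level: "majority" } and maxTimeMS: 30000, which spent the full 30 s waiting for the read concern (and then for the global lock) before giving up. A hedged PyMongo sketch of the same read pattern; the connection string is an assumption built from this node's host and port, and the afterOpTime field seen in the log is injected by mongos internally rather than set by a driver:

    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    # Hypothetical connection string; host/port taken from this config server's log.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

    shards = client["config"].get_collection(
        "shards", read_concern=ReadConcern("majority"))
    try:
        # max_time_ms mirrors the maxTimeMS: 30000 in the logged find command.
        docs = list(shards.find({}, max_time_ms=30000))
    except ExecutionTimeout:
        # Raised when the server answers MaxTimeMSExpired, as in the entries above.
        print("operation exceeded time limit")
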
2019-09-04T06:30:01.561+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:01.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.661+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:01.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.761+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:01.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.842+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.842+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:01.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:01.861+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
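
Every completed command entry in this format ends with its duration: "protocol:op_msg 0ms" for the isMaster polls, "protocol:op_msg 30027ms" and "30039ms" for the two timed-out finds. A small sketch for extracting slow commands from a 4.2-style plain-text log; the log path matches this server's systemLog settings, while the threshold is a hypothetical cutoff:

    import re

    LOG = "/var/log/mongodb/mongod.log"   # path from this server's systemLog options
    THRESHOLD_MS = 1000                   # hypothetical slow-op cutoff

    # Matches COMMAND completion lines like:
    #   ... I COMMAND [conn197] command config.$cmd command: find { ... } ... 30039ms
    pattern = re.compile(r"I\s+COMMAND\s+\[(\w+)\]\s+command\s+(\S+).*\s(\d+)ms$")

    with open(LOG) as fh:
        for line in fh:
            m = pattern.search(line.rstrip())
            if m and int(m.group(3)) >= THRESHOLD_MS:
                print(m.group(1), m.group(2), m.group(3) + "ms")
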
2019-09-04T06:30:01.962+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:01.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:01.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:01.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:01.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:01.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:01.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:02.038+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.038+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.062+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:02.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.122+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:02.122+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:02.122+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578600, 3)
2019-09-04T06:30:02.122+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7793
2019-09-04T06:30:02.122+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:02.122+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:02.122+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7793
2019-09-04T06:30:02.122+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7796
2019-09-04T06:30:02.123+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7796
2019-09-04T06:30:02.123+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578600, 3), t: 1 }({ ts: Timestamp(1567578600, 3), t: 1 })
2019-09-04T06:30:02.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.162+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:02.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, EC799971AA75E5E45E080BC0A28BCCC7919B4E15), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:02.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:30:02.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, EC799971AA75E5E45E080BC0A28BCCC7919B4E15), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:02.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, EC799971AA75E5E45E080BC0A28BCCC7919B4E15), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:02.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), opTime: { ts: Timestamp(1567578600, 3), t: 1 }, wallTime: new Date(1567578600117) }
2019-09-04T06:30:02.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, EC799971AA75E5E45E080BC0A28BCCC7919B4E15), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:02.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:02.262+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:02.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.342+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.342+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.362+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:02.462+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:02.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.489+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.538+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.538+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.562+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:02.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.663+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:02.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.763+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:02.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:02.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:02.838+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:30:01.061+0000
2019-09-04T06:30:02.838+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:30:02.232+0000
2019-09-04T06:30:02.838+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:30:01.061+0000
2019-09-04T06:30:02.838+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:30:11.061+0000
2019-09-04T06:30:02.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:30:02.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 520) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:02.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 520 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:12.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:02.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:30:02.838+0000 D2 ASIO [Replication] Request 520 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), opTime: { ts: Timestamp(1567578600, 3), t: 1 }, wallTime: new Date(1567578600117), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) }
2019-09-04T06:30:02.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), opTime: { ts: Timestamp(1567578600, 3), t: 1 }, wallTime: new Date(1567578600117), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:02.838+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:30:02.838+0000 D2 REPL_HB [replexec-1] Received response to heartbeat (requestId: 520) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), opTime: { ts: Timestamp(1567578600, 3), t: 1 }, wallTime: new Date(1567578600117), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) }
2019-09-04T06:30:02.838+0000 D3 REPL [replexec-1] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:30:02.838+0000 D2 REPL_HB [replexec-1] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:04.838Z
2019-09-04T06:30:02.838+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:30:02.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:02.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 521) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:02.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 521 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:12.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:02.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:30:02.839+0000 D2 ASIO [Replication] Request 521 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), opTime: { ts: Timestamp(1567578600, 3), t: 1 }, wallTime: new Date(1567578600117), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) }
2019-09-04T06:30:02.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), opTime: { ts: Timestamp(1567578600, 3), t: 1 }, wallTime: new Date(1567578600117), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) } target: cmodb802.togewa.com:27019
2019-09-04T06:30:02.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:02.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 521) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), opTime: { ts: Timestamp(1567578600, 3), t: 1 }, wallTime: new Date(1567578600117), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578600, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578600, 3) }
2019-09-04T06:30:02.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:30:02.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:30:12.176+0000
2019-09-04T06:30:02.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:30:13.280+0000
2019-09-04T06:30:02.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:30:02.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:04.839Z
2019-09-04T06:30:02.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:02.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:30:02.839+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:30:02.842+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.842+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.863+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:02.936+0000 D2 ASIO [RS] Request 517 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578602, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578602928), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59eaac9313827bca4429'), when: new Date(1567578602928), who: "ConfigServer:conn9511" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpApplied: { ts: Timestamp(1567578602, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578602, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 1) }
2019-09-04T06:30:02.936+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578602, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578602928), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59eaac9313827bca4429'), when: new Date(1567578602928), who: "ConfigServer:conn9511" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpApplied: { ts: Timestamp(1567578602, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578602, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:02.937+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:02.937+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578602, 1) and ending at ts: Timestamp(1567578602, 1)
2019-09-04T06:30:02.937+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:13.280+0000
2019-09-04T06:30:02.937+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:14.396+0000
2019-09-04T06:30:02.937+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:02.937+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:30:02.937+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578602, 1), t: 1 }
2019-09-04T06:30:02.937+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:02.937+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:02.937+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578600, 3)
2019-09-04T06:30:02.937+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7827
2019-09-04T06:30:02.937+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:02.937+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:02.937+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7827
2019-09-04T06:30:02.937+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:02.937+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:02.937+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:30:02.937+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578600, 3)
2019-09-04T06:30:02.937+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7830
2019-09-04T06:30:02.937+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578602, 1) }
2019-09-04T06:30:02.937+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:02.937+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:02.937+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7830
2019-09-04T06:30:02.937+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7797
2019-09-04T06:30:02.937+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7797
2019-09-04T06:30:02.937+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7833
2019-09-04T06:30:02.937+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7833
2019-09-04T06:30:02.937+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:02.937+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 7835
2019-09-04T06:30:02.937+0000 D4 STORAGE [repl-writer-worker-8] inserting record with timestamp Timestamp(1567578602, 1)
2019-09-04T06:30:02.937+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578602, 1)
2019-09-04T06:30:02.937+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 7835
2019-09-04T06:30:02.937+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:02.937+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:30:02.937+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7834
2019-09-04T06:30:02.937+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7834
2019-09-04T06:30:02.937+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7837
2019-09-04T06:30:02.937+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7837
2019-09-04T06:30:02.937+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578602, 1), t: 1 }({ ts: Timestamp(1567578602, 1), t: 1 })
2019-09-04T06:30:02.937+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578602, 1)
2019-09-04T06:30:02.937+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7838
2019-09-04T06:30:02.937+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578602, 1) } } ] } sort: {} projection: {}
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578602, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578602, 1) || First: notFirst: full path: ts
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578602, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578602, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578602, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:02.937+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578602, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:02.938+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7838 2019-09-04T06:30:02.938+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:02.938+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:02.938+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578602, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578602928), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59eaac9313827bca4429'), when: new Date(1567578602928), who: "ConfigServer:conn9511" } } }, oplog application mode: Secondary 2019-09-04T06:30:02.938+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578602, 1) 2019-09-04T06:30:02.938+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 7840 2019-09-04T06:30:02.938+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "config" } 2019-09-04T06:30:02.938+0000 D2 STORAGE [repl-writer-worker-4] WiredTigerSizeStorer::store Marking table:config/collection/42--6194257481163143499 dirty, numRecords: 2, dataSize: 307, use_count: 3 2019-09-04T06:30:02.938+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:30:02.938+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 7840 2019-09-04T06:30:02.938+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:02.938+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578602, 1), t: 1 }({ ts: Timestamp(1567578602, 1), t: 1 }) 2019-09-04T06:30:02.938+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578602, 1) 2019-09-04T06:30:02.938+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7839 2019-09-04T06:30:02.938+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:02.938+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:02.938+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:30:02.938+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:02.938+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:02.938+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:02.938+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7839 2019-09-04T06:30:02.938+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578602, 1) 2019-09-04T06:30:02.938+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:02.938+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7843 2019-09-04T06:30:02.938+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578602, 1), t: 1 }, appliedWallTime: new Date(1567578602928), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:02.938+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7843 2019-09-04T06:30:02.938+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578602, 1), t: 1 }({ ts: Timestamp(1567578602, 1), t: 1 }) 2019-09-04T06:30:02.938+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 522 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:32.938+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578602, 1), t: 1 }, appliedWallTime: new Date(1567578602928), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:02.938+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.938+0000 2019-09-04T06:30:02.938+0000 D2 ASIO [RS] Request 522 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578602, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 1) } 2019-09-04T06:30:02.938+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578600, 3), t: 1 }, lastCommittedWall: new Date(1567578600117), lastOpVisible: { ts: Timestamp(1567578600, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578600, 3), $clusterTime: { clusterTime: Timestamp(1567578602, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:02.938+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:02.938+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.938+0000 2019-09-04T06:30:02.939+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578602, 1), t: 1 } 2019-09-04T06:30:02.939+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 523 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:12.939+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578600, 3), t: 1 } } 2019-09-04T06:30:02.939+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.938+0000 2019-09-04T06:30:02.941+0000 D2 ASIO [RS] Request 523 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 1), t: 1 }, lastCommittedWall: new Date(1567578602928), lastOpVisible: { ts: Timestamp(1567578602, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 1), t: 1 }, lastCommittedWall: new Date(1567578602928), lastOpApplied: { ts: Timestamp(1567578602, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 1), $clusterTime: { clusterTime: Timestamp(1567578602, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 1) } 2019-09-04T06:30:02.941+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 1), t: 1 }, lastCommittedWall: new Date(1567578602928), lastOpVisible: { ts: Timestamp(1567578602, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 1), t: 1 }, lastCommittedWall: new 
Date(1567578602928), lastOpApplied: { ts: Timestamp(1567578602, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 1), $clusterTime: { clusterTime: Timestamp(1567578602, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:02.941+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:02.941+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:30:02.941+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.941+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578597, 1) 2019-09-04T06:30:02.942+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:14.396+0000 2019-09-04T06:30:02.942+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:13.341+0000 2019-09-04T06:30:02.942+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:02.942+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:02.942+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 524 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:12.942+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578602, 1), t: 1 } } 2019-09-04T06:30:02.942+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.938+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: 
Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: 
Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578602, 1), t: 1 }, 2019-09-04T06:30:02.928+0000 2019-09-04T06:30:02.942+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000 2019-09-04T06:30:02.944+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:30:02.944+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:02.944+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 1), t: 1 }, durableWallTime: new Date(1567578602928), appliedOpTime: { ts: Timestamp(1567578602, 1), t: 1 }, appliedWallTime: new Date(1567578602928), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 1), t: 1 }, lastCommittedWall: new Date(1567578602928), lastOpVisible: { ts: Timestamp(1567578602, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:02.945+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 525 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:32.944+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 1), t: 1 }, durableWallTime: new Date(1567578602928), appliedOpTime: { ts: Timestamp(1567578602, 1), t: 1 }, appliedWallTime: new Date(1567578602928), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: 
Timestamp(1567578602, 1), t: 1 }, lastCommittedWall: new Date(1567578602928), lastOpVisible: { ts: Timestamp(1567578602, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:02.945+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.938+0000 2019-09-04T06:30:02.945+0000 D2 ASIO [RS] Request 525 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 1), t: 1 }, lastCommittedWall: new Date(1567578602928), lastOpVisible: { ts: Timestamp(1567578602, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 1), $clusterTime: { clusterTime: Timestamp(1567578602, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 1) } 2019-09-04T06:30:02.945+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 1), t: 1 }, lastCommittedWall: new Date(1567578602928), lastOpVisible: { ts: Timestamp(1567578602, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 1), $clusterTime: { clusterTime: Timestamp(1567578602, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:02.945+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:02.945+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.938+0000 2019-09-04T06:30:02.947+0000 D2 ASIO [RS] Request 524 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578602, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578602942), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59eaac9313827bca4430'), when: new Date(1567578602942), who: "ConfigServer:conn9511" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 1), t: 1 }, lastCommittedWall: new Date(1567578602928), lastOpVisible: { ts: Timestamp(1567578602, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 1), t: 1 }, lastCommittedWall: new Date(1567578602928), lastOpApplied: { ts: Timestamp(1567578602, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 1), $clusterTime: { clusterTime: Timestamp(1567578602, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 2) } 2019-09-04T06:30:02.947+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: 
Timestamp(1567578602, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578602942), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59eaac9313827bca4430'), when: new Date(1567578602942), who: "ConfigServer:conn9511" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 1), t: 1 }, lastCommittedWall: new Date(1567578602928), lastOpVisible: { ts: Timestamp(1567578602, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 1), t: 1 }, lastCommittedWall: new Date(1567578602928), lastOpApplied: { ts: Timestamp(1567578602, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 1), $clusterTime: { clusterTime: Timestamp(1567578602, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:02.947+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:02.947+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578602, 2) and ending at ts: Timestamp(1567578602, 2) 2019-09-04T06:30:02.947+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:13.341+0000 2019-09-04T06:30:02.947+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:13.138+0000 2019-09-04T06:30:02.947+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:30:02.947+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:02.947+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578602, 2), t: 1 } 2019-09-04T06:30:02.948+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:02.948+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:02.948+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578602, 1) 2019-09-04T06:30:02.948+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7847 2019-09-04T06:30:02.948+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:02.948+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:02.948+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7847 2019-09-04T06:30:02.948+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:02.948+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:30:02.948+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ 
RecordId(10) 2019-09-04T06:30:02.948+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578602, 2) } 2019-09-04T06:30:02.948+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578602, 1) 2019-09-04T06:30:02.948+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7850 2019-09-04T06:30:02.948+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:02.948+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:02.948+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7850 2019-09-04T06:30:02.948+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7844 2019-09-04T06:30:02.948+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7844 2019-09-04T06:30:02.948+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7853 2019-09-04T06:30:02.948+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7853 2019-09-04T06:30:02.948+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:02.948+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 7855 2019-09-04T06:30:02.948+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578602, 2) 2019-09-04T06:30:02.948+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578602, 2) 2019-09-04T06:30:02.948+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 7855 2019-09-04T06:30:02.948+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:02.948+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:30:02.948+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7854 2019-09-04T06:30:02.948+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7854 2019-09-04T06:30:02.948+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7857 2019-09-04T06:30:02.948+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7857 2019-09-04T06:30:02.948+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578602, 2), t: 1 }({ ts: Timestamp(1567578602, 2), t: 1 }) 2019-09-04T06:30:02.948+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578602, 2) 2019-09-04T06:30:02.948+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7858 2019-09-04T06:30:02.948+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578602, 2) } } ] } sort: {} projection: {} 2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Beginning planning... 
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578602, 2) || First: notFirst: full path: ts
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578602, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578602, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578602, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:02.948+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578602, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:02.948+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7858
2019-09-04T06:30:02.948+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:02.948+0000 D3 STORAGE [repl-writer-worker-2] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:02.948+0000 D3 REPL [repl-writer-worker-2] applying op: { ts: Timestamp(1567578602, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578602942), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59eaac9313827bca4430'), when: new Date(1567578602942), who: "ConfigServer:conn9511" } } }, oplog application mode: Secondary
2019-09-04T06:30:02.948+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578602, 2)
2019-09-04T06:30:02.948+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 7860
2019-09-04T06:30:02.948+0000 D2 QUERY [repl-writer-worker-2] Using idhack: { _id: "config.system.sessions" }
2019-09-04T06:30:02.948+0000 D4 WRITE [repl-writer-worker-2] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:30:02.948+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 7860
2019-09-04T06:30:02.948+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:02.948+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578602, 2), t: 1 }({ ts: Timestamp(1567578602, 2), t: 1 })
2019-09-04T06:30:02.949+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578602, 2)
2019-09-04T06:30:02.949+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7859
2019-09-04T06:30:02.949+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:30:02.949+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:02.949+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:30:02.949+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:02.949+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:02.949+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:02.949+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7859
2019-09-04T06:30:02.949+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578602, 2)
2019-09-04T06:30:02.949+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7864
2019-09-04T06:30:02.949+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7864
2019-09-04T06:30:02.949+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578602, 2), t: 1 }({ ts: Timestamp(1567578602, 2), t: 1 })
2019-09-04T06:30:02.949+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:02.949+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 1), t: 1 }, durableWallTime: new Date(1567578602928), appliedOpTime: { ts: Timestamp(1567578602, 2), t: 1 }, appliedWallTime: new Date(1567578602942), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 1), t: 1 }, lastCommittedWall: new Date(1567578602928), lastOpVisible: { ts: Timestamp(1567578602, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:02.949+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 526 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:32.949+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 1), t: 1 }, durableWallTime: new Date(1567578602928), appliedOpTime: { ts: Timestamp(1567578602, 2), t: 1 }, appliedWallTime: new Date(1567578602942), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 1), t: 1 }, lastCommittedWall: new Date(1567578602928), lastOpVisible: { ts: Timestamp(1567578602, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:02.949+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.949+0000
2019-09-04T06:30:02.949+0000 D2 ASIO [RS] Request 526 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 1), t: 1 }, lastCommittedWall: new Date(1567578602928), lastOpVisible: { ts: Timestamp(1567578602, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 1), $clusterTime: { clusterTime: Timestamp(1567578602, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 2) }
2019-09-04T06:30:02.949+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 1), t: 1 }, lastCommittedWall: new Date(1567578602928), lastOpVisible: { ts: Timestamp(1567578602, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 1), $clusterTime: { clusterTime: Timestamp(1567578602, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:02.949+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:02.949+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.949+0000
2019-09-04T06:30:02.950+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578602, 2), t: 1 }
2019-09-04T06:30:02.950+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 527 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:12.950+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578602, 1), t: 1 } }
2019-09-04T06:30:02.950+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.949+0000
2019-09-04T06:30:02.954+0000 D2 ASIO [RS] Request 527 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 2), t: 1 }, lastCommittedWall: new Date(1567578602942), lastOpVisible: { ts: Timestamp(1567578602, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 2), t: 1 }, lastCommittedWall: new Date(1567578602942), lastOpApplied: { ts: Timestamp(1567578602, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 2), $clusterTime: { clusterTime: Timestamp(1567578602, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 2) }
2019-09-04T06:30:02.954+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 2), t: 1 }, lastCommittedWall: new Date(1567578602942), lastOpVisible: { ts: Timestamp(1567578602, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 2), t: 1 }, lastCommittedWall: new Date(1567578602942), lastOpApplied: { ts: Timestamp(1567578602, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 2), $clusterTime: { clusterTime: Timestamp(1567578602, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:02.954+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:02.954+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:02.954+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578597, 2)
2019-09-04T06:30:02.954+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:13.138+0000
2019-09-04T06:30:02.954+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:13.649+0000
2019-09-04T06:30:02.954+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:02.954+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 528 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:12.954+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578602, 2), t: 1 } }
2019-09-04T06:30:02.954+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000
2019-09-04T06:30:02.954+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.949+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578602, 2), t: 1 }, 2019-09-04T06:30:02.942+0000
2019-09-04T06:30:02.954+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000
2019-09-04T06:30:02.968+0000 D2 ASIO [RS] Request 528 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578602, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578602955), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 2), t: 1 }, lastCommittedWall: new Date(1567578602942), lastOpVisible: { ts: Timestamp(1567578602, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 2), t: 1 }, lastCommittedWall: new Date(1567578602942), lastOpApplied: { ts: Timestamp(1567578602, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 2), $clusterTime: { clusterTime: Timestamp(1567578602, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 3) }
2019-09-04T06:30:02.968+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578602, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578602955), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 2), t: 1 }, lastCommittedWall: new Date(1567578602942), lastOpVisible: { ts: Timestamp(1567578602, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 2), t: 1 }, lastCommittedWall: new Date(1567578602942), lastOpApplied: { ts: Timestamp(1567578602, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 2), $clusterTime: { clusterTime: Timestamp(1567578602, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:02.968+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:02.968+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578602, 3) and ending at ts: Timestamp(1567578602, 3)
2019-09-04T06:30:02.968+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:13.649+0000
2019-09-04T06:30:02.968+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:13.471+0000
2019-09-04T06:30:02.968+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:02.968+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:30:02.968+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:02.968+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:02.968+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578602, 2)
2019-09-04T06:30:02.968+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7867
2019-09-04T06:30:02.968+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:02.968+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:02.968+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7867
2019-09-04T06:30:02.968+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:02.968+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:30:02.968+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:02.968+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578602, 2)
2019-09-04T06:30:02.968+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578602, 3) }
2019-09-04T06:30:02.968+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7870
2019-09-04T06:30:02.968+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:02.968+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:02.968+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7870
2019-09-04T06:30:02.968+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578602, 3), t: 1 }
2019-09-04T06:30:02.968+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7865
2019-09-04T06:30:02.968+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7865
2019-09-04T06:30:02.968+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7873
2019-09-04T06:30:02.968+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7873
2019-09-04T06:30:02.968+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:02.968+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 7875
2019-09-04T06:30:02.968+0000 D4 STORAGE [repl-writer-worker-0] inserting record with timestamp Timestamp(1567578602, 3)
2019-09-04T06:30:02.968+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578602, 3)
2019-09-04T06:30:02.968+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 7875
2019-09-04T06:30:02.968+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:02.969+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:30:02.969+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7874
2019-09-04T06:30:02.969+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7874
2019-09-04T06:30:02.969+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7877
2019-09-04T06:30:02.969+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7877
2019-09-04T06:30:02.969+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578602, 3), t: 1 }({ ts: Timestamp(1567578602, 3), t: 1 })
2019-09-04T06:30:02.969+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578602, 3)
2019-09-04T06:30:02.969+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7878
2019-09-04T06:30:02.969+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578602, 3) } } ] } sort: {} projection: {}
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578602, 3) Sort: {} Proj: {} =============================
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578602, 3) || First: notFirst: full path: ts
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578602, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578602, 3) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578602, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578602, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:02.969+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7878
2019-09-04T06:30:02.969+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:02.969+0000 D3 STORAGE [repl-writer-worker-15] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:02.969+0000 D3 REPL [repl-writer-worker-15] applying op: { ts: Timestamp(1567578602, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578602955), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary
2019-09-04T06:30:02.969+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578602, 3)
2019-09-04T06:30:02.969+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 7880
2019-09-04T06:30:02.969+0000 D2 QUERY [repl-writer-worker-15] Using idhack: { _id: "config.system.sessions" }
2019-09-04T06:30:02.969+0000 D4 WRITE [repl-writer-worker-15] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:30:02.969+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 7880
2019-09-04T06:30:02.969+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:02.969+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578602, 3), t: 1 }({ ts: Timestamp(1567578602, 3), t: 1 })
2019-09-04T06:30:02.969+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578602, 3)
2019-09-04T06:30:02.969+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7879
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:02.969+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:02.969+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:02.969+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7879
2019-09-04T06:30:02.969+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578602, 3)
2019-09-04T06:30:02.969+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:02.969+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7883
2019-09-04T06:30:02.969+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 1), t: 1 }, durableWallTime: new Date(1567578602928), appliedOpTime: { ts: Timestamp(1567578602, 3), t: 1 }, appliedWallTime: new Date(1567578602955), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 2), t: 1 }, lastCommittedWall: new Date(1567578602942), lastOpVisible: { ts: Timestamp(1567578602, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:02.969+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7883
2019-09-04T06:30:02.969+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 529 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:32.969+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 1), t: 1 }, durableWallTime: new Date(1567578602928), appliedOpTime: { ts: Timestamp(1567578602, 3), t: 1 }, appliedWallTime: new Date(1567578602955), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 2), t: 1 }, lastCommittedWall: new Date(1567578602942), lastOpVisible: { ts: Timestamp(1567578602, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:02.969+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578602, 3), t: 1 }({ ts: Timestamp(1567578602, 3), t: 1 })
2019-09-04T06:30:02.969+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.969+0000
2019-09-04T06:30:02.970+0000 D2 ASIO [RS] Request 529 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 2), t: 1 }, lastCommittedWall: new Date(1567578602942), lastOpVisible: { ts: Timestamp(1567578602, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 2), $clusterTime: { clusterTime: Timestamp(1567578602, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 3) }
2019-09-04T06:30:02.970+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 2), t: 1 }, lastCommittedWall: new Date(1567578602942), lastOpVisible: { ts: Timestamp(1567578602, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 2), $clusterTime: { clusterTime: Timestamp(1567578602, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:02.970+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:02.970+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.970+0000
2019-09-04T06:30:02.970+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578602, 3), t: 1 }
2019-09-04T06:30:02.970+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 530 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:12.970+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578602, 2), t: 1 } }
2019-09-04T06:30:02.970+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.970+0000
2019-09-04T06:30:02.976+0000 D2 ASIO [RS] Request 530 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpApplied: { ts: Timestamp(1567578602, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 3), $clusterTime: { clusterTime: Timestamp(1567578602, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 3) }
2019-09-04T06:30:02.976+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpApplied: { ts: Timestamp(1567578602, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 3), $clusterTime: { clusterTime: Timestamp(1567578602, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:02.976+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:02.976+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:02.976+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578597, 3)
2019-09-04T06:30:02.976+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:13.471+0000
2019-09-04T06:30:02.976+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:14.428+0000
2019-09-04T06:30:02.976+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:30:02.976+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 531 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:12.976+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578602, 3), t: 1 } }
2019-09-04T06:30:02.976+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000
2019-09-04T06:30:02.976+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.970+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000
2019-09-04T06:30:02.976+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000
2019-09-04T06:30:02.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.976+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.976+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:02.977+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.977+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000
2019-09-04T06:30:02.977+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.977+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000
2019-09-04T06:30:02.977+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.977+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000
2019-09-04T06:30:02.977+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.977+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000
2019-09-04T06:30:02.977+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578602, 3), t: 1 }, 2019-09-04T06:30:02.955+0000
2019-09-04T06:30:02.977+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:30:02.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:02.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:02.990+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:30:02.990+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:02.990+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 2), t: 1 }, durableWallTime: new Date(1567578602942), appliedOpTime: { ts: Timestamp(1567578602, 3), t: 1 }, appliedWallTime: new Date(1567578602955), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:02.990+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 532 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:32.990+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 2), t: 1 }, durableWallTime: new Date(1567578602942), appliedOpTime: { ts: Timestamp(1567578602, 3), t: 1 }, appliedWallTime: new Date(1567578602955), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:02.990+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.970+0000
2019-09-04T06:30:02.990+0000 D2 ASIO [RS] Request 532 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 3), $clusterTime: { clusterTime: Timestamp(1567578602, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 3) }
2019-09-04T06:30:02.991+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 3), $clusterTime: { clusterTime: Timestamp(1567578602, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:02.991+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:02.991+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.970+0000
2019-09-04T06:30:02.991+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:30:02.991+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:02.991+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 3), t: 1 }, durableWallTime: new Date(1567578602955), appliedOpTime: { ts: Timestamp(1567578602, 3), t: 1 }, appliedWallTime: new Date(1567578602955), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:02.991+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 533 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:32.991+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 3), t: 1 }, durableWallTime: new Date(1567578602955), appliedOpTime: { ts: Timestamp(1567578602, 3), t: 1 }, appliedWallTime: new Date(1567578602955), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:02.991+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.970+0000 2019-09-04T06:30:02.992+0000 D2 ASIO [RS] Request 533 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 3), $clusterTime: { clusterTime: Timestamp(1567578602, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 3) } 2019-09-04T06:30:02.992+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 3), $clusterTime: { clusterTime: Timestamp(1567578602, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:02.992+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:02.992+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.970+0000 2019-09-04T06:30:02.994+0000 D2 ASIO [RS] Request 531 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578602, 4), t: 1, h: 0, v: 2, 
op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578602976), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpApplied: { ts: Timestamp(1567578602, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 3), $clusterTime: { clusterTime: Timestamp(1567578602, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 4) } 2019-09-04T06:30:02.994+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578602, 4), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578602976), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpApplied: { ts: Timestamp(1567578602, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 3), $clusterTime: { clusterTime: Timestamp(1567578602, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 4) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:02.994+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:02.994+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578602, 4) and ending at ts: Timestamp(1567578602, 4) 2019-09-04T06:30:02.994+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:14.428+0000 2019-09-04T06:30:02.994+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:14.205+0000 2019-09-04T06:30:02.994+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:02.994+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578602, 4), t: 1 } 2019-09-04T06:30:02.994+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:02.994+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:02.994+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578602, 3) 2019-09-04T06:30:02.994+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7889 
2019-09-04T06:30:02.994+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:02.994+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:02.994+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7889 2019-09-04T06:30:02.994+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:02.994+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:30:02.994+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:02.994+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578602, 4) } 2019-09-04T06:30:02.994+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578602, 3) 2019-09-04T06:30:02.994+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7892 2019-09-04T06:30:02.994+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:02.994+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:02.994+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7892 2019-09-04T06:30:02.994+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7884 2019-09-04T06:30:02.994+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:02.995+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7884 2019-09-04T06:30:02.995+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7895 2019-09-04T06:30:02.995+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7895 2019-09-04T06:30:02.995+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:02.995+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 7897 2019-09-04T06:30:02.995+0000 D4 STORAGE [repl-writer-worker-1] inserting record with timestamp Timestamp(1567578602, 4) 2019-09-04T06:30:02.995+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578602, 4) 2019-09-04T06:30:02.995+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 7897 2019-09-04T06:30:02.995+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:02.995+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:30:02.995+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7896 2019-09-04T06:30:02.995+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7896 2019-09-04T06:30:02.995+0000 D3 STORAGE 
[rsSync-0] WT begin_transaction for snapshot id 7899 2019-09-04T06:30:02.995+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7899 2019-09-04T06:30:02.995+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578602, 4), t: 1 }({ ts: Timestamp(1567578602, 4), t: 1 }) 2019-09-04T06:30:02.995+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578602, 4) 2019-09-04T06:30:02.995+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7900 2019-09-04T06:30:02.995+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578602, 4) } } ] } sort: {} projection: {} 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578602, 4) Sort: {} Proj: {} ============================= 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578602, 4) || First: notFirst: full path: ts 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578602, 4) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578602, 4) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578602, 4) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578602, 4) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:02.995+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7900 2019-09-04T06:30:02.995+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:02.995+0000 D3 STORAGE [repl-writer-worker-5] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:02.995+0000 D3 REPL [repl-writer-worker-5] applying op: { ts: Timestamp(1567578602, 4), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578602976), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary 2019-09-04T06:30:02.995+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578602, 4) 2019-09-04T06:30:02.995+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 7902 2019-09-04T06:30:02.995+0000 D2 QUERY [repl-writer-worker-5] Using idhack: { _id: "config" } 2019-09-04T06:30:02.995+0000 D4 WRITE [repl-writer-worker-5] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:30:02.995+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 7902 2019-09-04T06:30:02.995+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:02.995+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578602, 4), t: 1 }({ ts: Timestamp(1567578602, 4), t: 1 }) 2019-09-04T06:30:02.995+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578602, 4) 2019-09-04T06:30:02.995+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7901 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
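
The burst of D5 QUERY lines here is the subplanner handling the $or filter that the minvalid bookkeeping runs against local.replset.minvalid; since that collection carries only the _id index, every branch rates no usable index and falls back to a collection scan. A small sketch reproducing the same filter and checking the plan with explain(), assuming direct access to this node (cmodb803.togewa.com:27019, from the log) and read access to the local database:

    # Re-run the minvalid predicate from the log and inspect the winning plan.
    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("cmodb803.togewa.com", 27019)
    minvalid = client.local["replset.minvalid"]

    # "older term, or same term with a smaller timestamp"
    flt = {"$or": [{"t": {"$lt": 1}},
                   {"t": 1, "ts": {"$lt": Timestamp(1567578602, 4)}}]}
    plan = minvalid.find(flt).explain()
    # With only { _id: 1 } available, both $or branches plan as COLLSCAN,
    # which is exactly what the planner reports in the log.
    print(plan["queryPlanner"]["winningPlan"])
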
2019-09-04T06:30:02.995+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:02.995+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:02.995+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7901 2019-09-04T06:30:02.995+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578602, 4) 2019-09-04T06:30:02.995+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7905 2019-09-04T06:30:02.995+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7905 2019-09-04T06:30:02.995+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578602, 4), t: 1 }({ ts: Timestamp(1567578602, 4), t: 1 }) 2019-09-04T06:30:02.995+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:02.995+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 3), t: 1 }, durableWallTime: new Date(1567578602955), appliedOpTime: { ts: Timestamp(1567578602, 4), t: 1 }, appliedWallTime: new Date(1567578602976), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:02.996+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 534 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:32.995+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 3), t: 1 }, durableWallTime: new Date(1567578602955), appliedOpTime: { ts: Timestamp(1567578602, 4), t: 1 }, appliedWallTime: new Date(1567578602976), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:02.996+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.995+0000 2019-09-04T06:30:02.996+0000 D2 ASIO [RS] Request 534 finished with response: { ok: 
1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 3), $clusterTime: { clusterTime: Timestamp(1567578602, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 4) } 2019-09-04T06:30:02.996+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 3), $clusterTime: { clusterTime: Timestamp(1567578602, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 4) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:02.996+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:02.996+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.996+0000 2019-09-04T06:30:02.996+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:30:02.996+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:02.996+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 4), t: 1 }, durableWallTime: new Date(1567578602976), appliedOpTime: { ts: Timestamp(1567578602, 4), t: 1 }, appliedWallTime: new Date(1567578602976), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:02.996+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 535 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:32.996+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 4), t: 1 }, durableWallTime: new Date(1567578602976), appliedOpTime: { ts: Timestamp(1567578602, 4), t: 1 }, 
appliedWallTime: new Date(1567578602976), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:02.996+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.996+0000 2019-09-04T06:30:02.996+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578602, 4), t: 1 } 2019-09-04T06:30:02.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:02.996+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 536 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:12.996+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578602, 3), t: 1 } } 2019-09-04T06:30:02.996+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.996+0000 2019-09-04T06:30:02.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:02.996+0000 D2 ASIO [RS] Request 535 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 3), $clusterTime: { clusterTime: Timestamp(1567578602, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 4) } 2019-09-04T06:30:02.997+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 3), t: 1 }, lastCommittedWall: new Date(1567578602955), lastOpVisible: { ts: Timestamp(1567578602, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 3), $clusterTime: { clusterTime: Timestamp(1567578602, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 4) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:02.997+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:02.997+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.996+0000 2019-09-04T06:30:02.997+0000 D2 ASIO [RS] Request 536 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: 
Timestamp(1567578602, 4), t: 1 }, lastCommittedWall: new Date(1567578602976), lastOpVisible: { ts: Timestamp(1567578602, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 4), t: 1 }, lastCommittedWall: new Date(1567578602976), lastOpApplied: { ts: Timestamp(1567578602, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 4), $clusterTime: { clusterTime: Timestamp(1567578602, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 4) } 2019-09-04T06:30:02.997+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 4), t: 1 }, lastCommittedWall: new Date(1567578602976), lastOpVisible: { ts: Timestamp(1567578602, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 4), t: 1 }, lastCommittedWall: new Date(1567578602976), lastOpApplied: { ts: Timestamp(1567578602, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 4), $clusterTime: { clusterTime: Timestamp(1567578602, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 4) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:02.997+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:02.997+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:30:02.997+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578597, 4) 2019-09-04T06:30:02.997+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:14.205+0000 2019-09-04T06:30:02.997+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:13.750+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000 2019-09-04T06:30:02.997+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 537 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:12.997+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578602, 4), t: 1 } } 2019-09-04T06:30:02.997+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:32.996+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn200] 
Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000 2019-09-04T06:30:02.997+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:02.997+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:30:02.997+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578602, 
4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.997+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000 2019-09-04T06:30:02.998+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.998+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000 2019-09-04T06:30:02.998+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.998+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:30:02.998+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.998+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000 2019-09-04T06:30:02.998+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.998+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:02.998+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.998+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000 2019-09-04T06:30:02.998+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578602, 4), t: 1 }, 2019-09-04T06:30:02.976+0000 2019-09-04T06:30:02.998+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000 2019-09-04T06:30:03.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:03.017+0000 D2 ASIO [RS] Request 537 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578602, 5), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578602998), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59eaac9313827bca444b'), when: new Date(1567578602998), who: "ConfigServer:conn10279" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 4), t: 1 }, lastCommittedWall: new Date(1567578602976), lastOpVisible: { ts: Timestamp(1567578602, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: 
Timestamp(1567578602, 4), t: 1 }, lastCommittedWall: new Date(1567578602976), lastOpApplied: { ts: Timestamp(1567578602, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 4), $clusterTime: { clusterTime: Timestamp(1567578602, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 5) } 2019-09-04T06:30:03.017+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578602, 5), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578602998), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59eaac9313827bca444b'), when: new Date(1567578602998), who: "ConfigServer:conn10279" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 4), t: 1 }, lastCommittedWall: new Date(1567578602976), lastOpVisible: { ts: Timestamp(1567578602, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 4), t: 1 }, lastCommittedWall: new Date(1567578602976), lastOpApplied: { ts: Timestamp(1567578602, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 4), $clusterTime: { clusterTime: Timestamp(1567578602, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 5) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:03.017+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:03.017+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578602, 5) and ending at ts: Timestamp(1567578602, 5) 2019-09-04T06:30:03.017+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:13.750+0000 2019-09-04T06:30:03.017+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:14.226+0000 2019-09-04T06:30:03.017+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:30:03.017+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:03.017+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578602, 5), t: 1 } 2019-09-04T06:30:03.017+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:03.017+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:03.017+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578602, 4) 2019-09-04T06:30:03.017+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7911 2019-09-04T06:30:03.017+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 
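
The op fetched in request 537 is the other half of the config.locks story: the update applied earlier in this section set state: 0 on the "config" document (releasing the distributed lock), and this one re-acquires it with state: 2 on behalf of ConfigServer:conn10279. A sketch of inspecting that lock document directly, on the assumption that state 0 means unlocked and state 2 means held (the transition these two oplog entries show); host name taken from the log, read access assumed:

    # Peek at the sharding distributed-lock document being updated here.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    lock = client.config["locks"].find_one({"_id": "config"})
    # After this batch is applied: state 2, who "ConfigServer:conn10279",
    # when 2019-09-04T06:30:02.998Z -- the fields from the $set above.
    print(lock["state"], lock.get("who"), lock.get("when"))
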
2019-09-04T06:30:03.017+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:03.017+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7911 2019-09-04T06:30:03.017+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:03.017+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:30:03.017+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:03.017+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578602, 5) } 2019-09-04T06:30:03.017+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578602, 4) 2019-09-04T06:30:03.017+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7914 2019-09-04T06:30:03.017+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:03.017+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:03.017+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7914 2019-09-04T06:30:03.017+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7906 2019-09-04T06:30:03.018+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7906 2019-09-04T06:30:03.018+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7917 2019-09-04T06:30:03.018+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7917 2019-09-04T06:30:03.018+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:03.018+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 7919 2019-09-04T06:30:03.018+0000 D4 STORAGE [repl-writer-worker-9] inserting record with timestamp Timestamp(1567578602, 5) 2019-09-04T06:30:03.018+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578602, 5) 2019-09-04T06:30:03.018+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 7919 2019-09-04T06:30:03.018+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:03.018+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:30:03.018+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7918 2019-09-04T06:30:03.018+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7918 2019-09-04T06:30:03.018+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7921 2019-09-04T06:30:03.018+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7921 2019-09-04T06:30:03.018+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578602, 5), t: 1 }({ ts: Timestamp(1567578602, 5), t: 1 }) 2019-09-04T06:30:03.018+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578602, 5) 2019-09-04T06:30:03.018+0000 D3 
STORAGE [rsSync-0] WT begin_transaction for snapshot id 7922 2019-09-04T06:30:03.018+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578602, 5) } } ] } sort: {} projection: {} 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578602, 5) Sort: {} Proj: {} ============================= 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578602, 5) || First: notFirst: full path: ts 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578602, 5) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578602, 5) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578602, 5) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578602, 5) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:03.018+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7922 2019-09-04T06:30:03.018+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:03.018+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:03.018+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578602, 5), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578602998), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59eaac9313827bca444b'), when: new Date(1567578602998), who: "ConfigServer:conn10279" } } }, oplog application mode: Secondary 2019-09-04T06:30:03.018+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578602, 5) 2019-09-04T06:30:03.018+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 7924 2019-09-04T06:30:03.018+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "config" } 2019-09-04T06:30:03.018+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:30:03.018+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 7924 2019-09-04T06:30:03.018+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:03.018+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578602, 5), t: 1 }({ ts: Timestamp(1567578602, 5), t: 1 }) 2019-09-04T06:30:03.018+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578602, 5) 2019-09-04T06:30:03.018+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7923 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
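
The application itself is deliberately simple: because the o2 selector is an exact _id match, the worker takes the "idhack" path (no query planning, a straight _id index lookup), and the UpdateResult line shows one document matched and modified. Client-side, the same write would amount to a single update_one routed to the primary; the literal values below are copied from the oplog entry above, the seed list reuses the two hostnames visible in this log, and actually running this against config.locks by hand is for illustration only:

    # The op { op: "u", o2: {_id: "config"}, o: {$set: ...} } as a client write.
    from datetime import datetime, timezone
    from bson.objectid import ObjectId
    from pymongo import MongoClient

    # Seed list from the log; the driver discovers the set and finds the primary.
    client = MongoClient(["cmodb803.togewa.com:27019",
                          "cmodb804.togewa.com:27019"])
    res = client.config["locks"].update_one(
        {"_id": "config"},                          # o2: exact-_id selector (idhack)
        {"$set": {
            "state": 2,
            "ts": ObjectId("5d6f59eaac9313827bca444b"),
            # wall: new Date(1567578602998) == 2019-09-04T06:30:02.998Z
            "when": datetime(2019, 9, 4, 6, 30, 2, 998000, tzinfo=timezone.utc),
            "who": "ConfigServer:conn10279",
        }})
    print(res.matched_count, res.modified_count)    # 1 1, as in UpdateResult above

On this secondary the write arrives only through the oplog applier, which is why the log shows it under a repl-writer-worker thread rather than a client connection.
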
2019-09-04T06:30:03.018+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:03.018+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:03.018+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7923 2019-09-04T06:30:03.018+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578602, 5) 2019-09-04T06:30:03.018+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:03.018+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7927 2019-09-04T06:30:03.018+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 4), t: 1 }, durableWallTime: new Date(1567578602976), appliedOpTime: { ts: Timestamp(1567578602, 5), t: 1 }, appliedWallTime: new Date(1567578602998), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 4), t: 1 }, lastCommittedWall: new Date(1567578602976), lastOpVisible: { ts: Timestamp(1567578602, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:03.018+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 538 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:33.018+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 4), t: 1 }, durableWallTime: new Date(1567578602976), appliedOpTime: { ts: Timestamp(1567578602, 5), t: 1 }, appliedWallTime: new Date(1567578602998), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 4), t: 1 }, lastCommittedWall: new Date(1567578602976), lastOpVisible: { ts: Timestamp(1567578602, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:03.018+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.018+0000 2019-09-04T06:30:03.018+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7927 2019-09-04T06:30:03.019+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578602, 5), t: 1 }({ ts: Timestamp(1567578602, 5), t: 1 }) 2019-09-04T06:30:03.019+0000 D2 ASIO [RS] Request 538 finished with response: { ok: 
1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 4), t: 1 }, lastCommittedWall: new Date(1567578602976), lastOpVisible: { ts: Timestamp(1567578602, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 4), $clusterTime: { clusterTime: Timestamp(1567578602, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 5) } 2019-09-04T06:30:03.019+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 4), t: 1 }, lastCommittedWall: new Date(1567578602976), lastOpVisible: { ts: Timestamp(1567578602, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 4), $clusterTime: { clusterTime: Timestamp(1567578602, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 5) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:03.019+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:03.019+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.019+0000 2019-09-04T06:30:03.019+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578602, 5), t: 1 } 2019-09-04T06:30:03.019+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 539 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:13.019+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578602, 4), t: 1 } } 2019-09-04T06:30:03.019+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.019+0000 2019-09-04T06:30:03.020+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:30:03.020+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:03.020+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 5), t: 1 }, durableWallTime: new Date(1567578602998), appliedOpTime: { ts: Timestamp(1567578602, 5), t: 1 }, appliedWallTime: new Date(1567578602998), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 4), t: 1 }, lastCommittedWall: new Date(1567578602976), lastOpVisible: { ts: Timestamp(1567578602, 4), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:03.020+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 540 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:33.020+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 5), t: 1 }, durableWallTime: new Date(1567578602998), appliedOpTime: { ts: Timestamp(1567578602, 5), t: 1 }, appliedWallTime: new Date(1567578602998), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 4), t: 1 }, lastCommittedWall: new Date(1567578602976), lastOpVisible: { ts: Timestamp(1567578602, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:03.020+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.019+0000 2019-09-04T06:30:03.020+0000 D2 ASIO [RS] Request 540 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 4), t: 1 }, lastCommittedWall: new Date(1567578602976), lastOpVisible: { ts: Timestamp(1567578602, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 4), $clusterTime: { clusterTime: Timestamp(1567578602, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 5) } 2019-09-04T06:30:03.020+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 4), t: 1 }, lastCommittedWall: new Date(1567578602976), lastOpVisible: { ts: Timestamp(1567578602, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 4), $clusterTime: { clusterTime: Timestamp(1567578602, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 5) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:03.020+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:03.020+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.019+0000 2019-09-04T06:30:03.020+0000 D2 ASIO [RS] Request 539 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 5), t: 1 }, lastCommittedWall: new Date(1567578602998), lastOpVisible: { ts: Timestamp(1567578602, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 
}, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 5), t: 1 }, lastCommittedWall: new Date(1567578602998), lastOpApplied: { ts: Timestamp(1567578602, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 5), $clusterTime: { clusterTime: Timestamp(1567578602, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 5) } 2019-09-04T06:30:03.020+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 5), t: 1 }, lastCommittedWall: new Date(1567578602998), lastOpVisible: { ts: Timestamp(1567578602, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 5), t: 1 }, lastCommittedWall: new Date(1567578602998), lastOpApplied: { ts: Timestamp(1567578602, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 5), $clusterTime: { clusterTime: Timestamp(1567578602, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578602, 5) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:03.020+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:03.020+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:30:03.020+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.020+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578597, 5) 2019-09-04T06:30:03.021+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:14.226+0000 2019-09-04T06:30:03.021+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:13.202+0000 2019-09-04T06:30:03.021+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:03.021+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000 2019-09-04T06:30:03.021+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 541 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:13.021+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 
5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578602, 5), t: 1 } } 2019-09-04T06:30:03.021+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000 2019-09-04T06:30:03.021+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.019+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn200] Got notified of new 
snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578602, 5), t: 1 }, 2019-09-04T06:30:02.998+0000 2019-09-04T06:30:03.021+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000 2019-09-04T06:30:03.023+0000 D2 ASIO [RS] Request 541 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578603, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578603021), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59ebac9313827bca4453'), when: new Date(1567578603021), who: "ConfigServer:conn10279" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 5), t: 1 }, lastCommittedWall: new Date(1567578602998), lastOpVisible: { ts: Timestamp(1567578602, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 5), t: 1 }, lastCommittedWall: new Date(1567578602998), lastOpApplied: { ts: Timestamp(1567578603, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 5), $clusterTime: { 
clusterTime: Timestamp(1567578603, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 1) } 2019-09-04T06:30:03.023+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578603, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578603021), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59ebac9313827bca4453'), when: new Date(1567578603021), who: "ConfigServer:conn10279" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 5), t: 1 }, lastCommittedWall: new Date(1567578602998), lastOpVisible: { ts: Timestamp(1567578602, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578602, 5), t: 1 }, lastCommittedWall: new Date(1567578602998), lastOpApplied: { ts: Timestamp(1567578603, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 5), $clusterTime: { clusterTime: Timestamp(1567578603, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:03.023+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:03.023+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578603, 1) and ending at ts: Timestamp(1567578603, 1) 2019-09-04T06:30:03.023+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:13.202+0000 2019-09-04T06:30:03.023+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:13.967+0000 2019-09-04T06:30:03.023+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578603, 1), t: 1 } 2019-09-04T06:30:03.023+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:03.023+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:03.023+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:03.023+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:03.023+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578602, 5) 2019-09-04T06:30:03.023+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7931 2019-09-04T06:30:03.024+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:03.024+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:03.024+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for 
snapshot id 7931 2019-09-04T06:30:03.024+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:30:03.024+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:03.024+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578603, 1) } 2019-09-04T06:30:03.024+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:03.024+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578602, 5) 2019-09-04T06:30:03.024+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7934 2019-09-04T06:30:03.024+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7929 2019-09-04T06:30:03.024+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:03.024+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:03.024+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7934 2019-09-04T06:30:03.024+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7929 2019-09-04T06:30:03.024+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7937 2019-09-04T06:30:03.024+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7937 2019-09-04T06:30:03.024+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:03.024+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 7939 2019-09-04T06:30:03.024+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578603, 1) 2019-09-04T06:30:03.024+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578603, 1) 2019-09-04T06:30:03.024+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 7939 2019-09-04T06:30:03.024+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:03.024+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:30:03.024+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7938 2019-09-04T06:30:03.024+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7938 2019-09-04T06:30:03.024+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7941 2019-09-04T06:30:03.024+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7941 2019-09-04T06:30:03.024+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578603, 1), t: 1 }({ ts: Timestamp(1567578603, 1), t: 1 }) 2019-09-04T06:30:03.024+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578603, 1) 2019-09-04T06:30:03.024+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7942 2019-09-04T06:30:03.024+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578603, 1) } } ] } sort: {} projection: {} 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: 
{ _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578603, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578603, 1) || First: notFirst: full path: ts 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578603, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578603, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578603, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
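The getMore requests above (RemoteCommand 539/541/543 with batchSize: 13981010 and maxTimeMS: 5000) are the oplog fetcher's tailable awaitData cursor on the sync source's local.oplog.rs. A minimal pymongo sketch of the same read loop, purely as illustration and not the server's internal fetcher; it assumes the sync source named in the log is reachable and that authorization is disabled, as in this deployment's mongod.conf:

from bson.timestamp import Timestamp
from pymongo import CursorType, MongoClient

# Connect directly to the sync source named in the log (assumed reachable).
client = MongoClient("mongodb://cmodb804.togewa.com:27019/?directConnection=true")
oplog = client.local["oplog.rs"]

# Resume after the last fetched optime reported by the fetcher above.
last_ts = Timestamp(1567578602, 5)
cursor = oplog.find({"ts": {"$gt": last_ts}},
                    cursor_type=CursorType.TAILABLE_AWAIT)
for entry in cursor:  # awaitData: blocks briefly once it reaches the end
    print(entry["ts"], entry["op"], entry["ns"])
    last_ts = entry["ts"]

Each document this loop yields has the same shape as the nextBatch entries logged above ({ ts, t, op, ns, o, ... }).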
2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578603, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:03.024+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7942 2019-09-04T06:30:03.024+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:03.024+0000 D3 STORAGE [repl-writer-worker-13] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:03.024+0000 D3 REPL [repl-writer-worker-13] applying op: { ts: Timestamp(1567578603, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578603021), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f59ebac9313827bca4453'), when: new Date(1567578603021), who: "ConfigServer:conn10279" } } }, oplog application mode: Secondary 2019-09-04T06:30:03.024+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578603, 1) 2019-09-04T06:30:03.024+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 7944 2019-09-04T06:30:03.024+0000 D2 QUERY [repl-writer-worker-13] Using idhack: { _id: "config.system.sessions" } 2019-09-04T06:30:03.024+0000 D4 WRITE [repl-writer-worker-13] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:30:03.024+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 7944 2019-09-04T06:30:03.024+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:03.024+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578603, 1), t: 1 }({ ts: Timestamp(1567578603, 1), t: 1 }) 2019-09-04T06:30:03.024+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578603, 1) 2019-09-04T06:30:03.024+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7943 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:03.024+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:03.024+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:03.024+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7943 2019-09-04T06:30:03.024+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578603, 1) 2019-09-04T06:30:03.025+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:03.025+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7947 2019-09-04T06:30:03.025+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 5), t: 1 }, durableWallTime: new Date(1567578602998), appliedOpTime: { ts: Timestamp(1567578603, 1), t: 1 }, appliedWallTime: new Date(1567578603021), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 5), t: 1 }, lastCommittedWall: new Date(1567578602998), lastOpVisible: { ts: Timestamp(1567578602, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:03.025+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 542 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:33.025+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578602, 5), t: 1 }, durableWallTime: new Date(1567578602998), appliedOpTime: { ts: Timestamp(1567578603, 1), t: 1 }, appliedWallTime: new Date(1567578603021), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 5), t: 1 }, lastCommittedWall: new Date(1567578602998), lastOpVisible: { ts: Timestamp(1567578602, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:03.025+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.025+0000 2019-09-04T06:30:03.025+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7947 2019-09-04T06:30:03.025+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578603, 1), t: 1 }({ ts: Timestamp(1567578603, 1), t: 1 }) 2019-09-04T06:30:03.025+0000 D2 ASIO [RS] Request 542 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 5), t: 1 }, lastCommittedWall: new Date(1567578602998), lastOpVisible: { ts: Timestamp(1567578602, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 5), $clusterTime: { clusterTime: Timestamp(1567578603, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 1) } 2019-09-04T06:30:03.025+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 5), t: 1 }, lastCommittedWall: new Date(1567578602998), lastOpVisible: { ts: Timestamp(1567578602, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578602, 5), $clusterTime: { clusterTime: Timestamp(1567578603, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:03.025+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:03.025+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.025+0000 2019-09-04T06:30:03.026+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578603, 1), t: 1 } 2019-09-04T06:30:03.026+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 543 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:13.026+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578602, 5), t: 1 } } 2019-09-04T06:30:03.026+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.025+0000 2019-09-04T06:30:03.035+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:30:03.035+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:03.035+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 1), t: 1 }, durableWallTime: new Date(1567578603021), appliedOpTime: { ts: Timestamp(1567578603, 1), t: 1 }, appliedWallTime: new Date(1567578603021), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 5), t: 1 }, lastCommittedWall: new Date(1567578602998), lastOpVisible: { ts: Timestamp(1567578602, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:03.035+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 544 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:33.035+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: 
Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 1), t: 1 }, durableWallTime: new Date(1567578603021), appliedOpTime: { ts: Timestamp(1567578603, 1), t: 1 }, appliedWallTime: new Date(1567578603021), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578602, 5), t: 1 }, lastCommittedWall: new Date(1567578602998), lastOpVisible: { ts: Timestamp(1567578602, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:03.035+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.025+0000 2019-09-04T06:30:03.035+0000 D2 ASIO [RS] Request 543 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpVisible: { ts: Timestamp(1567578603, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpApplied: { ts: Timestamp(1567578603, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 1), $clusterTime: { clusterTime: Timestamp(1567578603, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 1) } 2019-09-04T06:30:03.035+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpVisible: { ts: Timestamp(1567578603, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpApplied: { ts: Timestamp(1567578603, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 1), $clusterTime: { clusterTime: Timestamp(1567578603, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:03.035+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:03.035+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:30:03.035+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.035+0000 D2 REPL [replication-0] 
Setting replication's stable optime to { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.035+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578598, 1) 2019-09-04T06:30:03.035+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.035+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:30:03.035+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.035+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000 2019-09-04T06:30:03.035+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.035+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000 2019-09-04T06:30:03.035+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.035+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:30:03.035+0000 D2 ASIO [RS] Request 544 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpVisible: { ts: Timestamp(1567578603, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 1), $clusterTime: { clusterTime: Timestamp(1567578603, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 1) } 2019-09-04T06:30:03.035+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.035+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpVisible: { ts: Timestamp(1567578603, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 1), $clusterTime: { clusterTime: Timestamp(1567578603, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:03.035+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000 2019-09-04T06:30:03.035+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:03.035+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.035+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000 2019-09-04T06:30:03.035+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.035+0000 
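The replSetUpdatePosition traffic above is internal replication protocol, but the per-member durable/applied optimes it carries are exposed through the supported replSetGetStatus command. A hedged sketch for inspecting them; the field layout shown matches 4.2-era output and varies by server version:

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
status = client.admin.command("replSetGetStatus")

# Node-local view of the commit point, matching lastOpCommitted in the log.
print(status["optimes"]["lastCommittedOpTime"])
for m in status["members"]:
    # optimeDurable is only reported for remote members.
    print(m["name"], m["stateStr"], m["optime"], m.get("optimeDurable"))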
2019-09-04T06:30:03.035+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.035+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000 2019-09-04T06:30:03.035+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.035+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000 2019-09-04T06:30:03.035+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.035+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000 2019-09-04T06:30:03.035+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.035+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000 2019-09-04T06:30:03.035+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.035+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000 2019-09-04T06:30:03.035+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:13.967+0000 2019-09-04T06:30:03.035+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:14.220+0000 2019-09-04T06:30:03.035+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:30:03.035+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:03.035+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 545 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:13.035+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578603, 1), t: 1 } } 2019-09-04T06:30:03.036+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.035+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 
2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578603, 1), t: 1 }, 2019-09-04T06:30:03.021+0000 2019-09-04T06:30:03.036+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:03.037+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578603, 1) 2019-09-04T06:30:03.038+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:03.038+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:03.038+0000 D2 ASIO [RS] Request 545 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578603, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578603036), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpVisible: { ts: Timestamp(1567578603, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpApplied: { ts: Timestamp(1567578603, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 1), $clusterTime: { clusterTime: 
Timestamp(1567578603, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 2) } 2019-09-04T06:30:03.038+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578603, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578603036), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpVisible: { ts: Timestamp(1567578603, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpApplied: { ts: Timestamp(1567578603, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 1), $clusterTime: { clusterTime: Timestamp(1567578603, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:03.038+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:03.038+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578603, 2) and ending at ts: Timestamp(1567578603, 2) 2019-09-04T06:30:03.038+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:14.220+0000 2019-09-04T06:30:03.038+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:14.160+0000 2019-09-04T06:30:03.038+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:03.038+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578603, 2), t: 1 } 2019-09-04T06:30:03.038+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:03.038+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:03.038+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578603, 1) 2019-09-04T06:30:03.038+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7952 2019-09-04T06:30:03.038+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:03.038+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:03.038+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7952 2019-09-04T06:30:03.038+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:03.038+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:30:03.038+0000 D3 STORAGE [ReplBatcher] 
looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:03.038+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:03.038+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578603, 2) } 2019-09-04T06:30:03.038+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7949 2019-09-04T06:30:03.039+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7949 2019-09-04T06:30:03.039+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7956 2019-09-04T06:30:03.039+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7956 2019-09-04T06:30:03.039+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:03.038+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578603, 1) 2019-09-04T06:30:03.039+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7955 2019-09-04T06:30:03.039+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 7958 2019-09-04T06:30:03.039+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:03.039+0000 D4 STORAGE [repl-writer-worker-14] inserting record with timestamp Timestamp(1567578603, 2) 2019-09-04T06:30:03.039+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578603, 2) 2019-09-04T06:30:03.039+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:03.039+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7955 2019-09-04T06:30:03.039+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 7958 2019-09-04T06:30:03.039+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:03.039+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:30:03.039+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7957 2019-09-04T06:30:03.039+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7957 2019-09-04T06:30:03.039+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7962 2019-09-04T06:30:03.039+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7962 2019-09-04T06:30:03.039+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578603, 2), t: 1 }({ ts: Timestamp(1567578603, 2), t: 1 }) 2019-09-04T06:30:03.039+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578603, 2) 2019-09-04T06:30:03.039+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7963 2019-09-04T06:30:03.039+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578603, 2) } } ] } sort: {} projection: {} 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 
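The rsSync-0 bookkeeping just above (setting the oplog truncate-after point, then advancing minvalid) is persisted as small documents in the local database. A read-only inspection sketch; the collection names appear in this log, but the document layouts are internal and version-dependent, so treat the field comments as assumptions:

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
local = client.local

print(local["replset.minvalid"].find_one())                 # minvalid / appliedThrough state
print(local["replset.oplogTruncateAfterPoint"].find_one())  # truncate point; cleared back to Timestamp(0, 0) above
print(local["oplog.rs"].find_one(sort=[("$natural", -1)]))  # newest local oplog entry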
2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578603, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578603, 2) || First: notFirst: full path: ts 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578603, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578603, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578603, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
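The subplan trace above shows why every branch ends in a collection scan: local.replset.minvalid carries only the implicit _id index, and both $or branches filter on t/ts, so the planner outputs 0 indexed solutions per child. The same decision can be confirmed from a client with the explain command; a sketch, with the filter copied from the log entry above:

from bson.timestamp import Timestamp
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")

explain = client.local.command({
    "explain": {
        "find": "replset.minvalid",
        "filter": {"$or": [{"t": {"$lt": 1}},
                           {"t": 1, "ts": {"$lt": Timestamp(1567578603, 2)}}]},
    },
    "verbosity": "queryPlanner",
})
# Expect a subplanned winning plan whose children are COLLSCAN stages,
# mirroring "Planner: outputted 0 indexed solutions" above.
print(explain["queryPlanner"]["winningPlan"])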
2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578603, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:03.039+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7963
2019-09-04T06:30:03.039+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:03.039+0000 D3 STORAGE [repl-writer-worker-3] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:03.039+0000 D3 REPL [repl-writer-worker-3] applying op: { ts: Timestamp(1567578603, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578603036), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary
2019-09-04T06:30:03.039+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578603, 2)
2019-09-04T06:30:03.039+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 7965
2019-09-04T06:30:03.039+0000 D2 QUERY [repl-writer-worker-3] Using idhack: { _id: "config.system.sessions" }
2019-09-04T06:30:03.039+0000 D4 WRITE [repl-writer-worker-3] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:30:03.039+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 7965
2019-09-04T06:30:03.039+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:03.039+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578603, 2), t: 1 }({ ts: Timestamp(1567578603, 2), t: 1 })
2019-09-04T06:30:03.039+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578603, 2)
2019-09-04T06:30:03.039+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7964
2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:03.039+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:03.039+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:03.039+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7964
2019-09-04T06:30:03.039+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578603, 2)
2019-09-04T06:30:03.039+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7968
2019-09-04T06:30:03.039+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7968
2019-09-04T06:30:03.039+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578603, 2), t: 1 }({ ts: Timestamp(1567578603, 2), t: 1 })
2019-09-04T06:30:03.040+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:03.040+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 1), t: 1 }, durableWallTime: new Date(1567578603021), appliedOpTime: { ts: Timestamp(1567578603, 2), t: 1 }, appliedWallTime: new Date(1567578603036), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpVisible: { ts: Timestamp(1567578603, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:03.040+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 546 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:33.040+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 1), t: 1 }, durableWallTime: new Date(1567578603021), appliedOpTime: { ts: Timestamp(1567578603, 2), t: 1 }, appliedWallTime: new Date(1567578603036), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpVisible: { ts: Timestamp(1567578603, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:03.040+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.039+0000
2019-09-04T06:30:03.040+0000 D2 ASIO [RS] Request 546 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpVisible: { ts: Timestamp(1567578603, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 1), $clusterTime: { clusterTime: Timestamp(1567578603, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 2) }
2019-09-04T06:30:03.040+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpVisible: { ts: Timestamp(1567578603, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 1), $clusterTime: { clusterTime: Timestamp(1567578603, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:03.040+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:03.040+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.040+0000
2019-09-04T06:30:03.040+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578603, 2), t: 1 }
2019-09-04T06:30:03.040+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 547 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:13.040+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578603, 1), t: 1 } }
2019-09-04T06:30:03.040+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.040+0000
2019-09-04T06:30:03.044+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:30:03.044+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:03.044+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 2), t: 1 }, durableWallTime: new Date(1567578603036), appliedOpTime: { ts: Timestamp(1567578603, 2), t: 1 }, appliedWallTime: new Date(1567578603036), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpVisible: { ts: Timestamp(1567578603, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:03.044+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 548 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:33.044+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 2), t: 1 }, durableWallTime: new Date(1567578603036), appliedOpTime: { ts: Timestamp(1567578603, 2), t: 1 }, appliedWallTime: new Date(1567578603036), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpVisible: { ts: Timestamp(1567578603, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:03.044+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.040+0000
2019-09-04T06:30:03.044+0000 D2 ASIO [RS] Request 548 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpVisible: { ts: Timestamp(1567578603, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 1), $clusterTime: { clusterTime: Timestamp(1567578603, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 2) }
2019-09-04T06:30:03.044+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 1), t: 1 }, lastCommittedWall: new Date(1567578603021), lastOpVisible: { ts: Timestamp(1567578603, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 1), $clusterTime: { clusterTime: Timestamp(1567578603, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:03.044+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:03.044+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.040+0000
2019-09-04T06:30:03.045+0000 D2 ASIO [RS] Request 547 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 2), t: 1 }, lastCommittedWall: new Date(1567578603036), lastOpVisible: { ts: Timestamp(1567578603, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578603, 2), t: 1 }, lastCommittedWall: new Date(1567578603036), lastOpApplied: { ts: Timestamp(1567578603, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 2), $clusterTime: { clusterTime: Timestamp(1567578603, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 2) }
2019-09-04T06:30:03.045+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 2), t: 1 }, lastCommittedWall: new Date(1567578603036), lastOpVisible: { ts: Timestamp(1567578603, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578603, 2), t: 1 }, lastCommittedWall: new Date(1567578603036), lastOpApplied: { ts: Timestamp(1567578603, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 2), $clusterTime: { clusterTime: Timestamp(1567578603, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:03.045+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:03.045+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:03.045+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578598, 2)
2019-09-04T06:30:03.045+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:14.160+0000
2019-09-04T06:30:03.045+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:13.919+0000
2019-09-04T06:30:03.045+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:03.045+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:30:03.045+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 549 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:13.045+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578603, 2), t: 1 } }
2019-09-04T06:30:03.045+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000
2019-09-04T06:30:03.045+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.040+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.045+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000
2019-09-04T06:30:03.046+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578603, 2), t: 1 }, 2019-09-04T06:30:03.036+0000
2019-09-04T06:30:03.046+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000
2019-09-04T06:30:03.051+0000 D2 ASIO [RS] Request 549 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578603, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578603045), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 2), t: 1 }, lastCommittedWall: new Date(1567578603036), lastOpVisible: { ts: Timestamp(1567578603, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578603, 2), t: 1 }, lastCommittedWall: new Date(1567578603036), lastOpApplied: { ts: Timestamp(1567578603, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 2), $clusterTime: { clusterTime: Timestamp(1567578603, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 3) }
2019-09-04T06:30:03.052+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578603, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578603045), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 2), t: 1 }, lastCommittedWall: new Date(1567578603036), lastOpVisible: { ts: Timestamp(1567578603, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578603, 2), t: 1 }, lastCommittedWall: new Date(1567578603036), lastOpApplied: { ts: Timestamp(1567578603, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 2), $clusterTime: { clusterTime: Timestamp(1567578603, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:03.052+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:03.052+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578603, 3) and ending at ts: Timestamp(1567578603, 3)
2019-09-04T06:30:03.052+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:13.919+0000
2019-09-04T06:30:03.052+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:14.435+0000
2019-09-04T06:30:03.052+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:30:03.052+0000 D2 REPL [replication-1] oplog buffer has 0 bytes
2019-09-04T06:30:03.052+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:30:03.052+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578603, 3), t: 1 }
2019-09-04T06:30:03.052+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:03.052+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:03.052+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578603, 2)
2019-09-04T06:30:03.052+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7972
2019-09-04T06:30:03.052+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:03.052+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:03.052+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7972
2019-09-04T06:30:03.052+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:03.052+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:30:03.052+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:03.052+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578603, 3) }
2019-09-04T06:30:03.052+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578603, 2)
2019-09-04T06:30:03.052+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 7975
2019-09-04T06:30:03.052+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:03.052+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:03.052+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 7975
2019-09-04T06:30:03.052+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7970
2019-09-04T06:30:03.052+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7970
2019-09-04T06:30:03.052+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7978
2019-09-04T06:30:03.052+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7978
2019-09-04T06:30:03.052+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:03.052+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 7980
2019-09-04T06:30:03.052+0000 D4 STORAGE [repl-writer-worker-12] inserting record with timestamp Timestamp(1567578603, 3)
2019-09-04T06:30:03.052+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578603, 3)
2019-09-04T06:30:03.052+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 7980
2019-09-04T06:30:03.052+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:03.052+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:30:03.052+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7979
2019-09-04T06:30:03.052+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7979
2019-09-04T06:30:03.052+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7982
2019-09-04T06:30:03.052+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7982
2019-09-04T06:30:03.052+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578603, 3), t: 1 }({ ts: Timestamp(1567578603, 3), t: 1 })
2019-09-04T06:30:03.052+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578603, 3)
2019-09-04T06:30:03.052+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7983
2019-09-04T06:30:03.052+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578603, 3) } } ] } sort: {} projection: {}
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578603, 3) Sort: {} Proj: {} =============================
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578603, 3) || First: notFirst: full path: ts
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578603, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578603, 3) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578603, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:03.052+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578603, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:03.052+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7983
2019-09-04T06:30:03.052+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:03.053+0000 D3 STORAGE [repl-writer-worker-10] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:03.053+0000 D3 REPL [repl-writer-worker-10] applying op: { ts: Timestamp(1567578603, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578603045), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary
2019-09-04T06:30:03.053+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578603, 3)
2019-09-04T06:30:03.053+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 7985
2019-09-04T06:30:03.053+0000 D2 QUERY [repl-writer-worker-10] Using idhack: { _id: "config" }
2019-09-04T06:30:03.053+0000 D4 WRITE [repl-writer-worker-10] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:30:03.053+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 7985
2019-09-04T06:30:03.053+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:03.053+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578603, 3), t: 1 }({ ts: Timestamp(1567578603, 3), t: 1 })
2019-09-04T06:30:03.053+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578603, 3)
2019-09-04T06:30:03.053+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7984
2019-09-04T06:30:03.053+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:30:03.053+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:03.053+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:30:03.053+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:03.053+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:03.053+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:03.053+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7984
2019-09-04T06:30:03.053+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578603, 3)
2019-09-04T06:30:03.053+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7988
2019-09-04T06:30:03.053+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 7988
2019-09-04T06:30:03.053+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578603, 3), t: 1 }({ ts: Timestamp(1567578603, 3), t: 1 })
2019-09-04T06:30:03.053+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:03.053+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 2), t: 1 }, durableWallTime: new Date(1567578603036), appliedOpTime: { ts: Timestamp(1567578603, 3), t: 1 }, appliedWallTime: new Date(1567578603045), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 2), t: 1 }, lastCommittedWall: new Date(1567578603036), lastOpVisible: { ts: Timestamp(1567578603, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:03.053+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 550 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:33.053+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 2), t: 1 }, durableWallTime: new Date(1567578603036), appliedOpTime: { ts: Timestamp(1567578603, 3), t: 1 }, appliedWallTime: new Date(1567578603045), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 2), t: 1 }, lastCommittedWall: new Date(1567578603036), lastOpVisible: { ts: Timestamp(1567578603, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:03.053+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.053+0000
2019-09-04T06:30:03.053+0000 D2 ASIO [RS] Request 550 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 2), t: 1 }, lastCommittedWall: new Date(1567578603036), lastOpVisible: { ts: Timestamp(1567578603, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 2), $clusterTime: { clusterTime: Timestamp(1567578603, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 3) }
2019-09-04T06:30:03.053+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 2), t: 1 }, lastCommittedWall: new Date(1567578603036), lastOpVisible: { ts: Timestamp(1567578603, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 2), $clusterTime: { clusterTime: Timestamp(1567578603, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:03.053+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:03.053+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.053+0000
2019-09-04T06:30:03.054+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578603, 3), t: 1 }
2019-09-04T06:30:03.054+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 551 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:13.054+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578603, 2), t: 1 } }
2019-09-04T06:30:03.054+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.053+0000
2019-09-04T06:30:03.057+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:30:03.057+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:03.057+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 3), t: 1 }, durableWallTime: new Date(1567578603045), appliedOpTime: { ts: Timestamp(1567578603, 3), t: 1 }, appliedWallTime: new Date(1567578603045), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 2), t: 1 }, lastCommittedWall: new Date(1567578603036), lastOpVisible: { ts: Timestamp(1567578603, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:03.057+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 552 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:33.057+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 3), t: 1 }, durableWallTime: new Date(1567578603045), appliedOpTime: { ts: Timestamp(1567578603, 3), t: 1 }, appliedWallTime: new Date(1567578603045), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 2), t: 1 }, lastCommittedWall: new Date(1567578603036), lastOpVisible: { ts: Timestamp(1567578603, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:03.057+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.053+0000
2019-09-04T06:30:03.057+0000 D2 ASIO [RS] Request 552 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 2), t: 1 }, lastCommittedWall: new Date(1567578603036), lastOpVisible: { ts: Timestamp(1567578603, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 2), $clusterTime: { clusterTime: Timestamp(1567578603, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 3) }
2019-09-04T06:30:03.057+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 2), t: 1 }, lastCommittedWall: new Date(1567578603036), lastOpVisible: { ts: Timestamp(1567578603, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 2), $clusterTime: { clusterTime: Timestamp(1567578603, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:03.057+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:03.057+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.053+0000
2019-09-04T06:30:03.057+0000 D2 ASIO [RS] Request 551 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 3), t: 1 }, lastCommittedWall: new Date(1567578603045), lastOpVisible: { ts: Timestamp(1567578603, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578603, 3), t: 1 }, lastCommittedWall: new Date(1567578603045), lastOpApplied: { ts: Timestamp(1567578603, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 3), $clusterTime: { clusterTime: Timestamp(1567578603, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 3) }
2019-09-04T06:30:03.057+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 3), t: 1 }, lastCommittedWall: new Date(1567578603045), lastOpVisible: { ts: Timestamp(1567578603, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578603, 3), t: 1 }, lastCommittedWall: new Date(1567578603045), lastOpApplied: { ts: Timestamp(1567578603, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 3), $clusterTime: { clusterTime: Timestamp(1567578603, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:03.057+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:03.057+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:03.057+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.057+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.057+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578598, 3)
2019-09-04T06:30:03.058+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:14.435+0000
2019-09-04T06:30:03.058+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:13.615+0000
2019-09-04T06:30:03.058+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:03.058+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 553 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:13.058+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578603, 3), t: 1 } }
2019-09-04T06:30:03.058+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:30:03.058+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.053+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578603, 3), t: 1 }, 2019-09-04T06:30:03.045+0000
2019-09-04T06:30:03.058+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000
2019-09-04T06:30:03.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578603, 3), signature: { hash: BinData(0, 3E95F95F5F1EA0AF195E67F5DF5C1390ED12839F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:03.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:30:03.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578603, 3), signature: { hash: BinData(0, 3E95F95F5F1EA0AF195E67F5DF5C1390ED12839F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:03.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578603, 3), signature: { hash: BinData(0, 3E95F95F5F1EA0AF195E67F5DF5C1390ED12839F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:03.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578603, 3), t: 1 }, durableWallTime: new Date(1567578603045), opTime: { ts: Timestamp(1567578603, 3), t: 1 }, wallTime: new Date(1567578603045) }
2019-09-04T06:30:03.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578603, 3), signature: { hash: BinData(0, 3E95F95F5F1EA0AF195E67F5DF5C1390ED12839F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:03.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:03.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:03.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:03.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:03.091+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:03.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:03.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:03.138+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578603, 3)
2019-09-04T06:30:03.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:03.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:03.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:03.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:03.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:03.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:03.192+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:03.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:03.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:03.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:03.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:03.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:03.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:03.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:03.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:03.292+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:03.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:03.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:03.342+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:03.342+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:03.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:03.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:03.392+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:03.427+0000 D2 ASIO [RS] Request 553 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578603, 4), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578603422), o: { $v: 1, $set: { ping: new Date(1567578603417) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 3), t: 1 }, lastCommittedWall: new Date(1567578603045), lastOpVisible: { ts: Timestamp(1567578603, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578603, 3), t: 1 }, lastCommittedWall: new Date(1567578603045), lastOpApplied: { ts: Timestamp(1567578603, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 3), $clusterTime: { clusterTime: Timestamp(1567578603, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 4) }
2019-09-04T06:30:03.427+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578603, 4), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578603422), o: { $v: 1, $set: { ping: new Date(1567578603417) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 3), t: 1 }, lastCommittedWall: new Date(1567578603045), lastOpVisible: { ts: Timestamp(1567578603, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'),
primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578603, 3), t: 1 }, lastCommittedWall: new Date(1567578603045), lastOpApplied: { ts: Timestamp(1567578603, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 3), $clusterTime: { clusterTime: Timestamp(1567578603, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 4) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:03.427+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:03.427+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578603, 4) and ending at ts: Timestamp(1567578603, 4) 2019-09-04T06:30:03.427+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:13.615+0000 2019-09-04T06:30:03.427+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:14.659+0000 2019-09-04T06:30:03.427+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:03.427+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:03.427+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578603, 4), t: 1 } 2019-09-04T06:30:03.427+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:03.427+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:03.427+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578603, 3) 2019-09-04T06:30:03.427+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8007 2019-09-04T06:30:03.427+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:03.427+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:03.427+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8007 2019-09-04T06:30:03.427+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:03.427+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:30:03.427+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:03.427+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578603, 3) 2019-09-04T06:30:03.427+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578603, 4) } 2019-09-04T06:30:03.427+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8010 2019-09-04T06:30:03.427+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, 
indexes: [], prefix: -1 } } 2019-09-04T06:30:03.427+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:03.427+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8010 2019-09-04T06:30:03.427+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 7989 2019-09-04T06:30:03.427+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 7989 2019-09-04T06:30:03.427+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8013 2019-09-04T06:30:03.427+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8013 2019-09-04T06:30:03.427+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:03.427+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 8015 2019-09-04T06:30:03.427+0000 D4 STORAGE [repl-writer-worker-8] inserting record with timestamp Timestamp(1567578603, 4) 2019-09-04T06:30:03.427+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578603, 4) 2019-09-04T06:30:03.427+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 8015 2019-09-04T06:30:03.427+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:03.427+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:30:03.427+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8014 2019-09-04T06:30:03.427+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8014 2019-09-04T06:30:03.427+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8017 2019-09-04T06:30:03.427+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8017 2019-09-04T06:30:03.427+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578603, 4), t: 1 }({ ts: Timestamp(1567578603, 4), t: 1 }) 2019-09-04T06:30:03.427+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578603, 4) 2019-09-04T06:30:03.428+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8018 2019-09-04T06:30:03.428+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578603, 4) } } ] } sort: {} projection: {} 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578603, 4) Sort: {} Proj: {} ============================= 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578603, 4) || First: notFirst: full path: ts 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578603, 4) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578603, 4) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578603, 4) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
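The QUERY entries above show the subplanner handling the $or filter on local.replset.minvalid: the only index available is { _id: 1 }, neither the t nor the ts predicate is indexable, and so each child plan, and finally the whole $or, falls back to a collection scan. The same plan selection can be inspected from a client via the explain command. A minimal sketch with pymongo, assuming a reachable mongod and a hypothetical scratch collection demo.minvalid_like that likewise carries only the default _id index:

# Minimal sketch (assumptions: pymongo installed, mongod on localhost:27017,
# scratch collection demo.minvalid_like with only the default _id index).
from pymongo import MongoClient
from bson.timestamp import Timestamp

client = MongoClient("mongodb://localhost:27017")
db = client["demo"]
db["minvalid_like"].insert_one({"t": 1, "ts": Timestamp(1567578603, 4)})

# Same shape as the logged query:
# { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578603, 4) } } ] }
plan = db.command(
    "explain",
    {"find": "minvalid_like",
     "filter": {"$or": [{"t": {"$lt": 1}},
                        {"t": 1, "ts": {"$lt": Timestamp(1567578603, 4)}}]}},
    verbosity="queryPlanner",
)
# With no index on t/ts the winning plan degrades to a collection scan,
# mirroring the "Planner: outputting a collscan" lines above (the stage may
# appear directly as COLLSCAN or beneath a SUBPLAN stage, depending on version).
print(plan["queryPlanner"]["winningPlan"])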
2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578603, 4) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:03.428+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8018
2019-09-04T06:30:03.428+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:03.428+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:03.428+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578603, 4), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578603422), o: { $v: 1, $set: { ping: new Date(1567578603417) } } }, oplog application mode: Secondary
2019-09-04T06:30:03.428+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578603, 4)
2019-09-04T06:30:03.428+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 8020
2019-09-04T06:30:03.428+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }
2019-09-04T06:30:03.428+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:30:03.428+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 8020
2019-09-04T06:30:03.428+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:03.428+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578603, 4), t: 1 }({ ts: Timestamp(1567578603, 4), t: 1 })
2019-09-04T06:30:03.428+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578603, 4)
2019-09-04T06:30:03.428+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8019
2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and Sort: {} Proj: {} =============================
2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:03.428+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:03.428+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:03.428+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8019
2019-09-04T06:30:03.428+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578603, 4)
2019-09-04T06:30:03.428+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:03.428+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8023
2019-09-04T06:30:03.428+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8023
2019-09-04T06:30:03.428+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 3), t: 1 }, durableWallTime: new Date(1567578603045), appliedOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, appliedWallTime: new Date(1567578603422), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 3), t: 1 }, lastCommittedWall: new Date(1567578603045), lastOpVisible: { ts: Timestamp(1567578603, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:03.428+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 554 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:33.428+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 3), t: 1 }, durableWallTime: new Date(1567578603045), appliedOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, appliedWallTime: new Date(1567578603422), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 3), t: 1 }, lastCommittedWall: new Date(1567578603045), lastOpVisible: { ts: Timestamp(1567578603, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:03.428+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.428+0000
2019-09-04T06:30:03.428+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578603, 4), t: 1 }({ ts: Timestamp(1567578603, 4), t: 1 })
2019-09-04T06:30:03.428+0000 D2 ASIO [RS] Request 554 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 3), t: 1 }, lastCommittedWall: new Date(1567578603045), lastOpVisible: { ts: Timestamp(1567578603, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 3), $clusterTime: { clusterTime: Timestamp(1567578603, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 4) }
2019-09-04T06:30:03.428+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 3), t: 1 }, lastCommittedWall: new Date(1567578603045), lastOpVisible: { ts: Timestamp(1567578603, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 3), $clusterTime: { clusterTime: Timestamp(1567578603, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 4) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:03.428+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:03.429+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.428+0000
2019-09-04T06:30:03.429+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578603, 4), t: 1 }
2019-09-04T06:30:03.429+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 555 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:13.429+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578603, 3), t: 1 } }
2019-09-04T06:30:03.429+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.428+0000
2019-09-04T06:30:03.432+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:30:03.432+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:03.432+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), appliedOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, appliedWallTime: new Date(1567578603422), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 3), t: 1 }, lastCommittedWall: new Date(1567578603045), lastOpVisible: { ts: Timestamp(1567578603, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:03.432+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 556 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:33.432+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), appliedOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, appliedWallTime: new Date(1567578603422), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, durableWallTime: new Date(1567578600117), appliedOpTime: { ts: Timestamp(1567578600, 3), t: 1 }, appliedWallTime: new Date(1567578600117), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 3), t: 1 }, lastCommittedWall: new Date(1567578603045), lastOpVisible: { ts: Timestamp(1567578603, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:03.432+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.428+0000
2019-09-04T06:30:03.432+0000 D2 ASIO [RS] Request 555 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 4), t: 1 }, lastCommittedWall: new Date(1567578603422), lastOpVisible: { ts: Timestamp(1567578603, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578603, 4), t: 1 }, lastCommittedWall: new Date(1567578603422), lastOpApplied: { ts: Timestamp(1567578603, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 4), $clusterTime: { clusterTime: Timestamp(1567578603, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 4) }
2019-09-04T06:30:03.432+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 4), t: 1 }, lastCommittedWall: new Date(1567578603422), lastOpVisible: { ts: Timestamp(1567578603, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578603, 4), t: 1 }, lastCommittedWall: new Date(1567578603422), lastOpApplied: { ts: Timestamp(1567578603, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 4), $clusterTime: { clusterTime: Timestamp(1567578603, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 4) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:03.432+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:03.433+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:03.433+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578598, 4)
2019-09-04T06:30:03.433+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:14.659+0000
2019-09-04T06:30:03.433+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:14.167+0000
2019-09-04T06:30:03.433+0000 D2 ASIO [RS] Request 556 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 4), t: 1 }, lastCommittedWall: new Date(1567578603422), lastOpVisible: { ts: Timestamp(1567578603, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 4), $clusterTime: { clusterTime: Timestamp(1567578603, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 4) }
2019-09-04T06:30:03.433+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 557 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:13.433+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578603, 4), t: 1 } }
2019-09-04T06:30:03.433+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec
2019-09-04T06:30:03.433+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.428+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000
2019-09-04T06:30:03.433+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn198] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 4), t: 1 }, lastCommittedWall: new Date(1567578603422), lastOpVisible: { ts: Timestamp(1567578603, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 4), $clusterTime: { clusterTime: Timestamp(1567578603, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 4) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:03.433+0000 D3 REPL [conn198] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:03.485+0000
2019-09-04T06:30:03.433+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:03.433+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:33.428+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578603, 4), t: 1 }, 2019-09-04T06:30:03.422+0000
2019-09-04T06:30:03.433+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000
2019-09-04T06:30:03.472+0000 I NETWORK [listener] connection accepted from 10.108.2.63:36282 #224 (86 connections now open)
2019-09-04T06:30:03.472+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:03.472+0000 D2 COMMAND [conn224] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:03.473+0000 I NETWORK [conn224] received client metadata from 10.108.2.63:36282 conn224: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:03.473+0000 I COMMAND [conn224] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:03.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:03.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:03.485+0000 I COMMAND [conn198] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578572, 1), signature: { hash: BinData(0, BEAA603D37F22B007C54F93405BE7B4D709F86E2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:30:03.485+0000 D1 - [conn198] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:03.485+0000 W - [conn198] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:03.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:03.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:03.492+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:03.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:03.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:03.502+0000 I - [conn198] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
 mongod(+0x10FBF24) [0x56174a083f24]
 mongod(+0x10FCE0E) [0x56174a084e0e]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:30:03.502+0000 D1 COMMAND [conn198] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578572, 1), signature: { hash: BinData(0, BEAA603D37F22B007C54F93405BE7B4D709F86E2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:03.502+0000 D1 - [conn198] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:30:03.502+0000 W - [conn198] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:03.511+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:30:03.511+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:30:03.511+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:30:03.511+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:30:03.522+0000 I - [conn198] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", 
"compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", 
"elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:03.522+0000 W COMMAND [conn198] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:30:03.522+0000 I COMMAND [conn198] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578572, 1), signature: { hash: BinData(0, BEAA603D37F22B007C54F93405BE7B4D709F86E2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30026ms 2019-09-04T06:30:03.522+0000 D2 NETWORK [conn198] Session from 10.108.2.63:36258 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:03.522+0000 I NETWORK [conn198] end connection 10.108.2.63:36258 (85 connections now open) 2019-09-04T06:30:03.527+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578603, 4) 2019-09-04T06:30:03.538+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:03.538+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:03.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:03.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:03.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:03.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:03.592+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:03.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:03.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:03.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:03.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:03.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:03.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:03.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:03.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:03.692+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:03.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:03.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:03.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:03.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:03.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:03.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:03.792+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:03.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:03.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:03.842+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:03.842+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:03.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:03.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:03.892+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:03.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:03.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:03.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:03.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:03.993+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:03.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:03.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:04.038+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.038+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 
0ms 2019-09-04T06:30:04.093+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:04.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.193+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:04.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578603, 4), signature: { hash: BinData(0, 3E95F95F5F1EA0AF195E67F5DF5C1390ED12839F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:04.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:30:04.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578603, 4), signature: { hash: BinData(0, 3E95F95F5F1EA0AF195E67F5DF5C1390ED12839F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:04.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578603, 4), signature: { hash: BinData(0, 3E95F95F5F1EA0AF195E67F5DF5C1390ED12839F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:04.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), opTime: { ts: Timestamp(1567578603, 4), t: 1 }, wallTime: new Date(1567578603422) } 2019-09-04T06:30:04.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578603, 4), signature: { hash: BinData(0, 3E95F95F5F1EA0AF195E67F5DF5C1390ED12839F), keyId: 6727891476899954718 } }, $db: 
"admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:04.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:04.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.293+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:04.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.342+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.342+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.393+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:04.427+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:04.427+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:04.427+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578603, 4) 2019-09-04T06:30:04.427+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8065 2019-09-04T06:30:04.427+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:04.427+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:04.427+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8065 2019-09-04T06:30:04.428+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8068 2019-09-04T06:30:04.429+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8068 2019-09-04T06:30:04.429+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578603, 4), t: 1 }({ ts: Timestamp(1567578603, 4), t: 1 }) 2019-09-04T06:30:04.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, 
$db: "admin" } 2019-09-04T06:30:04.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.493+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:04.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.538+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.538+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.593+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:04.631+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.693+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:04.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.793+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:04.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 
2019-09-04T06:30:04.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:04.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 558) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:04.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 558 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:14.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:04.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:04.838+0000 D2 ASIO [Replication] Request 558 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), opTime: { ts: Timestamp(1567578603, 4), t: 1 }, wallTime: new Date(1567578603422), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 4), t: 1 }, lastCommittedWall: new Date(1567578603422), lastOpVisible: { ts: Timestamp(1567578603, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 4), $clusterTime: { clusterTime: Timestamp(1567578603, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 4) } 2019-09-04T06:30:04.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), opTime: { ts: Timestamp(1567578603, 4), t: 1 }, wallTime: new Date(1567578603422), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 4), t: 1 }, lastCommittedWall: new Date(1567578603422), lastOpVisible: { ts: Timestamp(1567578603, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 4), $clusterTime: { clusterTime: Timestamp(1567578603, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 4) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:04.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:04.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 558) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), opTime: { ts: Timestamp(1567578603, 4), t: 1 }, wallTime: new Date(1567578603422), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 
4), t: 1 }, lastCommittedWall: new Date(1567578603422), lastOpVisible: { ts: Timestamp(1567578603, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 4), $clusterTime: { clusterTime: Timestamp(1567578603, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 4) } 2019-09-04T06:30:04.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:04.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:06.838Z 2019-09-04T06:30:04.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:04.839+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf of pool replexec 2019-09-04T06:30:04.839+0000 D2 REPL_HB [replexec-1] Sending heartbeat (requestId: 559) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:04.839+0000 D3 EXECUTOR [replexec-1] Scheduling remote command request: RemoteCommand 559 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:14.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:04.839+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:04.839+0000 D2 ASIO [Replication] Request 559 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), opTime: { ts: Timestamp(1567578603, 4), t: 1 }, wallTime: new Date(1567578603422), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 4), t: 1 }, lastCommittedWall: new Date(1567578603422), lastOpVisible: { ts: Timestamp(1567578603, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578603, 4), $clusterTime: { clusterTime: Timestamp(1567578603, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 4) } 2019-09-04T06:30:04.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), opTime: { ts: Timestamp(1567578603, 4), t: 1 }, wallTime: new Date(1567578603422), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 4), t: 1 }, lastCommittedWall: new Date(1567578603422), lastOpVisible: { ts: Timestamp(1567578603, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578603, 4), $clusterTime: { clusterTime: 
Timestamp(1567578603, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 4) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:04.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:04.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 559) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), opTime: { ts: Timestamp(1567578603, 4), t: 1 }, wallTime: new Date(1567578603422), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 4), t: 1 }, lastCommittedWall: new Date(1567578603422), lastOpVisible: { ts: Timestamp(1567578603, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578603, 4), $clusterTime: { clusterTime: Timestamp(1567578603, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578603, 4) } 2019-09-04T06:30:04.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:04.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:30:14.167+0000 2019-09-04T06:30:04.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:30:15.763+0000 2019-09-04T06:30:04.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:04.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:06.839Z 2019-09-04T06:30:04.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:04.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:04.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:04.842+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.842+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.894+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:04.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:04.994+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:04.996+0000 D2 
COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:04.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:05.036+0000 D2 ASIO [RS] Request 557 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578605, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578605002), o: { $v: 1, $set: { ping: new Date(1567578604998), up: 2505 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 4), t: 1 }, lastCommittedWall: new Date(1567578603422), lastOpVisible: { ts: Timestamp(1567578603, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578603, 4), t: 1 }, lastCommittedWall: new Date(1567578603422), lastOpApplied: { ts: Timestamp(1567578605, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 4), $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578605, 1) } 2019-09-04T06:30:05.036+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578605, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578605002), o: { $v: 1, $set: { ping: new Date(1567578604998), up: 2505 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 4), t: 1 }, lastCommittedWall: new Date(1567578603422), lastOpVisible: { ts: Timestamp(1567578603, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578603, 4), t: 1 }, lastCommittedWall: new Date(1567578603422), lastOpApplied: { ts: Timestamp(1567578605, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578603, 4), $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578605, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:05.036+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:05.036+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578605, 1) and ending at ts: Timestamp(1567578605, 1) 2019-09-04T06:30:05.036+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:15.763+0000 2019-09-04T06:30:05.036+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:16.511+0000 2019-09-04T06:30:05.036+0000 D3 EXECUTOR [replexec-1] Executing a task on behalf 
of pool replexec 2019-09-04T06:30:05.036+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578605, 1), t: 1 } 2019-09-04T06:30:05.036+0000 D3 EXECUTOR [replexec-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:05.036+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:05.036+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:05.036+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578603, 4) 2019-09-04T06:30:05.036+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8091 2019-09-04T06:30:05.036+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:05.036+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:05.036+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8091 2019-09-04T06:30:05.036+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:05.036+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:05.036+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578603, 4) 2019-09-04T06:30:05.036+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8094 2019-09-04T06:30:05.036+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:05.036+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:30:05.036+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:05.036+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8094 2019-09-04T06:30:05.036+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578605, 1) } 2019-09-04T06:30:05.036+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8069 2019-09-04T06:30:05.036+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8069 2019-09-04T06:30:05.037+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8097 2019-09-04T06:30:05.037+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8097 2019-09-04T06:30:05.037+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:05.037+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 8099 2019-09-04T06:30:05.037+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578605, 1) 2019-09-04T06:30:05.037+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578605, 1) 
2019-09-04T06:30:05.037+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 8099 2019-09-04T06:30:05.037+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:05.037+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:30:05.037+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8098 2019-09-04T06:30:05.037+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8098 2019-09-04T06:30:05.037+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8101 2019-09-04T06:30:05.037+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8101 2019-09-04T06:30:05.037+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578605, 1), t: 1 }({ ts: Timestamp(1567578605, 1), t: 1 }) 2019-09-04T06:30:05.037+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578605, 1) 2019-09-04T06:30:05.037+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8102 2019-09-04T06:30:05.037+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578605, 1) } } ] } sort: {} projection: {} 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578605, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578605, 1) || First: notFirst: full path: ts 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578605, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578605, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578605, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:05.037+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578605, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:05.037+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8102 2019-09-04T06:30:05.037+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:05.037+0000 D3 STORAGE [repl-writer-worker-2] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:05.037+0000 D3 REPL [repl-writer-worker-2] applying op: { ts: Timestamp(1567578605, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578605002), o: { $v: 1, $set: { ping: new Date(1567578604998), up: 2505 } } }, oplog application mode: Secondary 2019-09-04T06:30:05.037+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578605, 1) 2019-09-04T06:30:05.037+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 8104 2019-09-04T06:30:05.037+0000 D2 QUERY [repl-writer-worker-2] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:30:05.037+0000 D4 WRITE [repl-writer-worker-2] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:30:05.037+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 8104 2019-09-04T06:30:05.037+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:05.037+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578605, 1), t: 1 }({ ts: Timestamp(1567578605, 1), t: 1 }) 2019-09-04T06:30:05.038+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578605, 1) 2019-09-04T06:30:05.038+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8103 2019-09-04T06:30:05.038+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:05.038+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:05.038+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:30:05.038+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:05.038+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:05.038+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:05.038+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8103 2019-09-04T06:30:05.038+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578605, 1) 2019-09-04T06:30:05.038+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:05.038+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8107 2019-09-04T06:30:05.038+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8107 2019-09-04T06:30:05.038+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578605, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59ed02d1a496712d71e4'), operName: "", parentOperId: "5d6f59ec02d1a496712d71e1" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578605, 1), t: 1 } }, $db: "config" } 2019-09-04T06:30:05.038+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578605, 1), t: 1 }({ ts: Timestamp(1567578605, 1), t: 1 }) 2019-09-04T06:30:05.038+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f59ec02d1a496712d71e1|5d6f59ed02d1a496712d71e4 2019-09-04T06:30:05.038+0000 D1 REPL [conn21] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1567578605, 1), t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578603, 4), t: 1 } 2019-09-04T06:30:05.038+0000 D3 REPL [conn21] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:35.048+0000 2019-09-04T06:30:05.038+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), appliedOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, appliedWallTime: new Date(1567578603422), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), appliedOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, appliedWallTime: new Date(1567578603422), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 4), t: 1 }, 
lastCommittedWall: new Date(1567578603422), lastOpVisible: { ts: Timestamp(1567578603, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:05.038+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 560 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:35.038+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), appliedOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, appliedWallTime: new Date(1567578603422), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), appliedOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, appliedWallTime: new Date(1567578603422), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578603, 4), t: 1 }, lastCommittedWall: new Date(1567578603422), lastOpVisible: { ts: Timestamp(1567578603, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:05.038+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:35.038+0000 2019-09-04T06:30:05.038+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.038+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.038+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578605, 1), t: 1 } 2019-09-04T06:30:05.038+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 561 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:15.038+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578603, 4), t: 1 } } 2019-09-04T06:30:05.038+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:35.038+0000 2019-09-04T06:30:05.038+0000 D2 ASIO [RS] Request 560 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578605, 1), $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578605, 1) } 2019-09-04T06:30:05.038+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, 
syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578605, 1), $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578605, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:05.038+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:05.038+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:35.038+0000 2019-09-04T06:30:05.038+0000 D2 ASIO [RS] Request 561 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpApplied: { ts: Timestamp(1567578605, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578605, 1), $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578605, 1) } 2019-09-04T06:30:05.038+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpApplied: { ts: Timestamp(1567578605, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578605, 1), $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578605, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:05.039+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:05.039+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:30:05.039+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578600, 1) 2019-09-04T06:30:05.039+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:16.511+0000 2019-09-04T06:30:05.039+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 
2019-09-04T06:30:15.623+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000 2019-09-04T06:30:05.039+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 562 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:15.039+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578605, 1), t: 1 } } 2019-09-04T06:30:05.039+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn21] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578605, 1), t: 1 } } } 2019-09-04T06:30:05.039+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:30:05.039+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578605, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59ed02d1a496712d71e4'), operName: "", parentOperId: "5d6f59ec02d1a496712d71e1" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578605, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578605, 1) 2019-09-04T06:30:05.039+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:35.038+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000 2019-09-04T06:30:05.039+0000 D2 QUERY [conn21] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:30:05.039+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:05.039+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:06.838+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:30:10.530+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578605, 1), t: 1 }, 2019-09-04T06:30:05.002+0000 2019-09-04T06:30:05.039+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000 2019-09-04T06:30:05.039+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578605, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59ed02d1a496712d71e4'), operName: "", parentOperId: "5d6f59ec02d1a496712d71e1" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578605, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 1ms 2019-09-04T06:30:05.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:05.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:05.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:05.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), 
signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:05.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), opTime: { ts: Timestamp(1567578605, 1), t: 1 }, wallTime: new Date(1567578605002) } 2019-09-04T06:30:05.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.065+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.068+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:30:05.068+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:05.068+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), appliedOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, appliedWallTime: new Date(1567578603422), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), appliedOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, appliedWallTime: new Date(1567578603422), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:05.068+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 563 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:35.068+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), appliedOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, appliedWallTime: new Date(1567578603422), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, durableWallTime: new Date(1567578603422), appliedOpTime: { ts: Timestamp(1567578603, 4), t: 1 }, appliedWallTime: new Date(1567578603422), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, 
lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:05.068+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:35.038+0000 2019-09-04T06:30:05.069+0000 D2 ASIO [RS] Request 563 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578605, 1), $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578605, 1) } 2019-09-04T06:30:05.069+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578605, 1), $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578605, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:05.069+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:05.069+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:35.038+0000 2019-09-04T06:30:05.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.094+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:05.131+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.131+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.136+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578605, 1) 2019-09-04T06:30:05.142+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.142+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 
reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.194+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:05.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:05.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:05.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.294+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:05.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.342+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.342+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.394+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:05.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.494+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:05.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.538+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.538+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.578+0000 D2 COMMAND [conn45] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.594+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:05.630+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.631+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.642+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.642+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.694+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:05.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.795+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:05.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.842+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.842+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.895+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:05.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.989+0000 D2 COMMAND [conn14] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:05.995+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:05.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:05.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:06.037+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:06.037+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:06.037+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578605, 1) 2019-09-04T06:30:06.037+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8146 2019-09-04T06:30:06.037+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:06.037+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:06.037+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8146 2019-09-04T06:30:06.038+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8149 2019-09-04T06:30:06.038+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8149 2019-09-04T06:30:06.038+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578605, 1), t: 1 }({ ts: Timestamp(1567578605, 1), t: 1 }) 2019-09-04T06:30:06.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.095+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:06.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.195+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:06.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:06.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:30:06.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:06.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:06.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), opTime: { ts: Timestamp(1567578605, 1), t: 1 }, wallTime: new Date(1567578605002) } 2019-09-04T06:30:06.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:06.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:30:06.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.295+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:06.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.342+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.342+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.395+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:06.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.496+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:06.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.553+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.553+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.596+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:06.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:30:06.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.696+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:06.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.796+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:06.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.838+0000 D1 EXECUTOR [replexec-1] Reaping this thread; next thread reaped no earlier than 2019-09-04T06:30:36.838+0000 2019-09-04T06:30:06.838+0000 D1 EXECUTOR [replexec-1] shutting down thread in pool replexec 2019-09-04T06:30:06.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:06.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 564) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:06.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 564 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:16.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:06.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:36.838+0000 2019-09-04T06:30:06.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:36.838+0000 2019-09-04T06:30:06.838+0000 D2 ASIO [Replication] Request 564 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), opTime: { ts: Timestamp(1567578605, 1), t: 1 }, wallTime: new Date(1567578605002), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578605, 1), $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578605, 1) } 2019-09-04T06:30:06.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), opTime: { ts: Timestamp(1567578605, 1), t: 1 }, wallTime: new Date(1567578605002), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578605, 1), $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578605, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:06.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:06.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 564) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), opTime: { ts: Timestamp(1567578605, 1), t: 1 }, wallTime: new Date(1567578605002), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578605, 1), $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578605, 1) } 2019-09-04T06:30:06.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:06.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:08.838Z 2019-09-04T06:30:06.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:36.838+0000 2019-09-04T06:30:06.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:06.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 565) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:06.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 565 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:16.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:06.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:36.838+0000 2019-09-04T06:30:06.839+0000 D2 ASIO [Replication] Request 565 finished with response: { ok: 1.0, electionTime: new 
Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), opTime: { ts: Timestamp(1567578605, 1), t: 1 }, wallTime: new Date(1567578605002), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578605, 1), $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578605, 1) } 2019-09-04T06:30:06.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), opTime: { ts: Timestamp(1567578605, 1), t: 1 }, wallTime: new Date(1567578605002), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578605, 1), $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578605, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:06.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:06.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 565) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), opTime: { ts: Timestamp(1567578605, 1), t: 1 }, wallTime: new Date(1567578605002), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578605, 1), $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578605, 1) } 2019-09-04T06:30:06.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:06.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:30:15.623+0000 2019-09-04T06:30:06.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:30:17.152+0000 2019-09-04T06:30:06.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:06.839+0000 
D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:08.839Z 2019-09-04T06:30:06.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:06.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:36.839+0000 2019-09-04T06:30:06.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:36.839+0000 2019-09-04T06:30:06.842+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.842+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.896+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:06.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:06.996+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:06.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:06.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:07.037+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:07.037+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:07.037+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578605, 1) 2019-09-04T06:30:07.037+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8181 2019-09-04T06:30:07.037+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:07.037+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:07.037+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8181 2019-09-04T06:30:07.038+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8184 2019-09-04T06:30:07.038+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8184 2019-09-04T06:30:07.038+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578605, 1), t: 1 }({ ts: Timestamp(1567578605, 1), t: 1 }) 2019-09-04T06:30:07.052+0000 D2 COMMAND 
[conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:07.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:07.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:07.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:07.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), opTime: { ts: Timestamp(1567578605, 1), t: 1 }, wallTime: new Date(1567578605002) } 2019-09-04T06:30:07.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.096+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:07.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.197+0000 D4 STORAGE [WTJournalFlusher] 
flushed journal 2019-09-04T06:30:07.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:07.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:07.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.297+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:07.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.342+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.342+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.397+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:07.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.489+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.497+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:07.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.578+0000 I COMMAND [conn45] 
command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.597+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:07.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.697+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:07.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.798+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:07.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.842+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.842+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.898+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:07.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:07.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:07.998+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:08.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:08.037+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:08.037+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:08.037+0000 D3 STORAGE 
2019-09-04T06:30:08.037+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8217
2019-09-04T06:30:08.037+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:08.037+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:08.037+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8217
2019-09-04T06:30:08.038+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8220
2019-09-04T06:30:08.038+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8220
2019-09-04T06:30:08.038+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578605, 1), t: 1 }({ ts: Timestamp(1567578605, 1), t: 1 })
2019-09-04T06:30:08.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:08.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:08.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:08.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:08.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:08.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:08.098+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:08.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:08.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:08.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:08.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:08.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:08.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:08.198+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:08.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:08.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:08.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:08.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:30:08.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:08.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:08.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), opTime: { ts: Timestamp(1567578605, 1), t: 1 }, wallTime: new Date(1567578605002) }
2019-09-04T06:30:08.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 78D6D4CBE17DFB1B284AA56C6F914310CD914A15), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:08.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:08.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
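
Note: conn28 is the inbound heartbeat channel from cmodb804. replSetHeartbeat runs between members every two seconds by default; the request carries the sender's configVersion, term, and signed $clusterTime, and the response (reslen:717) advertises this node's state 2 (SECONDARY), its sync source (syncingTo), and its durable/applied optimes. The same heartbeat-maintained view can be read externally; a sketch assuming pymongo:

    # Sketch: read the member states that heartbeats keep current (assumes pymongo).
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # stateStr is "PRIMARY"/"SECONDARY"; optimeDate mirrors opTime above.
        print(member["name"], member["stateStr"], member["optimeDate"])
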
2019-09-04T06:30:08.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:08.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:08.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:08.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:08.296+0000 D2 ASIO [RS] Request 562 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578608, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578608283), o: { $v: 1, $set: { ping: new Date(1567578608283) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpApplied: { ts: Timestamp(1567578608, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578605, 1), $clusterTime: { clusterTime: Timestamp(1567578608, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 1) }
2019-09-04T06:30:08.296+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578608, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578608283), o: { $v: 1, $set: { ping: new Date(1567578608283) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpApplied: { ts: Timestamp(1567578608, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578605, 1), $clusterTime: { clusterTime: Timestamp(1567578608, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:08.296+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:08.296+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578608, 1) and ending at ts: Timestamp(1567578608, 1)
2019-09-04T06:30:08.296+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:17.152+0000
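
Note: Request 562 is the oplog fetcher's awaited batch completing against the sync source cmodb804: the batch carries one op: "u" entry updating config.lockpings, plus the $replData/$oplogQueryData metadata the fetcher uses to track the primary's commit point. An equivalent tailable read, as a sketch assuming pymongo (internally the fetcher speaks the same find/getMore protocol):

    # Sketch: tail the sync source's oplog as the fetcher does (assumes pymongo).
    from pymongo import MongoClient, CursorType
    from bson.timestamp import Timestamp

    source = MongoClient("cmodb804.togewa.com", 27019)
    last_fetched = Timestamp(1567578605, 1)   # last fetched optime, per this log
    cursor = source.local["oplog.rs"].find(
        {"ts": {"$gt": last_fetched}},
        cursor_type=CursorType.TAILABLE_AWAIT,
    ).max_await_time_ms(5000)                 # matches maxTimeMS: 5000 on the getMores
    for entry in cursor:
        print(entry["ts"], entry["op"], entry["ns"])
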
2019-09-04T06:30:08.296+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:19.029+0000
2019-09-04T06:30:08.296+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:08.296+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:36.839+0000
2019-09-04T06:30:08.296+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:08.297+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:08.297+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578605, 1)
2019-09-04T06:30:08.297+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8234
2019-09-04T06:30:08.297+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:08.297+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:08.297+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8234
2019-09-04T06:30:08.297+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:08.297+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:30:08.297+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:08.297+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578605, 1)
2019-09-04T06:30:08.297+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8237
2019-09-04T06:30:08.297+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578608, 1) }
2019-09-04T06:30:08.297+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:08.297+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:08.297+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8237
2019-09-04T06:30:08.296+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578608, 1), t: 1 }
2019-09-04T06:30:08.297+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8221
2019-09-04T06:30:08.297+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8221
2019-09-04T06:30:08.297+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8240
2019-09-04T06:30:08.297+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8240
2019-09-04T06:30:08.297+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:08.297+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 8242
2019-09-04T06:30:08.297+0000 D4 STORAGE [repl-writer-worker-0] inserting record with timestamp Timestamp(1567578608, 1)
2019-09-04T06:30:08.297+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578608, 1)
2019-09-04T06:30:08.297+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 8242
2019-09-04T06:30:08.297+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:08.297+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:30:08.297+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8241
2019-09-04T06:30:08.297+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8241
2019-09-04T06:30:08.297+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8244
2019-09-04T06:30:08.297+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8244
2019-09-04T06:30:08.297+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578608, 1), t: 1 }({ ts: Timestamp(1567578608, 1), t: 1 })
2019-09-04T06:30:08.297+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578608, 1)
2019-09-04T06:30:08.297+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8245
2019-09-04T06:30:08.297+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578608, 1) } } ] } sort: {} projection: {}
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and t $eq 1 ts $lt Timestamp(1567578608, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578608, 1) || First: notFirst: full path: ts
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and t $eq 1 ts $lt Timestamp(1567578608, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or $and t $eq 1 ts $lt Timestamp(1567578608, 1) t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578608, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
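
Note: the D5 QUERY walk above is the subplanner handling the minvalid check as a "rooted $or": each $or branch is planned on its own, only the mandatory _id index exists, so every branch (and the merged query) falls back to COLLSCAN. That is harmless here, since local.replset.minvalid holds a single document. The same decision can be replayed with explain; a sketch assuming pymongo:

    # Sketch: replay the subplanner's decision with explain (assumes pymongo).
    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("cmodb803.togewa.com", 27019)
    cursor = client.local["replset.minvalid"].find(
        {"$or": [{"t": {"$lt": 1}},
                 {"t": 1, "ts": {"$lt": Timestamp(1567578608, 1)}}]})
    plan = cursor.explain()
    print(plan["queryPlanner"]["winningPlan"])   # collection scan, no indexed branch
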
2019-09-04T06:30:08.297+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or $and t $eq 1 ts $lt Timestamp(1567578608, 1) t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:08.297+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8245
2019-09-04T06:30:08.297+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:08.297+0000 D3 STORAGE [repl-writer-worker-15] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:08.297+0000 D3 REPL [repl-writer-worker-15] applying op: { ts: Timestamp(1567578608, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578608283), o: { $v: 1, $set: { ping: new Date(1567578608283) } } }, oplog application mode: Secondary
2019-09-04T06:30:08.297+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578608, 1)
2019-09-04T06:30:08.297+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 8247
2019-09-04T06:30:08.298+0000 D2 QUERY [repl-writer-worker-15] Using idhack: { _id: "ConfigServer" }
2019-09-04T06:30:08.298+0000 D4 WRITE [repl-writer-worker-15] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:30:08.298+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 8247
2019-09-04T06:30:08.298+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:08.298+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578608, 1), t: 1 }({ ts: Timestamp(1567578608, 1), t: 1 })
2019-09-04T06:30:08.298+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578608, 1)
2019-09-04T06:30:08.298+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8246
2019-09-04T06:30:08.298+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:08.298+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:08.298+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:30:08.298+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:08.298+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:08.298+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
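
Note: the batch is applied here: repl-writer-worker-15 replays the op: "u" entry in Secondary mode, matching by the exact _id in o2 (hence "Using idhack", the plan-free _id lookup) and applying the $set modifier; numMatched/numDocsModified: 1 confirm the one-document update. The primary-side write that produced this oplog entry would have looked roughly like this sketch (assumes pymongo; _id and field names taken from the entry itself):

    # Sketch: the kind of primary-side write that yields this oplog entry.
    import datetime
    from pymongo import MongoClient

    primary = MongoClient("cmodb804.togewa.com", 27019)
    primary.config.lockpings.update_one(
        {"_id": "ConfigServer"},                        # exact _id -> idhack path
        {"$set": {"ping": datetime.datetime.utcnow()}}  # same $set shape as o above
    )
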
2019-09-04T06:30:08.298+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8246
2019-09-04T06:30:08.298+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578608, 1)
2019-09-04T06:30:08.298+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:08.298+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8250
2019-09-04T06:30:08.298+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578608, 1), t: 1 }, appliedWallTime: new Date(1567578608283), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:08.298+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 566 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:38.298+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578608, 1), t: 1 }, appliedWallTime: new Date(1567578608283), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:08.298+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.298+0000
2019-09-04T06:30:08.298+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8250
2019-09-04T06:30:08.298+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578608, 1), t: 1 }({ ts: Timestamp(1567578608, 1), t: 1 })
2019-09-04T06:30:08.298+0000 D2 ASIO [RS] Request 566 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578605, 1), $clusterTime: { clusterTime: Timestamp(1567578608, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 1) }
2019-09-04T06:30:08.298+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578605, 1), t: 1 }, lastCommittedWall: new Date(1567578605002), lastOpVisible: { ts: Timestamp(1567578605, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578605, 1), $clusterTime: { clusterTime: Timestamp(1567578608, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:08.298+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:08.298+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.298+0000
2019-09-04T06:30:08.299+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578608, 1), t: 1 }
2019-09-04T06:30:08.299+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 567 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:18.299+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578605, 1), t: 1 } }
2019-09-04T06:30:08.299+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.298+0000
2019-09-04T06:30:08.305+0000 D2 ASIO [RS] Request 567 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 1), t: 1 }, lastCommittedWall: new Date(1567578608283), lastOpVisible: { ts: Timestamp(1567578608, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578608, 1), t: 1 }, lastCommittedWall: new Date(1567578608283), lastOpApplied: { ts: Timestamp(1567578608, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 1), $clusterTime: { clusterTime: Timestamp(1567578608, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 1) }
2019-09-04T06:30:08.305+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 1), t: 1 }, lastCommittedWall: new Date(1567578608283), lastOpVisible: { ts: Timestamp(1567578608, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578608, 1), t: 1 }, lastCommittedWall: new Date(1567578608283), lastOpApplied: { ts: Timestamp(1567578608, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 1), $clusterTime: { clusterTime: Timestamp(1567578608, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:08.305+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:08.305+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:08.305+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.305+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.305+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578603, 1)
2019-09-04T06:30:08.305+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.305+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:30:08.305+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.305+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000
2019-09-04T06:30:08.305+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.305+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000
2019-09-04T06:30:08.305+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000
2019-09-04T06:30:08.306+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:19.029+0000
2019-09-04T06:30:08.306+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:18.834+0000
2019-09-04T06:30:08.306+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:08.306+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:36.839+0000
2019-09-04T06:30:08.306+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 568 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:18.306+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578608, 1), t: 1 } }
2019-09-04T06:30:08.306+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000
2019-09-04T06:30:08.306+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.298+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
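
Note: the flood of connNNN "Got notified of new snapshot" / waitUntilOpTime pairs are readers (mongos and shard nodes polling config metadata) parked in waitUntilOpTime until this secondary's committed snapshot reaches the cluster time their read demanded; each newly committed optime wakes them for a recheck, and each carries its own deadline. Causally consistent sessions are one way a client ends up on that path; a sketch assuming pymongo:

    # Sketch: a causally consistent read that can block in waitUntilOpTime
    # on a lagging member (assumes pymongo).
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")
    with client.start_session(causal_consistency=True) as session:
        coll = client.config.lockpings
        coll.find_one({"_id": "ConfigServer"}, session=session)
        # The session now carries an operationTime; a later read routed to a
        # lagging member waits, like the connNNN threads above, until that
        # cluster time is visible in a committed snapshot.
        coll.find_one({"_id": "ConfigServer"}, session=session)
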
2019-09-04T06:30:08.306+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578608, 1), t: 1 }, 2019-09-04T06:30:08.283+0000
2019-09-04T06:30:08.306+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000
2019-09-04T06:30:08.308+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:30:08.308+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:08.308+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:08.308+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578608, 1), t: 1 }, durableWallTime: new Date(1567578608283), appliedOpTime: { ts: Timestamp(1567578608, 1), t: 1 }, appliedWallTime: new Date(1567578608283), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 1), t: 1 }, lastCommittedWall: new Date(1567578608283), lastOpVisible: { ts: Timestamp(1567578608, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:08.308+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 569 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:38.308+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578608, 1), t: 1 }, durableWallTime: new Date(1567578608283), appliedOpTime: { ts: Timestamp(1567578608, 1), t: 1 }, appliedWallTime: new Date(1567578608283), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 1), t: 1 }, lastCommittedWall: new Date(1567578608283), lastOpVisible: { ts: Timestamp(1567578608, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:08.308+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.298+0000
2019-09-04T06:30:08.308+0000 D2 ASIO [RS] Request 569 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 1), t: 1 }, lastCommittedWall: new Date(1567578608283), lastOpVisible: { ts: Timestamp(1567578608, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 1), $clusterTime: { clusterTime: Timestamp(1567578608, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 1) }
2019-09-04T06:30:08.308+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 1), t: 1 }, lastCommittedWall: new Date(1567578608283), lastOpVisible: { ts: Timestamp(1567578608, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 1), $clusterTime: { clusterTime: Timestamp(1567578608, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:08.308+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:08.308+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.298+0000
2019-09-04T06:30:08.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:08.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:08.342+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:08.342+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:08.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:08.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:08.397+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578608, 1)
2019-09-04T06:30:08.408+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:08.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:08.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
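
Note: RemoteCommand 569 is the reporter pushing replSetUpdatePosition upstream after the apply and the journal flush: member 1's durable and applied optimes have both advanced to Timestamp(1567578608, 1), while members 0 and 2 are still at (1567578605, 1). This per-member optime table is what lets the primary satisfy w:"majority" writes, and it is the same data replSetGetStatus exposes; a lag check as a sketch, assuming pymongo:

    # Sketch: derive per-member lag from the optimes that replSetUpdatePosition
    # propagates (assumes pymongo).
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    status = client.admin.command("replSetGetStatus")
    newest = max(m["optimeDate"] for m in status["members"])
    for m in status["members"]:
        lag = (newest - m["optimeDate"]).total_seconds()
        print(m["name"], f"{lag:.0f}s behind")
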
2019-09-04T06:30:08.484+0000 D2 ASIO [RS] Request 568 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578608, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578608430), o: { $v: 1, $set: { ping: new Date(1567578608429) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 1), t: 1 }, lastCommittedWall: new Date(1567578608283), lastOpVisible: { ts: Timestamp(1567578608, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578608, 1), t: 1 }, lastCommittedWall: new Date(1567578608283), lastOpApplied: { ts: Timestamp(1567578608, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 1), $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) }
2019-09-04T06:30:08.484+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578608, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578608430), o: { $v: 1, $set: { ping: new Date(1567578608429) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 1), t: 1 }, lastCommittedWall: new Date(1567578608283), lastOpVisible: { ts: Timestamp(1567578608, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578608, 1), t: 1 }, lastCommittedWall: new Date(1567578608283), lastOpApplied: { ts: Timestamp(1567578608, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 1), $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:08.484+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:08.484+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578608, 2) and ending at ts: Timestamp(1567578608, 2)
2019-09-04T06:30:08.484+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:18.834+0000
2019-09-04T06:30:08.484+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:19.145+0000
2019-09-04T06:30:08.484+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578608, 2), t: 1 }
2019-09-04T06:30:08.484+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:08.484+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:36.839+0000
2019-09-04T06:30:08.484+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:08.484+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:08.484+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578608, 1)
2019-09-04T06:30:08.484+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8258
2019-09-04T06:30:08.484+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:08.484+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:08.484+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8258
2019-09-04T06:30:08.484+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:08.484+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:30:08.484+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:08.484+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578608, 2) }
2019-09-04T06:30:08.484+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578608, 1)
2019-09-04T06:30:08.484+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8261
2019-09-04T06:30:08.484+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:08.484+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:08.484+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8261
2019-09-04T06:30:08.484+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8252
2019-09-04T06:30:08.484+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8252
2019-09-04T06:30:08.484+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8264
2019-09-04T06:30:08.484+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8264
2019-09-04T06:30:08.484+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:08.484+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 8266
2019-09-04T06:30:08.484+0000 D4 STORAGE [repl-writer-worker-1] inserting record with timestamp Timestamp(1567578608, 2)
2019-09-04T06:30:08.484+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578608, 2)
2019-09-04T06:30:08.484+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 8266
2019-09-04T06:30:08.484+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:08.484+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:30:08.484+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8265
2019-09-04T06:30:08.484+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8265
2019-09-04T06:30:08.484+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8268
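
Note: the second batch repeats the crash-safety bracket around the oplog write: set the truncate-after point to the incoming ts (1567578608, 2), insert the raw oplog entry, and only after the apply completes reset the point to Timestamp(0, 0). If the node crashed mid-batch, startup recovery would truncate any oplog tail past the recorded point instead of replaying a half-applied batch. The point is persisted in the local database; a sketch assuming pymongo (collection name as used by mongod 4.2):

    # Sketch: inspect the persisted truncate-after point (assumes pymongo).
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    doc = client.local["replset.oplogTruncateAfterPoint"].find_one()
    print(doc)   # Timestamp(0, 0) between batches, the batch's ts while applying
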
D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8268 2019-09-04T06:30:08.484+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578608, 2), t: 1 }({ ts: Timestamp(1567578608, 2), t: 1 }) 2019-09-04T06:30:08.484+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578608, 2) 2019-09-04T06:30:08.484+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8269 2019-09-04T06:30:08.484+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578608, 2) } } ] } sort: {} projection: {} 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578608, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578608, 2) || First: notFirst: full path: ts 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578608, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578608, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578608, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578608, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:08.485+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8269 2019-09-04T06:30:08.485+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:08.485+0000 D3 STORAGE [repl-writer-worker-5] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:08.485+0000 D3 REPL [repl-writer-worker-5] applying op: { ts: Timestamp(1567578608, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578608430), o: { $v: 1, $set: { ping: new Date(1567578608429) } } }, oplog application mode: Secondary 2019-09-04T06:30:08.485+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578608, 2) 2019-09-04T06:30:08.485+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 8271 2019-09-04T06:30:08.485+0000 D2 QUERY [repl-writer-worker-5] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" } 2019-09-04T06:30:08.485+0000 D4 WRITE [repl-writer-worker-5] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:30:08.485+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 8271 2019-09-04T06:30:08.485+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:08.485+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578608, 2), t: 1 }({ ts: Timestamp(1567578608, 2), t: 1 }) 2019-09-04T06:30:08.485+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578608, 2) 2019-09-04T06:30:08.485+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8270 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:08.485+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:08.485+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:08.485+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8270 2019-09-04T06:30:08.485+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578608, 2) 2019-09-04T06:30:08.485+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8274 2019-09-04T06:30:08.485+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8274 2019-09-04T06:30:08.485+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578608, 2), t: 1 }({ ts: Timestamp(1567578608, 2), t: 1 }) 2019-09-04T06:30:08.485+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:08.485+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578608, 1), t: 1 }, durableWallTime: new Date(1567578608283), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 1), t: 1 }, lastCommittedWall: new Date(1567578608283), lastOpVisible: { ts: Timestamp(1567578608, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:08.485+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 570 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:38.485+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578608, 1), t: 1 }, durableWallTime: new Date(1567578608283), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 2, cfgver: 2 } ], 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 1), t: 1 }, lastCommittedWall: new Date(1567578608283), lastOpVisible: { ts: Timestamp(1567578608, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:08.485+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.485+0000 2019-09-04T06:30:08.486+0000 D2 ASIO [RS] Request 570 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 1), t: 1 }, lastCommittedWall: new Date(1567578608283), lastOpVisible: { ts: Timestamp(1567578608, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 1), $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:08.486+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 1), t: 1 }, lastCommittedWall: new Date(1567578608283), lastOpVisible: { ts: Timestamp(1567578608, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 1), $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:08.486+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:08.486+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.486+0000 2019-09-04T06:30:08.486+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578608, 2), t: 1 } 2019-09-04T06:30:08.486+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 571 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:18.486+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578608, 1), t: 1 } } 2019-09-04T06:30:08.486+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.486+0000 2019-09-04T06:30:08.489+0000 D2 ASIO [RS] Request 571 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpApplied: { ts: Timestamp(1567578608, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:08.489+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpApplied: { ts: Timestamp(1567578608, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:08.489+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:08.489+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:30:08.489+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578603, 2) 2019-09-04T06:30:08.489+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn200] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn200] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.475+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn181] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn181] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.478+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn147] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn147] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:13.409+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000 
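
Requests 571 and 572 above are the oplog fetcher's steady state: a getMore on a tailable cursor over the sync source's oplog that returns an empty nextBatch once maxTimeMS elapses with nothing new to ship. The same tailing pattern is available to ordinary clients. A minimal PyMongo sketch, assuming a reachable member and sufficient privileges (the connection string is an illustrative placeholder, not taken from this deployment):

    # Sketch: tail local.oplog.rs the way the oplog fetcher above does, with a
    # tailable, awaitable cursor; each iteration that blocks corresponds to one
    # getMore like requests 571/572 in the log.
    from pymongo import CursorType, MongoClient

    client = MongoClient("mongodb://cmodb804.togewa.com:27019/?replicaSet=configrs")
    oplog = client.local["oplog.rs"]

    # Resume from the newest entry, as the fetcher resumes from its last fetched optime.
    last_ts = oplog.find_one(sort=[("$natural", -1)])["ts"]
    cursor = oplog.find({"ts": {"$gt": last_ts}},
                        cursor_type=CursorType.TAILABLE_AWAIT)
    for entry in cursor:
        print(entry["ts"], entry["op"], entry.get("ns"))
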
2019-09-04T06:30:08.489+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn204] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn204] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.514+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn202] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn202] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.480+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:08.489+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:19.145+0000 2019-09-04T06:30:08.489+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:18.561+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:08.489+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:08.489+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:36.839+0000 2019-09-04T06:30:08.489+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 572 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:18.489+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578608, 2), t: 1 } } 2019-09-04T06:30:08.489+0000 D3 REPL [conn203] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn203] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.499+0000 2019-09-04T06:30:08.489+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.486+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn199] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn199] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn191] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 
2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn191] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.107+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn201] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn201] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.476+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000 2019-09-04T06:30:08.489+0000 D3 REPL [conn205] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.490+0000 D3 REPL [conn205] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:10.530+0000 2019-09-04T06:30:08.490+0000 D3 REPL [conn169] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.490+0000 D3 REPL [conn169] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:12.789+0000 2019-09-04T06:30:08.490+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578608, 2), t: 1 }, 2019-09-04T06:30:08.430+0000 2019-09-04T06:30:08.490+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000 2019-09-04T06:30:08.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:08.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:08.509+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:30:08.509+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:08.509+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:08.509+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:08.509+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: 
RemoteCommand 573 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:38.509+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, durableWallTime: new Date(1567578605002), appliedOpTime: { ts: Timestamp(1567578605, 1), t: 1 }, appliedWallTime: new Date(1567578605002), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:08.509+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.486+0000 2019-09-04T06:30:08.509+0000 D2 ASIO [RS] Request 573 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:08.509+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:08.509+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:08.509+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.486+0000 2019-09-04T06:30:08.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:08.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:08.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:08.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:08.578+0000 D2 COMMAND [conn45] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:30:08.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:08.584+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578608, 2) 2019-09-04T06:30:08.609+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:08.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:08.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:08.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:08.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:08.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:08.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:08.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:08.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:08.709+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:08.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:08.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:08.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:08.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:08.809+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:08.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:08.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:08.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:08.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 574) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:08.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 574 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:18.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:08.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:36.839+0000 2019-09-04T06:30:08.838+0000 D2 ASIO [Replication] Request 574 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: 
new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:08.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:08.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:08.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 574) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:08.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:08.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:10.838Z 2019-09-04T06:30:08.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:36.839+0000 2019-09-04T06:30:08.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:08.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 575) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:08.839+0000 D3 
EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 575 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:18.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:08.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:36.839+0000 2019-09-04T06:30:08.839+0000 D2 ASIO [Replication] Request 575 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:08.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:08.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:08.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 575) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, 
operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:08.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:08.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:30:18.561+0000 2019-09-04T06:30:08.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:30:19.738+0000 2019-09-04T06:30:08.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:08.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:10.839Z 2019-09-04T06:30:08.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:08.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.839+0000 2019-09-04T06:30:08.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.839+0000 2019-09-04T06:30:08.842+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:08.842+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:08.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:08.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:08.910+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:08.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:08.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:08.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:08.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:09.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:09.010+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:09.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:09.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:09.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:09.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:09.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:09.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from 
cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:09.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430) }
2019-09-04T06:30:09.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:09.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:09.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:09.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:09.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:09.110+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:09.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:09.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:09.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:09.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:09.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:09.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:09.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:09.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:09.210+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:09.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:09.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:09.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:09.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
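
The replSetHeartbeat traffic above (requests 574/575 outbound, conn34's reply inbound) is the set's internal liveness protocol, but the state it maintains is visible to clients through replSetGetStatus. A minimal PyMongo sketch, assuming valid credentials; host, user, and password below are placeholders:

    # Sketch: read the member states and optimes that the heartbeats above keep current.
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://dba_root:<password>@cmodb803.togewa.com:27019/?authSource=admin")
    for member in client.admin.command("replSetGetStatus")["members"]:
        # stateStr/optime correspond to the state/opTime fields in the logged
        # heartbeat responses; syncingTo mirrors syncSourceIndex.
        print(member["name"], member["stateStr"],
              member["optime"]["ts"], member.get("syncingTo", ""))
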
2019-09-04T06:30:09.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:09.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:09.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:09.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:09.310+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:09.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:09.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:09.342+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:09.342+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:09.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:09.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:09.410+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:09.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:09.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:09.484+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:09.484+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:09.484+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578608, 2)
2019-09-04T06:30:09.484+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8310
2019-09-04T06:30:09.484+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:09.484+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:09.484+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8310
2019-09-04T06:30:09.485+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8313
2019-09-04T06:30:09.485+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8313
2019-09-04T06:30:09.485+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578608, 2), t: 1 }({ ts: Timestamp(1567578608, 2), t: 1 })
2019-09-04T06:30:09.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:09.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:09.510+0000 D4 STORAGE
[WTJournalFlusher] flushed journal 2019-09-04T06:30:09.537+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578608, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578608, 2), t: 1 } }, $db: "config" } 2019-09-04T06:30:09.537+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578608, 2), t: 1 } } } 2019-09-04T06:30:09.537+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:30:09.537+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578608, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578608, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578608, 2) 2019-09-04T06:30:09.537+0000 D2 QUERY [conn61] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:30:09.537+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578608, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578608, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:30:09.538+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578608, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578608, 2), t: 1 } }, $db: "config" } 2019-09-04T06:30:09.538+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578608, 2), t: 1 } } } 2019-09-04T06:30:09.538+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:30:09.538+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1567578608, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578608, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578608, 2) 2019-09-04T06:30:09.538+0000 D2 QUERY [conn61] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:30:09.538+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578608, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578608, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:30:09.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:09.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:09.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:09.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:09.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:09.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:09.610+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:09.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:09.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:09.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:09.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:09.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:09.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:09.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:09.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:09.711+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:09.713+0000 D2 COMMAND [conn60] run command 
admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:09.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:09.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:09.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:09.811+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:09.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:09.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:09.842+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:09.842+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:09.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:09.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:09.911+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:09.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:09.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:09.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:09.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:10.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:10.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:30:10.003+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:30:10.003+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:30:10.018+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:30:10.018+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:30:10.018+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:10.033+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:30:10.033+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:30:10.033+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:30:10.033+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", 
payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:30:10.035+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:10.036+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:10.036+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:10.036+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:30:10.037+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:30:10.037+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:30:10.037+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:10.037+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:30:10.037+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:30:10.037+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:30:10.037+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:30:10.037+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:30:10.037+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:30:10.037+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:30:10.037+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:10.037+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:10.037+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
2019-09-04T06:30:10.037+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578608, 2)
2019-09-04T06:30:10.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8338
2019-09-04T06:30:10.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8338
2019-09-04T06:30:10.037+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:10.037+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:30:10.037+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:30:10.037+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:30:10.037+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:10.037+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:30:10.037+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:30:10.037+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
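
The $natural-sorted, limit-1 find above (and the descending twin that follows) is how monitoring tools read the first and last oplog entries to compute the replication window. A PyMongo sketch of the same two probes, with a placeholder connection string:

    # Sketch: first and last oplog entries, mirroring the two $natural-sorted
    # finds on local.oplog.rs in this log; their timestamp difference is the
    # oplog window.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?authSource=admin")
    oplog = client.local["oplog.rs"]

    first = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
    last = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])
    # bson.Timestamp exposes .time (epoch seconds) and .inc
    print("oplog window (s):", last["ts"].time - first["ts"].time)
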
2019-09-04T06:30:10.037+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578608, 2)
2019-09-04T06:30:10.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8341
2019-09-04T06:30:10.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8341
2019-09-04T06:30:10.037+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:10.038+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:10.038+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:30:10.038+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:30:10.038+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578608, 2)
2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8343
2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8343
2019-09-04T06:30:10.038+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:10.038+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:30:10.038+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist.
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:30:10.038+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:30:10.038+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:10.038+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8346 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8346 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8347 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8347 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8348 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8348 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8349 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8349 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8350 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:10.038+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8350 2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8351 2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
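
The pair of COLLSCAN finds on local.oplog.rs above (sort { $natural: 1 } then { $natural: -1 }, limit 1) fetches the oldest and newest oplog entries, which is how a client measures the replication window; the follow-up probe of local.oplog.$main is a compatibility check for the old master/slave oplog and expectedly hits an EOF plan. A minimal pymongo sketch of the same probe, assuming a reachable member at a hypothetical mongodb://localhost:27019 address:

    from pymongo import MongoClient

    # Hypothetical address; substitute the real config server member.
    client = MongoClient("mongodb://localhost:27019",
                         readPreference="secondaryPreferred")
    oplog = client["local"]["oplog.rs"]

    # Forced natural-order scans, mirroring the COLLSCAN plans in the log.
    first = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
    last = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])
    print("oplog window:", first["ts"], "->", last["ts"])
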
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8351
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8352
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8352
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8353
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8353
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8354
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8354
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8355
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8355
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8356
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8356
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8357
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8357
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8358
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8358
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8359
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8359
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8360
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8360
2019-09-04T06:30:10.039+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:10.039+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:10.039+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8362
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8362
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8363
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8363
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8364
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8364
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8365
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
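
Each "looking up metadata for ... fetched CCE metadata ... returning metadata" group above is one collection catalog entry being read from the durable catalog (_mdb_catalog), with the idxIdent map tying every index to its WiredTiger ident (the *.wt files on disk). A hedged client-side way to see a similar inventory without D3 storage logging, using only standard commands and a hypothetical connection address:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")  # hypothetical address
    for db_name in ("admin", "config", "local"):
        db = client[db_name]
        for info in db.list_collections():
            # "options" carries capped/size/uuid; index_information lists specs.
            indexes = list(db[info["name"]].index_information())
            print(db_name, info["name"], info.get("options", {}), indexes)
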
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8365
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8366
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8366
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8367
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8367
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8368
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8368
2019-09-04T06:30:10.040+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:30:10.040+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8370
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8370
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8371
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8371
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8372
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8372
2019-09-04T06:30:10.040+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:10.040+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:30:10.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8374
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8374
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8375
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8375
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8376
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8376
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8377
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8377
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8378
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8378
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8379
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8379
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8380
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8380
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8381
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8381
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8382
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8382
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8383
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8383
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8384
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8384
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8385
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8385
2019-09-04T06:30:10.041+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:10.041+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8387
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8387
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8388
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8388
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8389
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8389
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8390
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8390
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8391
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8391
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8392
2019-09-04T06:30:10.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8392
2019-09-04T06:30:10.041+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:10.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:10.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
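
The burst of isMaster commands from many connections that follows (conn70, conn52, conn45, ...) is routine topology monitoring: each client connection re-checks member state every few seconds, and the reslen:907 replies carry the full replica-set view. An equivalent probe in pymongo (isMaster is the pre-4.4 name of what later became hello; hypothetical address again):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")  # hypothetical address
    reply = client.admin.command("isMaster")
    print(reply.get("setName"), "ismaster:", reply["ismaster"],
          "secondary:", reply.get("secondary"))
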
2019-09-04T06:30:10.055+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:10.055+0000 I COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:10.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:10.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:10.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:10.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:10.118+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:10.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:10.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:10.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:10.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:10.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:10.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:10.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:10.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:10.218+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:10.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:10.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:30:10.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:10.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:10.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430) }
2019-09-04T06:30:10.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:10.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:10.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:10.284+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:10.284+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:10.318+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:10.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:10.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:10.342+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:10.342+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:10.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:10.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:10.418+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:10.465+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578587, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578606, 1), signature: { hash: BinData(0, 37E12309D5AF39CE274F9E039F3BE5D844071452), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578587, 1), t: 1 } }, $db: "config" }
2019-09-04T06:30:10.465+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578587, 1), t: 1 } } }
2019-09-04T06:30:10.465+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:30:10.465+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578587, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578606, 1), signature: { hash: BinData(0, 37E12309D5AF39CE274F9E039F3BE5D844071452), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578587, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578608, 2)
2019-09-04T06:30:10.465+0000 D2 QUERY [conn71] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1
2019-09-04T06:30:10.465+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578587, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578606, 1), signature: { hash: BinData(0, 37E12309D5AF39CE274F9E039F3BE5D844071452), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578587, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:30:10.465+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578608, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578608, 2), t: 1 } }, $db: "config" }
2019-09-04T06:30:10.465+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578608, 2), t: 1 } } }
2019-09-04T06:30:10.465+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:30:10.466+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578608, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578608, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578608, 2)
2019-09-04T06:30:10.466+0000 D2 QUERY [conn71] Collection config.settings does not exist. Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1
2019-09-04T06:30:10.466+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578608, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578608, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:30:10.466+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36638 #225 (86 connections now open)
2019-09-04T06:30:10.466+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:10.466+0000 D2 COMMAND [conn225] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:10.466+0000 I NETWORK [conn225] received client metadata from 10.108.2.55:36638 conn225: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:10.466+0000 I COMMAND [conn225] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:10.471+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35688 #226 (87 connections now open)
2019-09-04T06:30:10.471+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:10.471+0000 D2 COMMAND [conn226] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:10.471+0000 I NETWORK [conn226] received client metadata from 10.108.2.56:35688 conn226: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:10.471+0000 I COMMAND [conn226] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:10.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:10.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:10.476+0000 I COMMAND [conn200] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, 86E87D4DBA38E94854814CCDC22E7B802B4418C3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:30:10.477+0000 D1 - [conn200] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:10.477+0000 W - [conn200] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:10.477+0000 I COMMAND [conn199] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578575, 1), signature: { hash: BinData(0, A1616ED3E044436BEC7C98EB05F697E1909A54E7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:30:10.478+0000 D1 - [conn199] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:10.478+0000 W - [conn199] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:10.478+0000 I COMMAND [conn201] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578571, 1), signature: { hash: BinData(0, 472AE666FE64617C815B4B64B8CFDCED57BE7FFA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:30:10.479+0000 D1 - [conn201] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:10.479+0000 W - [conn201] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:10.479+0000 I COMMAND [conn181] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578571, 1), signature: { hash: BinData(0, 472AE666FE64617C815B4B64B8CFDCED57BE7FFA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:30:10.479+0000 D1 - [conn181] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:10.479+0000 W - [conn181] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:10.480+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34234 #227 (88 connections now open)
2019-09-04T06:30:10.480+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:10.480+0000 D2 COMMAND [conn227] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:10.480+0000 I NETWORK [conn227] received client metadata from 10.108.2.57:34234 conn227: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:10.481+0000 I COMMAND [conn227] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:10.481+0000 I COMMAND [conn202] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578572, 1), signature: { hash: BinData(0, BEAA603D37F22B007C54F93405BE7B4D709F86E2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:30:10.481+0000 D1 - [conn202] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:10.481+0000 W - [conn202] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:10.484+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:10.484+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:10.484+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578608, 2)
2019-09-04T06:30:10.484+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8414
2019-09-04T06:30:10.484+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:10.484+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:10.484+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8414
2019-09-04T06:30:10.485+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8417
2019-09-04T06:30:10.485+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8417
2019-09-04T06:30:10.486+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578608, 2), t: 1 }({ ts: Timestamp(1567578608, 2), t: 1 })
2019-09-04T06:30:10.495+0000 I - [conn199] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:10.495+0000 D1 COMMAND [conn199] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578575, 1), signature: { hash: BinData(0, A1616ED3E044436BEC7C98EB05F697E1909A54E7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.495+0000 D1 - [conn199] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:10.495+0000 W - [conn199] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.500+0000 I COMMAND [conn203] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, C1354BF9089533EC9529632B626A8C0C97EF30BD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:10.501+0000 D1 - [conn203] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:10.501+0000 W - [conn203] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.503+0000 I NETWORK [listener] connection accepted from 10.108.2.60:44842 #228 (89 connections now open) 2019-09-04T06:30:10.503+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:10.503+0000 D2 COMMAND [conn228] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:10.503+0000 I NETWORK [conn228] received client metadata from 10.108.2.60:44842 conn228: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:10.503+0000 I COMMAND [conn228] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:10.512+0000 I - [conn181] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:10.512+0000 D1 COMMAND [conn181] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578571, 1), signature: { hash: BinData(0, 472AE666FE64617C815B4B64B8CFDCED57BE7FFA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.512+0000 D1 - [conn181] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:10.512+0000 W - [conn181] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.515+0000 I COMMAND [conn204] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 63C6058835D1DE93F5F1A44E095F1DBE683122D6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:10.515+0000 D1 - [conn204] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:10.516+0000 W - [conn204] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.518+0000 I NETWORK [listener] connection accepted from 10.108.2.61:37918 #229 (90 connections now open) 2019-09-04T06:30:10.518+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:10.518+0000 D2 COMMAND [conn229] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:10.518+0000 I NETWORK [conn229] received client metadata from 10.108.2.61:37918 conn229: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:10.518+0000 I COMMAND [conn229] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:10.518+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:10.529+0000 I - [conn201] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:10.529+0000 D1 COMMAND [conn201] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578571, 1), signature: { hash: BinData(0, 472AE666FE64617C815B4B64B8CFDCED57BE7FFA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.529+0000 D1 - [conn201] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:10.529+0000 W - [conn201] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.531+0000 I COMMAND [conn205] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578571, 1), signature: { hash: BinData(0, 472AE666FE64617C815B4B64B8CFDCED57BE7FFA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:10.532+0000 D1 - [conn205] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:10.532+0000 W - [conn205] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.544+0000 I - [conn200] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:10.544+0000 D1 COMMAND [conn200] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, 86E87D4DBA38E94854814CCDC22E7B802B4418C3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.544+0000 D1 - [conn200] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:10.544+0000 W - [conn200] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:10.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:10.560+0000 I - [conn202] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
[backtrace identical, as far as it goes, to the conn199 backtrace above; the excerpt is cut off partway through this trace in the source log]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:10.560+0000 D1 COMMAND [conn202] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578572, 1), signature: { hash: BinData(0, BEAA603D37F22B007C54F93405BE7B4D709F86E2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.560+0000 D1 - [conn202] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:10.561+0000 W - [conn202] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:10.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:10.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:10.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:10.580+0000 I - [conn199] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:10.580+0000 W COMMAND [conn199] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:10.580+0000 I COMMAND [conn199] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578575, 1), signature: { hash: BinData(0, A1616ED3E044436BEC7C98EB05F697E1909A54E7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:30:10.580+0000 D2 NETWORK [conn199] Session from 10.108.2.74:51742 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:10.580+0000 I NETWORK [conn199] end connection 10.108.2.74:51742 (89 connections now open) 2019-09-04T06:30:10.597+0000 I - [conn205] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:10.597+0000 D1 COMMAND [conn205] assertion while executing command 'find' on database 'admin' 
with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578571, 1), signature: { hash: BinData(0, 472AE666FE64617C815B4B64B8CFDCED57BE7FFA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.597+0000 D1 - [conn205] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:10.597+0000 W - [conn205] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.614+0000 I - [conn204] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000",
"o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", 
"elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:10.614+0000 D1 
COMMAND [conn204] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 63C6058835D1DE93F5F1A44E095F1DBE683122D6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.614+0000 D1 - [conn204] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:10.614+0000 W - [conn204] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.618+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:10.634+0000 I - [conn205] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTask
NameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : 
"/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:10.634+0000 W COMMAND [conn205] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:10.634+0000 I COMMAND [conn205] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578571, 1), signature: { hash: BinData(0, 472AE666FE64617C815B4B64B8CFDCED57BE7FFA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30076ms 2019-09-04T06:30:10.634+0000 D2 NETWORK [conn205] Session from 10.108.2.61:37896 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:10.634+0000 I NETWORK [conn205] end connection 10.108.2.61:37896 (88 connections now open) 2019-09-04T06:30:10.654+0000 I - [conn201] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:10.654+0000 W COMMAND [conn201] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:10.654+0000 I COMMAND [conn201] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578571, 1), signature: { hash: BinData(0, 472AE666FE64617C815B4B64B8CFDCED57BE7FFA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30062ms 2019-09-04T06:30:10.654+0000 D2 NETWORK [conn201] Session from 10.108.2.55:36618 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:10.654+0000 I NETWORK [conn201] end connection 10.108.2.55:36618 (87 connections now open) 2019-09-04T06:30:10.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:10.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:10.661+0000 D2 COMMAND [conn207] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578602, 1), signature: { hash: BinData(0, B8E65EF8C3BE078375F8162EA3CD50E942460267), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:30:10.661+0000 D1 REPL [conn207] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578608, 2), t: 1 } 2019-09-04T06:30:10.661+0000 D3 REPL [conn207] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.671+0000 2019-09-04T06:30:10.667+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51772 #230 (88 connections now open) 2019-09-04T06:30:10.667+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:10.667+0000 D2 COMMAND [conn230] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:10.667+0000 I NETWORK [conn230] received client metadata from 10.108.2.74:51772 conn230: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:10.667+0000 I COMMAND [conn230] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 
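All of the backtraces in this stretch of the log appear to decode to the same path: CurOp::completeAndLogOperation tries to take a Lock::GlobalLock so it can gather storage statistics for the slow system.keys find, LockerImpl::lock gives up, and uassertedWithLocation throws ErrorCodes::Error(50), the same MaxTimeMSExpired/errCode:50 that the command lines report; DBException::traceIfNeeded then prints the trace. In the JSON form, each frame carries the module load base b, the offset o within that module, and, where available, the mangled symbol s; base plus offset reproduces the absolute addresses of the raw I - [connNNN] listing (0x561748F88000 + 0x277FC81 = 0x56174b707c81, the printStackTrace frame). A minimal sketch for decoding one of these blobs offline, assuming the one-line JSON between the BEGIN/END BACKTRACE markers has been saved to a hypothetical backtrace.json and that binutils' c++filt is on the PATH:

import json
import subprocess

# Hypothetical input: the one-line JSON object that mongod prints between the
# "----- BEGIN BACKTRACE -----" and "----- END BACKTRACE -----" markers.
with open("backtrace.json") as f:
    bt = json.load(f)

for frame in bt["backtrace"]:
    base = int(frame["b"], 16)    # module load base (hex, no 0x prefix in the log)
    offset = int(frame["o"], 16)  # offset of the return address within that module
    mangled = frame.get("s", "")  # mangled symbol; absent for unresolved frames
    # c++filt demangles _ZN... names and echoes anything else back unchanged.
    name = subprocess.run(["c++filt", mangled or "<unresolved>"],
                          capture_output=True, text=True).stdout.strip()
    print(f"0x{base + offset:x}  {name}")

Each printed address should match one [0x...] entry of the mongod(...) frame listing that follows the JSON blob, with the symbol demangled.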
2019-09-04T06:30:10.667+0000 D2 COMMAND [conn230] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 1A8F6415775A35EDF4B88EC006CD33118085876C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:30:10.667+0000 D1 REPL [conn230] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578608, 2), t: 1 } 2019-09-04T06:30:10.667+0000 D3 REPL [conn230] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.677+0000 2019-09-04T06:30:10.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:10.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:10.672+0000 D2 COMMAND [conn226] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578601, 1), signature: { hash: BinData(0, CDC17BEE3BF53630BBD514A3979D01B672FD102E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:10.672+0000 D1 REPL [conn226] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578608, 2), t: 1 } 2019-09-04T06:30:10.672+0000 D3 REPL [conn226] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.682+0000 2019-09-04T06:30:10.675+0000 I - [conn204] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:10.675+0000 W COMMAND [conn204] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:10.675+0000 I COMMAND [conn204] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578573, 1), signature: { hash: BinData(0, 63C6058835D1DE93F5F1A44E095F1DBE683122D6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30110ms 2019-09-04T06:30:10.675+0000 D2 NETWORK [conn204] Session from 10.108.2.60:44826 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:10.675+0000 I NETWORK [conn204] end connection 10.108.2.60:44826 (87 connections now open) 2019-09-04T06:30:10.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:10.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:10.695+0000 I - [conn200] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23Servi
ceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : 
"7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) 
[0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:10.695+0000 W COMMAND [conn200] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:10.695+0000 I COMMAND [conn200] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, 86E87D4DBA38E94854814CCDC22E7B802B4418C3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30078ms 2019-09-04T06:30:10.695+0000 D2 NETWORK [conn200] Session from 10.108.2.48:42060 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:10.695+0000 I NETWORK [conn200] end connection 10.108.2.48:42060 (86 connections now open) 2019-09-04T06:30:10.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:10.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:10.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:10.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:10.719+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:10.721+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578608, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578608, 2), t: 1 } }, $db: "config" } 2019-09-04T06:30:10.721+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { 
readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578608, 2), t: 1 } } } 2019-09-04T06:30:10.721+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:30:10.721+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578608, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578608, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578608, 2) 2019-09-04T06:30:10.721+0000 D2 QUERY [conn72] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:30:10.721+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578608, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578608, 2), signature: { hash: BinData(0, 042EF87231BC959C5E03232968C6F2479145C98C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578608, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:30:10.721+0000 I - [conn202] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:10.721+0000 W COMMAND [conn202] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:10.722+0000 I COMMAND [conn202] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578572, 1), signature: { hash: BinData(0, BEAA603D37F22B007C54F93405BE7B4D709F86E2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30090ms 2019-09-04T06:30:10.722+0000 D2 NETWORK [conn202] Session from 10.108.2.72:45708 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:10.722+0000 I NETWORK [conn202] end connection 10.108.2.72:45708 (85 connections now open) 2019-09-04T06:30:10.740+0000 I - [conn181] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_Z
N5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:10.741+0000 W COMMAND [conn181] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:30:10.741+0000 I COMMAND [conn181] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578571, 1), signature: { hash: BinData(0, 472AE666FE64617C815B4B64B8CFDCED57BE7FFA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30043ms 2019-09-04T06:30:10.741+0000 D2 NETWORK [conn181] Session from 10.108.2.56:35652 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:10.741+0000 I NETWORK [conn181] end connection 10.108.2.56:35652 (84 connections now open) 2019-09-04T06:30:10.751+0000 I - [conn203] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:10.751+0000 D1 COMMAND [conn203] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, C1354BF9089533EC9529632B626A8C0C97EF30BD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.751+0000 D1 - [conn203] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:10.751+0000 W - [conn203] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:10.771+0000 I - [conn203] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:10.771+0000 W COMMAND [conn203] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:10.771+0000 I COMMAND [conn203] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, C1354BF9089533EC9529632B626A8C0C97EF30BD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30262ms 2019-09-04T06:30:10.771+0000 D2 NETWORK [conn203] Session from 10.108.2.57:34218 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:10.771+0000 I NETWORK [conn203] end connection 10.108.2.57:34218 (83 connections now open) 2019-09-04T06:30:10.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:10.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:10.819+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:10.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:10.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:10.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:10.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 576) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:10.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 576 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:20.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:10.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.839+0000 2019-09-04T06:30:10.838+0000 D2 ASIO [Replication] Request 576 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:10.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, 
durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:10.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:10.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 576) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:10.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:10.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:12.838Z 2019-09-04T06:30:10.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.839+0000 2019-09-04T06:30:10.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:10.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 577) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:10.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 577 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:20.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:10.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.839+0000 2019-09-04T06:30:10.839+0000 D2 ASIO [Replication] Request 577 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new 
Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:10.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:10.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:10.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 577) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:10.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:10.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:30:19.738+0000 2019-09-04T06:30:10.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:30:22.067+0000 2019-09-04T06:30:10.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:10.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:12.839Z 2019-09-04T06:30:10.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:10.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:40.839+0000 2019-09-04T06:30:10.839+0000 
D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:40.839+0000 2019-09-04T06:30:10.842+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:10.842+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:10.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:10.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:10.919+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:10.945+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:10.945+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:10.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:10.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:11.019+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:11.024+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.024+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.061+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:11.061+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:30:10.839+0000 2019-09-04T06:30:11.061+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:30:10.838+0000 2019-09-04T06:30:11.061+0000 D3 REPL [replexec-0] stalest member MemberId(2) date: 2019-09-04T06:30:10.838+0000 2019-09-04T06:30:11.061+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:30:20.838+0000 2019-09-04T06:30:11.061+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:40.839+0000 2019-09-04T06:30:11.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 598830489B2C59E8558B28BBD198E0EC0A968E87), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:11.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:11.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 598830489B2C59E8558B28BBD198E0EC0A968E87), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:11.061+0000 D2 REPL_HB [conn34] Processing 
heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 598830489B2C59E8558B28BBD198E0EC0A968E87), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:11.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430) } 2019-09-04T06:30:11.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 598830489B2C59E8558B28BBD198E0EC0A968E87), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.065+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.119+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:11.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.219+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:11.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.235+0000 D4 STORAGE 
[FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:11.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:11.253+0000 D2 COMMAND [conn113] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.253+0000 I COMMAND [conn113] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.273+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.319+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:11.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.342+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.342+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.419+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:11.485+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:11.485+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:11.485+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578608, 2) 2019-09-04T06:30:11.485+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8460 2019-09-04T06:30:11.485+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:11.485+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:11.485+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8460 2019-09-04T06:30:11.486+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8463 2019-09-04T06:30:11.486+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8463 2019-09-04T06:30:11.486+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578608, 2), t: 1 }({ ts: Timestamp(1567578608, 2), t: 1 }) 2019-09-04T06:30:11.520+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:11.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 
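
The find commands on admin.system.keys that keep timing out above (conn181 and conn203) all carry readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, while the heartbeat traffic shows this config replica set running in term 1 with lastOpCommitted at Timestamp(1567578608, 2). An opTime from term 92 sorts after anything a term-1 set has committed, so the read-concern wait cannot be satisfied and each command runs out its full 30000 ms maxTimeMS. Below is a minimal sketch of the client-visible shape of that command in Python/PyMongo, useful for reproducing the timeout behaviour against a test deployment; the URI is hypothetical, and afterOpTime, $replData, and $configServerState are internal server-to-server fields that drivers do not expose.

    from bson import Timestamp
    from pymongo import MongoClient, ReadPreference
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    # Hypothetical URI; the config server in this log listens on port 27019.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")

    # Mirror the logged read concern and $readPreference: { mode: "nearest" }.
    keys = client.admin.get_collection("system.keys",
                                       read_preference=ReadPreference.NEAREST,
                                       read_concern=ReadConcern("majority"))
    try:
        # Same filter, sort, and 30000 ms time limit as the logged find.
        cursor = (keys.find({"purpose": "HMAC",
                             "expiresAt": {"$gt": Timestamp(1579858365, 0)}})
                      .sort("expiresAt", 1)
                      .max_time_ms(30000))
        print(list(cursor))
    except ExecutionTimeout:
        # PyMongo's rendering of the server's errName:MaxTimeMSExpired errCode:50.
        print("operation exceeded time limit")
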
2019-09-04T06:30:11.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.565+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.620+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:11.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.720+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:11.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.794+0000 D2 COMMAND [conn81] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.794+0000 I COMMAND [conn81] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.820+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:11.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.842+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.842+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:11.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, 
$db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:11.920+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:12.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:12.020+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:12.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:12.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:12.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:12.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:12.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:12.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:12.082+0000 D2 COMMAND [conn216] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 542F7D84F611AE5E9C7A33BCFBF929241984258E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:30:12.082+0000 D1 REPL [conn216] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578608, 2), t: 1 } 2019-09-04T06:30:12.083+0000 D3 REPL [conn216] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.092+0000 2019-09-04T06:30:12.111+0000 I COMMAND [conn191] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, C1354BF9089533EC9529632B626A8C0C97EF30BD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:12.111+0000 D1 - [conn191] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:12.111+0000 W - [conn191] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:12.120+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:12.129+0000 I - [conn191] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"}
,{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : 
"/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 
2019-09-04T06:30:12.129+0000 D1 COMMAND [conn191] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, C1354BF9089533EC9529632B626A8C0C97EF30BD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:12.129+0000 D1 - [conn191] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:30:12.129+0000 W - [conn191] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:12.150+0000 I - [conn191] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
mongod(+0xC90D34) [0x561749c18d34]
mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:30:12.150+0000 W COMMAND [conn191] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:30:12.150+0000 I COMMAND [conn191] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, C1354BF9089533EC9529632B626A8C0C97EF30BD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms
2019-09-04T06:30:12.150+0000 D2 NETWORK [conn191] Session from 10.108.2.54:49138 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:30:12.150+0000 I NETWORK [conn191] end connection 10.108.2.54:49138 (82 connections now open)
2019-09-04T06:30:12.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
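
The slow-operation line above is the summary of the failure: the find on admin.system.keys carried maxTimeMS: 30000 and failed after 30032ms with MaxTimeMSExpired (errCode:50), with numYields:0, meaning the entire budget was spent in waitForReadConcern (the first backtrace) rather than in query execution. The $replData, $configServerState and readConcern.afterOpTime fields are internal, injected by the sharding and replication machinery, so a normal driver cannot set them; the user-visible shape of the query can still be reproduced. A hedged pymongo sketch (the endpoint is a placeholder, and the expiresAt predicate from the log is omitted for brevity):

    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    # Hypothetical endpoint; the log's config server listens on port 27019.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019")

    keys = client.admin.get_collection(
        "system.keys", read_concern=ReadConcern("majority"))

    try:
        # max_time_ms corresponds to the maxTimeMS: 30000 in the logged command.
        for doc in keys.find({"purpose": "HMAC"}).sort("expiresAt", 1).max_time_ms(30000):
            print(doc)
    except ExecutionTimeout:
        # pymongo raises ExecutionTimeout when the server reports
        # MaxTimeMSExpired, as in the slow-op line above.
        print("operation exceeded time limit")
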
2019-09-04T06:30:12.220+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:12.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 598830489B2C59E8558B28BBD198E0EC0A968E87), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:12.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:30:12.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 598830489B2C59E8558B28BBD198E0EC0A968E87), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:12.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 598830489B2C59E8558B28BBD198E0EC0A968E87), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:12.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430) }
2019-09-04T06:30:12.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 598830489B2C59E8558B28BBD198E0EC0A968E87), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:12.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:12.279+0000 D2 COMMAND [conn221] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578602, 1), signature: { hash: BinData(0, B8E65EF8C3BE078375F8162EA3CD50E942460267), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:30:12.279+0000 D1 REPL [conn221] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578608, 2), t: 1 }
2019-09-04T06:30:12.279+0000 D3 REPL [conn221] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.289+0000
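
The two waitUntilOpTime lines just above are the heart of the failure. The caller asks to read after optime { ts: Timestamp(1566459168, 1), t: 92 }, but the newest majority-committed snapshot on this node is { ts: Timestamp(1567578608, 2), t: 1 }: the requested optime is almost two weeks older and carries term 92 while the set is only in term 1, so the wait can never be satisfied and each such find burns its full 30 s maxTimeMS, exactly as conn191 did above and conn221 will here. This pattern is consistent with the config server replica set having been re-initialized while some client (a mongos or shard node caching $configServerState) still presents an opTime from the previous incarnation. Converting the timestamp seconds makes the gap visible; a small sketch:

    from datetime import datetime, timezone

    # The first component of a BSON Timestamp is seconds since the epoch.
    requested = datetime.fromtimestamp(1566459168, tz=timezone.utc)  # afterOpTime, term 92
    snapshot = datetime.fromtimestamp(1567578608, tz=timezone.utc)   # current snapshot, term 1

    print(requested)             # 2019-08-22 07:32:48+00:00
    print(snapshot)              # 2019-09-04 06:30:08+00:00
    print(snapshot - requested)  # 12 days, 22:57:20
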
2019-09-04T06:30:12.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.321+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:12.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.342+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.342+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.421+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:12.485+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:12.485+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:12.485+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578608, 2)
2019-09-04T06:30:12.485+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8495
2019-09-04T06:30:12.485+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:12.485+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:12.485+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8495
2019-09-04T06:30:12.486+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8498
2019-09-04T06:30:12.486+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8498
2019-09-04T06:30:12.486+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578608, 2), t: 1 }({ ts: Timestamp(1567578608, 2), t: 1 })
2019-09-04T06:30:12.521+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:12.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.621+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:12.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.721+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:12.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:12.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:12.784+0000 I NETWORK [listener] connection accepted from 10.108.2.62:53426 #231 (83 connections now open)
2019-09-04T06:30:12.784+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:12.784+0000 D2 COMMAND [conn231] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:12.784+0000 I NETWORK [conn231] received client metadata from 10.108.2.62:53426 conn231: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
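
The steady isMaster traffic threaded through this section is topology monitoring: each monitoring connection re-issues isMaster roughly every 500 ms (conn59 at .161 and again at .660, conn75 at .169 and .669, conn33 at .283 and .783, and so on), and conn231 above is such an internal cluster client, identifying itself with the driver name "NetworkInterfaceTL". The same probe can be sent from any driver; a minimal sketch (placeholder endpoint):

    from pymongo import MongoClient

    # Hypothetical endpoint matching the host/port in this log.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019")

    # isMaster is the handshake/topology probe answered throughout the log
    # (the reslen:907 responses); it reports this node's view of the set.
    reply = client.admin.command("isMaster")
    print(reply.get("setName"), reply.get("ismaster"), reply.get("secondary"))
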
"NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:12.784+0000 I COMMAND [conn231] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:12.793+0000 I COMMAND [conn169] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, CBAA63ADEE2EDC53BD93E84FCB4CEC8CB94AD092), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:12.793+0000 D1 - [conn169] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:12.793+0000 W - [conn169] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:12.810+0000 I - [conn169] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:12.810+0000 D1 COMMAND [conn169] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, CBAA63ADEE2EDC53BD93E84FCB4CEC8CB94AD092), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:12.810+0000 D1 - [conn169] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:12.810+0000 W - [conn169] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:12.821+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:12.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:12.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
2019-09-04T06:30:12.830+0000 I - [conn169] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
mongod(+0xC90D34) [0x561749c18d34]
mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:30:12.830+0000 W COMMAND [conn169] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:30:12.830+0000 I COMMAND [conn169] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, CBAA63ADEE2EDC53BD93E84FCB4CEC8CB94AD092), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms
2019-09-04T06:30:12.830+0000 D2 NETWORK [conn169] Session from 10.108.2.62:53394 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:30:12.830+0000 I NETWORK [conn169] end connection 10.108.2.62:53394 (82 connections now open)
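
This second backtrace (also logged for conn191 earlier) is not the command failing a second time: by the time the find has failed, CurOp::completeAndLogOperation tries to take the global lock (Lock::GlobalLock, LockerImpl::lock, the uassert at lock_state.cpp line 884) in order to collect storage statistics for the slow-op log line, and that acquisition is itself interrupted by the already-expired maxTimeMS deadline. That is why each timed-out find is followed by the "Unable to gather storage statistics for a slow operation due to lock aquire timeout" warning ("aquire" is the server's own spelling, kept verbatim). The mangled frame names can be demangled with binutils' c++filt; a sketch, assuming c++filt is on PATH:

    import subprocess

    frames = [
        "_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb",
        "_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE",
        "_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE",
    ]

    # c++filt reads mangled Itanium-ABI C++ symbols, one per line, and prints
    # the demangled form, e.g. mongo::Lock::GlobalLock::_enqueue(...).
    out = subprocess.run(["c++filt"], input="\n".join(frames),
                         capture_output=True, text=True, check=True)
    print(out.stdout)
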
Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:12.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:12.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 578) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:12.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:12.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:14.838Z 2019-09-04T06:30:12.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:40.839+0000 2019-09-04T06:30:12.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:12.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 579) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:12.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 579 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:22.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:12.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:40.839+0000 2019-09-04T06:30:12.839+0000 D2 ASIO [Replication] Request 579 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:12.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:12.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:12.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 579) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:12.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:12.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:30:22.067+0000 2019-09-04T06:30:12.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:30:23.652+0000 2019-09-04T06:30:12.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:12.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:14.839Z 2019-09-04T06:30:12.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:12.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:42.839+0000 2019-09-04T06:30:12.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:42.839+0000 2019-09-04T06:30:12.843+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:12.843+0000 I 
COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:12.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:12.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:12.921+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:13.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:13.021+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:13.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 52AFD23D243B967671BA2BACEF899C3E2133B3D8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:13.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:13.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 52AFD23D243B967671BA2BACEF899C3E2133B3D8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:13.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 52AFD23D243B967671BA2BACEF899C3E2133B3D8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:13.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430) } 2019-09-04T06:30:13.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 52AFD23D243B967671BA2BACEF899C3E2133B3D8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.122+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:13.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.222+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:13.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:13.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:13.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.322+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:13.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.343+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.343+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.412+0000 I COMMAND [conn147] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, C1354BF9089533EC9529632B626A8C0C97EF30BD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:13.412+0000 D1 - [conn147] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:13.412+0000 W - [conn147] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:13.422+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:13.429+0000 I - [conn147] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", 
"gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { 
"b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:13.429+0000 D1 COMMAND [conn147] assertion while executing command 'find' on database 
'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, C1354BF9089533EC9529632B626A8C0C97EF30BD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:13.430+0000 D1 - [conn147] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:13.430+0000 W - [conn147] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:13.451+0000 I - [conn147] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3",
"s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", 
"path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:13.451+0000 W COMMAND [conn147] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:30:13.451+0000 I COMMAND [conn147] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, C1354BF9089533EC9529632B626A8C0C97EF30BD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:30:13.451+0000 D2 NETWORK [conn147] Session from 10.108.2.45:36472 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:13.451+0000 I NETWORK [conn147] end connection 10.108.2.45:36472 (81 connections now open) 2019-09-04T06:30:13.485+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:13.485+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:13.485+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578608, 2) 2019-09-04T06:30:13.485+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8528 2019-09-04T06:30:13.485+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:13.485+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:13.485+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8528 2019-09-04T06:30:13.486+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8531 2019-09-04T06:30:13.486+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8531 2019-09-04T06:30:13.486+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578608, 2), t: 1 }({ ts: Timestamp(1567578608, 2), t: 1 }) 2019-09-04T06:30:13.489+0000 D2 ASIO [RS] Request 572 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term:
1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpApplied: { ts: Timestamp(1567578608, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:13.489+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpApplied: { ts: Timestamp(1567578608, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:13.489+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:13.490+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:30:13.490+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:23.652+0000 2019-09-04T06:30:13.490+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:23.504+0000 2019-09-04T06:30:13.490+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:13.490+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:42.839+0000 2019-09-04T06:30:13.490+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 580 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:23.490+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578608, 2), t: 1 } } 2019-09-04T06:30:13.490+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.486+0000 2019-09-04T06:30:13.509+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:13.509+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 0, 
cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:13.509+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 581 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:43.509+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:13.509+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.486+0000 2019-09-04T06:30:13.509+0000 D2 ASIO [RS] Request 581 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:13.509+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } target: cmodb804.togewa.com:27019 
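
The two backtraces above record the same failure on conn147 from both ends of the request path: the first is thrown inside ServiceEntryPointMongod::Hooks::waitForReadConcern (service_entry_point_mongod.cpp:89), the second from lock_state.cpp:884 while CurOp::completeAndLogOperation tries to take the global lock merely to log the slow operation, which is why the "Unable to gather storage statistics" warning accompanies each one. The trigger is visible in the command itself: the mongos-supplied readConcern asks for a majority read after opTime { ts: Timestamp(1566459168, 1), t: 92 }, an opTime nearly two weeks older than the current clusterTime and at term 92, while every heartbeat in this window shows the set at term 1 with lastOpCommitted around Timestamp(1567578608, 2). That wait can never be satisfied, so the server aborts the find once maxTimeMS (30000 ms, logged as 30030ms) elapses, returning errName:MaxTimeMSExpired errCode:50. The earlier find on admin.system.keys from conn169 fails the same way. A minimal pymongo sketch of what such a client observes, assuming direct access to this node (host name taken from the log, pymongo >= 3.11 for directConnection, credentials omitted); note that afterOpTime itself is injected by mongos and is not settable through the driver API:

# Hedged sketch: reproducing the MaxTimeMSExpired (errCode:50) behavior seen
# above from a client's point of view. Host and connection options are
# assumptions, not taken from any tooling in this log.
from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout
from pymongo.read_concern import ReadConcern

client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                     directConnection=True,
                     readPreference="secondaryPreferred")  # this node is a secondary

# config.shards with majority read concern, as in the logged find.
shards = client["config"].get_collection(
    "shards", read_concern=ReadConcern("majority"))

try:
    # maxTimeMS 30000 matches the logged command; when the majority wait
    # cannot complete, the server kills the operation after ~30 s.
    rows = list(shards.find({}).max_time_ms(30000))
except ExecutionTimeout as exc:
    # pymongo surfaces errName:MaxTimeMSExpired / errCode:50 as ExecutionTimeout.
    print("config server gave up:", exc)

The stale $configServerState (term 92, opTime from Timestamp(1566459168, 1)) against a set currently at term 1 suggests the routers issuing these reads are holding state from an earlier incarnation of this config server replica set, which would explain why the afterOpTime wait never completes; that is an inference from the logged opTimes, not something the log states directly.
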
2019-09-04T06:30:13.509+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:13.509+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:38.486+0000 2019-09-04T06:30:13.522+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:13.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.622+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:13.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.722+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:13.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.823+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:13.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.843+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.843+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.859+0000 D2 COMMAND [conn46] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:13.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:13.923+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:14.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:14.023+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:14.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.123+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:14.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.223+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:14.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 94B0F0DA2525CCBF57AD1DECA88F34B7FC42EDF5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:14.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:30:14.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 
94B0F0DA2525CCBF57AD1DECA88F34B7FC42EDF5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:14.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 94B0F0DA2525CCBF57AD1DECA88F34B7FC42EDF5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:14.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430) } 2019-09-04T06:30:14.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 94B0F0DA2525CCBF57AD1DECA88F34B7FC42EDF5), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:14.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:14.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.323+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:14.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.343+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.343+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.423+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:14.486+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:14.486+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:14.486+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578608, 2) 2019-09-04T06:30:14.486+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8561 2019-09-04T06:30:14.486+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 
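
Between the failures, the log settles back into steady-state chatter: each monitoring connection (conn22, conn31, conn33, conn42, conn45, conn46, and the rest) repeats isMaster on a roughly 500 ms cadence and receives the same 907-byte reply, WTJournalFlusher commits the journal about every 100 ms, FlowControlRefresher re-grants its 1000000000 tickets once a second, and ReplBatcher re-reads the metadata of the capped, 1073741824-byte local.oplog.rs before each empty batch. The same topology view can be fetched by hand; a short sketch under the same assumptions as before (host from the log, pymongo >= 3.11, no credentials):

# Hedged sketch: issuing by hand the probes that fill the log above.
# isMaster is what each pooled connection repeats; replSetGetStatus
# summarizes the heartbeat state traced by the REPL_HB entries.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                     directConnection=True)

hello = client.admin.command("isMaster")  # spelled "hello" on newer servers
print(hello["ismaster"], hello.get("setName"), hello.get("primary"))

status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    # Per the heartbeats above, expect cmodb802 as PRIMARY and
    # cmodb803/cmodb804 as SECONDARYs, all at term 1.
    print(member["name"], member["stateStr"], member.get("syncingTo", ""))
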
2019-09-04T06:30:14.486+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:14.486+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8561 2019-09-04T06:30:14.486+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8564 2019-09-04T06:30:14.486+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8564 2019-09-04T06:30:14.486+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578608, 2), t: 1 }({ ts: Timestamp(1567578608, 2), t: 1 }) 2019-09-04T06:30:14.524+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:14.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.624+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:14.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.724+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:14.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.824+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:14.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:14.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 582) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:14.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 582 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:24.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:14.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:42.839+0000 2019-09-04T06:30:14.838+0000 D2 ASIO [Replication] Request 582 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:14.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:14.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:14.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 582) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new 
Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:14.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:14.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:16.838Z 2019-09-04T06:30:14.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:42.839+0000 2019-09-04T06:30:14.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:14.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 583) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:14.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 583 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:24.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:14.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:42.839+0000 2019-09-04T06:30:14.839+0000 D2 ASIO [Replication] Request 583 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:14.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { 
hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:14.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:14.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 583) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578608, 2), t: 1 }, wallTime: new Date(1567578608430), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578608, 2) } 2019-09-04T06:30:14.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:14.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:30:23.504+0000 2019-09-04T06:30:14.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:30:25.603+0000 2019-09-04T06:30:14.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:14.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:16.839Z 2019-09-04T06:30:14.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:14.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:44.839+0000 2019-09-04T06:30:14.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:44.839+0000 2019-09-04T06:30:14.843+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.843+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:14.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:14.924+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:15.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:15.024+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:15.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.058+0000 D2 ASIO [RS] Request 580 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578615, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { 
_id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578615041), o: { $v: 1, $set: { ping: new Date(1567578615038), up: 2515 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpApplied: { ts: Timestamp(1567578615, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578615, 1) } 2019-09-04T06:30:15.058+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578615, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578615041), o: { $v: 1, $set: { ping: new Date(1567578615038), up: 2515 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpApplied: { ts: Timestamp(1567578615, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578615, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:15.058+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:15.058+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578615, 1) and ending at ts: Timestamp(1567578615, 1) 2019-09-04T06:30:15.059+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:25.603+0000 2019-09-04T06:30:15.059+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:25.323+0000 2019-09-04T06:30:15.059+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:15.059+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:44.839+0000 2019-09-04T06:30:15.059+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578615, 1), t: 1 } 2019-09-04T06:30:15.059+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:15.059+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:15.059+0000 D3 STORAGE [ReplBatcher] begin_transaction on local 
snapshot Timestamp(1567578608, 2) 2019-09-04T06:30:15.059+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8581 2019-09-04T06:30:15.059+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:15.059+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:15.059+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8581 2019-09-04T06:30:15.059+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:15.059+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:30:15.059+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:15.059+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578608, 2) 2019-09-04T06:30:15.059+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8584 2019-09-04T06:30:15.059+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578615, 1) } 2019-09-04T06:30:15.059+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:15.059+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:15.059+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8584 2019-09-04T06:30:15.059+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8565 2019-09-04T06:30:15.059+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8565 2019-09-04T06:30:15.059+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8587 2019-09-04T06:30:15.059+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8587 2019-09-04T06:30:15.059+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:15.059+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 8589 2019-09-04T06:30:15.059+0000 D4 STORAGE [repl-writer-worker-9] inserting record with timestamp Timestamp(1567578615, 1) 2019-09-04T06:30:15.059+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578615, 1) 2019-09-04T06:30:15.059+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 8589 2019-09-04T06:30:15.059+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:15.059+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:30:15.059+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8588 2019-09-04T06:30:15.059+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8588 2019-09-04T06:30:15.059+0000 D3 STORAGE 
[rsSync-0] WT begin_transaction for snapshot id 8591 2019-09-04T06:30:15.059+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8591 2019-09-04T06:30:15.059+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578615, 1), t: 1 }({ ts: Timestamp(1567578615, 1), t: 1 }) 2019-09-04T06:30:15.059+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578615, 1) 2019-09-04T06:30:15.059+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8592 2019-09-04T06:30:15.059+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578615, 1) } } ] } sort: {} projection: {} 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578615, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578615, 1) || First: notFirst: full path: ts 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578615, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578615, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578615, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:15.059+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578615, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:15.059+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8592 2019-09-04T06:30:15.059+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:15.059+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:15.060+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578615, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578615041), o: { $v: 1, $set: { ping: new Date(1567578615038), up: 2515 } } }, oplog application mode: Secondary 2019-09-04T06:30:15.060+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578615, 1) 2019-09-04T06:30:15.060+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 8594 2019-09-04T06:30:15.060+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:30:15.060+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:30:15.060+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 8594 2019-09-04T06:30:15.060+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:15.060+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578615, 1), t: 1 }({ ts: Timestamp(1567578615, 1), t: 1 }) 2019-09-04T06:30:15.060+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578615, 1) 2019-09-04T06:30:15.060+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8593 2019-09-04T06:30:15.060+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:15.060+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:15.060+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:30:15.060+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
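The D5 QUERY entries around this point show the subplanner splitting the minvalid $or into its two child queries, finding no usable plan on the lone _id index ("Planner: outputted 0 indexed solutions."), and settling on a collection scan. The same query can be re-run with explain to see that winning plan; a sketch assuming pymongo, with the host and timestamp taken from this log:

```python
from bson import Timestamp
from pymongo import MongoClient

# Re-running the subplanner's $or from the log against local.replset.minvalid
# and asking the server for the winning plan. The host is an assumption
# based on the hosts appearing in this log.
client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

explain = client["local"].command("explain", {
    "find": "replset.minvalid",
    "filter": {"$or": [
        {"t": 1, "ts": {"$lt": Timestamp(1567578615, 1)}},
        {"t": {"$lt": 1}},
    ]},
})
# With only the _id index defined, neither $or branch has an indexed
# solution, so the winning plan should be the COLLSCAN logged above.
print(explain["queryPlanner"]["winningPlan"]["stage"])  # e.g. "COLLSCAN"
```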
2019-09-04T06:30:15.060+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:15.060+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:15.060+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8593 2019-09-04T06:30:15.060+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578615, 1) 2019-09-04T06:30:15.060+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8597 2019-09-04T06:30:15.060+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8597 2019-09-04T06:30:15.060+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578615, 1), t: 1 }({ ts: Timestamp(1567578615, 1), t: 1 }) 2019-09-04T06:30:15.060+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:15.060+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:15.060+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 584 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:45.060+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:15.060+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:45.060+0000 2019-09-04T06:30:15.060+0000 D2 ASIO [RS] Request 584 finished with response: { ok: 
1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578615, 1) } 2019-09-04T06:30:15.060+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578615, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:15.060+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:15.060+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:45.060+0000 2019-09-04T06:30:15.061+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578615, 1), t: 1 } 2019-09-04T06:30:15.061+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 585 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:25.061+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578608, 2), t: 1 } } 2019-09-04T06:30:15.061+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:45.060+0000 2019-09-04T06:30:15.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 4E788FA2BE7CAD154C2A7459705E8B87927F31D2), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:15.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:15.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 4E788FA2BE7CAD154C2A7459705E8B87927F31D2), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:15.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 4E788FA2BE7CAD154C2A7459705E8B87927F31D2), keyId: 6727891476899954718 } }, $db: "admin" } 
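Requests 584 and 586 above are this secondary pushing each member's durableOpTime/appliedOpTime upstream via replSetUpdatePosition, while the heartbeats carry the same optimes back. The aggregate view those exchanges maintain can be read with the standard replSetGetStatus admin command; a sketch, with the connection string assumed from the hosts in this log:

```python
from pymongo import MongoClient

# Reading back the optimes that the replSetUpdatePosition / heartbeat
# traffic above keeps in sync across configrs. replSetGetStatus is a
# standard admin command; the host is an assumption from this log.
client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
status = client.admin.command("replSetGetStatus")

for member in status["members"]:
    print(
        member["name"],                      # e.g. cmodb804.togewa.com:27019
        member["stateStr"],                  # PRIMARY / SECONDARY
        member.get("optime", {}).get("ts"),  # applied optime, as in the log
        member.get("syncingTo", "-"),        # sync source field used in 4.2
    )
```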
2019-09-04T06:30:15.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), opTime: { ts: Timestamp(1567578615, 1), t: 1 }, wallTime: new Date(1567578615041) } 2019-09-04T06:30:15.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 4E788FA2BE7CAD154C2A7459705E8B87927F31D2), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.065+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:30:15.065+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:15.065+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:15.065+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 586 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:45.065+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, durableWallTime: new Date(1567578608430), appliedOpTime: { ts: Timestamp(1567578608, 2), t: 1 }, appliedWallTime: new Date(1567578608430), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:15.065+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:45.060+0000 2019-09-04T06:30:15.066+0000 D2 COMMAND [conn52] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.066+0000 D2 ASIO [RS] Request 586 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578615, 1) } 2019-09-04T06:30:15.066+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578608, 2), t: 1 }, lastCommittedWall: new Date(1567578608430), lastOpVisible: { ts: Timestamp(1567578608, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578608, 2), $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578615, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:15.066+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:15.066+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:45.060+0000 2019-09-04T06:30:15.066+0000 D2 ASIO [RS] Request 585 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpApplied: { ts: Timestamp(1567578615, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578615, 1), $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578615, 1) } 2019-09-04T06:30:15.066+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpApplied: { ts: Timestamp(1567578615, 1), t: 
1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578615, 1), $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578615, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:15.066+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:15.066+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:30:15.066+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 2019-09-04T06:30:15.066+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 2019-09-04T06:30:15.066+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578610, 1) 2019-09-04T06:30:15.066+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:25.323+0000 2019-09-04T06:30:15.066+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:25.614+0000 2019-09-04T06:30:15.066+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:15.066+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:44.839+0000 2019-09-04T06:30:15.066+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 587 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:25.066+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578615, 1), t: 1 } } 2019-09-04T06:30:15.066+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:45.060+0000 2019-09-04T06:30:15.066+0000 D3 REPL [conn212] Got notified of new snapshot: { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 2019-09-04T06:30:15.066+0000 D3 REPL [conn212] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.650+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn209] Got notified of new snapshot: { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn209] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.622+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn211] Got notified of new snapshot: { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 
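The oplog fetcher above keeps a tailable getMore (requests 585 and 587, batchSize 13981010, maxTimeMS 5000) open against the sync source's local.oplog.rs and advances lastKnownCommittedOpTime as batches arrive. A rough driver-side imitation of that loop with a tailable, awaitable cursor is sketched below; the host and starting optime are taken from this log.

```python
from bson import Timestamp
from pymongo import CursorType, MongoClient

# A rough imitation of the oplog fetcher above: a tailable, awaitable
# cursor over the sync source's local.oplog.rs, resuming after the last
# fetched optime. Sketch only; host and timestamp come from this log.
client = MongoClient("mongodb://cmodb804.togewa.com:27019/")
oplog = client["local"]["oplog.rs"]

last_ts = Timestamp(1567578615, 1)
cursor = oplog.find(
    {"ts": {"$gt": last_ts}},
    cursor_type=CursorType.TAILABLE_AWAIT,  # server awaits new entries,
    oplog_replay=True,                      # like the getMore with maxTimeMS
)
for entry in cursor:
    # Entries look like the op applied above:
    # { op: "u", ns: "config.mongos", o2: { _id: ... }, o: { $set: {...} } }
    print(entry["ts"], entry["op"], entry["ns"])
    last_ts = entry["ts"]
```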
2019-09-04T06:30:15.067+0000 D3 REPL [conn211] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.646+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn210] Got notified of new snapshot: { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn210] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.641+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn230] Got notified of new snapshot: { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn230] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.677+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn226] Got notified of new snapshot: { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn226] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.682+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn216] Got notified of new snapshot: { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn216] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.092+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn180] Got notified of new snapshot: { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn180] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.623+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn171] Got notified of new snapshot: { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn171] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:15.634+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn207] Got notified of new snapshot: { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn207] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.671+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn221] Got notified of new snapshot: { ts: Timestamp(1567578615, 1), t: 1 }, 2019-09-04T06:30:15.041+0000 2019-09-04T06:30:15.067+0000 D3 REPL [conn221] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.289+0000 2019-09-04T06:30:15.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.124+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:15.159+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578615, 1) 2019-09-04T06:30:15.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 
reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.224+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:15.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:15.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:15.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.325+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:15.343+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.343+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.425+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:15.525+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:15.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.612+0000 I NETWORK [listener] 
connection accepted from 10.108.2.58:52130 #232 (82 connections now open) 2019-09-04T06:30:15.612+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:15.612+0000 D2 COMMAND [conn232] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:15.612+0000 I NETWORK [conn232] received client metadata from 10.108.2.58:52130 conn232: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:15.612+0000 I COMMAND [conn232] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:15.618+0000 I NETWORK [listener] connection accepted from 10.108.2.51:59146 #233 (83 connections now open) 2019-09-04T06:30:15.618+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:15.618+0000 D2 COMMAND [conn233] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:15.618+0000 I NETWORK [conn233] received client metadata from 10.108.2.51:59146 conn233: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:15.618+0000 I COMMAND [conn233] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:15.622+0000 I COMMAND [conn209] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578583, 1), signature: { hash: BinData(0, F158BC25720340B443747D4FFA61B3F2D0B5D09D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:30:15.622+0000 D1 - [conn209] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:15.622+0000 W - [conn209] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:15.623+0000 I COMMAND [conn180] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578579, 1), signature: { hash: BinData(0, 4E0EB43FB9673465B07DACBBA684379C1C10ABEB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:30:15.623+0000 D1 - [conn180] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:15.623+0000 W - [conn180] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:15.625+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:15.626+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46614 #234 (84 connections now open)
2019-09-04T06:30:15.626+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:15.626+0000 D2 COMMAND [conn234] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:15.626+0000 I NETWORK [conn234] received client metadata from 10.108.2.64:46614 conn234: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:15.626+0000 I COMMAND [conn234] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:15.629+0000 I NETWORK [listener] connection accepted from 10.108.2.53:50696 #235 (85 connections now open)
2019-09-04T06:30:15.629+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:15.629+0000 D2 COMMAND [conn235] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:15.629+0000 I NETWORK [conn235] received client metadata from 10.108.2.53:50696 conn235: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:15.629+0000 I COMMAND [conn235] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:15.633+0000 I NETWORK [listener] connection accepted from 10.108.2.47:56538 #236 (86 connections now open)
2019-09-04T06:30:15.633+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:15.633+0000 D2 COMMAND [conn236] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:15.633+0000 I NETWORK [conn236] received client metadata from 10.108.2.47:56538 conn236: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:15.633+0000 I COMMAND [conn236] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:15.634+0000 I COMMAND [conn171] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, CBAA63ADEE2EDC53BD93E84FCB4CEC8CB94AD092), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:30:15.634+0000 D1 - [conn171] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:15.634+0000 W - [conn171] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:15.635+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36530 #237 (87 connections now open)
2019-09-04T06:30:15.635+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:15.635+0000 D2 COMMAND [conn237] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:15.635+0000 I NETWORK [conn237] received client metadata from 10.108.2.45:36530 conn237: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:15.635+0000 I COMMAND [conn237] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:15.639+0000 I - [conn209] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:15.639+0000 D1 COMMAND [conn209] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578583, 1), signature: { hash: BinData(0, F158BC25720340B443747D4FFA61B3F2D0B5D09D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:15.639+0000 D1 - [conn209] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:15.639+0000 W - [conn209] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:15.641+0000 I COMMAND [conn210] Command on database admin timed out waiting for read concern to be satisfied. 
2019-09-04T06:30:15.641+0000 D1 - [conn210] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:15.641+0000 W - [conn210] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:15.646+0000 I COMMAND [conn211] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 7ED407E7D8FC2F48E792B1C41AEA50FC38ED9E10), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:30:15.646+0000 D1 - [conn211] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:15.646+0000 W - [conn211] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:15.651+0000 I COMMAND [conn212] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, C1354BF9089533EC9529632B626A8C0C97EF30BD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:30:15.651+0000 D1 - [conn212] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:15.651+0000 W - [conn212] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:15.658+0000 D1 COMMAND [conn180] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578579, 1), signature: { hash: BinData(0, 4E0EB43FB9673465B07DACBBA684379C1C10ABEB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:15.658+0000 D1 - [conn180] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:30:15.658+0000 W - [conn180] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:15.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:15.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:15.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:15.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:15.675+0000 D1 COMMAND [conn212] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, C1354BF9089533EC9529632B626A8C0C97EF30BD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:15.675+0000 D1 - [conn212] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:30:15.675+0000 W - [conn212] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:15.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:15.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:15.693+0000 D1 COMMAND [conn171] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, CBAA63ADEE2EDC53BD93E84FCB4CEC8CB94AD092), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:15.693+0000 D1 - [conn171] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:30:15.693+0000 W - [conn171] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:15.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:15.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:15.710+0000 D1 COMMAND [conn211] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 7ED407E7D8FC2F48E792B1C41AEA50FC38ED9E10), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:15.710+0000 D1 COMMAND [conn211] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 7ED407E7D8FC2F48E792B1C41AEA50FC38ED9E10), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:15.710+0000 D1 - [conn211] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:30:15.710+0000 W - [conn211] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:15.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:15.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:15.725+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:15.732+0000 I - [conn180] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
[backtrace JSON omitted: its frames repeat the symbol list below, and its processInfo and shared-library map duplicate the preceding backtrace's verbatim]
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
mongod(+0xC90D34) [0x561749c18d34]
mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:30:15.732+0000 W COMMAND [conn180] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:30:15.732+0000 I COMMAND [conn180] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578579, 1), signature: { hash: BinData(0, 4E0EB43FB9673465B07DACBBA684379C1C10ABEB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30044ms
2019-09-04T06:30:15.732+0000 D2 NETWORK [conn180] Session from 10.108.2.51:59106 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:30:15.732+0000 I NETWORK [conn180] end connection 10.108.2.51:59106 (86 connections now open)
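Each of these timed-out reads of admin.system.keys logs a single COMMAND line ending in errName:MaxTimeMSExpired errCode:50 with a duration just over the 30000ms limit (30044ms above). A rough sketch for tallying them from a saved copy of this log (the file path is a placeholder):

    import re
    from collections import Counter

    # Matches the failing slow-op COMMAND lines, e.g.
    # "... I COMMAND [conn180] command admin.$cmd command: find { ... } ... errName:MaxTimeMSExpired errCode:50 ... 30044ms"
    pat = re.compile(
        r"\[(conn\d+)\] command admin\.\$cmd command: find .*"
        r"errName:(\w+) errCode:\d+.* (\d+)ms\s*$"
    )

    by_error, durations = Counter(), []
    with open("mongod.log") as fh:          # placeholder path
        for line in fh:
            m = pat.search(line)
            if m:
                conn, err, ms = m.groups()
                by_error[err] += 1
                durations.append(int(ms))

    print(by_error)                          # expect MaxTimeMSExpired to dominate
    if durations:
        print("worst:", max(durations), "ms")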
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:15.772+0000 W COMMAND [conn212] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:15.772+0000 I COMMAND [conn212] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578578, 1), signature: { hash: BinData(0, C1354BF9089533EC9529632B626A8C0C97EF30BD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30034ms 2019-09-04T06:30:15.772+0000 D2 NETWORK [conn212] Session from 10.108.2.45:36514 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:15.772+0000 I NETWORK [conn212] end connection 10.108.2.45:36514 (85 connections now open) 2019-09-04T06:30:15.772+0000 I - [conn171] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_Z
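From the client side these server-side timeouts surface as error code 50. A minimal pymongo sketch of the same shape of read, assuming a hypothetical connection string (system.keys reads are normally issued by cluster-internal clients, so plain user access may be refused before ever timing out):

    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout

    client = MongoClient("mongodb://config-host:27019/")   # hypothetical URI
    keys = client.admin["system.keys"]

    try:
        docs = list(
            keys.find({"purpose": "HMAC"}).sort("expiresAt", 1).max_time_ms(30000)
        )
        print(len(docs), "HMAC keys")
    except ExecutionTimeout as exc:
        # pymongo maps errCode:50 (MaxTimeMSExpired) to ExecutionTimeout.
        print("operation exceeded time limit:", exc)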
2019-09-04T06:30:15.772+0000 I - [conn171] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
[frames, processInfo, and shared-library map identical to the conn180 backtrace above; verbatim duplicate omitted]
----- END BACKTRACE -----
2019-09-04T06:30:15.773+0000 W COMMAND [conn171] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:30:15.773+0000 I COMMAND [conn171] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, CBAA63ADEE2EDC53BD93E84FCB4CEC8CB94AD092), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30069ms
2019-09-04T06:30:15.773+0000 D2 NETWORK [conn171] Session from 10.108.2.53:50656 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:30:15.773+0000 I NETWORK [conn171] end connection 10.108.2.53:50656 (84 connections now open)
2019-09-04T06:30:15.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:15.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
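The NETWORK lines interleaved above also tell the connection story: each client that waits out its 30s hangs up, the server notices the closed socket ("Connection closed by peer"), and the open-connection count steps down (86 -> 85 -> 84) even as new internal clients keep arriving. A sketch for charting that counter over time from saved log text (the sample lines are copied from above):

    import re

    sample = """\
    2019-09-04T06:30:15.732+0000 I NETWORK [conn180] end connection 10.108.2.51:59106 (86 connections now open)
    2019-09-04T06:30:15.772+0000 I NETWORK [conn212] end connection 10.108.2.45:36514 (85 connections now open)
    2019-09-04T06:30:15.814+0000 I NETWORK [listener] connection accepted from 10.108.2.46:40972 #238 (84 connections now open)
    """

    pat = re.compile(
        r"(\S+) I NETWORK \[\S+\] (end connection|connection accepted)"
        r".*\((\d+) connections now open\)"
    )
    for m in pat.finditer(sample):
        ts, event, now_open = m.groups()
        print(ts, f"{event:20s}", now_open)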
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:15.792+0000 W COMMAND [conn209] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:15.792+0000 I COMMAND [conn209] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578583, 1), signature: { hash: BinData(0, F158BC25720340B443747D4FFA61B3F2D0B5D09D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:30:15.792+0000 D2 NETWORK [conn209] Session from 10.108.2.58:52106 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:15.792+0000 I NETWORK [conn209] end connection 10.108.2.58:52106 (83 connections now open) 2019-09-04T06:30:15.811+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.811+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.812+0000 D2 COMMAND [conn222] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578609, 1), signature: { hash: BinData(0, CAAD09B6BD8A5CCC5E7CF668FD260233128308EF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:15.812+0000 D1 REPL [conn222] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578615, 1), t: 1 } 2019-09-04T06:30:15.812+0000 D3 REPL [conn222] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.822+0000 2019-09-04T06:30:15.814+0000 I NETWORK [listener] connection accepted from 10.108.2.46:40972 #238 (84 connections now open) 2019-09-04T06:30:15.814+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:15.814+0000 D2 COMMAND [conn238] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:15.814+0000 I NETWORK [conn238] received client metadata from 10.108.2.46:40972 conn238: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:15.814+0000 I COMMAND [conn238] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:15.814+0000 
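The waitUntilOpTime lines above show why these reads keep timing out: the client asks to read after { ts: Timestamp(1566459168, 1), t: 92 }, but the node's newest majority snapshot is { ts: Timestamp(1567578615, 1), t: 1 }. The term going from 92 back to 1 suggests the configrs replica set was re-initialized, so an opTime from the old term-92 history never appears in any new snapshot and each wait runs to its 30000ms limit. A Timestamp's first field is seconds since the Unix epoch, so a quick sketch shows how far apart the two optimes are:

    from datetime import datetime, timezone

    requested_ts = 1566459168   # afterOpTime.ts (term 92), from the client
    snapshot_ts  = 1567578615   # current majority snapshot ts (term 1)

    def utc(seconds):
        return datetime.fromtimestamp(seconds, tz=timezone.utc).isoformat()

    print("requested:", utc(requested_ts))   # 2019-08-22...
    print("snapshot: ", utc(snapshot_ts))    # 2019-09-04...
    diff = snapshot_ts - requested_ts
    print(f"gap: {diff} s (~{diff / 86400:.1f} days)")   # ~13.0 days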
2019-09-04T06:30:15.814+0000 D2 COMMAND [conn238] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 501D3C9598BB496C2DB69F206C3057FEAA271409), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:30:15.814+0000 D1 REPL [conn238] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578615, 1), t: 1 }
2019-09-04T06:30:15.814+0000 D3 REPL [conn238] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.824+0000
2019-09-04T06:30:15.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:15.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:15.824+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:15.824+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:15.825+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:15.829+0000 D2 COMMAND [conn208] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 55B04E6A9E4D06C4F65F23BA7FFE4919B6F8B920), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:15.829+0000 I - [conn210] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 
0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : 
"EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : 
"9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:15.829+0000 W COMMAND [conn211] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:15.829+0000 I COMMAND [conn211] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 7ED407E7D8FC2F48E792B1C41AEA50FC38ED9E10), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30073ms 2019-09-04T06:30:15.829+0000 D1 REPL [conn208] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578615, 1), t: 1 } 2019-09-04T06:30:15.829+0000 D3 REPL [conn208] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.839+0000 2019-09-04T06:30:15.829+0000 
D1 COMMAND [conn210] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 7ED407E7D8FC2F48E792B1C41AEA50FC38ED9E10), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:15.829+0000 D1 - [conn210] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:15.829+0000 W - [conn210] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:15.829+0000 D2 NETWORK [conn211] Session from 10.108.2.47:56522 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:15.829+0000 I NETWORK [conn211] end connection 10.108.2.47:56522 (83 connections now open) 2019-09-04T06:30:15.830+0000 D2 COMMAND [conn235] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 382BE338B8E062C89DE3BDFB8F55724B12CF9B0F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:15.830+0000 D1 REPL [conn235] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578615, 1), t: 1 } 2019-09-04T06:30:15.830+0000 D3 REPL [conn235] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.840+0000 2019-09-04T06:30:15.843+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.843+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.849+0000 I - [conn210] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:15.849+0000 W COMMAND [conn210] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:15.849+0000 I COMMAND [conn210] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 7ED407E7D8FC2F48E792B1C41AEA50FC38ED9E10), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30197ms 2019-09-04T06:30:15.849+0000 D2 NETWORK [conn210] Session from 10.108.2.64:46594 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:15.849+0000 I NETWORK [conn210] end connection 10.108.2.64:46594 (82 connections now open) 2019-09-04T06:30:15.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:15.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:15.925+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:16.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:16.026+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:16.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.059+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:16.059+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:16.059+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578615, 1) 2019-09-04T06:30:16.059+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8641 2019-09-04T06:30:16.059+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:16.059+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:16.059+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8641 2019-09-04T06:30:16.060+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8644 2019-09-04T06:30:16.060+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8644 2019-09-04T06:30:16.060+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578615, 1), t: 1 }({ ts: Timestamp(1567578615, 1), t: 1 }) 2019-09-04T06:30:16.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.078+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.078+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
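
The failing operations above are internal finds on admin.system.keys (the HMAC keys used to sign $clusterTime). Each carries readConcern { level: "majority", afterOpTime: { ts: ..., t: 92 } }, and the waitUntilOpTime lines show why they stall: the newest committed snapshot on this node is { ts: Timestamp(1567578615, 1), t: 1 }, so a wait for an opTime in term 92 can only end when maxTimeMS (30000 ms) expires with MaxTimeMSExpired (errCode 50, runtimes of 30073 ms and 30197 ms above). A minimal pymongo sketch for replaying the same command by hand; only the command shape is taken from the log, while the host string, the directConnection flag, and the driver choice are assumptions:

    from bson.timestamp import Timestamp
    from pymongo import MongoClient
    from pymongo.errors import OperationFailure

    # Hypothetical direct connection to the config server this log comes from.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)

    # Same shape as the logged find on admin.system.keys. Adding an
    # "afterOpTime" whose term the node has never committed is what makes
    # waitUntilOpTime block until maxTimeMS expires.
    cmd = {
        "find": "system.keys",
        "filter": {"purpose": "HMAC", "expiresAt": {"$gt": Timestamp(1579858365, 0)}},
        "sort": {"expiresAt": 1},
        "readConcern": {"level": "majority"},
        "maxTimeMS": 30000,
    }

    try:
        print(client.admin.command(cmd)["cursor"]["firstBatch"])
    except OperationFailure as exc:
        # Expect code 50 (MaxTimeMSExpired) when the majority wait cannot finish.
        print(exc.code, exc.details)
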
2019-09-04T06:30:16.126+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:16.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.226+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:16.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 4E788FA2BE7CAD154C2A7459705E8B87927F31D2), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:16.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:30:16.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 4E788FA2BE7CAD154C2A7459705E8B87927F31D2), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:16.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 4E788FA2BE7CAD154C2A7459705E8B87927F31D2), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:16.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), opTime: { ts: Timestamp(1567578615, 1), t: 1 }, wallTime: new Date(1567578615041) } 2019-09-04T06:30:16.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 4E788FA2BE7CAD154C2A7459705E8B87927F31D2), keyId: 6727891476899954718 } }, $db: "admin" 
} numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:16.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:16.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.310+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.310+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.314+0000 D2 COMMAND [conn125] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:30:16.314+0000 I COMMAND [conn125] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.324+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.324+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.324+0000 D2 COMMAND [conn178] run command config.$cmd { ping: 1.0, lsid: { id: UUID("4ca3bc30-0f16-4335-a15f-3e7d48b5566e") }, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:30:16.324+0000 I COMMAND [conn178] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("4ca3bc30-0f16-4335-a15f-3e7d48b5566e") }, $clusterTime: { clusterTime: Timestamp(1567578555, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.326+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:16.343+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.343+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.426+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:16.526+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:16.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 
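
In between the failures the node itself looks healthy: every driver isMaster probe answers in 0 ms, and the replSetHeartbeat exchanges with cmodb802/cmodb804 in set "configrs" succeed. The same picture can be pulled on demand with isMaster and replSetGetStatus, which aggregate what the REPL_HB records trace member by member; a short sketch under the same assumed connection as above:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)

    # The probe behind every "command: isMaster ... 0ms" record in this log.
    hello = client.admin.command("isMaster")
    print(hello["setName"], "primary" if hello["ismaster"] else "secondary")

    # One call that summarizes the heartbeat state of all "configrs" members.
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        print(member["name"], member["stateStr"], member.get("syncingTo", "-"))
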
2019-09-04T06:30:16.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.578+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.578+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.626+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:16.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.726+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:16.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.827+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:16.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:16.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 588) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:16.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 588 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:26.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:16.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:44.839+0000 2019-09-04T06:30:16.838+0000 D2 ASIO [Replication] Request 588 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578615, 
1), t: 1 }, durableWallTime: new Date(1567578615041), opTime: { ts: Timestamp(1567578615, 1), t: 1 }, wallTime: new Date(1567578615041), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578615, 1), $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578615, 1) } 2019-09-04T06:30:16.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), opTime: { ts: Timestamp(1567578615, 1), t: 1 }, wallTime: new Date(1567578615041), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578615, 1), $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578615, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:16.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:16.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 588) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), opTime: { ts: Timestamp(1567578615, 1), t: 1 }, wallTime: new Date(1567578615041), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578615, 1), $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578615, 1) } 2019-09-04T06:30:16.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:16.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:18.838Z 2019-09-04T06:30:16.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:44.839+0000 2019-09-04T06:30:16.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:16.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 589) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", 
configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:16.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 589 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:26.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:16.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:44.839+0000 2019-09-04T06:30:16.839+0000 D2 ASIO [Replication] Request 589 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), opTime: { ts: Timestamp(1567578615, 1), t: 1 }, wallTime: new Date(1567578615041), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578615, 1), $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578615, 1) } 2019-09-04T06:30:16.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), opTime: { ts: Timestamp(1567578615, 1), t: 1 }, wallTime: new Date(1567578615041), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578615, 1), $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578615, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:16.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:16.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 589) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), opTime: { ts: Timestamp(1567578615, 1), t: 1 }, wallTime: new Date(1567578615041), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578615, 1), $clusterTime: { clusterTime: 
Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578615, 1) } 2019-09-04T06:30:16.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:16.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:30:25.614+0000 2019-09-04T06:30:16.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:30:27.080+0000 2019-09-04T06:30:16.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:16.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:18.839Z 2019-09-04T06:30:16.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:16.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:46.839+0000 2019-09-04T06:30:16.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:46.839+0000 2019-09-04T06:30:16.843+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.843+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:16.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:16.927+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:17.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:17.027+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:17.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:17.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:17.059+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:17.059+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:17.059+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578615, 1) 2019-09-04T06:30:17.059+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8677 2019-09-04T06:30:17.059+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:17.059+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:17.059+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8677 2019-09-04T06:30:17.060+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8680 2019-09-04T06:30:17.060+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8680 2019-09-04T06:30:17.060+0000 D3 REPL [rsSync-0] returning minvalid: { ts: 
Timestamp(1567578615, 1), t: 1 }({ ts: Timestamp(1567578615, 1), t: 1 }) 2019-09-04T06:30:17.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 4E788FA2BE7CAD154C2A7459705E8B87927F31D2), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:17.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:17.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 4E788FA2BE7CAD154C2A7459705E8B87927F31D2), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:17.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 4E788FA2BE7CAD154C2A7459705E8B87927F31D2), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:17.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), opTime: { ts: Timestamp(1567578615, 1), t: 1 }, wallTime: new Date(1567578615041) } 2019-09-04T06:30:17.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 4E788FA2BE7CAD154C2A7459705E8B87927F31D2), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:17.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:17.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:17.127+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:17.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:17.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:17.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:17.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:17.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:17.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:17.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:17.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:17.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:17.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:17.227+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:17.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:17.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:17.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:17.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:17.327+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:17.343+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:17.343+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:17.427+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:17.485+0000 D2 ASIO [RS] Request 587 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578617, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578617477), o: { $v: 1, $set: { ping: new Date(1567578617476) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpApplied: { ts: Timestamp(1567578617, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578615, 1), $clusterTime: { clusterTime: Timestamp(1567578617, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 1) } 2019-09-04T06:30:17.485+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578617, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578617477), o: { $v: 1, $set: { ping: new Date(1567578617476) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpApplied: { ts: Timestamp(1567578617, 1), t: 1 }, rbid: 1, primaryIndex: 0, 
syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578615, 1), $clusterTime: { clusterTime: Timestamp(1567578617, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:17.485+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:17.485+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578617, 1) and ending at ts: Timestamp(1567578617, 1) 2019-09-04T06:30:17.485+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:27.080+0000 2019-09-04T06:30:17.485+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:27.851+0000 2019-09-04T06:30:17.485+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:17.485+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:46.839+0000 2019-09-04T06:30:17.485+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578617, 1), t: 1 } 2019-09-04T06:30:17.485+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:17.485+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:17.485+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578615, 1) 2019-09-04T06:30:17.485+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8693 2019-09-04T06:30:17.485+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:17.485+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:17.485+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8693 2019-09-04T06:30:17.485+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:17.485+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:17.485+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578615, 1) 2019-09-04T06:30:17.485+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8696 2019-09-04T06:30:17.485+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:30:17.485+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:17.485+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:17.485+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for 
snapshot id 8696 2019-09-04T06:30:17.485+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578617, 1) } 2019-09-04T06:30:17.486+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8681 2019-09-04T06:30:17.486+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8681 2019-09-04T06:30:17.486+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8699 2019-09-04T06:30:17.486+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8699 2019-09-04T06:30:17.486+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:17.486+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 8701 2019-09-04T06:30:17.486+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578617, 1) 2019-09-04T06:30:17.486+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578617, 1) 2019-09-04T06:30:17.486+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 8701 2019-09-04T06:30:17.486+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:17.486+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:30:17.486+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8700 2019-09-04T06:30:17.486+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8700 2019-09-04T06:30:17.486+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8703 2019-09-04T06:30:17.486+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8703 2019-09-04T06:30:17.486+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578617, 1), t: 1 }({ ts: Timestamp(1567578617, 1), t: 1 }) 2019-09-04T06:30:17.486+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578617, 1) 2019-09-04T06:30:17.486+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8704 2019-09-04T06:30:17.486+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578617, 1) } } ] } sort: {} projection: {} 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578617, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578617, 1) || First: notFirst: full path: ts 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
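Each record in this capture follows the pre-4.4 plain-text log format: an ISO-8601 timestamp, a severity code (I for info, or D1-D5 for increasing debug verbosity; W/E/F also occur in general), a component tag (QUERY, REPL, STORAGE, or "-" for none), the acting thread in square brackets, then the free-form message. A minimal, stdlib-only sketch for splitting those fields apart, using a sample line copied from this capture:

import re

# Field layout observed in this capture: ts, severity, component,
# [context], message. Severities W/E/F are included for completeness.
LOG_LINE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}\+\d{4}) "
    r"(?P<severity>[IWEF]|D[1-5]) +"
    r"(?P<component>\S+) +"        # also matches the bare "-" component
    r"\[(?P<context>[^\]]+)\] "
    r"(?P<message>.*)")

sample = ("2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] "
          "Planner: outputted 0 indexed solutions.")

m = LOG_LINE.match(sample)
if m:
    # -> D5 QUERY rsSync-0 Planner: outputted 0 indexed solutions.
    print(m["severity"], m["component"], m["context"], m["message"])
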
2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578617, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578617, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578617, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
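The D5 QUERY traces above show how the batch applier's bookkeeping query against local.replset.minvalid is planned: the $or is decomposed into its two branches ({ t: { $lt: 1 } } and { t: 1, ts: { $lt: Timestamp(...) } }), each branch is rated against the only available index (_id_), neither predicate is covered, every pass ends with "outputted 0 indexed solutions", and the planner falls back to the COLLSCAN emitted just below. The same fallback can be observed from a driver with explain(); a rough pymongo sketch (the URI is a placeholder, and it assumes a throwaway deployment where reading a secondary's local database is acceptable):

from pymongo import MongoClient, ReadPreference
from bson.timestamp import Timestamp

# Placeholder URI; point this at a test member, never production.
client = MongoClient("mongodb://localhost:27019", directConnection=True)

local = client.get_database(
    "local", read_preference=ReadPreference.SECONDARY_PREFERRED)
minvalid = local["replset.minvalid"]

# The same $or the applier logs above: an older term, or the current
# term with an earlier ts.
query = {"$or": [{"t": {"$lt": 1}},
                 {"t": 1, "ts": {"$lt": Timestamp(1567578617, 1)}}]}

# With only the default _id index present, the winning plan is a
# collection scan, matching the D5 "outputting a collscan" trace.
plan = minvalid.find(query).explain()
print(plan["queryPlanner"]["winningPlan"]["stage"])  # -> COLLSCAN
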
2019-09-04T06:30:17.486+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578617, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:17.486+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8704 2019-09-04T06:30:17.486+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:17.486+0000 D3 STORAGE [repl-writer-worker-13] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:17.486+0000 D3 REPL [repl-writer-worker-13] applying op: { ts: Timestamp(1567578617, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578617477), o: { $v: 1, $set: { ping: new Date(1567578617476) } } }, oplog application mode: Secondary 2019-09-04T06:30:17.487+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578617, 1) 2019-09-04T06:30:17.487+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 8706 2019-09-04T06:30:17.487+0000 D2 QUERY [repl-writer-worker-13] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" } 2019-09-04T06:30:17.487+0000 D4 WRITE [repl-writer-worker-13] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:30:17.487+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 8706 2019-09-04T06:30:17.487+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:17.487+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578617, 1), t: 1 }({ ts: Timestamp(1567578617, 1), t: 1 }) 2019-09-04T06:30:17.487+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578617, 1) 2019-09-04T06:30:17.487+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8705 2019-09-04T06:30:17.487+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:17.487+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:17.487+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:30:17.487+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:17.487+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:17.487+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:17.487+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8705 2019-09-04T06:30:17.487+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578617, 1) 2019-09-04T06:30:17.487+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:17.487+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8709 2019-09-04T06:30:17.487+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8709 2019-09-04T06:30:17.487+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578617, 1), t: 1 }({ ts: Timestamp(1567578617, 1), t: 1 }) 2019-09-04T06:30:17.487+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578617, 1), t: 1 }, appliedWallTime: new Date(1567578617477), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:17.487+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 590 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:47.487+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578617, 1), t: 1 }, appliedWallTime: new Date(1567578617477), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:17.487+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:47.487+0000 2019-09-04T06:30:17.487+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578617, 1), t: 1 } 2019-09-04T06:30:17.487+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 591 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:27.487+0000 cmd:{ getMore: 2779728788818727477, 
collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578615, 1), t: 1 } } 2019-09-04T06:30:17.487+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:47.487+0000 2019-09-04T06:30:17.487+0000 D2 ASIO [RS] Request 590 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578615, 1), $clusterTime: { clusterTime: Timestamp(1567578617, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 1) } 2019-09-04T06:30:17.487+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578615, 1), $clusterTime: { clusterTime: Timestamp(1567578617, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:17.488+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:17.488+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:47.487+0000 2019-09-04T06:30:17.497+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:30:17.497+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:17.497+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578617, 1), t: 1 }, durableWallTime: new Date(1567578617477), appliedOpTime: { ts: Timestamp(1567578617, 1), t: 1 }, appliedWallTime: new Date(1567578617477), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:17.497+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 592 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:47.497+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: 
{ ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578617, 1), t: 1 }, durableWallTime: new Date(1567578617477), appliedOpTime: { ts: Timestamp(1567578617, 1), t: 1 }, appliedWallTime: new Date(1567578617477), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:17.497+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:47.487+0000 2019-09-04T06:30:17.497+0000 D2 ASIO [RS] Request 592 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578615, 1), $clusterTime: { clusterTime: Timestamp(1567578617, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 1) } 2019-09-04T06:30:17.497+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578615, 1), t: 1 }, lastCommittedWall: new Date(1567578615041), lastOpVisible: { ts: Timestamp(1567578615, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578615, 1), $clusterTime: { clusterTime: Timestamp(1567578617, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:17.497+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:17.497+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:47.487+0000 2019-09-04T06:30:17.498+0000 D2 ASIO [RS] Request 591 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 1), t: 1 }, lastCommittedWall: new Date(1567578617477), lastOpVisible: { ts: Timestamp(1567578617, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578617, 1), t: 1 }, lastCommittedWall: new Date(1567578617477), lastOpApplied: { ts: Timestamp(1567578617, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578617, 1), $clusterTime: { clusterTime: Timestamp(1567578617, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 1) } 2019-09-04T06:30:17.498+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 1), t: 1 }, lastCommittedWall: new Date(1567578617477), lastOpVisible: { ts: Timestamp(1567578617, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578617, 1), t: 1 }, lastCommittedWall: new Date(1567578617477), lastOpApplied: { ts: Timestamp(1567578617, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578617, 1), $clusterTime: { clusterTime: Timestamp(1567578617, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:17.498+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:17.498+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:30:17.498+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578617, 1), t: 1 }, 2019-09-04T06:30:17.477+0000 2019-09-04T06:30:17.498+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578617, 1), t: 1 }, 2019-09-04T06:30:17.477+0000 2019-09-04T06:30:17.498+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578612, 1) 2019-09-04T06:30:17.498+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:27.851+0000 2019-09-04T06:30:17.498+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:28.831+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578617, 1), t: 1 }, 2019-09-04T06:30:17.477+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn226] Got notified of new snapshot: { ts: Timestamp(1567578617, 1), t: 1 }, 2019-09-04T06:30:17.477+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn226] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.682+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578617, 1), t: 1 }, 2019-09-04T06:30:17.477+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578617, 1), t: 1 }, 2019-09-04T06:30:17.477+0000 2019-09-04T06:30:17.498+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:17.498+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000 2019-09-04T06:30:17.498+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:46.839+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn221] Got notified of 
new snapshot: { ts: Timestamp(1567578617, 1), t: 1 }, 2019-09-04T06:30:17.477+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn221] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.289+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn235] Got notified of new snapshot: { ts: Timestamp(1567578617, 1), t: 1 }, 2019-09-04T06:30:17.477+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn235] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.840+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578617, 1), t: 1 }, 2019-09-04T06:30:17.477+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578617, 1), t: 1 }, 2019-09-04T06:30:17.477+0000 2019-09-04T06:30:17.498+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 593 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:27.498+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578617, 1), t: 1 } } 2019-09-04T06:30:17.498+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn230] Got notified of new snapshot: { ts: Timestamp(1567578617, 1), t: 1 }, 2019-09-04T06:30:17.477+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn230] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.677+0000 2019-09-04T06:30:17.498+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:47.487+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn216] Got notified of new snapshot: { ts: Timestamp(1567578617, 1), t: 1 }, 2019-09-04T06:30:17.477+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn216] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.092+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn207] Got notified of new snapshot: { ts: Timestamp(1567578617, 1), t: 1 }, 2019-09-04T06:30:17.477+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn207] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.671+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn222] Got notified of new snapshot: { ts: Timestamp(1567578617, 1), t: 1 }, 2019-09-04T06:30:17.477+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn222] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.822+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn238] Got notified of new snapshot: { ts: Timestamp(1567578617, 1), t: 1 }, 2019-09-04T06:30:17.477+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn238] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.824+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn208] Got notified of new snapshot: { ts: Timestamp(1567578617, 1), t: 1 }, 2019-09-04T06:30:17.477+0000 2019-09-04T06:30:17.498+0000 D3 REPL [conn208] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.839+0000 2019-09-04T06:30:17.528+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:17.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:17.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:17.566+0000 D2 COMMAND 
[conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:17.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:17.586+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578617, 1) 2019-09-04T06:30:17.628+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:17.639+0000 D2 COMMAND [conn21] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578615, 1), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59f902d1a496712d71ec'), operName: "", parentOperId: "5d6f59f902d1a496712d71eb" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 4E788FA2BE7CAD154C2A7459705E8B87927F31D2), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578615, 1), t: 1 } }, $db: "config" } 2019-09-04T06:30:17.639+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f59f902d1a496712d71eb|5d6f59f902d1a496712d71ec 2019-09-04T06:30:17.639+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578615, 1), t: 1 } } } 2019-09-04T06:30:17.639+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:30:17.639+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578615, 1), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59f902d1a496712d71ec'), operName: "", parentOperId: "5d6f59f902d1a496712d71eb" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 4E788FA2BE7CAD154C2A7459705E8B87927F31D2), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578615, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578617, 1) 2019-09-04T06:30:17.639+0000 D5 QUERY [conn21] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:17.639+0000 D5 QUERY [conn21] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:30:17.639+0000 D5 QUERY [conn21] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:30:17.639+0000 D5 QUERY [conn21] Rated tree: $and 2019-09-04T06:30:17.639+0000 D5 QUERY [conn21] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:17.639+0000 D5 QUERY [conn21] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:17.639+0000 D2 QUERY [conn21] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:17.639+0000 D3 STORAGE [conn21] WT begin_transaction for snapshot id 8714 2019-09-04T06:30:17.639+0000 D3 STORAGE [conn21] WT rollback_transaction for snapshot id 8714 2019-09-04T06:30:17.639+0000 I COMMAND [conn21] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578615, 1), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f59f902d1a496712d71ec'), operName: "", parentOperId: "5d6f59f902d1a496712d71eb" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 4E788FA2BE7CAD154C2A7459705E8B87927F31D2), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578615, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:30:17.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:17.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:17.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:17.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:17.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:17.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:17.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:17.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:17.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:17.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:17.728+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:17.748+0000 D2 ASIO [RS] Request 593 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578617, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578617737), o: { $v: 1, $set: { ping: new Date(1567578617731) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 1), t: 1 }, lastCommittedWall: new Date(1567578617477), lastOpVisible: { ts: Timestamp(1567578617, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578617, 1), t: 1 }, lastCommittedWall: new Date(1567578617477), lastOpApplied: { ts: Timestamp(1567578617, 2), t: 
1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578617, 1), $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 2) } 2019-09-04T06:30:17.748+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578617, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578617737), o: { $v: 1, $set: { ping: new Date(1567578617731) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 1), t: 1 }, lastCommittedWall: new Date(1567578617477), lastOpVisible: { ts: Timestamp(1567578617, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578617, 1), t: 1 }, lastCommittedWall: new Date(1567578617477), lastOpApplied: { ts: Timestamp(1567578617, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578617, 1), $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:17.748+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:17.748+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578617, 2) and ending at ts: Timestamp(1567578617, 2) 2019-09-04T06:30:17.749+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:28.831+0000 2019-09-04T06:30:17.749+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:28.012+0000 2019-09-04T06:30:17.749+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:17.749+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:46.839+0000 2019-09-04T06:30:17.749+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578617, 2), t: 1 } 2019-09-04T06:30:17.749+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:17.749+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:17.749+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578617, 1) 2019-09-04T06:30:17.749+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8722 2019-09-04T06:30:17.749+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:17.749+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: 
true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:17.749+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8722 2019-09-04T06:30:17.749+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:17.749+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:17.749+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578617, 1) 2019-09-04T06:30:17.749+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:30:17.749+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8725 2019-09-04T06:30:17.749+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578617, 2) } 2019-09-04T06:30:17.749+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:17.749+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:17.749+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8725 2019-09-04T06:30:17.749+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8711 2019-09-04T06:30:17.749+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8711 2019-09-04T06:30:17.749+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8728 2019-09-04T06:30:17.749+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8728 2019-09-04T06:30:17.749+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:17.749+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 8730 2019-09-04T06:30:17.749+0000 D4 STORAGE [repl-writer-worker-14] inserting record with timestamp Timestamp(1567578617, 2) 2019-09-04T06:30:17.749+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578617, 2) 2019-09-04T06:30:17.749+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 8730 2019-09-04T06:30:17.749+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:17.749+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:30:17.749+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8729 2019-09-04T06:30:17.749+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8729 2019-09-04T06:30:17.749+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8732 2019-09-04T06:30:17.749+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8732 2019-09-04T06:30:17.749+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578617, 2), t: 1 }({ ts: Timestamp(1567578617, 2), t: 1 }) 2019-09-04T06:30:17.749+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578617, 2) 2019-09-04T06:30:17.749+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8733 2019-09-04T06:30:17.749+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 
} }, { t: 1, ts: { $lt: Timestamp(1567578617, 2) } } ] } sort: {} projection: {} 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578617, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578617, 2) || First: notFirst: full path: ts 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578617, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578617, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:17.749+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578617, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:17.750+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
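The fetcher traffic around this point (requests 591 and 593 above, 595 below) is the oplog fetcher's steady-state loop: an awaitable getMore on local.oplog.rs with batchSize: 13981010 and maxTimeMS: 5000, carrying the current term and lastKnownCommittedOpTime so the sync source can advance the commit point even when nextBatch comes back empty. A driver-side approximation of that tailing pattern, for inspection only; it reproduces the ts filter and tailable-await read but not the fetcher's internal term/commit-point handshake (host and starting optime are taken from the log, but treat both as placeholders):

from pymongo import CursorType, MongoClient, ReadPreference
from bson.timestamp import Timestamp

# Placeholder host; tail a test member, not a production config server.
client = MongoClient("mongodb://cmodb804.togewa.com:27019",
                     directConnection=True)
local = client.get_database(
    "local", read_preference=ReadPreference.SECONDARY_PREFERRED)
oplog = local["oplog.rs"]

last_fetched = Timestamp(1567578617, 2)  # last optime seen in the log
cursor = oplog.find({"ts": {"$gt": last_fetched}},
                    cursor_type=CursorType.TAILABLE_AWAIT)

# Each document has the shape of the nextBatch entries above:
# ts/t, op ("u" here), ns ("config.lockpings"), o2 (query), o (update).
for entry in cursor:
    print(entry["ts"], entry["op"], entry["ns"])
    break  # inspect a single entry and stop
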
2019-09-04T06:30:17.750+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578617, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:17.750+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8733
2019-09-04T06:30:17.750+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:17.750+0000 D3 STORAGE [repl-writer-worker-3] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:17.750+0000 D3 REPL [repl-writer-worker-3] applying op: { ts: Timestamp(1567578617, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578617737), o: { $v: 1, $set: { ping: new Date(1567578617731) } } }, oplog application mode: Secondary
2019-09-04T06:30:17.750+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578617, 2)
2019-09-04T06:30:17.750+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 8735
2019-09-04T06:30:17.750+0000 D2 QUERY [repl-writer-worker-3] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }
2019-09-04T06:30:17.750+0000 D4 WRITE [repl-writer-worker-3] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:30:17.750+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 8735
2019-09-04T06:30:17.750+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:17.750+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578617, 2), t: 1 }({ ts: Timestamp(1567578617, 2), t: 1 })
2019-09-04T06:30:17.750+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578617, 2)
2019-09-04T06:30:17.750+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8734
2019-09-04T06:30:17.750+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and Sort: {} Proj: {} =============================
2019-09-04T06:30:17.750+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:17.750+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:30:17.750+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:17.750+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:17.750+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:17.750+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8734
2019-09-04T06:30:17.750+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578617, 2)
2019-09-04T06:30:17.750+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8738
2019-09-04T06:30:17.750+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8738
2019-09-04T06:30:17.750+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578617, 2), t: 1 }({ ts: Timestamp(1567578617, 2), t: 1 })
2019-09-04T06:30:17.750+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:17.750+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578617, 1), t: 1 }, durableWallTime: new Date(1567578617477), appliedOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, appliedWallTime: new Date(1567578617737), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 1), t: 1 }, lastCommittedWall: new Date(1567578617477), lastOpVisible: { ts: Timestamp(1567578617, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:17.750+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 594 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:47.750+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578617, 1), t: 1 }, durableWallTime: new Date(1567578617477), appliedOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, appliedWallTime: new Date(1567578617737), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 1), t: 1 }, lastCommittedWall: new Date(1567578617477), lastOpVisible: { ts: Timestamp(1567578617, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:17.750+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:47.750+0000
2019-09-04T06:30:17.750+0000 D2 ASIO [RS] Request 594 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 1), t: 1 }, lastCommittedWall: new Date(1567578617477), lastOpVisible: { ts: Timestamp(1567578617, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578617, 1), $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 2) }
2019-09-04T06:30:17.750+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 1), t: 1 }, lastCommittedWall: new Date(1567578617477), lastOpVisible: { ts: Timestamp(1567578617, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578617, 1), $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:17.750+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:17.750+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:47.750+0000
2019-09-04T06:30:17.751+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578617, 2), t: 1 }
2019-09-04T06:30:17.751+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 595 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:27.751+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578617, 1), t: 1 } }
2019-09-04T06:30:17.751+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:47.750+0000
2019-09-04T06:30:17.751+0000 D2 ASIO [RS] Request 595 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpVisible: { ts: Timestamp(1567578617, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpApplied: { ts: Timestamp(1567578617, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578617, 2), $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 2) }
2019-09-04T06:30:17.751+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpVisible: { ts: Timestamp(1567578617, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpApplied: { ts: Timestamp(1567578617, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578617, 2), $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:17.751+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:17.751+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:17.751+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578617, 2), t: 1 }, 2019-09-04T06:30:17.737+0000
2019-09-04T06:30:17.751+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578617, 2), t: 1 }, 2019-09-04T06:30:17.737+0000
2019-09-04T06:30:17.751+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578612, 2)
2019-09-04T06:30:17.751+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:28.012+0000
2019-09-04T06:30:17.752+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:27.812+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578617, 2), t: 1 }, 2019-09-04T06:30:17.737+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:17.752+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 596 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:27.752+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578617, 2), t: 1 } }
2019-09-04T06:30:17.752+0000 D3 REPL [conn238] Got notified of new snapshot: { ts: Timestamp(1567578617, 2), t: 1 }, 2019-09-04T06:30:17.737+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn238] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.824+0000
2019-09-04T06:30:17.752+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:47.750+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn230] Got notified of new snapshot: { ts: Timestamp(1567578617, 2), t: 1 }, 2019-09-04T06:30:17.737+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn230] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.677+0000
2019-09-04T06:30:17.752+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:17.752+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:46.839+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn216] Got notified of new snapshot: { ts: Timestamp(1567578617, 2), t: 1 }, 2019-09-04T06:30:17.737+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn216] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.092+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn208] Got notified of new snapshot: { ts: Timestamp(1567578617, 2), t: 1 }, 2019-09-04T06:30:17.737+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn208] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.839+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn222] Got notified of new snapshot: { ts: Timestamp(1567578617, 2), t: 1 }, 2019-09-04T06:30:17.737+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn222] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.822+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578617, 2), t: 1 }, 2019-09-04T06:30:17.737+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578617, 2), t: 1 }, 2019-09-04T06:30:17.737+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578617, 2), t: 1 }, 2019-09-04T06:30:17.737+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn221] Got notified of new snapshot: { ts: Timestamp(1567578617, 2), t: 1 }, 2019-09-04T06:30:17.737+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn221] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.289+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn226] Got notified of new snapshot: { ts: Timestamp(1567578617, 2), t: 1 }, 2019-09-04T06:30:17.737+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn226] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.682+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578617, 2), t: 1 }, 2019-09-04T06:30:17.737+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn207] Got notified of new snapshot: { ts: Timestamp(1567578617, 2), t: 1 }, 2019-09-04T06:30:17.737+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn207] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.671+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn235] Got notified of new snapshot: { ts: Timestamp(1567578617, 2), t: 1 }, 2019-09-04T06:30:17.737+0000
2019-09-04T06:30:17.752+0000 D3 REPL [conn235] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.840+0000
2019-09-04T06:30:17.752+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:30:17.753+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:17.753+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), appliedOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, appliedWallTime: new Date(1567578617737), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpVisible: { ts: Timestamp(1567578617, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:17.753+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 597 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:47.753+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), appliedOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, appliedWallTime: new Date(1567578617737), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, durableWallTime: new Date(1567578615041), appliedOpTime: { ts: Timestamp(1567578615, 1), t: 1 }, appliedWallTime: new Date(1567578615041), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpVisible: { ts: Timestamp(1567578617, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:17.753+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:47.750+0000
2019-09-04T06:30:17.753+0000 D2 ASIO [RS] Request 597 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpVisible: { ts: Timestamp(1567578617, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578617, 2), $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 2) }
2019-09-04T06:30:17.753+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpVisible: { ts: Timestamp(1567578617, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578617, 2), $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:17.753+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:17.753+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:47.750+0000
2019-09-04T06:30:17.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:17.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:17.828+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:17.843+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:17.843+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:17.849+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578617, 2)
2019-09-04T06:30:17.928+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:18.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:18.028+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:18.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.129+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:18.130+0000 D2 COMMAND [conn20] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.130+0000 I COMMAND [conn20] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.229+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:18.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, D3116BAEF4FE99A67988EC1F6090ED75B280E4D4), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:18.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:30:18.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, D3116BAEF4FE99A67988EC1F6090ED75B280E4D4), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:18.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, D3116BAEF4FE99A67988EC1F6090ED75B280E4D4), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:18.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), opTime: { ts: Timestamp(1567578617, 2), t: 1 }, wallTime: new Date(1567578617737) }
2019-09-04T06:30:18.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, D3116BAEF4FE99A67988EC1F6090ED75B280E4D4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:18.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:18.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.329+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:18.343+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.343+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.429+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:18.512+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:30:18.512+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:30:18.512+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:30:18.512+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:30:18.529+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:18.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.565+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.629+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:18.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.729+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:18.749+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:18.749+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:18.749+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578617, 2)
2019-09-04T06:30:18.749+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8765
2019-09-04T06:30:18.749+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:18.749+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:18.749+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8765
2019-09-04T06:30:18.749+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/28--6194257481163143499 -> { numRecords: 22, dataSize: 1828 }
2019-09-04T06:30:18.749+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/42--6194257481163143499 -> { numRecords: 2, dataSize: 308 }
2019-09-04T06:30:18.749+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:local/collection/16--6194257481163143499 -> { numRecords: 1441, dataSize: 324876 }
2019-09-04T06:30:18.749+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer flush took 71 µs
2019-09-04T06:30:18.750+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8768
2019-09-04T06:30:18.750+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8768
2019-09-04T06:30:18.750+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578617, 2), t: 1 }({ ts: Timestamp(1567578617, 2), t: 1 })
2019-09-04T06:30:18.831+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:18.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:18.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 598) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:18.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 598 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:28.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:18.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:46.839+0000
2019-09-04T06:30:18.838+0000 D2 ASIO [Replication] Request 598 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), opTime: { ts: Timestamp(1567578617, 2), t: 1 }, wallTime: new Date(1567578617737), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpVisible: { ts: Timestamp(1567578617, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578617, 2), $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 2) }
2019-09-04T06:30:18.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), opTime: { ts: Timestamp(1567578617, 2), t: 1 }, wallTime: new Date(1567578617737), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpVisible: { ts: Timestamp(1567578617, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578617, 2), $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:18.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:18.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 598) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), opTime: { ts: Timestamp(1567578617, 2), t: 1 }, wallTime: new Date(1567578617737), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpVisible: { ts: Timestamp(1567578617, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578617, 2), $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 2) }
2019-09-04T06:30:18.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:30:18.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:20.838Z
2019-09-04T06:30:18.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:46.839+0000
2019-09-04T06:30:18.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:18.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 599) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:18.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 599 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:28.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:18.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:46.839+0000
2019-09-04T06:30:18.839+0000 D2 ASIO [Replication] Request 599 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), opTime: { ts: Timestamp(1567578617, 2), t: 1 }, wallTime: new Date(1567578617737), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpVisible: { ts: Timestamp(1567578617, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578617, 2), $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 2) }
2019-09-04T06:30:18.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), opTime: { ts: Timestamp(1567578617, 2), t: 1 }, wallTime: new Date(1567578617737), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpVisible: { ts: Timestamp(1567578617, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578617, 2), $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:30:18.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:18.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 599) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), opTime: { ts: Timestamp(1567578617, 2), t: 1 }, wallTime: new Date(1567578617737), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpVisible: { ts: Timestamp(1567578617, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578617, 2), $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 2) }
2019-09-04T06:30:18.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:30:18.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:30:27.812+0000
2019-09-04T06:30:18.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:30:29.548+0000
2019-09-04T06:30:18.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:30:18.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:20.839Z
2019-09-04T06:30:18.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:18.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:48.839+0000
2019-09-04T06:30:18.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:48.839+0000
2019-09-04T06:30:18.843+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:18.843+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:18.931+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:19.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:19.031+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:19.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:19.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:19.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, D3116BAEF4FE99A67988EC1F6090ED75B280E4D4), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:19.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:30:19.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, D3116BAEF4FE99A67988EC1F6090ED75B280E4D4), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:19.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, D3116BAEF4FE99A67988EC1F6090ED75B280E4D4), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:19.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), opTime: { ts: Timestamp(1567578617, 2), t: 1 }, wallTime: new Date(1567578617737) }
2019-09-04T06:30:19.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578617, 2), signature: { hash: BinData(0, D3116BAEF4FE99A67988EC1F6090ED75B280E4D4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:19.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:19.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:19.131+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:19.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:19.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:19.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:19.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:19.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:19.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:19.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:19.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:19.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:19.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:19.226+0000 D2 ASIO [RS] Request 596 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578619, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578619224), o: { $v: 1, $set: { ping: new Date(1567578619221) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpVisible: { ts: Timestamp(1567578617, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpApplied: { ts: Timestamp(1567578619, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578617, 2), $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) }
2019-09-04T06:30:19.226+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578619, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578619224), o: { $v: 1, $set: { ping: new Date(1567578619221) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpVisible: { ts: Timestamp(1567578617, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpApplied: { ts: Timestamp(1567578619, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578617, 2), $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:19.226+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:19.226+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578619, 1) and ending at ts: Timestamp(1567578619, 1)
2019-09-04T06:30:19.226+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:29.548+0000
2019-09-04T06:30:19.226+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:29.335+0000
2019-09-04T06:30:19.226+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:19.226+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:48.839+0000
2019-09-04T06:30:19.226+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578619, 1), t: 1 }
2019-09-04T06:30:19.226+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:19.226+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:19.226+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578617, 2)
2019-09-04T06:30:19.226+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8781
2019-09-04T06:30:19.226+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:19.226+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:19.226+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8781
2019-09-04T06:30:19.226+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:19.226+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:30:19.226+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:19.226+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578617, 2)
2019-09-04T06:30:19.226+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8784
2019-09-04T06:30:19.226+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578619, 1) }
2019-09-04T06:30:19.226+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:19.226+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:19.226+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8784
2019-09-04T06:30:19.226+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8769
2019-09-04T06:30:19.226+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8769
2019-09-04T06:30:19.226+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8787
2019-09-04T06:30:19.226+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8787
2019-09-04T06:30:19.226+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:19.226+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 8789
2019-09-04T06:30:19.226+0000 D4 STORAGE [repl-writer-worker-12] inserting record with timestamp Timestamp(1567578619, 1)
2019-09-04T06:30:19.226+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578619, 1)
2019-09-04T06:30:19.226+0000 D2 STORAGE [repl-writer-worker-12] WiredTigerSizeStorer::store Marking table:local/collection/16--6194257481163143499 dirty, numRecords: 1442, dataSize: 325112, use_count: 3
2019-09-04T06:30:19.227+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 8789
2019-09-04T06:30:19.227+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:19.227+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:30:19.227+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8788
2019-09-04T06:30:19.227+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8788
2019-09-04T06:30:19.227+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8791
2019-09-04T06:30:19.227+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8791
2019-09-04T06:30:19.227+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578619, 1), t: 1 }({ ts: Timestamp(1567578619, 1), t: 1 })
2019-09-04T06:30:19.227+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578619, 1)
2019-09-04T06:30:19.227+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8792
2019-09-04T06:30:19.227+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578619, 1) } } ] } sort: {} projection: {}
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and t $eq 1 ts $lt Timestamp(1567578619, 1) Sort: {} Proj: {} =============================
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578619, 1) || First: notFirst: full path: ts
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578619, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $or $and t $eq 1 ts $lt Timestamp(1567578619, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578619, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578619, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:19.227+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8792
2019-09-04T06:30:19.227+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:19.227+0000 D3 STORAGE [repl-writer-worker-10] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:19.227+0000 D3 REPL [repl-writer-worker-10] applying op: { ts: Timestamp(1567578619, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578619224), o: { $v: 1, $set: { ping: new Date(1567578619221) } } }, oplog application mode: Secondary
2019-09-04T06:30:19.227+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578619, 1)
2019-09-04T06:30:19.227+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 8794
2019-09-04T06:30:19.227+0000 D2 QUERY [repl-writer-worker-10] Using idhack: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }
2019-09-04T06:30:19.227+0000 D2 STORAGE [repl-writer-worker-10] WiredTigerSizeStorer::store Marking table:config/collection/28--6194257481163143499 dirty, numRecords: 22, dataSize: 1828, use_count: 3
2019-09-04T06:30:19.227+0000 D4 WRITE [repl-writer-worker-10] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:30:19.227+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 8794
2019-09-04T06:30:19.227+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:19.227+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578619, 1), t: 1 }({ ts: Timestamp(1567578619, 1), t: 1 })
2019-09-04T06:30:19.227+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578619, 1)
2019-09-04T06:30:19.227+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8793
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and Sort: {} Proj: {} =============================
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:19.227+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:19.227+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:19.227+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8793
2019-09-04T06:30:19.227+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578619, 1)
2019-09-04T06:30:19.227+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8797
2019-09-04T06:30:19.227+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8797
2019-09-04T06:30:19.227+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578619, 1), t: 1 }({ ts: Timestamp(1567578619, 1), t: 1 })
2019-09-04T06:30:19.227+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:19.227+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), appliedOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, appliedWallTime: new Date(1567578617737), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), appliedOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, appliedWallTime: new Date(1567578617737), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpVisible: { ts: Timestamp(1567578617, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:19.227+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 600 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:49.227+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), appliedOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, appliedWallTime: new Date(1567578617737), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), appliedOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, appliedWallTime: new Date(1567578617737), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578617, 2), t: 1 }, lastCommittedWall: new Date(1567578617737), lastOpVisible: { ts: Timestamp(1567578617, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:19.227+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:49.227+0000
2019-09-04T06:30:19.228+0000 D2 ASIO [RS] Request 600 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) }
2019-09-04T06:30:19.228+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:19.228+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:19.228+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:49.228+0000
2019-09-04T06:30:19.228+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578619, 1), t: 1 }
2019-09-04T06:30:19.228+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 601 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:29.228+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578617, 2), t: 1 } }
2019-09-04T06:30:19.228+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:49.228+0000
2019-09-04T06:30:19.228+0000 D2 ASIO [RS] Request 601 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpApplied: { ts: Timestamp(1567578619, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) }
2019-09-04T06:30:19.228+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpApplied: { ts: Timestamp(1567578619, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:19.228+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:19.228+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:19.228+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578619, 1), t: 1 }, 2019-09-04T06:30:19.224+0000
2019-09-04T06:30:19.228+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578619, 1), t: 1 }, 2019-09-04T06:30:19.224+0000
2019-09-04T06:30:19.229+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578614, 1)
2019-09-04T06:30:19.229+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:29.335+0000
2019-09-04T06:30:19.229+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:30.301+0000
2019-09-04T06:30:19.229+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:19.229+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 602 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:29.229+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578619, 1), t: 1 } }
2019-09-04T06:30:19.229+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:48.839+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn230] Got notified of new snapshot: { ts: Timestamp(1567578619, 1), t: 1 }, 2019-09-04T06:30:19.224+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn230] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.677+0000
2019-09-04T06:30:19.229+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:49.228+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn222] Got notified of new snapshot: { ts: Timestamp(1567578619, 1), t: 1 }, 2019-09-04T06:30:19.224+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn222] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.822+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn238] Got notified of new snapshot: { ts: Timestamp(1567578619, 1), t: 1 }, 2019-09-04T06:30:19.224+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn238] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.824+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn214] Got notified of new snapshot: { ts: Timestamp(1567578619, 1), t: 1 }, 2019-09-04T06:30:19.224+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn214] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn226] Got notified of new snapshot: { ts: Timestamp(1567578619, 1), t: 1 }, 2019-09-04T06:30:19.224+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn226] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.682+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn207] Got notified of new snapshot: { ts: Timestamp(1567578619, 1), t: 1 }, 2019-09-04T06:30:19.224+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn207] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.671+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn215] Got notified of new snapshot: { ts: Timestamp(1567578619, 1), t: 1 }, 2019-09-04T06:30:19.224+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn215] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn213] Got notified of new snapshot: { ts: Timestamp(1567578619, 1), t: 1 }, 2019-09-04T06:30:19.224+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn213] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:21.661+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn216] Got notified of new snapshot: { ts: Timestamp(1567578619, 1), t: 1 }, 2019-09-04T06:30:19.224+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn216] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.092+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn208] Got notified of new snapshot: { ts: Timestamp(1567578619, 1), t: 1 }, 2019-09-04T06:30:19.224+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn208] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.839+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn219] Got notified of new snapshot: { ts: Timestamp(1567578619, 1), t: 1 }, 2019-09-04T06:30:19.224+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn219] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:24.153+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn221] Got notified of new snapshot: { ts: Timestamp(1567578619, 1), t: 1 }, 2019-09-04T06:30:19.224+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn221] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.289+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn217] Got notified of new snapshot: { ts: Timestamp(1567578619, 1), t: 1 }, 2019-09-04T06:30:19.224+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn217] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:22.595+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn235] Got notified of new snapshot: { ts: Timestamp(1567578619, 1), t: 1 }, 2019-09-04T06:30:19.224+0000
2019-09-04T06:30:19.229+0000 D3 REPL [conn235] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.840+0000
2019-09-04T06:30:19.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:19.235+0000 D4 - [FlowControlRefresher] Refreshing tickets.
Before: 1000000000 Now: 1000000000 2019-09-04T06:30:19.240+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:30:19.240+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:19.240+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:19.240+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), appliedOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, appliedWallTime: new Date(1567578617737), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), appliedOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, appliedWallTime: new Date(1567578617737), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:19.240+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 603 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:49.240+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), appliedOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, appliedWallTime: new Date(1567578617737), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, durableWallTime: new Date(1567578617737), appliedOpTime: { ts: Timestamp(1567578617, 2), t: 1 }, appliedWallTime: new Date(1567578617737), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:19.240+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:19.240+0000 D2 ASIO [RS] Request 603 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } 2019-09-04T06:30:19.240+0000 D3 EXECUTOR [RS] Received remote response: 
RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:19.240+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:19.240+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:19.317+0000 D3 STORAGE [FreeMonProcessor] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:19.320+0000 D3 INDEX [TTLMonitor] thread awake 2019-09-04T06:30:19.321+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms 2019-09-04T06:30:19.321+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms 2019-09-04T06:30:19.321+0000 D2 - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager 2019-09-04T06:30:19.321+0000 D3 COMMAND [PeriodicTaskRunner] task: UnusedLockCleaner took: 0ms 2019-09-04T06:30:19.326+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578619, 1) 2019-09-04T06:30:19.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions. 2019-09-04T06:30:19.340+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:19.343+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:19.343+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:19.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry 2019-09-04T06:30:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:30:19.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } 2019-09-04T06:30:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:19.370+0000 D5 QUERY [shard-registry-reload] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:19.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:30:19.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:30:19.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and 2019-09-04T06:30:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions. 
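
[Annotation] The shard-registry-reload entries above show this config server re-reading config.shards with readPreference "nearest": the planner rates the empty $and predicate against the host_1 and _id_ indexes, outputs no indexed solutions, and (as the next entries show) falls back to a COLLSCAN over the three shard documents. A minimal client-side sketch of the same read, assuming pymongo; the host, port, and replica-set name are taken from this log and should be adjusted for your deployment:

    # Sketch only: reproduce the shard-registry read logged here.
    # Host, port, and replicaSet name are assumptions lifted from this log.
    from pymongo import MongoClient
    from pymongo.read_preferences import Nearest

    client = MongoClient("cmodb803.togewa.com", 27019, replicaSet="configrs")
    config_db = client.get_database("config", read_preference=Nearest())
    # Unfiltered find on config.shards -- no indexed predicate, hence the
    # collscan (keysExamined:0, docsExamined:3) reported by the entries below.
    for shard in config_db.shards.find({}):
        print(shard["_id"], shard["host"])
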
2019-09-04T06:30:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:19.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:19.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578619, 1) 2019-09-04T06:30:19.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 8807 2019-09-04T06:30:19.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 8807 2019-09-04T06:30:19.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:646 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:30:19.370+0000 D1 SHARDING [shard-registry-reload] found 3 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578619, 1), t: 1 } 2019-09-04T06:30:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:30:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:30:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:30:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:30:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:30:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:30:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS 2019-09-04T06:30:19.382+0000 D3 STORAGE [WTCheckpointThread] setting timestamp read source: 6, provided timestamp: Timestamp(1567578619, 1) 2019-09-04T06:30:19.382+0000 D3 STORAGE [WTCheckpointThread] WT begin_transaction for snapshot id 8809 2019-09-04T06:30:19.382+0000 D3 STORAGE [WTCheckpointThread] WT rollback_transaction for snapshot id 8809 2019-09-04T06:30:19.382+0000 D2 RECOVERY [WTCheckpointThread] Performing stable checkpoint. 
StableTimestamp: Timestamp(1567578619, 1), OplogNeededForRollback: Timestamp(1567578619, 1) 2019-09-04T06:30:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002 2019-09-04T06:30:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 604 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:30:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:30:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 605 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:30:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:30:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001 2019-09-04T06:30:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 606 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:30:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:30:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 607 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:30:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:30:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000 2019-09-04T06:30:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 608 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:30:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:30:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 609 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:30:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:30:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 604 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578609, 1), t: 1 }, lastWriteDate: new Date(1567578609000), majorityOpTime: { ts: Timestamp(1567578609, 1), t: 1 }, majorityWriteDate: new Date(1567578609000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578619386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578609, 1), $configServerState: { opTime: { ts: Timestamp(1567578615, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578609, 1) } 2019-09-04T06:30:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: 
ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578609, 1), t: 1 }, lastWriteDate: new Date(1567578609000), majorityOpTime: { ts: Timestamp(1567578609, 1), t: 1 }, majorityWriteDate: new Date(1567578609000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578619386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578609, 1), $configServerState: { opTime: { ts: Timestamp(1567578615, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578609, 1) } target: cmodb810.togewa.com:27018 2019-09-04T06:30:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 607 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578617, 1), t: 1 }, lastWriteDate: new Date(1567578617000), majorityOpTime: { ts: Timestamp(1567578617, 1), t: 1 }, majorityWriteDate: new Date(1567578617000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578619386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578617, 1), $configServerState: { opTime: { ts: Timestamp(1567578600, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578617, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 1) } 2019-09-04T06:30:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578617, 1), t: 1 }, lastWriteDate: new Date(1567578617000), majorityOpTime: { ts: Timestamp(1567578617, 1), t: 1 }, majorityWriteDate: new Date(1567578617000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578619386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578617, 1), $configServerState: { opTime: { ts: Timestamp(1567578600, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578617, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 1) } target: cmodb809.togewa.com:27018 2019-09-04T06:30:19.385+0000 D2 ASIO 
[ReplicaSetMonitor-TaskExecutor] Request 606 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578617, 1), t: 1 }, lastWriteDate: new Date(1567578617000), majorityOpTime: { ts: Timestamp(1567578617, 1), t: 1 }, majorityWriteDate: new Date(1567578617000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578619386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578617, 1), $configServerState: { opTime: { ts: Timestamp(1567578615, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578617, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 1) } 2019-09-04T06:30:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578617, 1), t: 1 }, lastWriteDate: new Date(1567578617000), majorityOpTime: { ts: Timestamp(1567578617, 1), t: 1 }, majorityWriteDate: new Date(1567578617000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578619386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578617, 1), $configServerState: { opTime: { ts: Timestamp(1567578615, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578617, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578617, 1) } target: cmodb808.togewa.com:27018 2019-09-04T06:30:19.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms 2019-09-04T06:30:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 608 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578613, 1), t: 1 }, lastWriteDate: new Date(1567578613000), majorityOpTime: { ts: Timestamp(1567578613, 1), t: 1 }, majorityWriteDate: new Date(1567578613000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578619385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 
8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578613, 1), $configServerState: { opTime: { ts: Timestamp(1567578615, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578613, 1) } 2019-09-04T06:30:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578613, 1), t: 1 }, lastWriteDate: new Date(1567578613000), majorityOpTime: { ts: Timestamp(1567578613, 1), t: 1 }, majorityWriteDate: new Date(1567578613000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578619385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578613, 1), $configServerState: { opTime: { ts: Timestamp(1567578615, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578613, 1) } target: cmodb806.togewa.com:27018 2019-09-04T06:30:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 609 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578613, 1), t: 1 }, lastWriteDate: new Date(1567578613000), majorityOpTime: { ts: Timestamp(1567578613, 1), t: 1 }, majorityWriteDate: new Date(1567578613000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578619386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578613, 1), $configServerState: { opTime: { ts: Timestamp(1567578608, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578613, 1) } 2019-09-04T06:30:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578613, 1), t: 1 }, lastWriteDate: new Date(1567578613000), 
majorityOpTime: { ts: Timestamp(1567578613, 1), t: 1 }, majorityWriteDate: new Date(1567578613000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578619386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578613, 1), $configServerState: { opTime: { ts: Timestamp(1567578608, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578613, 1) } target: cmodb807.togewa.com:27018 2019-09-04T06:30:19.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms 2019-09-04T06:30:19.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 605 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578609, 1), t: 1 }, lastWriteDate: new Date(1567578609000), majorityOpTime: { ts: Timestamp(1567578609, 1), t: 1 }, majorityWriteDate: new Date(1567578609000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578619386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578609, 1), $configServerState: { opTime: { ts: Timestamp(1567578608, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578609, 1) } 2019-09-04T06:30:19.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578609, 1), t: 1 }, lastWriteDate: new Date(1567578609000), majorityOpTime: { ts: Timestamp(1567578609, 1), t: 1 }, majorityWriteDate: new Date(1567578609000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578619386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578609, 1), $configServerState: { opTime: { ts: Timestamp(1567578608, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578609, 1) } target: cmodb811.togewa.com:27018 2019-09-04T06:30:19.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] 
Refreshing replica set shard0002 took 5ms 2019-09-04T06:30:19.440+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:19.461+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578619461) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } 2019-09-04T06:30:19.461+0000 D4 - [replSetDistLockPinger] Taking ticket. Available: 1000000000 2019-09-04T06:30:19.461+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178 2019-09-04T06:30:19.461+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:30:19.480+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22R
eplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, 
"buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xADB043) [0x561749a63043] mongod(+0x13B2606) [0x56174a33a606] mongod(+0x13B3A55) [0x56174a33ba55] mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894] mongod(+0x10FA899) [0x56174a082899] mongod(+0x10FBF53) [0x56174a083f53] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(+0x1FBD2EE) [0x56174af452ee] mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa] mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2] mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b] mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e] mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc] mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1] mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) 
[0x56174a232a0a] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:19.480+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. OpTime: { ts: Timestamp(1567578619, 1), t: 1 }, write concern: { w: "majority", wtimeout: 15000 } 2019-09-04T06:30:19.480+0000 D4 STORAGE [replSetDistLockPinger] flushed journal 2019-09-04T06:30:19.480+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578619461) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:30:19.480+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578619461) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 19ms 2019-09-04T06:30:19.540+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:19.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:19.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:19.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:19.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:19.640+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:19.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:19.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:19.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:19.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:19.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:19.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:19.707+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:19.707+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:19.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:19.713+0000 I COMMAND [conn60] 
command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:19.740+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:19.840+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:19.843+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:19.843+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:19.940+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:20.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:20.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:30:20.003+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:30:20.003+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:30:20.019+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:30:20.019+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:30:20.025+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:30:20.025+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:30:20.025+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:30:20.025+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:30:20.037+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:20.037+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:20.038+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:20.038+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:30:20.038+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:30:20.038+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:30:20.038+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 
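
[Annotation] The conn90 session above is a monitoring probe: a SCRAM-SHA-1 handshake (saslStart plus two saslContinue legs, payloads redacted as "xxx") authenticating dba_root, followed by serverStatus, replSetGetStatus, and, in the entries that follow, a jumbo-chunk count against config.chunks and first/last oplog reads. A minimal sketch of the same probe, assuming pymongo; the password is a placeholder, since the log never exposes the credential:

    # Sketch only: the monitoring sequence seen on conn90.
    # Host/port come from this log; the password is a placeholder.
    from pymongo import MongoClient

    client = MongoClient(
        "cmodb803.togewa.com", 27019,
        username="dba_root", password="<placeholder>",
        authSource="admin", authMechanism="SCRAM-SHA-1",
    )
    status = client.admin.command("serverStatus", recordStats=0)
    rs_status = client.admin.command("replSetGetStatus")
    # Same predicate as the count command whose planning is logged below.
    jumbo = client.config.chunks.count_documents({"jumbo": True})
    print(rs_status["myState"], "jumbo chunks:", jumbo)
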
2019-09-04T06:30:20.038+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:20.038+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:30:20.038+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:30:20.038+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:30:20.038+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:30:20.038+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:30:20.038+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:30:20.038+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:20.038+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:20.039+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:20.039+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578619, 1)
2019-09-04T06:30:20.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8826
2019-09-04T06:30:20.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8826
2019-09-04T06:30:20.039+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:20.039+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:30:20.039+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:30:20.039+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:30:20.039+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:20.039+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:30:20.039+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:30:20.039+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:30:20.039+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578619, 1)
2019-09-04T06:30:20.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8829
2019-09-04T06:30:20.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8829
2019-09-04T06:30:20.039+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:20.039+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:30:20.039+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:20.039+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:30:20.039+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:30:20.039+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:30:20.039+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578619, 1)
2019-09-04T06:30:20.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8831
2019-09-04T06:30:20.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8831
2019-09-04T06:30:20.039+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:20.039+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:30:20.040+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:30:20.040+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2019-09-04T06:30:20.040+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:30:20.040+0000 D2 COMMAND [conn90] command: listDatabases
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8834
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8834
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8835
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8835
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8836
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8836
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8837
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8837
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8838
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
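The count above planned as a COLLSCAN because none of the config.chunks indexes (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1, _id_) leads with the jumbo field, and the oplog finds force a natural-order table scan by design. A minimal pymongo sketch reproducing both probes from a client; the connection string is a placeholder assumption, not taken from this log:

# Sketch: reproduce the jumbo-chunk count and the oplog window probes.
# Assumes a reachable config server; host/port are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                     readPreference="secondaryPreferred")

# Same command as logged: { count: "chunks", query: { jumbo: true } } on "config".
# It collection-scans because no config.chunks index leads with 'jumbo'.
jumbo = client.config.command("count", "chunks", query={"jumbo": True})
print("jumbo chunks:", jumbo["n"])

# Same find commands as logged: $natural: 1 returns the oldest oplog entry,
# $natural: -1 the newest, bounding the replication window.
oplog = client.local["oplog.rs"]
first = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
last = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])
print("oplog window:", first["ts"], "->", last["ts"])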
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8838
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8839
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8839
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8840
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8840
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8841
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8841
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8842
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8842
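Each "fetched CCE metadata"/"returning metadata" pair above is the listDatabases command reading one collection's catalog entry: the md.indexes array carries the index specs, and idxIdent maps each index to the WiredTiger ident backing it on disk (for example ts_1 on config.locks maps to config/index/43--6194257481163143499). A sketch of enumerating the same options and index specs from a client, with the connection details assumed as before:

# Sketch: list the collection options and index specs that the catalog
# lookups above return. Host/port are placeholder assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                     readPreference="secondaryPreferred")

for db_name in ("admin", "config", "local"):
    db = client[db_name]
    for info in db.list_collections():
        # info["options"] mirrors md.options (uuid, capped, size, ...).
        names = [ix["name"] for ix in db[info["name"]].list_indexes()]
        print(f'{db_name}.{info["name"]}: options={info.get("options", {})} indexes={names}')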
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8843
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8843
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:30:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8844
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:30:20.041+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8844
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8845
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8845
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8846
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8846
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8847
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8847
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8848
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8848
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8849
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8849
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8850
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8850
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8851
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8851
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8852
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8852
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8853
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8853
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8854
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
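Note the shape of the local.oplog.rs entry above: a capped collection of size 1073741824.0 bytes (1024 MiB) with indexes: [] and autoIndexId: false, unlike every other collection in this walk. A sketch of confirming those options via collStats, with connection details assumed as in the earlier sketches:

# Sketch: confirm that local.oplog.rs is capped at 1 GiB and index-free,
# matching the CCE metadata logged above. Host/port are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

stats = client.local.command("collStats", "oplog.rs")
print("capped:", stats["capped"])    # expected: True
print("maxSize:", stats["maxSize"])  # expected: 1073741824
print("indexes:", stats["nindexes"])  # expected: 0 - the oplog has no indexes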
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8854
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8855
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:30:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8855
2019-09-04T06:30:20.041+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:30:20.041+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8857
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8857
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8858
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8858
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8859
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8859
2019-09-04T06:30:20.042+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:20.042+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8861
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8861
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8862
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8862
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8863
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8863
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8864
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8864
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8865
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8865
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8866
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8866
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8867
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8867
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8868
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8868
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8869
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8869
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8870
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8870
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8871
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8871
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8872
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8872
2019-09-04T06:30:20.042+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:20.042+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8874
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8874
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8875
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8875
2019-09-04T06:30:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8876
2019-09-04T06:30:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8876
2019-09-04T06:30:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8877
2019-09-04T06:30:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8877
2019-09-04T06:30:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8878
2019-09-04T06:30:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8878
2019-09-04T06:30:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 8879
2019-09-04T06:30:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 8879
2019-09-04T06:30:20.043+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:20.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:20.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:20.065+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:20.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:20.141+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:20.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:20.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:20.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:20.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:20.176+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:20.176+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:20.207+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:20.207+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:20.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:20.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:20.226+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:20.226+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:20.226+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578619, 1)
2019-09-04T06:30:20.227+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8888
2019-09-04T06:30:20.227+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:20.227+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:20.227+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8888
2019-09-04T06:30:20.227+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8891
2019-09-04T06:30:20.228+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8891
2019-09-04T06:30:20.228+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578619, 1), t: 1 }({ ts: Timestamp(1567578619, 1), t: 1 })
2019-09-04T06:30:20.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 3926E52DC6E5AF0457D825184FD386F62A906F45), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:20.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:30:20.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 3926E52DC6E5AF0457D825184FD386F62A906F45), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:20.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 3926E52DC6E5AF0457D825184FD386F62A906F45), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:20.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224) }
2019-09-04T06:30:20.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 3926E52DC6E5AF0457D825184FD386F62A906F45), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:20.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:20.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999999 Now: 1000000000
2019-09-04T06:30:20.241+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:20.341+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:20.343+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:20.343+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:20.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:20.437+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:20.441+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:20.541+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:20.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:20.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:20.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:20.566+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:20.641+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:20.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:20.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:20.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:20.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:20.676+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:20.676+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:20.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:20.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:20.742+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:20.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:20.838+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:30:19.061+0000
2019-09-04T06:30:20.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:20.838+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:30:20.232+0000
2019-09-04T06:30:20.838+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:30:19.061+0000
2019-09-04T06:30:20.838+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:30:29.061+0000
2019-09-04T06:30:20.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:50.838+0000
2019-09-04T06:30:20.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 610) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:20.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 610 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:30.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:20.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:50.838+0000
2019-09-04T06:30:20.838+0000 D2 ASIO [Replication] Request 610 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) }
2019-09-04T06:30:20.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:20.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:20.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 610) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) }
2019-09-04T06:30:20.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:30:20.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:22.838Z
2019-09-04T06:30:20.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:50.838+0000
2019-09-04T06:30:20.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:20.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 611) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:20.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 611 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:30.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:20.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:50.838+0000
2019-09-04T06:30:20.839+0000 D2 ASIO [Replication] Request 611 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) }
2019-09-04T06:30:20.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:30:20.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:20.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 611) from cmodb802.togewa.com:27019, {
ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } 2019-09-04T06:30:20.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:20.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:30:30.301+0000 2019-09-04T06:30:20.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:30:31.576+0000 2019-09-04T06:30:20.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:20.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:22.839Z 2019-09-04T06:30:20.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:20.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:50.839+0000 2019-09-04T06:30:20.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:50.839+0000 2019-09-04T06:30:20.842+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:20.843+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:20.843+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:20.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:20.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:20.942+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:21.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:21.042+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:21.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:21.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:21.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 3926E52DC6E5AF0457D825184FD386F62A906F45), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:21.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:21.061+0000 D2 REPL_HB [conn34] Received heartbeat request from 
cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 3926E52DC6E5AF0457D825184FD386F62A906F45), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:21.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 3926E52DC6E5AF0457D825184FD386F62A906F45), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:21.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224) }
2019-09-04T06:30:21.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 3926E52DC6E5AF0457D825184FD386F62A906F45), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
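The exchange above is the replica-set heartbeat loop on the config server replica set configrs: each member sends replSetHeartbeat to its peers roughly every two seconds (note the "Scheduling heartbeat ... at" lines), and a good response from the primary postpones the election timeout, as logged earlier in this section. A minimal sketch for inspecting the same membership state from a client, assuming pymongo is installed and the host/port from this log are reachable:

```python
# Sketch only: read the replica-set state that the heartbeats above maintain.
# Host and port are copied from this log; adjust for your deployment.
from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019)  # direct connection to one member
status = client.admin.command("replSetGetStatus")
print(status["set"], "term:", status.get("term"))
for member in status["members"]:
    # stateStr is PRIMARY/SECONDARY/...; syncingTo (the 4.2 field name) is the sync source
    print(member["name"], member["stateStr"], member.get("syncingTo", "-"))
```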
2019-09-04T06:30:21.118+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35694 #239 (83 connections now open)
2019-09-04T06:30:21.118+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:21.118+0000 D2 COMMAND [conn239] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:21.118+0000 I NETWORK [conn239] received client metadata from 10.108.2.56:35694 conn239: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:21.118+0000 I COMMAND [conn239] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:21.123+0000 D2 COMMAND [conn239] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 542F7D84F611AE5E9C7A33BCFBF929241984258E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:30:21.123+0000 D1 REPL [conn239] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578619, 1), t: 1 }
2019-09-04T06:30:21.123+0000 D3 REPL [conn239] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.133+0000
2019-09-04T06:30:21.142+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:21.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:21.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:21.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:21.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:21.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:21.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:21.227+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:21.227+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:21.227+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578619, 1)
2019-09-04T06:30:21.227+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8913
2019-09-04T06:30:21.227+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:21.227+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:21.227+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8913
2019-09-04T06:30:21.228+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8916
2019-09-04T06:30:21.228+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8916
2019-09-04T06:30:21.228+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578619, 1), t: 1 }({ ts: Timestamp(1567578619, 1), t: 1 })
2019-09-04T06:30:21.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:21.235+0000 D4 - [FlowControlRefresher] Refreshing tickets.
Before: 1000000000 Now: 1000000000 2019-09-04T06:30:21.242+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:21.342+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:21.343+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:21.343+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:21.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:21.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:21.443+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:21.543+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:21.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:21.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:21.572+0000 D2 COMMAND [conn126] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:30:21.572+0000 I COMMAND [conn126] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:21.584+0000 D2 COMMAND [conn182] run command config.$cmd { ping: 1.0, lsid: { id: UUID("02492cc9-cb3a-4cd4-9c2e-0d7430e82ce2") }, $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:30:21.584+0000 I COMMAND [conn182] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("02492cc9-cb3a-4cd4-9c2e-0d7430e82ce2") }, $clusterTime: { clusterTime: Timestamp(1567578559, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:21.634+0000 D2 COMMAND [conn223] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578618, 1), signature: { hash: BinData(0, 6D229A0253FA92070B3D485820FE05002AE9502B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:21.634+0000 D1 REPL [conn223] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578619, 1), t: 1 } 2019-09-04T06:30:21.634+0000 D3 REPL [conn223] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.644+0000 2019-09-04T06:30:21.643+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:21.650+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42088 #240 (84 connections now open) 2019-09-04T06:30:21.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:21.650+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45734 #241 (85 connections now open) 2019-09-04T06:30:21.650+0000 D3 EXECUTOR 
[listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:21.650+0000 D2 COMMAND [conn240] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:21.650+0000 D2 COMMAND [conn241] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:21.650+0000 I NETWORK [conn241] received client metadata from 10.108.2.72:45734 conn241: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:21.650+0000 I NETWORK [conn240] received client metadata from 10.108.2.48:42088 conn240: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:21.650+0000 I COMMAND [conn241] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:21.650+0000 I COMMAND [conn240] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:21.650+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49178 #242 (86 connections now open) 2019-09-04T06:30:21.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:21.650+0000 D2 COMMAND [conn242] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:21.651+0000 I NETWORK [conn242] received client metadata from 10.108.2.54:49178 conn242: { driver: { name: 
"NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:21.651+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:21.651+0000 I COMMAND [conn242] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:21.651+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:21.651+0000 D2 COMMAND [conn232] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578613, 1), signature: { hash: BinData(0, F7238143B431FCDDC1120756CE12F6F6C16C6FEB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:21.651+0000 D1 REPL [conn232] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578619, 1), t: 1 } 2019-09-04T06:30:21.651+0000 D3 REPL [conn232] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.661+0000 2019-09-04T06:30:21.652+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52146 #243 (87 connections now open) 2019-09-04T06:30:21.652+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:21.652+0000 D2 COMMAND [conn243] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:21.652+0000 I NETWORK [conn243] received client metadata from 10.108.2.73:52146 conn243: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:21.652+0000 I COMMAND [conn243] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:21.652+0000 D2 COMMAND [conn243] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, 
readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 382BE338B8E062C89DE3BDFB8F55724B12CF9B0F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:30:21.652+0000 D1 REPL [conn243] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578619, 1), t: 1 }
2019-09-04T06:30:21.652+0000 D3 REPL [conn243] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.662+0000
2019-09-04T06:30:21.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:21.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
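The waitUntilOpTime lines show what readConcern { level: "majority", afterOpTime: ... } means on the server: the find blocks until the requested optime is covered by a majority-committed snapshot, and the wait counts against the command's maxTimeMS. Note that every waiting command here asks for an optime with term 92 while this node's current snapshot is in term 1, so these waits appear unsatisfiable and can only end in a timeout; that pattern would be consistent with a config server replica set that was re-initialized (resetting the term) while the mongos routers kept a cached, stale $configServerState opTime. A hedged client-side equivalent of the read itself, assuming pymongo (afterOpTime is internal to the sharding protocol and not settable from a driver):

```python
# Sketch only: the same majority read the mongos is issuing, minus the internal
# afterOpTime field. Host/port and maxTimeMS are copied from this log.
from pymongo import MongoClient, ReadPreference
from pymongo.read_concern import ReadConcern

client = MongoClient("cmodb803.togewa.com", 27019)
config_db = client.get_database(
    "config",
    read_concern=ReadConcern("majority"),     # readConcern: { level: "majority" }
    read_preference=ReadPreference.NEAREST,   # $readPreference: { mode: "nearest" }
)
# The server-side wait for a suitable majority snapshot counts against max_time_ms.
print(config_db["settings"].find_one({"_id": "balancer"}, max_time_ms=30000))
```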
2019-09-04T06:30:21.663+0000 I COMMAND [conn214] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578588, 1), signature: { hash: BinData(0, 66E892E731879211782E008B61DB2B292F6E252E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:30:21.663+0000 I COMMAND [conn213] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, A7E9BE56119A00BA3C8E3F60B38676D3DC7FF217), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:30:21.663+0000 I COMMAND [conn215] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, CBAA63ADEE2EDC53BD93E84FCB4CEC8CB94AD092), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:30:21.663+0000 D1 - [conn215] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:21.663+0000 D1 - [conn213] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:21.663+0000 W - [conn215] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:21.663+0000 D1 - [conn214] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:21.663+0000 W - [conn214] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:21.663+0000 W - [conn213] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:21.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:21.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:21.702+0000 I - [conn214] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:21.703+0000 D1 COMMAND [conn214] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578588, 1), signature: { hash: BinData(0, 66E892E731879211782E008B61DB2B292F6E252E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:21.703+0000 D1 - [conn214] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:21.703+0000 W - [conn214] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:21.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:21.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:21.743+0000 I - [conn214] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 
0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, 
"buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : 
"B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 
2019-09-04T06:30:21.743+0000 I - [conn213] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : 
"3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" 
}, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:21.743+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:21.743+0000 W COMMAND [conn214] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:21.743+0000 I COMMAND [conn214] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578588, 1), signature: { hash: BinData(0, 66E892E731879211782E008B61DB2B292F6E252E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 
30051ms 2019-09-04T06:30:21.743+0000 D1 COMMAND [conn213] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, A7E9BE56119A00BA3C8E3F60B38676D3DC7FF217), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:21.743+0000 D1 - [conn213] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:21.743+0000 W - [conn213] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:21.743+0000 D2 NETWORK [conn214] Session from 10.108.2.54:49160 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:21.743+0000 I NETWORK [conn214] end connection 10.108.2.54:49160 (87 connections now open) 2019-09-04T06:30:21.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47174 #244 (88 connections now open) 2019-09-04T06:30:21.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:21.744+0000 D2 COMMAND [conn244] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:21.744+0000 I NETWORK [conn244] received client metadata from 10.108.2.52:47174 conn244: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:21.744+0000 I COMMAND [conn244] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:21.744+0000 D2 COMMAND [conn244] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 382BE338B8E062C89DE3BDFB8F55724B12CF9B0F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:21.744+0000 D1 REPL [conn244] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578619, 1), t: 1 } 2019-09-04T06:30:21.744+0000 D3 REPL 
[conn244] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.754+0000 2019-09-04T06:30:21.756+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48336 #245 (88 connections now open) 2019-09-04T06:30:21.756+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:21.756+0000 D2 COMMAND [conn245] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:21.756+0000 I NETWORK [conn245] received client metadata from 10.108.2.59:48336 conn245: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:21.757+0000 I COMMAND [conn245] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:21.757+0000 D2 COMMAND [conn245] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 542F7D84F611AE5E9C7A33BCFBF929241984258E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:30:21.757+0000 D1 REPL [conn245] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578619, 1), t: 1 } 2019-09-04T06:30:21.757+0000 D3 REPL [conn245] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.767+0000 2019-09-04T06:30:21.764+0000 I - [conn213] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:21.764+0000 I - [conn215] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 
0x7f0ed833102d ----- BEGIN BACKTRACE ----- [backtrace, processInfo, and symbolized frames identical to the conn213 waitForReadConcern trace at 06:30:21.743 above; duplicate omitted] ----- END BACKTRACE -----
2019-09-04T06:30:21.764+0000 W COMMAND [conn213] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:30:21.764+0000 I COMMAND [conn213] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, A7E9BE56119A00BA3C8E3F60B38676D3DC7FF217), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30092ms
2019-09-04T06:30:21.764+0000 D1 COMMAND [conn215] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash:
BinData(0, CBAA63ADEE2EDC53BD93E84FCB4CEC8CB94AD092), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:21.764+0000 D1 - [conn215] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:30:21.764+0000 W - [conn215] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:21.764+0000 D2 NETWORK [conn213] Session from 10.108.2.48:42068 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:30:21.764+0000 I NETWORK [conn213] end connection 10.108.2.48:42068 (87 connections now open)
2019-09-04T06:30:21.786+0000 I - [conn215] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- [backtrace, processInfo, and symbolized frames identical to the conn213 lock-acquisition trace at 06:30:21.764 above; duplicate omitted] ----- END BACKTRACE -----
2019-09-04T06:30:21.786+0000 W COMMAND [conn215] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:30:21.786+0000 I COMMAND [conn215] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578582, 1), signature: { hash: BinData(0, CBAA63ADEE2EDC53BD93E84FCB4CEC8CB94AD092), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30112ms
2019-09-04T06:30:21.787+0000 D2 NETWORK [conn215] Session from 10.108.2.72:45716 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:30:21.787+0000 I NETWORK [conn215] end connection 10.108.2.72:45716 (86 connections now open)
2019-09-04T06:30:21.843+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:21.843+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:21.843+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:21.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:21.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:21.943+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:22.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:22.043+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50110 #246 (87 connections now open)
2019-09-04T06:30:22.043+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:22.043+0000 D2 COMMAND [conn246] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:22.043+0000 D4 STORAGE [WTJournalFlusher] flushed journal
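
All of the MaxTimeMSExpired noise above has one shape. Internal cluster clients (the NetworkInterfaceTL driver name and internalClient fields in the isMaster handshakes mark them as mongos or peer mongod connections) repeatedly issue find on config.settings for { _id: "balancer" } with readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }. The waitUntilOpTime lines show this server's newest majority snapshot at { ts: Timestamp(1567578619, 1), t: 1 }: the awaited optime carries term 92 while the replica set is at term 1, and optimes order by term before timestamp, so the wait can never complete. Each attempt burns its full maxTimeMS of 30000 ms, fails with MaxTimeMSExpired (errCode:50 in the slow-command lines), gets a logged backtrace via DBException::traceIfNeeded, and ends with the caller dropping the socket (HostUnreachable: Connection closed by peer) and reconnecting, which restarts the cycle. That pattern would be consistent with the config replica set having been rebuilt at term 1 while the routers still cache an optime from term 92 in $configServerState, though this excerpt alone cannot confirm the cluster's history.

The client-visible half is easy to reproduce from any driver. A minimal pymongo sketch, assuming only a reachable mongod at localhost:27019 (the address is illustrative, and an ordinary driver cannot set the internal afterOpTime/$replData fields seen above, so this exercises just the majority read with a server-side time limit):

# Hypothetical reproduction sketch, not taken from this log: issue the same
# balancer-settings read with readConcern majority and a 30 s server-side
# time limit. If the majority snapshot cannot satisfy the read in time, the
# server fails the command with MaxTimeMSExpired (error code 50).
from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout
from pymongo.read_concern import ReadConcern

client = MongoClient("mongodb://localhost:27019")  # illustrative address
config = client.get_database("config", read_concern=ReadConcern("majority"))
try:
    doc = config.settings.find_one({"_id": "balancer"}, max_time_ms=30000)
    print("balancer settings:", doc)
except ExecutionTimeout as exc:
    # pymongo surfaces server error code 50 (MaxTimeMSExpired) as ExecutionTimeout
    print("timed out waiting for the read concern:", exc)

The JSON backtraces resolve by hand: in each frame, the absolute address is the module base "b" plus the offset "o", and "s" is the mangled name (c++filt turns _ZN5mongo15printStackTraceERSo into mongo::printStackTrace(std::ostream&)). A one-line check against the first frame printed above:

# Sketch: frame address = module base "b" + offset "o" (hex strings in the
# "backtrace" array). The first frame resolves to mongo::printStackTrace.
base = int("561748F88000", 16)   # "b" of the mongod image
offset = int("277FC81", 16)      # "o" of the first frame
assert hex(base + offset) == "0x56174b707c81"  # matches the printed address

Frames that carry only an offset (for instance o 10FBF24) were not symbolized; running addr2line against a mongod binary with debug symbols for git version a4b751dcf51dd249c5865812b390cfd1c0129c30 should name them.
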
2019-09-04T06:30:22.043+0000 I NETWORK [conn246] received client metadata from 10.108.2.50:50110 conn246: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:22.043+0000 I COMMAND [conn246] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:22.044+0000 D2 COMMAND [conn246] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 93BEEE1914896F4DB94877FF3B5D418106DCB32C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:22.044+0000 D1 REPL [conn246] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578619, 1), t: 1 } 2019-09-04T06:30:22.044+0000 D3 REPL [conn246] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:52.054+0000 2019-09-04T06:30:22.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:22.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:22.143+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:22.150+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:22.150+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:22.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:22.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:22.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:22.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:22.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:22.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:22.227+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:22.227+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:22.227+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578619, 1) 2019-09-04T06:30:22.227+0000 D3 STORAGE [ReplBatcher] WT begin_transaction 
for snapshot id 8949 2019-09-04T06:30:22.227+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:22.227+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:22.227+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8949 2019-09-04T06:30:22.228+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8952 2019-09-04T06:30:22.228+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8952 2019-09-04T06:30:22.228+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578619, 1), t: 1 }({ ts: Timestamp(1567578619, 1), t: 1 }) 2019-09-04T06:30:22.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 94948AD9BC88594E1E8C3301A2EC1F7C4634F7C7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:22.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:30:22.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 94948AD9BC88594E1E8C3301A2EC1F7C4634F7C7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:22.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 94948AD9BC88594E1E8C3301A2EC1F7C4634F7C7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:22.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224) } 2019-09-04T06:30:22.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 94948AD9BC88594E1E8C3301A2EC1F7C4634F7C7), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:22.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:22.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:30:22.244+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:22.344+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:22.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:22.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:22.444+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:22.544+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:22.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:22.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:22.553+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:30:22.553+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:30:22.553+0000 D2 COMMAND [conn90] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:22.553+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:30:22.598+0000 I COMMAND [conn217] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 7ED407E7D8FC2F48E792B1C41AEA50FC38ED9E10), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:30:22.598+0000 D1 - [conn217] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:22.598+0000 W - [conn217] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:22.618+0000 I - [conn217] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:22.619+0000 D1 COMMAND [conn217] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 7ED407E7D8FC2F48E792B1C41AEA50FC38ED9E10), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:22.619+0000 D1 - [conn217] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:22.619+0000 W - [conn217] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:22.642+0000 I - [conn217] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:22.642+0000 W COMMAND [conn217] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:22.642+0000 I COMMAND [conn217] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: 
2019-09-04T06:30:22.642+0000 W COMMAND [conn217] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:30:22.642+0000 I COMMAND [conn217] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 7ED407E7D8FC2F48E792B1C41AEA50FC38ED9E10), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30033ms
2019-09-04T06:30:22.642+0000 D2 NETWORK [conn217] Session from 10.108.2.74:51752 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:30:22.642+0000 I NETWORK [conn217] end connection 10.108.2.74:51752 (86 connections now open)
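The operation failing above is the balancer-settings read that mongos periodically issues against the config servers; the server enforces the command's maxTimeMS: 30000 budget and returns MaxTimeMSExpired (errCode:50) once the majority read concern with afterOpTime cannot be satisfied in time. A hedged sketch of the equivalent client-side read; the host is taken from this log and assumed reachable, and the $replData/$configServerState fields are internal mongos decorations that a driver does not send:

```python
from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout

# Config server from this log; connectivity is assumed.
client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
try:
    doc = client["config"]["settings"].find_one(
        {"_id": "balancer"},
        max_time_ms=30000,  # same budget as the failing command above
    )
    print(doc)
except ExecutionTimeout as exc:
    # Surfaces the server's MaxTimeMSExpired ("operation exceeded time limit").
    print("timed out:", exc)
```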
2019-09-04T06:30:22.644+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:22.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:22.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:22.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:22.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:22.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:22.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:22.744+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:22.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:22.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 612) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:22.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 612 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:32.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:22.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:50.839+0000
2019-09-04T06:30:22.838+0000 D2 ASIO [Replication] Request 612 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) }
2019-09-04T06:30:22.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ response payload identical to the Request 612 entry above } target: cmodb804.togewa.com:27019
2019-09-04T06:30:22.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:22.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 612) from cmodb804.togewa.com:27019, { response payload identical to the Request 612 entry above }
2019-09-04T06:30:22.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:30:22.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:24.838Z
2019-09-04T06:30:22.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:50.839+0000
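The heartbeat cycle above (request 612, and request 613 just below) is this node, cmodb803, polling its configrs peers: cmodb804 answers state: 2 (SECONDARY, syncingTo cmodb802) and cmodb802 answers state: 1 (PRIMARY). A sketch that reads the same topology through replSetGetStatus instead of raw REPL_HB entries; field names are as documented for 4.2, and syncSourceHost may be empty on the primary:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")  # assumed reachable
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    # stateStr mirrors the numeric state in the heartbeat responses above;
    # syncSourceHost mirrors their syncingTo field.
    print(member["name"], member["stateStr"], member.get("syncSourceHost", "-"))
```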
2019-09-04T06:30:22.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:22.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 613) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:22.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 613 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:32.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:22.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:50.839+0000
2019-09-04T06:30:22.839+0000 D2 ASIO [Replication] Request 613 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) }
2019-09-04T06:30:22.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ response payload identical to the Request 613 entry above } target: cmodb802.togewa.com:27019
2019-09-04T06:30:22.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:22.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 613) from cmodb802.togewa.com:27019, { response payload identical to the Request 613 entry above }
2019-09-04T06:30:22.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:30:22.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:30:31.576+0000
2019-09-04T06:30:22.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:30:34.130+0000
2019-09-04T06:30:22.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:30:22.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:24.839Z
2019-09-04T06:30:22.839+0000 D3 EXECUTOR 
[replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:22.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:52.839+0000 2019-09-04T06:30:22.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:52.839+0000 2019-09-04T06:30:22.844+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:22.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:22.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:22.944+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:23.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:23.045+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:23.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:23.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:23.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 94948AD9BC88594E1E8C3301A2EC1F7C4634F7C7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:23.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:23.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 94948AD9BC88594E1E8C3301A2EC1F7C4634F7C7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:23.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 94948AD9BC88594E1E8C3301A2EC1F7C4634F7C7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:23.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224) } 2019-09-04T06:30:23.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 94948AD9BC88594E1E8C3301A2EC1F7C4634F7C7), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:23.145+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:23.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:23.161+0000 I COMMAND [conn59] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:23.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:23.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:23.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:23.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:23.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:23.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:23.227+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:23.227+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:23.227+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578619, 1)
2019-09-04T06:30:23.227+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8971
2019-09-04T06:30:23.227+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:23.227+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:23.227+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8971
2019-09-04T06:30:23.228+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8974
2019-09-04T06:30:23.228+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8974
2019-09-04T06:30:23.228+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578619, 1), t: 1 }({ ts: Timestamp(1567578619, 1), t: 1 })
2019-09-04T06:30:23.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:23.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
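"Trimmed samples. Num: 0" and "Refreshing tickets. Before: 1000000000 Now: 1000000000" are the 4.2 flow-control refresher idling at its no-throttle ceiling: the set is not lagged, so writers are granted an effectively unlimited ticket count each second. The same state is exposed in the flowControl section of serverStatus; a sketch, with field names as documented for 4.2 and hedged with .get in case a build omits some of them:

```python
from pymongo import MongoClient

# Assumed reachable; reads the flowControl section matching the ticket logs above.
fc = MongoClient("mongodb://cmodb803.togewa.com:27019/").admin.command("serverStatus").get("flowControl", {})
for key in ("enabled", "isLagged", "targetRateLimit", "sustainerRate"):
    # targetRateLimit at 1000000000 corresponds to the ticket ceiling logged above.
    print(key, "=", fc.get(key))
```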
2019-09-04T06:30:23.245+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:23.345+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:23.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:23.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:23.445+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:23.545+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:23.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:23.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:23.646+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:23.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:23.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:23.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:23.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:23.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:23.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:23.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:23.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:23.746+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:23.846+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:23.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:23.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:23.946+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:24.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:24.046+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:24.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:24.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:24.146+0000 D4 STORAGE [WTJournalFlusher] flushed journal
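The steady isMaster drumbeat on conn13/conn58/conn59/conn75/conn60/conn5, roughly one request per connection every 500 ms, is client topology monitoring rather than application traffic. A sketch that issues the same command once and, as an assumed tuning, relaxes a client's own monitor cadence via heartbeatFrequencyMS:

```python
from pymongo import MongoClient

# heartbeatFrequencyMS widens the driver's monitor polling interval (pymongo
# allows 500 ms and up); the clients in this log are polling far more often.
client = MongoClient("mongodb://cmodb803.togewa.com:27019/", heartbeatFrequencyMS=30000)
print(client.admin.command("isMaster"))  # same command the log records above
```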
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 7ED407E7D8FC2F48E792B1C41AEA50FC38ED9E10), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:30:24.153+0000 D1 - [conn219] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:24.153+0000 W - [conn219] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:24.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:24.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:24.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:24.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:24.172+0000 I - [conn219] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19S
erviceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:24.172+0000 D1 COMMAND [conn219] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 7ED407E7D8FC2F48E792B1C41AEA50FC38ED9E10), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:24.172+0000 D1 - [conn219] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:24.172+0000 W - [conn219] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:24.194+0000 I - [conn219] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:24.194+0000 W COMMAND [conn219] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:24.194+0000 I COMMAND [conn219] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578585, 1), signature: { hash: BinData(0, 7ED407E7D8FC2F48E792B1C41AEA50FC38ED9E10), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:30:24.194+0000 D2 NETWORK [conn219] Session from 10.108.2.46:40956 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:24.194+0000 I NETWORK [conn219] end connection 10.108.2.46:40956 (85 connections now open) 2019-09-04T06:30:24.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:24.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:24.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:24.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:24.228+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:24.228+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:24.228+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578619, 1) 2019-09-04T06:30:24.228+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 8991 2019-09-04T06:30:24.228+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:24.228+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:24.228+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 8991 2019-09-04T06:30:24.228+0000 D2 ASIO [RS] Request 602 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpApplied: { ts: Timestamp(1567578619, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } 2019-09-04T06:30:24.228+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8994 2019-09-04T06:30:24.228+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ 
cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpApplied: { ts: Timestamp(1567578619, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:24.228+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 8994 2019-09-04T06:30:24.228+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578619, 1), t: 1 }({ ts: Timestamp(1567578619, 1), t: 1 }) 2019-09-04T06:30:24.228+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:24.228+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:30:24.228+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:34.130+0000 2019-09-04T06:30:24.229+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:34.624+0000 2019-09-04T06:30:24.229+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:24.229+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:52.839+0000 2019-09-04T06:30:24.229+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 614 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:34.229+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578619, 1), t: 1 } } 2019-09-04T06:30:24.229+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:24.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 94948AD9BC88594E1E8C3301A2EC1F7C4634F7C7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:24.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:30:24.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 94948AD9BC88594E1E8C3301A2EC1F7C4634F7C7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:24.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { 
hash: BinData(0, 94948AD9BC88594E1E8C3301A2EC1F7C4634F7C7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:24.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224) } 2019-09-04T06:30:24.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 94948AD9BC88594E1E8C3301A2EC1F7C4634F7C7), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:24.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:24.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:24.240+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:24.240+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:24.240+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 615 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:54.240+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 
0, syncSourceIndex: 2 } } 2019-09-04T06:30:24.240+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:24.240+0000 D2 ASIO [RS] Request 615 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } 2019-09-04T06:30:24.240+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:24.240+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:24.240+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:24.246+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:24.347+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:24.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:24.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:24.447+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:24.547+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:24.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:24.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:24.647+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:24.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:24.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:24.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:24.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:24.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:24.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms 2019-09-04T06:30:24.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:24.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:24.747+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:24.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:24.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 616) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:24.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 616 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:34.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:24.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:52.839+0000 2019-09-04T06:30:24.838+0000 D2 ASIO [Replication] Request 616 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } 2019-09-04T06:30:24.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:24.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:24.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 616) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 
1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } 2019-09-04T06:30:24.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:24.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:26.838Z 2019-09-04T06:30:24.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:52.839+0000 2019-09-04T06:30:24.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:24.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 617) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:24.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 617 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:34.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:24.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:52.839+0000 2019-09-04T06:30:24.839+0000 D2 ASIO [Replication] Request 617 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } 2019-09-04T06:30:24.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), 
primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:24.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:24.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 617) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578619, 1) } 2019-09-04T06:30:24.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:24.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:30:34.624+0000 2019-09-04T06:30:24.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:30:35.938+0000 2019-09-04T06:30:24.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:24.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:26.839Z 2019-09-04T06:30:24.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:24.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:54.839+0000 2019-09-04T06:30:24.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:54.839+0000 2019-09-04T06:30:24.847+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:24.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:24.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:24.948+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:25.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:25.048+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:25.049+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:25.049+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:25.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:25.052+0000 I COMMAND [conn58] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:25.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 94948AD9BC88594E1E8C3301A2EC1F7C4634F7C7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:25.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:25.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 94948AD9BC88594E1E8C3301A2EC1F7C4634F7C7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:25.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 94948AD9BC88594E1E8C3301A2EC1F7C4634F7C7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:25.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), opTime: { ts: Timestamp(1567578619, 1), t: 1 }, wallTime: new Date(1567578619224) } 2019-09-04T06:30:25.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, 94948AD9BC88594E1E8C3301A2EC1F7C4634F7C7), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:25.072+0000 D2 ASIO [RS] Request 614 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578625, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578625069), o: { $v: 1, $set: { ping: new Date(1567578625066), up: 2525 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpApplied: { ts: Timestamp(1567578625, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578625, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 1) } 2019-09-04T06:30:25.072+0000 D3 EXECUTOR [RS] Received remote response: 
RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578625, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578625069), o: { $v: 1, $set: { ping: new Date(1567578625066), up: 2525 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpApplied: { ts: Timestamp(1567578625, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578625, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:25.072+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:25.072+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578625, 1) and ending at ts: Timestamp(1567578625, 1) 2019-09-04T06:30:25.072+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:35.938+0000 2019-09-04T06:30:25.072+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:35.744+0000 2019-09-04T06:30:25.072+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:25.072+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578625, 1), t: 1 } 2019-09-04T06:30:25.072+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:25.072+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:25.072+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578619, 1) 2019-09-04T06:30:25.072+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9011 2019-09-04T06:30:25.072+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:25.072+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:25.072+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9011 2019-09-04T06:30:25.072+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:25.072+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:30:25.072+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:25.072+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578619, 1) 
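The conn219 entries above record the incident this stretch of the log is about: a find on config.settings for the balancer document, issued with readConcern majority and maxTimeMS 30000, failed after 30029ms with errName:MaxTimeMSExpired (errCode 50), and the preceding backtrace shows the uassert fired while CurOp::completeAndLogOperation waited on the global lock to gather storage statistics for the slow-op log line. A minimal pymongo sketch of the same read follows; the hostname and port are taken from the log, while driver availability and an open, unauthenticated connection are assumptions.

```python
# Hedged sketch (not from the log): issue the same balancer-settings read
# that conn219 reported as MaxTimeMSExpired. Host/port come from the log;
# everything else (pymongo installed, no auth) is assumed.
from pymongo import MongoClient, ReadPreference
from pymongo.errors import ExecutionTimeout
from pymongo.read_concern import ReadConcern

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

# Same shape as the logged command: find config.settings {_id: "balancer"},
# readConcern majority, nearest read preference, 30s server-side time limit.
settings = client.get_database(
    "config", read_preference=ReadPreference.NEAREST
).get_collection("settings", read_concern=ReadConcern("majority"))

try:
    doc = settings.find_one({"_id": "balancer"}, max_time_ms=30000)
    print("balancer settings:", doc)
except ExecutionTimeout:
    # pymongo surfaces error code 50 (MaxTimeMSExpired) as ExecutionTimeout:
    # the server, not the driver, gave up once maxTimeMS elapsed.
    print("operation exceeded time limit, as in the conn219 entry above")
```

Against a healthy config server this returns the balancer document in milliseconds; under the lock contention logged here it raises ExecutionTimeout exactly as conn219 did.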
2019-09-04T06:30:25.072+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9014 2019-09-04T06:30:25.072+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578625, 1) } 2019-09-04T06:30:25.072+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:25.072+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:25.072+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9014 2019-09-04T06:30:25.072+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:54.839+0000 2019-09-04T06:30:25.072+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 8995 2019-09-04T06:30:25.073+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 8995 2019-09-04T06:30:25.073+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9017 2019-09-04T06:30:25.073+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9017 2019-09-04T06:30:25.073+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:25.073+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 9019 2019-09-04T06:30:25.073+0000 D4 STORAGE [repl-writer-worker-8] inserting record with timestamp Timestamp(1567578625, 1) 2019-09-04T06:30:25.073+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578625, 1) 2019-09-04T06:30:25.073+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 9019 2019-09-04T06:30:25.073+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:25.073+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:30:25.073+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9018 2019-09-04T06:30:25.073+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9018 2019-09-04T06:30:25.073+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9021 2019-09-04T06:30:25.073+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9021 2019-09-04T06:30:25.073+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578625, 1), t: 1 }({ ts: Timestamp(1567578625, 1), t: 1 }) 2019-09-04T06:30:25.073+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578625, 1) 2019-09-04T06:30:25.073+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9022 2019-09-04T06:30:25.073+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578625, 1) } } ] } sort: {} projection: {} 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578625, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578625, 1) || First: notFirst: full path: ts 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578625, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578625, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578625, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
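The rsSync-0 planner entries around this point show the subplanner handling the minvalid check. The $or it canonicalizes is the usual lexicographic "optime strictly earlier than (t, ts)" predicate, and it falls back to a COLLSCAN because local.replset.minvalid carries only the _id index. A small illustrative sketch of that filter shape follows; the helper name is hypothetical, and bson ships with pymongo.

```python
# Hedged illustration (helper name is hypothetical, not a MongoDB internal):
# build the lexicographic "optime before (t, ts)" filter seen in the planner
# output above.
from bson.timestamp import Timestamp

def optime_before(term: int, ts: Timestamp) -> dict:
    """Filter matching documents whose (t, ts) optime sorts before (term, ts)."""
    return {
        "$or": [
            {"t": {"$lt": term}},            # earlier term wins outright
            {"t": term, "ts": {"$lt": ts}},  # same term: compare timestamps
        ]
    }

# Reproduces the canonical query from the rsSync-0 entries:
# { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578625, 1) } } ] }
print(optime_before(1, Timestamp(1567578625, 1)))
```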
2019-09-04T06:30:25.073+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578625, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:25.073+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9022 2019-09-04T06:30:25.073+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:25.073+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:25.073+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578625, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578625069), o: { $v: 1, $set: { ping: new Date(1567578625066), up: 2525 } } }, oplog application mode: Secondary 2019-09-04T06:30:25.073+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578625, 1) 2019-09-04T06:30:25.073+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 9024 2019-09-04T06:30:25.073+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:30:25.073+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:30:25.073+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 9024 2019-09-04T06:30:25.073+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:25.073+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578625, 1), t: 1 }({ ts: Timestamp(1567578625, 1), t: 1 }) 2019-09-04T06:30:25.073+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578625, 1) 2019-09-04T06:30:25.074+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9023 2019-09-04T06:30:25.074+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:25.074+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:25.074+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:30:25.074+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:25.074+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:25.074+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:25.074+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9023 2019-09-04T06:30:25.074+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578625, 1) 2019-09-04T06:30:25.074+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9027 2019-09-04T06:30:25.074+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9027 2019-09-04T06:30:25.074+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578625, 1), t: 1 }({ ts: Timestamp(1567578625, 1), t: 1 }) 2019-09-04T06:30:25.074+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:25.074+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578625, 1), t: 1 }, appliedWallTime: new Date(1567578625069), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:25.074+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 618 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:55.074+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578625, 1), t: 1 }, appliedWallTime: new Date(1567578625069), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:25.074+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:55.074+0000 2019-09-04T06:30:25.074+0000 D2 ASIO [RS] Request 618 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578625, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 1) } 2019-09-04T06:30:25.074+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578625, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:25.074+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578625, 1), t: 1 } 2019-09-04T06:30:25.074+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 619 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:35.074+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578619, 1), t: 1 } } 2019-09-04T06:30:25.074+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:25.074+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:55.074+0000 2019-09-04T06:30:25.074+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:55.074+0000 2019-09-04T06:30:25.075+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:30:25.076+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:25.076+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578625, 1), t: 1 }, durableWallTime: new Date(1567578625069), appliedOpTime: { ts: Timestamp(1567578625, 1), t: 1 }, appliedWallTime: new Date(1567578625069), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:25.076+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 620 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:55.076+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: 
Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578625, 1), t: 1 }, durableWallTime: new Date(1567578625069), appliedOpTime: { ts: Timestamp(1567578625, 1), t: 1 }, appliedWallTime: new Date(1567578625069), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:25.076+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:55.074+0000
2019-09-04T06:30:25.076+0000 D2 ASIO [RS] Request 620 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578625, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 1) }
2019-09-04T06:30:25.076+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578619, 1), t: 1 }, lastCommittedWall: new Date(1567578619224), lastOpVisible: { ts: Timestamp(1567578619, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578619, 1), $clusterTime: { clusterTime: Timestamp(1567578625, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:25.076+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:25.076+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:55.074+0000
2019-09-04T06:30:25.076+0000 D2 ASIO [RS] Request 619 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 1), t: 1 }, lastCommittedWall: new Date(1567578625069), lastOpVisible: { ts: Timestamp(1567578625, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578625, 1), t: 1 }, lastCommittedWall: new Date(1567578625069), lastOpApplied: { ts: Timestamp(1567578625, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 1), $clusterTime: { clusterTime: Timestamp(1567578625, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 1) }
2019-09-04T06:30:25.076+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 1), t: 1 }, lastCommittedWall: new Date(1567578625069), lastOpVisible: { ts: Timestamp(1567578625, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578625, 1), t: 1 }, lastCommittedWall: new Date(1567578625069), lastOpApplied: { ts: Timestamp(1567578625, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 1), $clusterTime: { clusterTime: Timestamp(1567578625, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:25.076+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:25.076+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:25.076+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.076+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.076+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578620, 1)
2019-09-04T06:30:25.076+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:35.744+0000
2019-09-04T06:30:25.076+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:35.330+0000
2019-09-04T06:30:25.076+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:25.076+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 621 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:35.076+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578625, 1), t: 1 } }
2019-09-04T06:30:25.076+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:54.839+0000
2019-09-04T06:30:25.076+0000 D3 REPL [conn226] Got notified of new snapshot: { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.076+0000 D3 REPL [conn226] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.682+0000
2019-09-04T06:30:25.076+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:55.074+0000
2019-09-04T06:30:25.076+0000 D3 REPL [conn216] Got notified of new snapshot: { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.076+0000 D3 REPL [conn216] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.092+0000
2019-09-04T06:30:25.076+0000 D3 REPL [conn238] Got notified of new snapshot: { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn238] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.824+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn207] Got notified of new snapshot: { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn207] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.671+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn243] Got notified of new snapshot: { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn243] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.662+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn244] Got notified of new snapshot: { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn244] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.754+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn222] Got notified of new snapshot: { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn222] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.822+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn239] Got notified of new snapshot: { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn239] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.133+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn232] Got notified of new snapshot: { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn232] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.661+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn208] Got notified of new snapshot: { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn208] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.839+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn221] Got notified of new snapshot: { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn221] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.289+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn235] Got notified of new snapshot: { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn235] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.840+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn223] Got notified of new snapshot: { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn223] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.644+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn230] Got notified of new snapshot: { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn230] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.677+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn245] Got notified of new snapshot: { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn245] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.767+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn246] Got notified of new snapshot: { ts: Timestamp(1567578625, 1), t: 1 }, 2019-09-04T06:30:25.069+0000
2019-09-04T06:30:25.077+0000 D3 REPL [conn246] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:52.054+0000
2019-09-04T06:30:25.148+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:25.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:25.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:25.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:25.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:25.172+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578625, 1)
2019-09-04T06:30:25.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:25.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:25.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:25.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:25.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:25.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:25.248+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:25.348+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:25.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:25.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:25.448+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:25.548+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:25.549+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:25.549+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:25.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:25.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:25.649+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:25.656+0000 D2 ASIO [RS] Request 621 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578625, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578625653), o: { $v: 1, $set: { ping: new Date(1567578625652) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 1), t: 1 }, lastCommittedWall: new Date(1567578625069), lastOpVisible: { ts: Timestamp(1567578625, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578625, 1), t: 1 }, lastCommittedWall: new Date(1567578625069), lastOpApplied: { ts: Timestamp(1567578625, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 1), $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) }
2019-09-04T06:30:25.656+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578625, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578625653), o: { $v: 1, $set: { ping: new Date(1567578625652) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 1), t: 1 }, lastCommittedWall: new Date(1567578625069), lastOpVisible: { ts: Timestamp(1567578625, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578625, 1), t: 1 }, lastCommittedWall: new Date(1567578625069), lastOpApplied: { ts: Timestamp(1567578625, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 1), $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:25.656+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:25.656+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578625, 2) and ending at ts: Timestamp(1567578625, 2)
2019-09-04T06:30:25.656+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:35.330+0000
2019-09-04T06:30:25.656+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:35.961+0000
2019-09-04T06:30:25.656+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:25.656+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:54.839+0000
2019-09-04T06:30:25.656+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:25.656+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:25.656+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578625, 1)
2019-09-04T06:30:25.656+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9039
2019-09-04T06:30:25.656+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:25.656+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:25.656+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9039
2019-09-04T06:30:25.656+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:25.656+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:30:25.656+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:25.656+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578625, 1)
2019-09-04T06:30:25.656+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578625, 2) }
2019-09-04T06:30:25.656+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9042
2019-09-04T06:30:25.656+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:25.656+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:25.656+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9042
2019-09-04T06:30:25.656+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9028
2019-09-04T06:30:25.656+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578625, 2), t: 1 }
2019-09-04T06:30:25.656+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9028
2019-09-04T06:30:25.656+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9045
2019-09-04T06:30:25.656+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9045
2019-09-04T06:30:25.656+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:25.656+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 9047
2019-09-04T06:30:25.656+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578625, 2)
2019-09-04T06:30:25.656+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578625, 2)
2019-09-04T06:30:25.656+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 9047
2019-09-04T06:30:25.656+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:25.656+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:30:25.656+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9046
2019-09-04T06:30:25.656+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9046
2019-09-04T06:30:25.656+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9049
2019-09-04T06:30:25.656+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9049
2019-09-04T06:30:25.656+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578625, 2), t: 1 }({ ts: Timestamp(1567578625, 2), t: 1 })
2019-09-04T06:30:25.656+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578625, 2)
2019-09-04T06:30:25.656+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9050
2019-09-04T06:30:25.656+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578625, 2) } } ] } sort: {} projection: {}
2019-09-04T06:30:25.656+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:25.656+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:30:25.656+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578625, 2) Sort: {} Proj: {} =============================
2019-09-04T06:30:25.656+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:25.656+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:25.656+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:25.656+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578625, 2) || First: notFirst: full path: ts
2019-09-04T06:30:25.656+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:25.656+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578625, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:25.656+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:25.656+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:30:25.656+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:25.656+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:25.657+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:25.657+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:25.657+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:25.657+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:25.657+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:25.657+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578625, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:25.657+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:25.657+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:25.657+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:25.657+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578625, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:25.657+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:25.657+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578625, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:25.657+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9050
2019-09-04T06:30:25.657+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:25.657+0000 D3 STORAGE [repl-writer-worker-2] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:25.657+0000 D3 REPL [repl-writer-worker-2] applying op: { ts: Timestamp(1567578625, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578625653), o: { $v: 1, $set: { ping: new Date(1567578625652) } } }, oplog application mode: Secondary
2019-09-04T06:30:25.657+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578625, 2)
2019-09-04T06:30:25.657+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 9052
2019-09-04T06:30:25.657+0000 D2 QUERY [repl-writer-worker-2] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }
2019-09-04T06:30:25.657+0000 D4 WRITE [repl-writer-worker-2] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:30:25.657+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 9052
2019-09-04T06:30:25.657+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:25.657+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578625, 2), t: 1 }({ ts: Timestamp(1567578625, 2), t: 1 })
2019-09-04T06:30:25.657+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578625, 2)
2019-09-04T06:30:25.657+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9051
2019-09-04T06:30:25.657+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:30:25.657+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:25.657+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:30:25.657+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:25.657+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:25.657+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:25.657+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9051
2019-09-04T06:30:25.657+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578625, 2)
2019-09-04T06:30:25.657+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:25.657+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9055
2019-09-04T06:30:25.657+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9055
2019-09-04T06:30:25.657+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578625, 2), t: 1 }({ ts: Timestamp(1567578625, 2), t: 1 })
2019-09-04T06:30:25.657+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578625, 1), t: 1 }, durableWallTime: new Date(1567578625069), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 1), t: 1 }, lastCommittedWall: new Date(1567578625069), lastOpVisible: { ts: Timestamp(1567578625, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:25.657+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 622 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:55.657+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578625, 1), t: 1 }, durableWallTime: new Date(1567578625069), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 1), t: 1 }, lastCommittedWall: new Date(1567578625069), lastOpVisible: { ts: Timestamp(1567578625, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:25.657+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:55.657+0000
2019-09-04T06:30:25.658+0000 D2 ASIO [RS] Request 622 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 1), t: 1 }, lastCommittedWall: new Date(1567578625069), lastOpVisible: { ts: Timestamp(1567578625, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 1), $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) }
2019-09-04T06:30:25.658+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 1), t: 1 }, lastCommittedWall: new Date(1567578625069), lastOpVisible: { ts: Timestamp(1567578625, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 1), $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:25.658+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:25.658+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:55.658+0000
2019-09-04T06:30:25.658+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578625, 2), t: 1 }
2019-09-04T06:30:25.658+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 623 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:35.658+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578625, 1), t: 1 } }
2019-09-04T06:30:25.658+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:55.658+0000
2019-09-04T06:30:25.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:25.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:25.662+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:30:25.662+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:25.662+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 1), t: 1 }, lastCommittedWall: new Date(1567578625069), lastOpVisible: { ts: Timestamp(1567578625, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:25.662+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 624 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:55.662+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, durableWallTime: new Date(1567578619224), appliedOpTime: { ts: Timestamp(1567578619, 1), t: 1 }, appliedWallTime: new Date(1567578619224), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 1), t: 1 }, lastCommittedWall: new Date(1567578625069), lastOpVisible: { ts: Timestamp(1567578625, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:25.662+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:55.658+0000
2019-09-04T06:30:25.662+0000 D2 ASIO [RS] Request 624 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 1), t: 1 }, lastCommittedWall: new Date(1567578625069), lastOpVisible: { ts: Timestamp(1567578625, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 1), $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) }
2019-09-04T06:30:25.662+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 1), t: 1 }, lastCommittedWall: new Date(1567578625069), lastOpVisible: { ts: Timestamp(1567578625, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 1), $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:25.662+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:25.662+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:55.658+0000
2019-09-04T06:30:25.663+0000 D2 ASIO [RS] Request 623 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpApplied: { ts: Timestamp(1567578625, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) }
2019-09-04T06:30:25.663+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpApplied: { ts: Timestamp(1567578625, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:25.663+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:25.663+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:25.663+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578620, 2)
2019-09-04T06:30:25.663+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:35.961+0000
2019-09-04T06:30:25.663+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:36.275+0000
2019-09-04T06:30:25.663+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:25.663+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:54.839+0000
2019-09-04T06:30:25.663+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 625 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:35.663+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578625, 2), t: 1 } }
2019-09-04T06:30:25.663+0000 D3 REPL [conn226] Got notified of new snapshot: { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn226] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.682+0000
2019-09-04T06:30:25.663+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:30:55.658+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn222] Got notified of new snapshot: { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn222] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.822+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn244] Got notified of new snapshot: { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn244] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.754+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn246] Got notified of new snapshot: { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn246] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:52.054+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn208] Got notified of new snapshot: { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn208] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.839+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn245] Got notified of new snapshot: { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn245] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.767+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn243] Got notified of new snapshot: { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn243] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.662+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn216] Got notified of new snapshot: { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn216] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.092+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn207] Got notified of new snapshot: { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn207] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.671+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn232] Got notified of new snapshot: { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn232] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.661+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn221] Got notified of new snapshot: { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn221] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.289+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn223] Got notified of new snapshot: { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn223] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.644+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn235] Got notified of new snapshot: { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn235] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.840+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn238] Got notified of new snapshot: { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn238] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.824+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn239] Got notified of new snapshot: { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn239] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.133+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn230] Got notified of new snapshot: { ts: Timestamp(1567578625, 2), t: 1 }, 2019-09-04T06:30:25.653+0000
2019-09-04T06:30:25.663+0000 D3 REPL [conn230] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.677+0000
2019-09-04T06:30:25.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:25.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:25.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:25.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:25.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:25.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:25.749+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:25.756+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578625, 2)
2019-09-04T06:30:25.849+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:25.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:25.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:25.949+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:26.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:26.049+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:26.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:26.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:26.149+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:26.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:26.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:26.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:26.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:26.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:26.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:26.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:26.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:26.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 45491B4EEC1706D9A488DC039229F164414CAEEA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:26.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:30:26.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 45491B4EEC1706D9A488DC039229F164414CAEEA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:26.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 45491B4EEC1706D9A488DC039229F164414CAEEA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:26.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), opTime: { ts: Timestamp(1567578625, 2), t: 1 }, wallTime: new Date(1567578625653) }
2019-09-04T06:30:26.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 45491B4EEC1706D9A488DC039229F164414CAEEA), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:26.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:26.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:26.250+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:26.294+0000 D2 COMMAND [conn228] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578623, 1), signature: { hash: BinData(0, 3A03B2C9D6C104013865981BCFBDB7C92D5EFDE9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:30:26.294+0000 D1 REPL [conn228] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578625, 2), t: 1 }
2019-09-04T06:30:26.294+0000 D3 REPL [conn228] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:56.304+0000
2019-09-04T06:30:26.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:26.297+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:26.350+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:26.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:26.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:26.450+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:26.550+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:26.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:26.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:26.650+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:26.656+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:26.656+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:26.656+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578625, 2)
2019-09-04T06:30:26.656+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9075
2019-09-04T06:30:26.656+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:26.656+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:26.656+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9075
2019-09-04T06:30:26.657+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9078
2019-09-04T06:30:26.657+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9078
2019-09-04T06:30:26.657+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578625, 2), t: 1 }({ ts: Timestamp(1567578625, 2), t: 1 })
2019-09-04T06:30:26.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:26.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:26.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:26.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:26.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:26.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:26.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:26.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:26.750+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:26.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:26.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:26.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:26.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 626) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:26.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 626 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:36.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:26.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:54.839+0000
2019-09-04T06:30:26.838+0000 D2 ASIO [Replication] Request 626 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), opTime: { ts: Timestamp(1567578625, 2), t: 1 }, wallTime: new Date(1567578625653), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) }
2019-09-04T06:30:26.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), opTime: { ts: Timestamp(1567578625, 2), t: 1 }, wallTime: new Date(1567578625653), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:26.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:26.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 626) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), opTime: { ts: Timestamp(1567578625, 2), t: 1 }, wallTime: new Date(1567578625653), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) }
2019-09-04T06:30:26.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:30:26.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:28.838Z
2019-09-04T06:30:26.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:54.839+0000
2019-09-04T06:30:26.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:26.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 627) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:26.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 627 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:36.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:26.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:54.839+0000
2019-09-04T06:30:26.839+0000 D2 ASIO [Replication] Request 627 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), opTime: { ts: Timestamp(1567578625, 2), t: 1 }, wallTime: new Date(1567578625653), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) }
2019-09-04T06:30:26.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), opTime: { ts: Timestamp(1567578625, 2), t: 1 }, wallTime: new Date(1567578625653), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:30:26.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:26.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 627) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), opTime: { ts: Timestamp(1567578625, 2), t: 1 }, wallTime: new Date(1567578625653), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) }
2019-09-04T06:30:26.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:30:26.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:30:36.275+0000
2019-09-04T06:30:26.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:30:36.972+0000
2019-09-04T06:30:26.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:30:26.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:28.839Z
2019-09-04T06:30:26.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:26.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:56.839+0000
2019-09-04T06:30:26.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:56.839+0000
2019-09-04T06:30:26.850+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:26.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:26.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:26.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:26.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:26.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:26.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:26.951+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:27.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:27.051+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:27.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:27.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:27.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 45491B4EEC1706D9A488DC039229F164414CAEEA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:27.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:27.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 45491B4EEC1706D9A488DC039229F164414CAEEA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:27.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 45491B4EEC1706D9A488DC039229F164414CAEEA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:27.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), opTime: { ts: Timestamp(1567578625, 2), t: 1 }, wallTime: new Date(1567578625653) } 2019-09-04T06:30:27.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 45491B4EEC1706D9A488DC039229F164414CAEEA), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:27.151+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:27.160+0000 D2 
COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:27.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:27.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:27.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:27.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:27.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:27.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:27.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:27.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:27.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:27.251+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:27.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:27.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:27.351+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:27.356+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:27.356+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:27.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:27.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:27.451+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:27.551+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:27.652+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:27.656+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:27.656+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:27.656+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578625, 2) 2019-09-04T06:30:27.656+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9100 2019-09-04T06:30:27.657+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:27.657+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:27.657+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9100 2019-09-04T06:30:27.657+0000 
D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9103 2019-09-04T06:30:27.657+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9103 2019-09-04T06:30:27.658+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578625, 2), t: 1 }({ ts: Timestamp(1567578625, 2), t: 1 }) 2019-09-04T06:30:27.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:27.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:27.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:27.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:27.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:27.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:27.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:27.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:27.752+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:27.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:27.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:27.852+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:27.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:27.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:27.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:27.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:27.952+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:28.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:28.021+0000 D2 COMMAND [conn50] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578600, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578627, 1), signature: { hash: BinData(0, AD04BEA240EAB3319E2AC04DFDCE765729E8C7C3), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578600, 3), t: 1 } }, $db: "config" } 2019-09-04T06:30:28.021+0000 D1 COMMAND [conn50] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578600, 3), t: 1 } } } 2019-09-04T06:30:28.021+0000 D3 STORAGE [conn50] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:30:28.021+0000 D1 COMMAND [conn50] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578600, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: 
"nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578627, 1), signature: { hash: BinData(0, AD04BEA240EAB3319E2AC04DFDCE765729E8C7C3), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578600, 3), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578625, 2) 2019-09-04T06:30:28.021+0000 D5 QUERY [conn50] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:28.021+0000 D5 QUERY [conn50] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:30:28.021+0000 D5 QUERY [conn50] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:30:28.021+0000 D5 QUERY [conn50] Rated tree: $and 2019-09-04T06:30:28.021+0000 D5 QUERY [conn50] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:28.021+0000 D5 QUERY [conn50] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:28.021+0000 D2 QUERY [conn50] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:28.021+0000 D3 STORAGE [conn50] WT begin_transaction for snapshot id 9113 2019-09-04T06:30:28.021+0000 D3 STORAGE [conn50] WT rollback_transaction for snapshot id 9113 2019-09-04T06:30:28.021+0000 I COMMAND [conn50] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578600, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578627, 1), signature: { hash: BinData(0, AD04BEA240EAB3319E2AC04DFDCE765729E8C7C3), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578600, 3), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:30:28.052+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:28.152+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:28.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:28.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:28.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:28.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:28.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:28.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:28.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:28.214+0000 I COMMAND [conn5] 
command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:28.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 45491B4EEC1706D9A488DC039229F164414CAEEA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:28.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:30:28.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 45491B4EEC1706D9A488DC039229F164414CAEEA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:28.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 45491B4EEC1706D9A488DC039229F164414CAEEA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:28.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), opTime: { ts: Timestamp(1567578625, 2), t: 1 }, wallTime: new Date(1567578625653) } 2019-09-04T06:30:28.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578625, 2), signature: { hash: BinData(0, 45491B4EEC1706D9A488DC039229F164414CAEEA), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:28.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:28.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:30:28.252+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:28.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:28.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:28.352+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:28.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:28.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:28.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:28.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:28.453+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:28.553+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:28.653+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:28.657+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:28.657+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:28.657+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578625, 2) 2019-09-04T06:30:28.657+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9124 2019-09-04T06:30:28.657+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:28.657+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:28.657+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9124 2019-09-04T06:30:28.658+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9127 2019-09-04T06:30:28.658+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9127 2019-09-04T06:30:28.658+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578625, 2), t: 1 }({ ts: Timestamp(1567578625, 2), t: 1 }) 2019-09-04T06:30:28.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:28.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:28.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:28.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:28.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:28.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:28.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: 
"admin" } 2019-09-04T06:30:28.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:28.753+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:28.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:28.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:28.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:28.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 628) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:28.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 628 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:38.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:28.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:56.839+0000 2019-09-04T06:30:28.838+0000 D2 ASIO [Replication] Request 628 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), opTime: { ts: Timestamp(1567578625, 2), t: 1 }, wallTime: new Date(1567578625653), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578627, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) } 2019-09-04T06:30:28.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), opTime: { ts: Timestamp(1567578625, 2), t: 1 }, wallTime: new Date(1567578625653), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578627, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:28.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:28.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 628) from cmodb804.togewa.com:27019, { 
ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), opTime: { ts: Timestamp(1567578625, 2), t: 1 }, wallTime: new Date(1567578625653), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578627, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) } 2019-09-04T06:30:28.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:28.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:30.838Z 2019-09-04T06:30:28.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:56.839+0000 2019-09-04T06:30:28.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:28.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 629) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:28.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 629 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:38.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:28.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:56.839+0000 2019-09-04T06:30:28.839+0000 D2 ASIO [Replication] Request 629 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), opTime: { ts: Timestamp(1567578625, 2), t: 1 }, wallTime: new Date(1567578625653), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578627, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) } 2019-09-04T06:30:28.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), opTime: { ts: Timestamp(1567578625, 2), t: 1 }, wallTime: new Date(1567578625653), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new 
Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578627, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:28.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:28.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 629) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), opTime: { ts: Timestamp(1567578625, 2), t: 1 }, wallTime: new Date(1567578625653), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578627, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578625, 2) } 2019-09-04T06:30:28.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:28.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:30:36.972+0000 2019-09-04T06:30:28.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:30:39.733+0000 2019-09-04T06:30:28.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:28.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:30.839Z 2019-09-04T06:30:28.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:28.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:58.839+0000 2019-09-04T06:30:28.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:58.839+0000 2019-09-04T06:30:28.853+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:28.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:28.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:28.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:28.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:28.953+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:29.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:29.053+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:29.061+0000 D3 EXECUTOR 
[replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:29.061+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:30:28.839+0000 2019-09-04T06:30:29.061+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:30:28.838+0000 2019-09-04T06:30:29.061+0000 D3 REPL [replexec-3] stalest member MemberId(2) date: 2019-09-04T06:30:28.838+0000 2019-09-04T06:30:29.061+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:30:38.838+0000 2019-09-04T06:30:29.061+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:58.839+0000 2019-09-04T06:30:29.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578627, 1), signature: { hash: BinData(0, AD04BEA240EAB3319E2AC04DFDCE765729E8C7C3), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:29.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:29.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578627, 1), signature: { hash: BinData(0, AD04BEA240EAB3319E2AC04DFDCE765729E8C7C3), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:29.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578627, 1), signature: { hash: BinData(0, AD04BEA240EAB3319E2AC04DFDCE765729E8C7C3), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:29.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), opTime: { ts: Timestamp(1567578625, 2), t: 1 }, wallTime: new Date(1567578625653) } 2019-09-04T06:30:29.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578627, 1), signature: { hash: BinData(0, AD04BEA240EAB3319E2AC04DFDCE765729E8C7C3), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:29.153+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:29.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:29.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:29.213+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:29.213+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:29.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:29.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: 
"admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:29.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:29.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:29.254+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:29.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:29.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:29.354+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:29.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:29.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:29.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:29.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:29.454+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:29.554+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:29.654+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:29.657+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:29.657+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:29.657+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578625, 2) 2019-09-04T06:30:29.657+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9146 2019-09-04T06:30:29.657+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:29.657+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:29.657+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9146 2019-09-04T06:30:29.658+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9149 2019-09-04T06:30:29.658+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9149 2019-09-04T06:30:29.658+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578625, 2), t: 1 }({ ts: Timestamp(1567578625, 2), t: 1 }) 2019-09-04T06:30:29.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:29.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:29.713+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:29.713+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:29.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:29.714+0000 I 
COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:29.735+0000 I NETWORK [listener] connection accepted from 10.108.2.49:53376 #247 (86 connections now open) 2019-09-04T06:30:29.735+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:29.735+0000 D2 COMMAND [conn247] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:29.735+0000 I NETWORK [conn247] received client metadata from 10.108.2.49:53376 conn247: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:29.735+0000 I COMMAND [conn247] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:29.740+0000 D2 COMMAND [conn247] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, D896F3BE714D00815704D4FC4827545AB3DF55E0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:29.740+0000 D1 REPL [conn247] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578625, 2), t: 1 } 2019-09-04T06:30:29.740+0000 D3 REPL [conn247] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:59.750+0000 2019-09-04T06:30:29.754+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:29.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:29.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:29.854+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:29.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:29.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:29.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:29.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:29.955+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
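The find on config.shards from conn247 is worth pausing on: it asks for readConcern majority with afterOpTime { ts: Timestamp(1566459168, 1), t: 92 }, while this node's newest committed snapshot is { ts: Timestamp(1567578625, 2), t: 1 }. Replication optimes compare term-first, so an optime tagged t: 92 sorts after anything in term 1 even though its timestamp is roughly 13 days older; the wait therefore appears unsatisfiable by anything this replica set will produce and will likely sit until the 30000 ms maxTimeMS deadline (the log shows a wait scheduled until 06:30:59.750), which suggests the client is presenting a config optime cached from an earlier incarnation of the set. A small sketch to flag such stalls, assuming Python 3; the regex targets only the D1 REPL line shape shown above:

#!/usr/bin/env python3
"""Flag waitUntilOpTime stalls whose requested term differs from the current one."""
import re
import sys

# Matches the D1 REPL "waitUntilOpTime" lines shown above (4.2-style layout; illustrative).
PAT = re.compile(
    r"\[(conn\d+)\] waitUntilOpTime: waiting for optime:\{ ts: Timestamp\((\d+), \d+\), t: (\d+) \} "
    r"to be in a snapshot -- current snapshot: \{ ts: Timestamp\((\d+), \d+\), t: (\d+) \}"
)

for line in sys.stdin:
    m = PAT.search(line)
    if not m:
        continue
    conn, want_ts, want_t, have_ts, have_t = m.groups()
    # OpTimes compare term-first: an optime from a higher term is "later"
    # regardless of its wall-clock timestamp, so a wait on a term the set
    # no longer has cannot complete until such an entry is committed locally.
    if want_t != have_t:
        print(f"{conn}: wants t={want_t} ts={want_ts}, have t={have_t} ts={have_ts} (term mismatch)")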
2019-09-04T06:30:30.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:30.000+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:30:30.000+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:30:30.000+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:30:30.007+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:30:30.007+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:30:30.007+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:30:30.007+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:30:30.007+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:30:30.007+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:30:30.007+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:30.008+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:30.015+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:30.015+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:30:30.015+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:30:30.018+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:30:30.018+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:30.018+0000 D5 QUERY [conn90] Beginning planning... 
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:30.018+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:30:30.018+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:30:30.018+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:30:30.018+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:30:30.018+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:30:30.018+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:30:30.018+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:30.018+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:30.018+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:30.018+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578625, 2)
2019-09-04T06:30:30.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9165
2019-09-04T06:30:30.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9165
2019-09-04T06:30:30.018+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:30.029+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:30:30.029+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:30:30.042+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:30:30.042+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:30.042+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:30:30.042+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:30:30.042+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:30:30.042+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578625, 2)
2019-09-04T06:30:30.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9168
2019-09-04T06:30:30.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9168
2019-09-04T06:30:30.042+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:30.042+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:30:30.042+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:30.042+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:30:30.042+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:30:30.042+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:30:30.042+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578625, 2) 2019-09-04T06:30:30.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9170 2019-09-04T06:30:30.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9170 2019-09-04T06:30:30.042+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:30.042+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:30:30.043+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:30:30.043+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:30:30.043+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:30.043+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9173 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:30:30.043+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9173 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9174 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9174 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9175 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9175 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9176 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9176 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9177 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 
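
The conn90 traffic above is a standard freshness probe: the newest entry in local.oplog.rs is read with a reverse natural-order scan (the COLLSCAN find logged above), and the legacy local.oplog.$main namespace is probed as well; that collection exists only under the old master/slave replication, so on this replica-set member the server answers with an EOF plan and nreturned:0. A minimal pymongo sketch of the same probe, plus the tailing pattern the oplog fetcher uses later in this log; the host/port are taken from this log, and pymongo itself is an assumption:

    from pymongo import MongoClient, CursorType

    # Address from this log; substitute your own node. secondaryPreferred
    # matches the $readPreference on the probe commands above.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         readPreference="secondaryPreferred")
    oplog = client["local"]["oplog.rs"]

    # Newest oplog entry: reverse natural order, limit 1 (assumes a non-empty
    # oplog, which any replica-set member has).
    newest = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])
    print("latest oplog ts:", newest["ts"])

    # Steady-state tailing, the application-level analogue of the oplog
    # fetcher's getMore traffic later in this log: a tailable awaitData cursor
    # that blocks up to max_await_time_ms per empty batch.
    cursor = oplog.find({"ts": {"$gt": newest["ts"]}},
                        cursor_type=CursorType.TAILABLE_AWAIT,
                        max_await_time_ms=5000)
    for entry in cursor:
        # op codes: "i" insert, "u" update (o2 = target _id, o = modifier),
        # "d" delete, "n" no-op, "c" command.
        print(entry["ts"], entry["op"], entry.get("ns"))
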
2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9177 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9178 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9178 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9179 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9179 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9180 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9180 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9181 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9181 
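
config.locks, whose ts_1 and state_1_process_1 indexes are listed above, backs the distributed locks that the balancer and cluster-wide DDL take through this config server. A sketch of listing currently held locks; that state 2 means "locked" is a convention of this era of MongoDB and should be treated as an assumption to verify against your version:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    # This predicate is exactly the shape the state_1_process_1 index serves.
    for lock in client["config"]["locks"].find({"state": {"$gte": 2}}):
        print(lock["_id"], lock.get("process"), lock.get("why"))
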
2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9182 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.043+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9182 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9183 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: 
BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9183 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9184 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9184 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9185 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9185 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9186 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9186 
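
config.chunks, fetched above with three unique indexes, is the cluster's routing table: ns_1_min_1 serves range lookups, ns_1_shard_1_min_1 per-shard scans, and ns_1_lastmod_1 incremental refreshes by chunk version. A read-only sketch that summarises chunk distribution per sharded collection and shard:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    pipeline = [
        {"$group": {"_id": {"ns": "$ns", "shard": "$shard"},
                    "chunks": {"$sum": 1}}},
        {"$sort": {"_id.ns": 1, "chunks": -1}},
    ]
    for row in client["config"]["chunks"].aggregate(pipeline):
        print(row["_id"]["ns"], row["_id"]["shard"], row["chunks"])
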
2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9187 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9187 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9188 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 
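
Each "fetched CCE metadata" record above is the storage catalog's (_mdb_catalog.wt) view of one collection: namespace and UUID under md.options, index specs under md.indexes, and the on-disk table names under ident/idxIdent. The user-visible projection of the same information comes from listCollections and listIndexes; a sketch against config.tags, whose catalog entry appears above:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    cfg = client["config"]

    # Collection options and UUID (md.options in the catalog records above).
    for info in cfg.list_collections(filter={"name": "tags"}):
        print(info["name"], info["info"].get("uuid"), info.get("options"))

    # Index specs (md.indexes above): ns_1_min_1, ns_1_tag_1, _id_.
    for name, spec in cfg["tags"].index_information().items():
        print(name, spec["key"])
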
2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9188 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9189 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9189 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9190 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9190 
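
local.startup_log, just looked up above, is a 10485760-byte (10 MB) capped collection to which mongod appends one document per process start. The capped options recorded in the catalog can be read back through listCollections:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    info = next(client["local"].list_collections(filter={"name": "startup_log"}))
    print(info["options"])  # per the catalog record above: capped, size 10485760
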
2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9191 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9191 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9192 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9192 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9193 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking 
up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9193 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9194 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:30:30.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9194 2019-09-04T06:30:30.044+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:30:30.044+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9196 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9196 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9197 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9197 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9198 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9198 2019-09-04T06:30:30.045+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:30.045+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 
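
The command sequence around this point -- listDatabases, then dbStats once per database (admin, config, local), all tagged secondaryPreferred -- is a typical monitoring sweep; the runs of begin/rollback transaction pairs that follow are consistent with dbStats opening a short read transaction per collection it sizes. The same sweep in pymongo:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         readPreference="secondaryPreferred")
    for entry in client.admin.command("listDatabases")["databases"]:
        stats = client[entry["name"]].command("dbStats")
        print(entry["name"], stats["collections"], stats["objects"],
              stats["dataSize"])
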
2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9200 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9200 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9201 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9201 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9202 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9202 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9203 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9203 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9204 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9204 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9205 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9205 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9206 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9206 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9207 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9207 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9208 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9208 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9209 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9209 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9210 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9210 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9211 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9211 2019-09-04T06:30:30.045+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:30.045+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9213 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9213 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9214 2019-09-04T06:30:30.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9214 2019-09-04T06:30:30.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9215 2019-09-04T06:30:30.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9215 2019-09-04T06:30:30.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9216 2019-09-04T06:30:30.046+0000 D3 STORAGE [conn90] WT 
rollback_transaction for snapshot id 9216 2019-09-04T06:30:30.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9217 2019-09-04T06:30:30.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9217 2019-09-04T06:30:30.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9218 2019-09-04T06:30:30.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9218 2019-09-04T06:30:30.046+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:30.055+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:30.121+0000 D2 ASIO [RS] Request 625 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578630, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578630119), o: { $v: 1, $set: { ping: new Date(1567578630119) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpApplied: { ts: Timestamp(1567578630, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578630, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 1) } 2019-09-04T06:30:30.121+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578630, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578630119), o: { $v: 1, $set: { ping: new Date(1567578630119) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpApplied: { ts: Timestamp(1567578630, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578630, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), 
keyId: 0 } }, operationTime: Timestamp(1567578630, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:30.121+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:30.122+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578630, 1) and ending at ts: Timestamp(1567578630, 1) 2019-09-04T06:30:30.122+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:39.733+0000 2019-09-04T06:30:30.122+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:40.225+0000 2019-09-04T06:30:30.122+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:30.122+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:58.839+0000 2019-09-04T06:30:30.122+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578630, 1), t: 1 } 2019-09-04T06:30:30.122+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:30.122+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:30.122+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578625, 2) 2019-09-04T06:30:30.122+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9221 2019-09-04T06:30:30.122+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:30.122+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:30.122+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9221 2019-09-04T06:30:30.122+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:30.122+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:30:30.122+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:30.122+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578625, 2) 2019-09-04T06:30:30.122+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9224 2019-09-04T06:30:30.122+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578630, 1) } 2019-09-04T06:30:30.122+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:30.122+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:30.122+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9224 2019-09-04T06:30:30.122+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9150 2019-09-04T06:30:30.122+0000 D3 STORAGE [rsSync-0] WT 
commit_transaction for snapshot id 9150
2019-09-04T06:30:30.122+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9227
2019-09-04T06:30:30.122+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9227
2019-09-04T06:30:30.122+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:30.122+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 9229
2019-09-04T06:30:30.122+0000 D4 STORAGE [repl-writer-worker-0] inserting record with timestamp Timestamp(1567578630, 1)
2019-09-04T06:30:30.122+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578630, 1)
2019-09-04T06:30:30.122+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 9229
2019-09-04T06:30:30.122+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:30.122+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:30:30.122+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9228
2019-09-04T06:30:30.122+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9228
2019-09-04T06:30:30.122+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9231
2019-09-04T06:30:30.122+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9231
2019-09-04T06:30:30.122+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578630, 1), t: 1 }({ ts: Timestamp(1567578630, 1), t: 1 })
2019-09-04T06:30:30.122+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578630, 1)
2019-09-04T06:30:30.122+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9232
2019-09-04T06:30:30.122+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578630, 1) } } ] } sort: {} projection: {}
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578630, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1  || First: notFirst: full path: t
    ts $lt Timestamp(1567578630, 1)  || First: notFirst: full path: ts
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578630, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1  || First: notFirst: full path: t
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578630, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1  || First: notFirst: full path: t
        ts $lt Timestamp(1567578630, 1)  || First: notFirst: full path: ts
    t $lt 1  || First: notFirst: full path: t
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
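
The D5 planner output above shows the subplanner at work: because the filter on local.replset.minvalid is a rooted $or, each branch ({ t: 1, ts: { $lt: ... } } and { t: { $lt: 1 } }) is planned independently, and with only the _id index available every branch, and the tree as a whole, falls back to a COLLSCAN. The same plan shape can be inspected with the explain command; the database, collection and field names below are illustrative:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    plan = client["test"].command(
        "explain",
        {"find": "example",
         "filter": {"$or": [{"t": 1, "ts": {"$lt": 100}},
                            {"t": {"$lt": 1}}]}},
        verbosity="queryPlanner",
    )
    print(plan["queryPlanner"]["winningPlan"])
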
2019-09-04T06:30:30.122+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578630, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:30.122+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9232
2019-09-04T06:30:30.122+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:30.122+0000 D3 STORAGE [repl-writer-worker-15] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:30.122+0000 D3 REPL [repl-writer-worker-15] applying op: { ts: Timestamp(1567578630, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578630119), o: { $v: 1, $set: { ping: new Date(1567578630119) } } }, oplog application mode: Secondary
2019-09-04T06:30:30.122+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578630, 1)
2019-09-04T06:30:30.122+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 9234
2019-09-04T06:30:30.123+0000 D2 QUERY [repl-writer-worker-15] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }
2019-09-04T06:30:30.123+0000 D4 WRITE [repl-writer-worker-15] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:30:30.123+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 9234
2019-09-04T06:30:30.123+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:30.123+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578630, 1), t: 1 }({ ts: Timestamp(1567578630, 1), t: 1 })
2019-09-04T06:30:30.123+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578630, 1)
2019-09-04T06:30:30.123+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9233
2019-09-04T06:30:30.123+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:30:30.123+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:30.123+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:30:30.123+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:30.123+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:30.123+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:30.123+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9233
2019-09-04T06:30:30.123+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578630, 1)
2019-09-04T06:30:30.123+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9237
2019-09-04T06:30:30.123+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9237
2019-09-04T06:30:30.123+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578630, 1), t: 1 }({ ts: Timestamp(1567578630, 1), t: 1 })
2019-09-04T06:30:30.123+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:30.123+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578630, 1), t: 1 }, appliedWallTime: new Date(1567578630119), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:30.123+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 630 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:00.123+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578630, 1), t: 1 }, appliedWallTime: new Date(1567578630119), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:30.123+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:00.123+0000
2019-09-04T06:30:30.123+0000 D2 ASIO [RS] Request 630 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578630, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 1) }
2019-09-04T06:30:30.123+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578625, 2), t: 1 }, lastCommittedWall: new Date(1567578625653), lastOpVisible: { ts: Timestamp(1567578625, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578625, 2), $clusterTime: { clusterTime: Timestamp(1567578630, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:30.123+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:30.123+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:00.123+0000
2019-09-04T06:30:30.124+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578630, 1), t: 1 }
2019-09-04T06:30:30.124+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 631 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:40.124+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578625, 2), t: 1 } }
2019-09-04T06:30:30.124+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:00.123+0000
2019-09-04T06:30:30.125+0000 D2 ASIO [RS] Request 631 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 1), t: 1 }, lastCommittedWall: new Date(1567578630119), lastOpVisible: { ts: Timestamp(1567578630, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578630, 1), t: 1 }, lastCommittedWall: new Date(1567578630119), lastOpApplied: { ts: Timestamp(1567578630, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 1), $clusterTime: { clusterTime: Timestamp(1567578630, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 1) }
2019-09-04T06:30:30.125+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 1), t: 1 }, lastCommittedWall: new Date(1567578630119), lastOpVisible: { ts: Timestamp(1567578630, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578630, 1), t: 1 }, lastCommittedWall: new Date(1567578630119), lastOpApplied: { ts: Timestamp(1567578630, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 1), $clusterTime: { clusterTime: Timestamp(1567578630, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:30.125+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:30.125+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:30.125+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.125+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.125+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578625, 1)
2019-09-04T06:30:30.125+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:40.225+0000
2019-09-04T06:30:30.125+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:41.356+0000
2019-09-04T06:30:30.125+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:30.125+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:58.839+0000
2019-09-04T06:30:30.125+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 632 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:40.125+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578630, 1), t: 1 } }
2019-09-04T06:30:30.125+0000 D3 REPL [conn244] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.125+0000 D3 REPL [conn244] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.754+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn246] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn246] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:52.054+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn243] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn243] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.662+0000
2019-09-04T06:30:30.126+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:00.123+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn232] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn232] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.661+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn230] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn230] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.677+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn247] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn247] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:59.750+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn239] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn239] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.133+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn228] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn228] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:56.304+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn226] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn226] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.682+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn222] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn222] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.822+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn208] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn208] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.839+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn245] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn245] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.767+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn216] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn216] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.092+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn207] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn207] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.671+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn223] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn223] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.644+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn238] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn238] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.824+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn221] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn221] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.289+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn235] Got notified of new snapshot: { ts: Timestamp(1567578630, 1), t: 1 }, 2019-09-04T06:30:30.119+0000
2019-09-04T06:30:30.126+0000 D3 REPL [conn235] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.840+0000
2019-09-04T06:30:30.127+0000 D2 ASIO [RS] Request 632 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578630, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578630126), o: { $v: 1, $set: { ping: new Date(1567578630125) } } }, { ts: Timestamp(1567578630, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578630126), o: { $v: 1, $set: { ping: new Date(1567578630126) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 1), t: 1 }, lastCommittedWall: new Date(1567578630119), lastOpVisible: { ts: Timestamp(1567578630, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578630, 1), t: 1 }, lastCommittedWall: new Date(1567578630119), lastOpApplied: { ts: Timestamp(1567578630, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 1), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) }
2019-09-04T06:30:30.127+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578630, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578630126), o: { $v: 1, $set: { ping: new Date(1567578630125) } } }, { ts: Timestamp(1567578630, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578630126), o: { $v: 1, $set: { ping: new Date(1567578630126) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 1), t: 1 }, lastCommittedWall: new Date(1567578630119), lastOpVisible: { ts: Timestamp(1567578630, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578630, 1), t: 1 }, lastCommittedWall: new Date(1567578630119), lastOpApplied: { ts: Timestamp(1567578630, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 1), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:30.127+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:30.127+0000 D2 REPL [replication-0] oplog fetcher read 2 operations from remote oplog starting at ts: Timestamp(1567578630, 2) and ending at ts: Timestamp(1567578630, 3)
2019-09-04T06:30:30.127+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:41.356+0000
2019-09-04T06:30:30.127+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:40.511+0000
2019-09-04T06:30:30.127+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578630, 3), t: 1 }
2019-09-04T06:30:30.127+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:30.127+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:58.839+0000
2019-09-04T06:30:30.127+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:30.127+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:30.127+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578630, 1)
2019-09-04T06:30:30.127+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9241
2019-09-04T06:30:30.127+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:30.127+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:30.127+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9241
2019-09-04T06:30:30.127+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:30.127+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:30.127+0000 D2 REPL [rsSync-0] replication batch size is 2
2019-09-04T06:30:30.127+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578630, 1)
2019-09-04T06:30:30.127+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578630, 2) }
2019-09-04T06:30:30.127+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9244
2019-09-04T06:30:30.127+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:30.127+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:30.127+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9238
2019-09-04T06:30:30.127+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9244
2019-09-04T06:30:30.127+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9238
2019-09-04T06:30:30.127+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9247
2019-09-04T06:30:30.127+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9247
2019-09-04T06:30:30.127+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:30.128+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 9249
2019-09-04T06:30:30.128+0000 D4 STORAGE [repl-writer-worker-1] inserting record with timestamp Timestamp(1567578630, 2)
2019-09-04T06:30:30.128+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578630, 2)
2019-09-04T06:30:30.128+0000 D4 STORAGE [repl-writer-worker-1] inserting record with timestamp Timestamp(1567578630, 3)
2019-09-04T06:30:30.128+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578630, 3)
2019-09-04T06:30:30.128+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 9249
2019-09-04T06:30:30.128+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:30.128+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:30:30.128+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9248
2019-09-04T06:30:30.128+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9248
2019-09-04T06:30:30.128+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9251
2019-09-04T06:30:30.128+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9251
2019-09-04T06:30:30.128+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578630, 3), t: 1 }({ ts: Timestamp(1567578630, 3), t: 1 })
2019-09-04T06:30:30.128+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578630, 3)
2019-09-04T06:30:30.128+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9252
2019-09-04T06:30:30.128+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578630, 3) } } ] } sort: {} projection: {}
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578630, 3) Sort: {} Proj: {} =============================
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578630, 3) || First: notFirst: full path: ts
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578630, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578630, 3) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578630, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578630, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:30.128+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9252
2019-09-04T06:30:30.128+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:30.128+0000 D3 STORAGE [repl-writer-worker-5] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:30.128+0000 D3 REPL [repl-writer-worker-5] applying op: { ts: Timestamp(1567578630, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578630126), o: { $v: 1, $set: { ping: new Date(1567578630126) } } }, oplog application mode: Secondary
2019-09-04T06:30:30.128+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578630, 3)
2019-09-04T06:30:30.128+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 9254
2019-09-04T06:30:30.128+0000 D2 QUERY [repl-writer-worker-5] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }
2019-09-04T06:30:30.128+0000 D4 WRITE [repl-writer-worker-5] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:30:30.128+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 9254
2019-09-04T06:30:30.128+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:30.128+0000 D3 STORAGE [repl-writer-worker-5] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:30.128+0000 D3 REPL [repl-writer-worker-5] applying op: { ts: Timestamp(1567578630, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578630126), o: { $v: 1, $set: { ping: new Date(1567578630125) } } }, oplog application mode: Secondary
2019-09-04T06:30:30.128+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578630, 2)
2019-09-04T06:30:30.128+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 9256
2019-09-04T06:30:30.128+0000 D2 QUERY [repl-writer-worker-5] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }
2019-09-04T06:30:30.128+0000 D4 WRITE [repl-writer-worker-5] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:30:30.128+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 9256
2019-09-04T06:30:30.128+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:30.128+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:30.128+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578630, 3), t: 1 }({ ts: Timestamp(1567578630, 3), t: 1 })
2019-09-04T06:30:30.128+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:30:30.128+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578630, 3)
2019-09-04T06:30:30.128+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9253
2019-09-04T06:30:30.128+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:30:30.128+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578630, 1), t: 1 }, durableWallTime: new Date(1567578630119), appliedOpTime: { ts: Timestamp(1567578630, 1), t: 1 }, appliedWallTime: new Date(1567578630119), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 1), t: 1 }, lastCommittedWall: new Date(1567578630119), lastOpVisible: { ts: Timestamp(1567578630, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:30.128+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 633 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:00.128+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578630, 1), t: 1 }, durableWallTime: new Date(1567578630119), appliedOpTime: { ts: Timestamp(1567578630, 1), t: 1 }, appliedWallTime: new Date(1567578630119), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 1), t: 1 }, lastCommittedWall: new Date(1567578630119), lastOpVisible: { ts: Timestamp(1567578630, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:30:30.128+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:00.128+0000
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:30.128+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:30.128+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:30.128+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9253
2019-09-04T06:30:30.129+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578630, 3)
2019-09-04T06:30:30.129+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9260
2019-09-04T06:30:30.129+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9260
2019-09-04T06:30:30.129+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578630, 3), t: 1 }({ ts: Timestamp(1567578630, 3), t: 1 })
2019-09-04T06:30:30.129+0000 D2 ASIO [RS] Request 633 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 1), t: 1 }, lastCommittedWall: new Date(1567578630119), lastOpVisible: { ts: Timestamp(1567578630, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 1), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) }
2019-09-04T06:30:30.129+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 1), t: 1 }, lastCommittedWall: new Date(1567578630119), lastOpVisible: { ts: Timestamp(1567578630, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 1), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:30.129+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:30.129+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578630, 1), t: 1 }, durableWallTime: new Date(1567578630119), appliedOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, appliedWallTime: new Date(1567578630126), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 1), t: 1 }, lastCommittedWall: new Date(1567578630119), lastOpVisible: { ts: Timestamp(1567578630, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:30.129+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 634 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:00.129+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578630, 1), t: 1 }, durableWallTime: new Date(1567578630119), appliedOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, appliedWallTime: new Date(1567578630126), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 1), t: 1 }, lastCommittedWall: new Date(1567578630119), lastOpVisible: { ts: Timestamp(1567578630, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:30.129+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:00.129+0000
2019-09-04T06:30:30.129+0000 D2 ASIO [RS] Request 634 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) }
2019-09-04T06:30:30.129+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:30.129+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:30.129+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:00.129+0000
2019-09-04T06:30:30.129+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578630, 3), t: 1 }
2019-09-04T06:30:30.129+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:30:30.129+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 635 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:40.129+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578630, 1), t: 1 } }
2019-09-04T06:30:30.129+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:30.129+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), appliedOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, appliedWallTime: new Date(1567578630126), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 1), t: 1 }, lastCommittedWall: new Date(1567578630119), lastOpVisible: { ts: Timestamp(1567578630, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:30.129+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 636 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:00.129+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), appliedOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, appliedWallTime: new Date(1567578630126), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, durableWallTime: new Date(1567578625653), appliedOpTime: { ts: Timestamp(1567578625, 2), t: 1 }, appliedWallTime: new Date(1567578625653), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 1), t: 1 }, lastCommittedWall: new Date(1567578630119), lastOpVisible: { ts: Timestamp(1567578630, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:30.129+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:00.129+0000
2019-09-04T06:30:30.129+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:00.129+0000
2019-09-04T06:30:30.130+0000 D2 ASIO [RS] Request 635 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpApplied: { ts: Timestamp(1567578630, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) }
2019-09-04T06:30:30.130+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpApplied: { ts: Timestamp(1567578630, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:30.130+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:30.130+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:30.130+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578625, 3)
2019-09-04T06:30:30.130+0000 D3 REPL [conn208] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn208] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.839+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn235] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn235] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.840+0000
2019-09-04T06:30:30.130+0000 D2 ASIO [RS] Request 636 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) }
2019-09-04T06:30:30.130+0000 D3 REPL [conn245] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:30.130+0000 D3 REPL [conn245] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.767+0000
2019-09-04T06:30:30.130+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:30.130+0000 D3 REPL [conn221] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn221] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.289+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn223] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn223] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.644+0000
2019-09-04T06:30:30.130+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:00.130+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn216] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn216] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.092+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn239] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn239] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.133+0000
2019-09-04T06:30:30.130+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:40.511+0000
2019-09-04T06:30:30.130+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:41.320+0000
2019-09-04T06:30:30.130+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:30.130+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:58.839+0000
2019-09-04T06:30:30.130+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 637 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:40.130+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578630, 3), t: 1 } }
2019-09-04T06:30:30.130+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:00.130+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn247] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn247] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:59.750+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn246] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn246] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:52.054+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn232] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn232] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.661+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn226] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn226] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.682+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn222] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn222] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.822+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn230] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn230] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.677+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn228] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn228] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:56.304+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn244] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn244] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.754+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn207] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn207] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.671+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn238] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn238] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.824+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn243] Got notified of new snapshot: { ts: Timestamp(1567578630, 3), t: 1 }, 2019-09-04T06:30:30.126+0000
2019-09-04T06:30:30.130+0000 D3 REPL [conn243] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.662+0000
2019-09-04T06:30:30.155+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:30.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:30.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:30.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:30.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:30.222+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578630, 3)
2019-09-04T06:30:30.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, DBFA3A0E2709162FD90315371A4EA671BDCE9EE5), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:30.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:30:30.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, DBFA3A0E2709162FD90315371A4EA671BDCE9EE5), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:30.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, DBFA3A0E2709162FD90315371A4EA671BDCE9EE5), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:30.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), opTime: { ts: Timestamp(1567578630, 3), t: 1 }, wallTime: new Date(1567578630126) }
2019-09-04T06:30:30.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, DBFA3A0E2709162FD90315371A4EA671BDCE9EE5), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:30.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:30.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:30.255+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:30.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:30.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:30.355+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:30.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:30.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:30.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:30.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:30.445+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48350 #248 (87 connections now open)
2019-09-04T06:30:30.445+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:30.445+0000 D2 COMMAND [conn248] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:30.445+0000 I NETWORK [conn248] received client metadata from 10.108.2.59:48350 conn248: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:30.445+0000 I COMMAND [conn248] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:30.446+0000 D2 COMMAND [conn248] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578621, 1), signature: { hash: BinData(0, E82E5CEB98BE4FF0E89C81089D4E7D0BAD5C9FA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:30:30.446+0000 D1 REPL [conn248] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578630, 3), t: 1 }
2019-09-04T06:30:30.446+0000 D3 REPL [conn248] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.456+0000
2019-09-04T06:30:30.455+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:30.458+0000 D2 COMMAND [conn225] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578621, 1), signature: { hash: BinData(0, E82E5CEB98BE4FF0E89C81089D4E7D0BAD5C9FA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:30:30.458+0000 D1 REPL [conn225] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578630, 3), t: 1 }
2019-09-04T06:30:30.458+0000 D3 REPL [conn225] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.468+0000
2019-09-04T06:30:30.555+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:30.655+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:30.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:30.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:30.691+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51784 #249 (88 connections now open)
2019-09-04T06:30:30.691+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:30.692+0000 D2 COMMAND [conn249] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:30.692+0000 I NETWORK [conn249] received client metadata from 10.108.2.74:51784 conn249: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:30.692+0000 I COMMAND [conn249] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:30.692+0000 D2 COMMAND [conn249] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578625, 1), signature: { hash: BinData(0, 6E155B13C29F2856B6AEB543BF6F1BCB12BB4DE1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:30:30.692+0000 D1 REPL [conn249] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578630, 3), t: 1 }
2019-09-04T06:30:30.692+0000 D3 REPL [conn249] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.702+0000
2019-09-04T06:30:30.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:30.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:30.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47180 #250 (89 connections now open)
2019-09-04T06:30:30.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:30.743+0000 D2 COMMAND [conn250] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:30.743+0000 I NETWORK [conn250] received client metadata from 10.108.2.52:47180 conn250: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:30.743+0000 I COMMAND [conn250] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:30.743+0000 D2 COMMAND [conn250] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578622, 1), signature: { hash: BinData(0, 57682BE5EAE1D8EF39CBA39B0418328ED3E0A567), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:30:30.743+0000 D1 REPL [conn250] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578630, 3), t: 1 }
2019-09-04T06:30:30.743+0000 D3 REPL [conn250] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.753+0000
2019-09-04T06:30:30.753+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50114 #251 (90 connections now open)
2019-09-04T06:30:30.753+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:30.753+0000 D2 COMMAND [conn251] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:30.753+0000 I NETWORK [conn251] received client metadata from 10.108.2.50:50114 conn251: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:30.753+0000 I COMMAND [conn251] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:30.753+0000 D2 COMMAND [conn251] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578629, 1), signature: { hash: BinData(0, C65C59F62612952069D57CCC38BA6C20E661C70E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:30:30.753+0000 D1 REPL [conn251] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578630, 3), t: 1 }
2019-09-04T06:30:30.753+0000 D3 REPL [conn251] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.763+0000
2019-09-04T06:30:30.756+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:30.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:30.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:30.838+0000 D3 EXECUTOR [replexec-0] Executing a task on
behalf of pool replexec 2019-09-04T06:30:30.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 638) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:30.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 638 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:40.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:30.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:58.839+0000 2019-09-04T06:30:30.838+0000 D2 ASIO [Replication] Request 638 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), opTime: { ts: Timestamp(1567578630, 3), t: 1 }, wallTime: new Date(1567578630126), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) } 2019-09-04T06:30:30.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), opTime: { ts: Timestamp(1567578630, 3), t: 1 }, wallTime: new Date(1567578630126), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:30.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:30.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 638) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), opTime: { ts: Timestamp(1567578630, 3), t: 1 }, wallTime: new Date(1567578630126), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) } 2019-09-04T06:30:30.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:30.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:32.838Z 2019-09-04T06:30:30.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:30:58.839+0000 2019-09-04T06:30:30.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:30.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 639) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:30.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 639 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:40.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:30.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:30:58.839+0000 2019-09-04T06:30:30.839+0000 D2 ASIO [Replication] Request 639 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), opTime: { ts: Timestamp(1567578630, 3), t: 1 }, wallTime: new Date(1567578630126), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) } 2019-09-04T06:30:30.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), opTime: { ts: Timestamp(1567578630, 3), t: 1 }, wallTime: new Date(1567578630126), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:30.839+0000 D3 EXECUTOR 
[replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:30.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 639) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), opTime: { ts: Timestamp(1567578630, 3), t: 1 }, wallTime: new Date(1567578630126), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) } 2019-09-04T06:30:30.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:30.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:30:41.320+0000 2019-09-04T06:30:30.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:30:42.116+0000 2019-09-04T06:30:30.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:30.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:32.839Z 2019-09-04T06:30:30.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:30.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:00.839+0000 2019-09-04T06:30:30.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:00.839+0000 2019-09-04T06:30:30.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:30.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:30.856+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:30.886+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38672 #252 (91 connections now open) 2019-09-04T06:30:30.886+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:30.887+0000 D2 COMMAND [conn252] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:30.887+0000 I NETWORK [conn252] received client metadata from 10.108.2.44:38672 conn252: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:30.887+0000 I COMMAND [conn252] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: 
"NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:30.887+0000 D2 COMMAND [conn252] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578628, 1), signature: { hash: BinData(0, A3D197163D4DC2FD06580272DC5949AA9EB70946), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:30.887+0000 D1 REPL [conn252] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578630, 3), t: 1 } 2019-09-04T06:30:30.887+0000 D3 REPL [conn252] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.897+0000 2019-09-04T06:30:30.915+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52152 #253 (92 connections now open) 2019-09-04T06:30:30.915+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:30.915+0000 D2 COMMAND [conn253] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:30.915+0000 I NETWORK [conn253] received client metadata from 10.108.2.73:52152 conn253: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:30.915+0000 I COMMAND [conn253] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:30.915+0000 D2 COMMAND [conn253] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578622, 1), signature: { hash: BinData(0, 57682BE5EAE1D8EF39CBA39B0418328ED3E0A567), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:30:30.915+0000 D1 REPL [conn253] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578630, 3), t: 1 } 2019-09-04T06:30:30.915+0000 D3 REPL [conn253] 
2019-09-04T06:30:30.915+0000 D3 REPL [conn253] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.925+0000
2019-09-04T06:30:30.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:30.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:30.952+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52140 #254 (93 connections now open)
2019-09-04T06:30:30.952+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:30.952+0000 D2 COMMAND [conn254] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:30.952+0000 I NETWORK [conn254] received client metadata from 10.108.2.58:52140 conn254: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:30.952+0000 I COMMAND [conn254] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:30.952+0000 D2 COMMAND [conn254] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578623, 1), signature: { hash: BinData(0, 3A03B2C9D6C104013865981BCFBDB7C92D5EFDE9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:30:30.952+0000 D1 REPL [conn254] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578630, 3), t: 1 }
2019-09-04T06:30:30.952+0000 D3 REPL [conn254] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.962+0000
2019-09-04T06:30:30.956+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:31.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:31.056+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:31.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, DBFA3A0E2709162FD90315371A4EA671BDCE9EE5), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:31.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:30:31.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, {
replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, DBFA3A0E2709162FD90315371A4EA671BDCE9EE5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:31.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, DBFA3A0E2709162FD90315371A4EA671BDCE9EE5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:31.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), opTime: { ts: Timestamp(1567578630, 3), t: 1 }, wallTime: new Date(1567578630126) } 2019-09-04T06:30:31.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, DBFA3A0E2709162FD90315371A4EA671BDCE9EE5), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:31.128+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:31.128+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:31.128+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578630, 3) 2019-09-04T06:30:31.128+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9291 2019-09-04T06:30:31.128+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:31.128+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:31.128+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9291 2019-09-04T06:30:31.129+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9294 2019-09-04T06:30:31.129+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9294 2019-09-04T06:30:31.129+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578630, 3), t: 1 }({ ts: Timestamp(1567578630, 3), t: 1 }) 2019-09-04T06:30:31.156+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:31.169+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:31.169+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:31.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:31.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:31.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:31.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:31.256+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:31.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:31.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:31.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:31.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:31.356+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:31.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:31.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:31.456+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:31.557+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:31.657+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:31.669+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:31.669+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:31.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:31.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:31.757+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:31.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:31.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:31.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:31.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:31.857+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:31.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:31.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:31.957+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:32.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:32.057+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:32.128+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:32.128+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:32.128+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578630, 3) 2019-09-04T06:30:32.128+0000 D3 STORAGE 
[ReplBatcher] WT begin_transaction for snapshot id 9309 2019-09-04T06:30:32.128+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:32.128+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:32.128+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9309 2019-09-04T06:30:32.129+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9312 2019-09-04T06:30:32.129+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9312 2019-09-04T06:30:32.129+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578630, 3), t: 1 }({ ts: Timestamp(1567578630, 3), t: 1 }) 2019-09-04T06:30:32.158+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:32.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:32.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:32.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, DBFA3A0E2709162FD90315371A4EA671BDCE9EE5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:32.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:30:32.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, DBFA3A0E2709162FD90315371A4EA671BDCE9EE5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:32.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, DBFA3A0E2709162FD90315371A4EA671BDCE9EE5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:32.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), opTime: { ts: Timestamp(1567578630, 3), t: 1 }, wallTime: new Date(1567578630126) } 2019-09-04T06:30:32.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, DBFA3A0E2709162FD90315371A4EA671BDCE9EE5), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:32.235+0000 D4 
STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:32.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:32.258+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:32.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:32.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:32.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:32.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:32.358+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:32.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:32.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:32.458+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:32.558+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:32.658+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:32.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:32.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:32.758+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:32.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:32.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:32.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:32.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 640) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:32.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 640 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:42.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:32.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:00.839+0000 2019-09-04T06:30:32.838+0000 D2 ASIO [Replication] Request 640 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), opTime: { ts: Timestamp(1567578630, 3), t: 1 }, wallTime: new Date(1567578630126), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { 
clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) } 2019-09-04T06:30:32.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), opTime: { ts: Timestamp(1567578630, 3), t: 1 }, wallTime: new Date(1567578630126), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:32.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:32.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 640) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), opTime: { ts: Timestamp(1567578630, 3), t: 1 }, wallTime: new Date(1567578630126), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) } 2019-09-04T06:30:32.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:32.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:34.838Z 2019-09-04T06:30:32.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:00.839+0000 2019-09-04T06:30:32.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:32.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 641) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:32.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 641 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:42.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:32.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:00.839+0000 2019-09-04T06:30:32.839+0000 D2 ASIO [Replication] Request 
641 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), opTime: { ts: Timestamp(1567578630, 3), t: 1 }, wallTime: new Date(1567578630126), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) } 2019-09-04T06:30:32.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), opTime: { ts: Timestamp(1567578630, 3), t: 1 }, wallTime: new Date(1567578630126), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:32.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:32.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 641) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), opTime: { ts: Timestamp(1567578630, 3), t: 1 }, wallTime: new Date(1567578630126), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578630, 3) } 2019-09-04T06:30:32.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:32.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:30:42.116+0000 2019-09-04T06:30:32.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:30:43.563+0000 2019-09-04T06:30:32.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response 
good for member _id:MemberId(0)
2019-09-04T06:30:32.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:34.839Z
2019-09-04T06:30:32.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:32.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:02.839+0000
2019-09-04T06:30:32.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:02.839+0000
2019-09-04T06:30:32.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:32.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:32.858+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:32.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:32.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:32.959+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:33.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:33.059+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:33.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, DBFA3A0E2709162FD90315371A4EA671BDCE9EE5), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:33.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:30:33.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, DBFA3A0E2709162FD90315371A4EA671BDCE9EE5), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:33.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, DBFA3A0E2709162FD90315371A4EA671BDCE9EE5), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:33.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), opTime: { ts: Timestamp(1567578630, 3), t: 1 }, wallTime: new Date(1567578630126) }
2019-09-04T06:30:33.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578630, 3), signature: { hash: BinData(0, DBFA3A0E2709162FD90315371A4EA671BDCE9EE5), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
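replSetHeartbeat is an internal command the configrs members exchange every two seconds; the request/response pairs above (state: 1 = PRIMARY, state: 2 = SECONDARY, syncingTo naming the sync source) are exactly what replSetGetStatus summarizes for an operator. A small pymongo sketch for watching the same state from outside, using a host name taken from this log (exact fields vary a little by server version):

    from pymongo import MongoClient

    # Connect straight to one member; replSetGetStatus reflects the view
    # this node has assembled from the heartbeats logged above.
    client = MongoClient("cmodb803.togewa.com", 27019)
    status = client.admin.command("replSetGetStatus")
    print(status["set"], "term", status["term"])
    for m in status["members"]:
        # stateStr is PRIMARY/SECONDARY; syncingTo mirrors the heartbeat field.
        print(m["name"], m["stateStr"], m.get("syncingTo", "-"), m["optime"]["ts"])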
2019-09-04T06:30:33.128+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:33.128+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:33.128+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578630, 3)
2019-09-04T06:30:33.128+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9326
2019-09-04T06:30:33.129+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:33.129+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:33.129+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9326
2019-09-04T06:30:33.129+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9329
2019-09-04T06:30:33.129+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9329
2019-09-04T06:30:33.129+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578630, 3), t: 1 }({ ts: Timestamp(1567578630, 3), t: 1 })
2019-09-04T06:30:33.159+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:33.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:33.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:33.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
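The FlowControlRefresher lines are MongoDB 4.2's flow-control engine refreshing its ticket pool once a second; a pool of 1000000000 tickets means the majority commit point is keeping up and no write throttling is in effect. The same state is surfaced under serverStatus.flowControl; a sketch (field names as I understand 4.2's output, so treat them as assumptions):

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    fc = client.admin.command("serverStatus")["flowControl"]
    # isLagged flips when majority-commit lag exceeds the configured target;
    # targetRateLimit then caps the write tickets handed out per second.
    print(fc["enabled"], fc["isLagged"], fc["targetRateLimit"])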
2019-09-04T06:30:33.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:33.259+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:33.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:33.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:33.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:33.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:33.359+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:33.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:33.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:33.443+0000 D2 ASIO [RS] Request 637 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578633, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578633438), o: { $v: 1, $set: { ping: new Date(1567578633434) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpApplied: { ts: Timestamp(1567578633, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578633, 1) }
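Request 637 above is the oplog fetcher's cursor on the sync source returning a batch with a single op: an "u" (update) on config.lockpings. The same firehose can be watched from a driver with a tailable-await cursor on local.oplog.rs, which mirrors the nextBatch/getMore cycle seen here; a minimal sketch against a host from this log:

    from pymongo import MongoClient, CursorType

    client = MongoClient("cmodb803.togewa.com", 27019)
    oplog = client.local["oplog.rs"]

    # Start just past the newest entry, then block server-side (await)
    # until new operations replicate in.
    last = oplog.find_one(sort=[("$natural", -1)])
    for op in oplog.find({"ts": {"$gt": last["ts"]}},
                         cursor_type=CursorType.TAILABLE_AWAIT):
        print(op["ts"], op["op"], op["ns"], op.get("o"))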
2019-09-04T06:30:33.443+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578633, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578633438), o: { $v: 1, $set: { ping: new Date(1567578633434) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpApplied: { ts: Timestamp(1567578633, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578630, 3), $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578633, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:33.443+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:33.443+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578633, 1) and ending at ts: Timestamp(1567578633, 1)
2019-09-04T06:30:33.443+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:43.563+0000
2019-09-04T06:30:33.443+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:43.560+0000
2019-09-04T06:30:33.443+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:33.443+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578633, 1), t: 1 }
2019-09-04T06:30:33.443+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:02.839+0000
2019-09-04T06:30:33.443+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:33.443+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:33.443+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578630, 3)
2019-09-04T06:30:33.443+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9337
2019-09-04T06:30:33.443+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:33.443+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:33.443+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9337
2019-09-04T06:30:33.443+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:33.443+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:30:33.443+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:33.443+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578630, 3)
2019-09-04T06:30:33.443+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9340
2019-09-04T06:30:33.443+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578633, 1) }
2019-09-04T06:30:33.443+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:33.443+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:33.443+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9340
2019-09-04T06:30:33.443+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9330
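Setting the oplog truncate after point to Timestamp(1567578633, 1) before applying the batch (and back to Timestamp(0, 0) once it is durable, just below) is how the applier marks a crash-recovery boundary: on restart, anything after a non-zero truncate point is discarded and refetched. The marker lives in local.replset.oplogTruncateAfterPoint; reading it for inspection is harmless, writing it is not. A sketch (document shape assumed from the collection's purpose):

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    # Internal bookkeeping: expect a non-zero timestamp only inside the
    # window where a batch is being applied, as the two REPL lines show.
    print(client.local["replset.oplogTruncateAfterPoint"].find_one())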
snapshot id 9330 2019-09-04T06:30:33.443+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9343 2019-09-04T06:30:33.443+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9343 2019-09-04T06:30:33.443+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:33.443+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 9345 2019-09-04T06:30:33.443+0000 D4 STORAGE [repl-writer-worker-7] inserting record with timestamp Timestamp(1567578633, 1) 2019-09-04T06:30:33.443+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578633, 1) 2019-09-04T06:30:33.443+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 9345 2019-09-04T06:30:33.443+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:33.443+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:30:33.443+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9344 2019-09-04T06:30:33.443+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9344 2019-09-04T06:30:33.443+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9347 2019-09-04T06:30:33.443+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9347 2019-09-04T06:30:33.443+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578633, 1), t: 1 }({ ts: Timestamp(1567578633, 1), t: 1 }) 2019-09-04T06:30:33.443+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578633, 1) 2019-09-04T06:30:33.443+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9348 2019-09-04T06:30:33.443+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578633, 1) } } ] } sort: {} projection: {} 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578633, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578633, 1) || First: notFirst: full path: ts 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
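
Note: the rsSync-0 sequence above is the standard batch-durability dance on a 4.2 secondary: the oplog truncate-after-point is set to the batch's last optime before the writer threads run, reset to Timestamp(0, 0) once the batch is safely in the oplog, and minvalid is advanced past the batch. Both markers live in internal collections in the local database, so they can be inspected read-only from a shell; a minimal sketch, assuming the 4.2-era layout where they are kept in local.replset.oplogTruncateAfterPoint and local.replset.minvalid:

    // Read-only peek at the recovery markers toggled in the log above.
    // Between batches, oplogTruncateAfterPoint should read Timestamp(0, 0).
    var local = db.getSiblingDB("local");
    printjson(local.replset.oplogTruncateAfterPoint.findOne());
    printjson(local.replset.minvalid.findOne());
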
2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578633, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578633, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578633, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
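
Note: the D5 QUERY lines here all concern the same internal lookup: replication reads local.replset.minvalid with an $or over t and ts, the only index on that collection is _id_, so the planner rates zero indexed solutions for every branch and falls back to a COLLSCAN, which is harmless on a single-document collection. A sketch that reproduces the same plan shape from a shell (predicate and timestamp literal copied from the log entries above):

    // Reproduce the planner's decision for the minvalid read: no usable
    // index over t/ts exists, so expect COLLSCAN as the winning plan.
    db.getSiblingDB("local").replset.minvalid.find({
      $or: [
        { t: { $lt: 1 } },
        { t: 1, ts: { $lt: Timestamp(1567578633, 1) } }
      ]
    }).explain("queryPlanner")
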
2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578633, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:33.444+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9348 2019-09-04T06:30:33.444+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:33.444+0000 D3 STORAGE [repl-writer-worker-11] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:33.444+0000 D3 REPL [repl-writer-worker-11] applying op: { ts: Timestamp(1567578633, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578633438), o: { $v: 1, $set: { ping: new Date(1567578633434) } } }, oplog application mode: Secondary 2019-09-04T06:30:33.444+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578633, 1) 2019-09-04T06:30:33.444+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 9350 2019-09-04T06:30:33.444+0000 D2 QUERY [repl-writer-worker-11] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" } 2019-09-04T06:30:33.444+0000 D4 WRITE [repl-writer-worker-11] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:30:33.444+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 9350 2019-09-04T06:30:33.444+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:33.444+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578633, 1), t: 1 }({ ts: Timestamp(1567578633, 1), t: 1 }) 2019-09-04T06:30:33.444+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578633, 1) 2019-09-04T06:30:33.444+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9349 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:33.444+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:33.444+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:33.444+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9349 2019-09-04T06:30:33.444+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578633, 1) 2019-09-04T06:30:33.444+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:33.444+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9353 2019-09-04T06:30:33.444+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), appliedOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, appliedWallTime: new Date(1567578630126), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), appliedOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, appliedWallTime: new Date(1567578633438), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), appliedOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, appliedWallTime: new Date(1567578630126), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:33.444+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 642 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:03.444+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), appliedOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, appliedWallTime: new Date(1567578630126), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), appliedOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, appliedWallTime: new Date(1567578633438), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), appliedOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, appliedWallTime: new Date(1567578630126), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578630, 3), t: 1 }, lastCommittedWall: new Date(1567578630126), lastOpVisible: { ts: Timestamp(1567578630, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:33.444+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:03.444+0000 2019-09-04T06:30:33.444+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9353 2019-09-04T06:30:33.445+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578633, 1), t: 1 }({ ts: Timestamp(1567578633, 1), t: 1 }) 2019-09-04T06:30:33.445+0000 D2 ASIO [RS] Request 642 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578633, 1), $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578633, 1) } 2019-09-04T06:30:33.445+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578633, 1), $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578633, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:33.445+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:33.445+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:03.445+0000 2019-09-04T06:30:33.445+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578633, 1), t: 1 } 2019-09-04T06:30:33.445+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 643 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:43.445+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578630, 3), t: 1 } } 2019-09-04T06:30:33.445+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:03.445+0000 2019-09-04T06:30:33.445+0000 D2 ASIO [RS] Request 643 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpApplied: { ts: Timestamp(1567578633, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578633, 1), $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578633, 1) } 2019-09-04T06:30:33.445+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new 
Date(1567578633438), lastOpApplied: { ts: Timestamp(1567578633, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578633, 1), $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578633, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:33.445+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:33.445+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:30:33.445+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.445+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.445+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578628, 1) 2019-09-04T06:30:33.445+0000 D3 REPL [conn252] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.445+0000 D3 REPL [conn252] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.897+0000 2019-09-04T06:30:33.445+0000 D3 REPL [conn216] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.445+0000 D3 REPL [conn216] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.092+0000 2019-09-04T06:30:33.445+0000 D3 REPL [conn223] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.445+0000 D3 REPL [conn223] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.644+0000 2019-09-04T06:30:33.445+0000 D3 REPL [conn243] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.445+0000 D3 REPL [conn243] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.662+0000 2019-09-04T06:30:33.445+0000 D3 REPL [conn246] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.445+0000 D3 REPL [conn246] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:52.054+0000 2019-09-04T06:30:33.445+0000 D3 REPL [conn226] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.445+0000 D3 REPL [conn226] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.682+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn230] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn230] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.677+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn244] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn244] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.754+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn238] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn238] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:30:45.824+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn225] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn225] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.468+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn250] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn250] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.753+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn251] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn251] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.763+0000 2019-09-04T06:30:33.446+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:43.560+0000 2019-09-04T06:30:33.446+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:44.900+0000 2019-09-04T06:30:33.446+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:33.446+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:02.839+0000 2019-09-04T06:30:33.446+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 644 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:43.446+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578633, 1), t: 1 } } 2019-09-04T06:30:33.446+0000 D3 REPL [conn221] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn221] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.289+0000 2019-09-04T06:30:33.446+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:03.445+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn245] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn245] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.767+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn208] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn208] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.839+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn253] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn253] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.925+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn254] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn254] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.962+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn247] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn247] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:59.750+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn232] Got notified of new snapshot: { ts: 
Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn232] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.661+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn222] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn222] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.822+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn228] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn228] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:56.304+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn207] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:30:33.446+0000 D3 REPL [conn207] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.671+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn239] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn239] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.133+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn248] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn248] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.456+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn249] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn249] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.702+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn235] Got notified of new snapshot: { ts: Timestamp(1567578633, 1), t: 1 }, 2019-09-04T06:30:33.438+0000 2019-09-04T06:30:33.446+0000 D3 REPL [conn235] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.840+0000 2019-09-04T06:30:33.446+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:33.446+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), appliedOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, appliedWallTime: new Date(1567578630126), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), appliedOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, appliedWallTime: new Date(1567578633438), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), appliedOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, appliedWallTime: new Date(1567578630126), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:33.446+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 645 -- 
target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:03.446+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), appliedOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, appliedWallTime: new Date(1567578630126), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), appliedOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, appliedWallTime: new Date(1567578633438), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, durableWallTime: new Date(1567578630126), appliedOpTime: { ts: Timestamp(1567578630, 3), t: 1 }, appliedWallTime: new Date(1567578630126), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:33.446+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:03.445+0000 2019-09-04T06:30:33.447+0000 D2 ASIO [RS] Request 645 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578633, 1), $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578633, 1) } 2019-09-04T06:30:33.447+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578633, 1), $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578633, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:33.447+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:33.447+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:03.445+0000 2019-09-04T06:30:33.459+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:33.512+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:30:33.512+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:30:33.512+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:33.512+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: 
"admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:30:33.543+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578633, 1) 2019-09-04T06:30:33.559+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:33.660+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:33.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:33.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:33.760+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:33.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:33.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:33.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:33.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:33.860+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:33.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:33.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:33.960+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:34.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:34.060+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:34.160+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:34.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:34.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:34.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 77C4D29E0E956B786CF08FE4C74408CA26F79ED1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:34.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:30:34.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 77C4D29E0E956B786CF08FE4C74408CA26F79ED1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:34.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 77C4D29E0E956B786CF08FE4C74408CA26F79ED1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:34.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, 
state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), opTime: { ts: Timestamp(1567578633, 1), t: 1 }, wallTime: new Date(1567578633438) } 2019-09-04T06:30:34.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 77C4D29E0E956B786CF08FE4C74408CA26F79ED1), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:34.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:34.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:34.260+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:34.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:34.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:34.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:34.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:34.360+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:34.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:34.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:34.443+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:34.443+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:34.443+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578633, 1) 2019-09-04T06:30:34.443+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9370 2019-09-04T06:30:34.443+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:34.443+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:34.443+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9370 2019-09-04T06:30:34.445+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9373 2019-09-04T06:30:34.445+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9373 2019-09-04T06:30:34.445+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578633, 1), t: 1 }({ ts: Timestamp(1567578633, 1), t: 1 }) 2019-09-04T06:30:34.461+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:34.561+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:34.661+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
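
Note: the REPL_HB exchange above (request from cmodb804.togewa.com:27019, response carrying state: 2, i.e. SECONDARY, plus durable and applied optimes) is the routine heartbeat traffic between configrs members, with the next round scheduled about two seconds out. The same picture, aggregated across the set, is available to an operator through replSetGetStatus; a minimal sketch:

    // Summarize what the heartbeats above exchange: per-member state,
    // last optime, and sync source, as seen from this node.
    rs.status().members.forEach(function (m) {
      print(m.name, m.stateStr, tojson(m.optime), m.syncingTo || "");
    });
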
2019-09-04T06:30:34.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:34.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:34.761+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:34.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:34.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:34.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:34.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 646) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:34.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 646 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:44.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:34.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:02.839+0000 2019-09-04T06:30:34.838+0000 D2 ASIO [Replication] Request 646 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), opTime: { ts: Timestamp(1567578633, 1), t: 1 }, wallTime: new Date(1567578633438), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578633, 1), $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578633, 1) } 2019-09-04T06:30:34.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), opTime: { ts: Timestamp(1567578633, 1), t: 1 }, wallTime: new Date(1567578633438), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578633, 1), $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578633, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:34.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:34.838+0000 D2 REPL_HB 
[replexec-3] Received response to heartbeat (requestId: 646) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), opTime: { ts: Timestamp(1567578633, 1), t: 1 }, wallTime: new Date(1567578633438), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578633, 1), $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578633, 1) } 2019-09-04T06:30:34.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:34.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:36.838Z 2019-09-04T06:30:34.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:02.839+0000 2019-09-04T06:30:34.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:34.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 647) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:34.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 647 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:44.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:34.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:02.839+0000 2019-09-04T06:30:34.839+0000 D2 ASIO [Replication] Request 647 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), opTime: { ts: Timestamp(1567578633, 1), t: 1 }, wallTime: new Date(1567578633438), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578633, 1), $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578633, 1) } 2019-09-04T06:30:34.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), opTime: { ts: Timestamp(1567578633, 1), t: 1 }, wallTime: new Date(1567578633438), $replData: { 
term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578633, 1), $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578633, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:34.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:34.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 647) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), opTime: { ts: Timestamp(1567578633, 1), t: 1 }, wallTime: new Date(1567578633438), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578633, 1), $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578633, 1) } 2019-09-04T06:30:34.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:34.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:30:44.900+0000 2019-09-04T06:30:34.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:30:45.277+0000 2019-09-04T06:30:34.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:34.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:36.839Z 2019-09-04T06:30:34.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:34.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:04.839+0000 2019-09-04T06:30:34.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:04.839+0000 2019-09-04T06:30:34.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:34.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:34.861+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:34.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:34.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:34.961+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:35.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 
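
Note: the ELECTION lines show the liveness mechanism at work: each successful heartbeat from the primary (cmodb802, state: 1 in the response above) cancels the pending election timeout callback and schedules a fresh one roughly ten seconds ahead, so an election can only begin if the primary stays silent for the whole window. That window is the set's electionTimeoutMillis (10000 ms by default), readable from the current configuration:

    // The timeout behind the 'Scheduling election timeout callback'
    // entries above; ~10 s ahead matches the 10000 ms default.
    rs.conf().settings.electionTimeoutMillis
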
2019-09-04T06:30:35.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:35.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:35.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 77C4D29E0E956B786CF08FE4C74408CA26F79ED1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:35.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:35.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 77C4D29E0E956B786CF08FE4C74408CA26F79ED1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:35.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 77C4D29E0E956B786CF08FE4C74408CA26F79ED1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:35.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), opTime: { ts: Timestamp(1567578633, 1), t: 1 }, wallTime: new Date(1567578633438) } 2019-09-04T06:30:35.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 77C4D29E0E956B786CF08FE4C74408CA26F79ED1), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:35.061+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:35.088+0000 D2 ASIO [RS] Request 644 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578635, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578635079), o: { $v: 1, $set: { ping: new Date(1567578635076), up: 2535 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpApplied: { ts: Timestamp(1567578635, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578633, 1), 
$clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578635, 1) } 2019-09-04T06:30:35.088+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578635, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578635079), o: { $v: 1, $set: { ping: new Date(1567578635076), up: 2535 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpApplied: { ts: Timestamp(1567578635, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578633, 1), $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578635, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:35.088+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:35.088+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578635, 1) and ending at ts: Timestamp(1567578635, 1) 2019-09-04T06:30:35.088+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:45.277+0000 2019-09-04T06:30:35.088+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:45.410+0000 2019-09-04T06:30:35.088+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:35.088+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578635, 1), t: 1 } 2019-09-04T06:30:35.088+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:35.088+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:35.088+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578633, 1) 2019-09-04T06:30:35.088+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9383 2019-09-04T06:30:35.088+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:35.088+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:35.088+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9383 2019-09-04T06:30:35.088+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:35.088+0000 D2 REPL [rsSync-0] replication batch 
size is 1
2019-09-04T06:30:35.088+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:04.839+0000
2019-09-04T06:30:35.088+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:35.088+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578635, 1) }
2019-09-04T06:30:35.088+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578633, 1)
2019-09-04T06:30:35.088+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9386
2019-09-04T06:30:35.088+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:35.088+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:35.088+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9386
2019-09-04T06:30:35.088+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9374
2019-09-04T06:30:35.088+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9374
2019-09-04T06:30:35.088+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9389
2019-09-04T06:30:35.088+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9389
2019-09-04T06:30:35.088+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:35.088+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 9391
2019-09-04T06:30:35.088+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578635, 1)
2019-09-04T06:30:35.088+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578635, 1)
2019-09-04T06:30:35.088+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 9391
2019-09-04T06:30:35.088+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:35.088+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:30:35.088+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9390
2019-09-04T06:30:35.088+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9390
2019-09-04T06:30:35.088+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9393
2019-09-04T06:30:35.088+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9393
2019-09-04T06:30:35.088+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578635, 1), t: 1 }({ ts: Timestamp(1567578635, 1), t: 1 })
2019-09-04T06:30:35.088+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578635, 1)
2019-09-04T06:30:35.088+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9394
2019-09-04T06:30:35.088+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578635, 1) } } ] } sort: {} projection: {}
2019-09-04T06:30:35.088+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:35.088+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:30:35.088+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578635, 1) Sort: {} Proj: {} =============================
2019-09-04T06:30:35.088+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578635, 1) || First: notFirst: full path: ts
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578635, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578635, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578635, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578635, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:35.089+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9394
2019-09-04T06:30:35.089+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:35.089+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:35.089+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578635, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578635079), o: { $v: 1, $set: { ping: new Date(1567578635076), up: 2535 } } }, oplog application mode: Secondary
2019-09-04T06:30:35.089+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578635, 1)
2019-09-04T06:30:35.089+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 9396
2019-09-04T06:30:35.089+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:30:35.089+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:30:35.089+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 9396
2019-09-04T06:30:35.089+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:35.089+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578635, 1), t: 1 }({ ts: Timestamp(1567578635, 1), t: 1 })
2019-09-04T06:30:35.089+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578635, 1)
2019-09-04T06:30:35.089+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9395
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:35.089+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:35.089+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:35.089+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9395
2019-09-04T06:30:35.089+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578635, 1)
2019-09-04T06:30:35.089+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9399
2019-09-04T06:30:35.089+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9399
2019-09-04T06:30:35.089+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578635, 1), t: 1 }({ ts: Timestamp(1567578635, 1), t: 1 })
2019-09-04T06:30:35.089+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:35.089+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), appliedOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, appliedWallTime: new Date(1567578633438), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), appliedOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, appliedWallTime: new Date(1567578633438), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:35.089+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 648 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:05.089+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), appliedOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, appliedWallTime: new Date(1567578633438), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), appliedOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, appliedWallTime: new Date(1567578633438), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:35.089+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:05.089+0000
2019-09-04T06:30:35.089+0000 D2 ASIO [RS] Request 648 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578633, 1), $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578635, 1) }
2019-09-04T06:30:35.089+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578633, 1), $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578635, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:35.089+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:35.089+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:05.089+0000
2019-09-04T06:30:35.090+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578635, 1), t: 1 }
2019-09-04T06:30:35.090+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 649 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:45.090+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578633, 1), t: 1 } }
2019-09-04T06:30:35.090+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:05.089+0000
2019-09-04T06:30:35.091+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:30:35.091+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:35.091+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), appliedOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, appliedWallTime: new Date(1567578633438), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), appliedOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, appliedWallTime: new Date(1567578633438), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:35.091+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 650 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:05.091+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), appliedOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, appliedWallTime: new Date(1567578633438), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, durableWallTime: new Date(1567578633438), appliedOpTime: { ts: Timestamp(1567578633, 1), t: 1 }, appliedWallTime: new Date(1567578633438), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:35.091+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:05.089+0000
2019-09-04T06:30:35.091+0000 D2 ASIO [RS] Request 650 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578633, 1), $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578635, 1) }
2019-09-04T06:30:35.091+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578633, 1), t: 1 }, lastCommittedWall: new Date(1567578633438), lastOpVisible: { ts: Timestamp(1567578633, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578633, 1), $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578635, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:35.091+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:35.091+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:05.089+0000
2019-09-04T06:30:35.092+0000 D2 ASIO [RS] Request 649 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578635, 1), t: 1 }, lastCommittedWall: new Date(1567578635079), lastOpVisible: { ts: Timestamp(1567578635, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578635, 1), t: 1 }, lastCommittedWall: new Date(1567578635079), lastOpApplied: { ts: Timestamp(1567578635, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578635, 1), $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578635, 1) }
2019-09-04T06:30:35.092+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578635, 1), t: 1 }, lastCommittedWall: new Date(1567578635079), lastOpVisible: { ts: Timestamp(1567578635, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578635, 1), t: 1 }, lastCommittedWall: new Date(1567578635079), lastOpApplied: { ts: Timestamp(1567578635, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578635, 1), $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578635, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:35.092+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:35.092+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:35.092+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578630, 1)
2019-09-04T06:30:35.092+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:45.410+0000
2019-09-04T06:30:35.092+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:45.684+0000
2019-09-04T06:30:35.092+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:35.092+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 651 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:45.092+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578635, 1), t: 1 } }
2019-09-04T06:30:35.092+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:04.839+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn223] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn223] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.644+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn226] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn226] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.682+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn244] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn244] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.754+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn238] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn238] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.824+0000
2019-09-04T06:30:35.092+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:05.089+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn228] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn228] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:56.304+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn245] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn245] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.767+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn253] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn253] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.925+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn247] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn247] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:59.750+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn222] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn222] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.822+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn207] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn207] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.671+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn248] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn248] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.456+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn235] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn235] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.840+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn252] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn252] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.897+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn216] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn216] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.092+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn243] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn243] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.662+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn246] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn246] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:52.054+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn230] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn230] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.677+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn225] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn225] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.468+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn221] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn221] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.289+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn208] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn208] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.839+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn232] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn232] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.661+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn250] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn250] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.753+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn239] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn239] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.133+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn249] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn249] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.702+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn251] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn251] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.763+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn254] Got notified of new snapshot: { ts: Timestamp(1567578635, 1), t: 1 }, 2019-09-04T06:30:35.079+0000
2019-09-04T06:30:35.092+0000 D3 REPL [conn254] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.962+0000
2019-09-04T06:30:35.135+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:35.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:35.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:35.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:35.161+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:35.188+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578635, 1)
2019-09-04T06:30:35.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:35.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:35.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:35.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:35.262+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:35.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:35.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:35.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:35.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:35.362+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:35.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:35.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:35.462+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:35.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:35.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:35.562+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:35.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:35.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:35.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:35.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:35.662+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:35.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:35.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:35.762+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:35.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:35.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:35.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:35.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:35.862+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:35.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:35.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:35.962+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:36.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:36.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:36.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:36.062+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:36.088+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:36.088+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:36.088+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578635, 1)
2019-09-04T06:30:36.088+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9419
2019-09-04T06:30:36.088+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:36.088+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:36.088+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9419
2019-09-04T06:30:36.089+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9422
2019-09-04T06:30:36.089+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9422
2019-09-04T06:30:36.089+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578635, 1), t: 1 }({ ts: Timestamp(1567578635, 1), t: 1 })
2019-09-04T06:30:36.135+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:36.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:36.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:36.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:36.163+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:36.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:36.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:36.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 561C59678D5E2EFED3FB8B2F66E84DD71225B31F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:36.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:30:36.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 561C59678D5E2EFED3FB8B2F66E84DD71225B31F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:36.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 561C59678D5E2EFED3FB8B2F66E84DD71225B31F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:36.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), opTime: { ts: Timestamp(1567578635, 1), t: 1 }, wallTime: new Date(1567578635079) }
2019-09-04T06:30:36.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 561C59678D5E2EFED3FB8B2F66E84DD71225B31F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:36.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:36.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:36.263+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:36.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:36.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:36.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:36.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:36.363+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:36.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:36.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:36.463+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:36.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:36.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:36.563+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:36.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:36.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:36.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
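Everything from the replSetUpdatePosition round-trips down to the waitUntilOpTime wakeups above is steady-state replication on an idle set: this secondary reports its applied/durable optimes upstream to cmodb804, tails the primary's oplog with awaiting getMore calls (maxTimeMS: 5000) that come back with empty batches, and wakes every waiting reader when the new snapshot { ts: Timestamp(1567578635, 1), t: 1 } becomes visible; the isMaster commands repeating roughly every 500 ms on conn5/6/13/17/18/19/26 look like routine topology monitoring by connected clients. A minimal sketch, under the same hypothetical connection assumptions as the previous snippet, showing how this gossip surfaces through a supported command rather than D2/D3 log lines:

    # Sketch: replSetGetStatus reports the same per-member optimes that the
    # replSetUpdatePosition payloads above carry between members.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # member["optime"] is the { ts, t } document seen in the log payloads.
        print(member["name"], member["stateStr"], member["optime"])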
2019-09-04T06:30:36.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:36.663+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:36.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:36.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:36.763+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:36.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:36.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:36.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:36.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 652) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:36.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 652 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:46.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:36.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:04.839+0000
2019-09-04T06:30:36.838+0000 D2 ASIO [Replication] Request 652 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), opTime: { ts: Timestamp(1567578635, 1), t: 1 }, wallTime: new Date(1567578635079), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578635, 1), t: 1 }, lastCommittedWall: new Date(1567578635079), lastOpVisible: { ts: Timestamp(1567578635, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578635, 1), $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578635, 1) }
2019-09-04T06:30:36.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), opTime: { ts: Timestamp(1567578635, 1), t: 1 }, wallTime: new Date(1567578635079), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578635, 1), t: 1 }, lastCommittedWall: new Date(1567578635079), lastOpVisible: { ts: Timestamp(1567578635, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578635, 1), $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578635, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:36.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:36.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 652) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), opTime: { ts: Timestamp(1567578635, 1), t: 1 }, wallTime: new Date(1567578635079), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578635, 1), t: 1 }, lastCommittedWall: new Date(1567578635079), lastOpVisible: { ts: Timestamp(1567578635, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578635, 1), $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578635, 1) }
2019-09-04T06:30:36.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:30:36.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:38.838Z
2019-09-04T06:30:36.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:04.839+0000
2019-09-04T06:30:36.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:36.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 653) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:36.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 653 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:46.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:36.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:04.839+0000
2019-09-04T06:30:36.839+0000 D2 ASIO [Replication] Request 653 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), opTime: { ts: Timestamp(1567578635, 1), t: 1 }, wallTime: new Date(1567578635079), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578635, 1), t: 1 }, lastCommittedWall: new Date(1567578635079), lastOpVisible: { ts: Timestamp(1567578635, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578635, 1), $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578635, 1) }
2019-09-04T06:30:36.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), opTime: { ts: Timestamp(1567578635, 1), t: 1 }, wallTime: new Date(1567578635079), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578635, 1), t: 1 }, lastCommittedWall: new Date(1567578635079), lastOpVisible: { ts: Timestamp(1567578635, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578635, 1), $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578635, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:30:36.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:36.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 653) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), opTime: { ts: Timestamp(1567578635, 1), t: 1 }, wallTime: new Date(1567578635079), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578635, 1), t: 1 }, lastCommittedWall: new Date(1567578635079), lastOpVisible: { ts: Timestamp(1567578635, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578635, 1), $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578635, 1) }
2019-09-04T06:30:36.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:30:36.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:30:45.684+0000
2019-09-04T06:30:36.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:30:48.085+0000
2019-09-04T06:30:36.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:30:36.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:38.839Z
2019-09-04T06:30:36.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:36.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:06.839+0000
2019-09-04T06:30:36.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:06.839+0000
2019-09-04T06:30:36.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:36.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:36.863+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:36.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:36.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:36.963+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:36.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:36.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:37.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:37.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:37.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:37.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 561C59678D5E2EFED3FB8B2F66E84DD71225B31F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:37.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:30:37.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 561C59678D5E2EFED3FB8B2F66E84DD71225B31F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:37.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 561C59678D5E2EFED3FB8B2F66E84DD71225B31F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:37.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), opTime: { ts: Timestamp(1567578635, 1), t: 1 }, wallTime: new Date(1567578635079) }
2019-09-04T06:30:37.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 561C59678D5E2EFED3FB8B2F66E84DD71225B31F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:37.064+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:37.089+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:37.089+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:37.089+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578635, 1)
2019-09-04T06:30:37.089+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9443
2019-09-04T06:30:37.089+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:37.089+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:37.089+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9443
2019-09-04T06:30:37.089+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9446
2019-09-04T06:30:37.089+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9446
2019-09-04T06:30:37.089+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578635, 1), t: 1 }({ ts: Timestamp(1567578635, 1), t: 1 })
2019-09-04T06:30:37.135+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:37.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:37.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:37.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:37.164+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:37.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:37.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:37.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:37.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:37.264+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:37.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:37.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:37.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:37.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:37.364+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:37.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:37.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:37.464+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:37.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:37.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:37.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:37.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:37.564+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:37.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:37.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:37.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:37.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:37.664+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:37.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:37.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:37.765+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:37.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:37.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:37.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:37.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:37.865+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:37.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:37.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:37.965+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:37.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:37.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:38.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:38.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:38.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:38.065+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:38.089+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:38.089+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:38.089+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578635, 1)
2019-09-04T06:30:38.089+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9466
2019-09-04T06:30:38.089+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:38.089+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:38.089+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9466
2019-09-04T06:30:38.090+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9469
2019-09-04T06:30:38.090+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9469
2019-09-04T06:30:38.090+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578635, 1), t: 1 }({ ts: Timestamp(1567578635, 1), t: 1 })
2019-09-04T06:30:38.135+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:38.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:38.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:38.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:38.165+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:38.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:38.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:38.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 561C59678D5E2EFED3FB8B2F66E84DD71225B31F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:38.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:30:38.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 561C59678D5E2EFED3FB8B2F66E84DD71225B31F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:38.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 561C59678D5E2EFED3FB8B2F66E84DD71225B31F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:38.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), opTime: { ts: Timestamp(1567578635, 1), t: 1 }, wallTime: new Date(1567578635079) }
2019-09-04T06:30:38.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578635, 1), signature: { hash: BinData(0, 561C59678D5E2EFED3FB8B2F66E84DD71225B31F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:38.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:38.235+0000 D4 - [FlowControlRefresher] Refreshing tickets.
Before: 1000000000 Now: 1000000000 2019-09-04T06:30:38.265+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:38.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:38.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:38.307+0000 D2 ASIO [RS] Request 651 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578638, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578638306), o: { $v: 1, $set: { ping: new Date(1567578638306) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578635, 1), t: 1 }, lastCommittedWall: new Date(1567578635079), lastOpVisible: { ts: Timestamp(1567578635, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578635, 1), t: 1 }, lastCommittedWall: new Date(1567578635079), lastOpApplied: { ts: Timestamp(1567578638, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578635, 1), $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 1) } 2019-09-04T06:30:38.307+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578638, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578638306), o: { $v: 1, $set: { ping: new Date(1567578638306) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578635, 1), t: 1 }, lastCommittedWall: new Date(1567578635079), lastOpVisible: { ts: Timestamp(1567578635, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578635, 1), t: 1 }, lastCommittedWall: new Date(1567578635079), lastOpApplied: { ts: Timestamp(1567578638, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578635, 1), $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:38.307+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:38.307+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578638, 1) and ending at ts: Timestamp(1567578638, 1) 2019-09-04T06:30:38.308+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:48.085+0000 2019-09-04T06:30:38.308+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:48.694+0000 2019-09-04T06:30:38.308+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of 
pool replexec 2019-09-04T06:30:38.308+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578638, 1), t: 1 } 2019-09-04T06:30:38.308+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:38.308+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:38.308+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578635, 1) 2019-09-04T06:30:38.308+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9478 2019-09-04T06:30:38.308+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:38.308+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:38.308+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9478 2019-09-04T06:30:38.308+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:38.308+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:38.308+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:30:38.308+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578635, 1) 2019-09-04T06:30:38.308+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9481 2019-09-04T06:30:38.308+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578638, 1) } 2019-09-04T06:30:38.308+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:38.308+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:38.308+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9481 2019-09-04T06:30:38.308+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:06.839+0000 2019-09-04T06:30:38.308+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9470 2019-09-04T06:30:38.308+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9470 2019-09-04T06:30:38.308+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9484 2019-09-04T06:30:38.308+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9484 2019-09-04T06:30:38.308+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:38.308+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 9486 2019-09-04T06:30:38.308+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578638, 1) 2019-09-04T06:30:38.308+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578638, 1) 
2019-09-04T06:30:38.308+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 9486
2019-09-04T06:30:38.308+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:38.308+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:30:38.308+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9485
2019-09-04T06:30:38.308+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9485
2019-09-04T06:30:38.308+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9488
2019-09-04T06:30:38.308+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9488
2019-09-04T06:30:38.308+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578638, 1), t: 1 }({ ts: Timestamp(1567578638, 1), t: 1 })
2019-09-04T06:30:38.308+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578638, 1)
2019-09-04T06:30:38.308+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9489
2019-09-04T06:30:38.308+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578638, 1) } } ] } sort: {} projection: {}
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and t $eq 1 ts $lt Timestamp(1567578638, 1) Sort: {} Proj: {} =============================
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578638, 1) || First: notFirst: full path: ts
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578638, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $or $and t $eq 1 ts $lt Timestamp(1567578638, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578638, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:38.308+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578638, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:38.308+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9489
2019-09-04T06:30:38.308+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:38.308+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:38.308+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578638, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578638306), o: { $v: 1, $set: { ping: new Date(1567578638306) } } }, oplog application mode: Secondary
2019-09-04T06:30:38.308+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578638, 1)
2019-09-04T06:30:38.308+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 9491
2019-09-04T06:30:38.308+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "ConfigServer" }
2019-09-04T06:30:38.309+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:30:38.309+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 9491
2019-09-04T06:30:38.309+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:38.309+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578638, 1), t: 1 }({ ts: Timestamp(1567578638, 1), t: 1 })
2019-09-04T06:30:38.309+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578638, 1)
2019-09-04T06:30:38.309+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9490
2019-09-04T06:30:38.309+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and Sort: {} Proj: {} =============================
2019-09-04T06:30:38.309+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:38.309+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:30:38.309+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:38.309+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:38.309+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:38.309+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9490
2019-09-04T06:30:38.309+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578638, 1)
2019-09-04T06:30:38.309+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:38.309+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9494
2019-09-04T06:30:38.309+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578638, 1), t: 1 }, appliedWallTime: new Date(1567578638306), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578635, 1), t: 1 }, lastCommittedWall: new Date(1567578635079), lastOpVisible: { ts: Timestamp(1567578635, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:38.309+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 654 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:08.309+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578638, 1), t: 1 }, appliedWallTime: new Date(1567578638306), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578635, 1), t: 1 }, lastCommittedWall: new Date(1567578635079), lastOpVisible: { ts: Timestamp(1567578635, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:38.309+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.309+0000
2019-09-04T06:30:38.309+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9494
2019-09-04T06:30:38.309+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578638, 1), t: 1 }({ ts: Timestamp(1567578638, 1), t: 1 })
2019-09-04T06:30:38.309+0000 D2 ASIO [RS] Request 654 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 1), $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 1) }
2019-09-04T06:30:38.309+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 1), $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:38.309+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:38.309+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.309+0000
2019-09-04T06:30:38.310+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578638, 1), t: 1 }
2019-09-04T06:30:38.310+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 655 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:48.310+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578635, 1), t: 1 } }
2019-09-04T06:30:38.310+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.309+0000
2019-09-04T06:30:38.310+0000 D2 ASIO [RS] Request 655 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpApplied: { ts: Timestamp(1567578638, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 1), $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 1) }
2019-09-04T06:30:38.310+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpApplied: { ts: Timestamp(1567578638, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 1), $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:38.310+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:38.310+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:38.310+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578633, 1)
2019-09-04T06:30:38.310+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:48.694+0000
2019-09-04T06:30:38.310+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:48.786+0000
2019-09-04T06:30:38.310+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 656 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:48.310+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578638, 1), t: 1 } }
2019-09-04T06:30:38.310+0000 D3 REPL [conn238] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn238] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.824+0000
2019-09-04T06:30:38.310+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.309+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn226] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn226] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.682+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn207] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn207] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.671+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn235] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn235] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.840+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn248] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn248] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.456+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn250] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn250] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.753+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn243] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:38.310+0000 D3 REPL [conn243] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.662+0000
2019-09-04T06:30:38.310+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:06.839+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn230] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn230] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.677+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn221] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn221] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.289+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn232] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn232] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.661+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn239] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn239] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.133+0000
2019-09-04T06:30:38.310+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:30:38.310+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:38.310+0000 D3 REPL [conn251] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn251] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.763+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn223] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn223] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.644+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn228] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn228] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:56.304+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn244] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn244] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.754+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn245] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.310+0000 D3 REPL [conn245] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.767+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn247] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn247] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:59.750+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn253] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn253] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.925+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn222] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn222] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.822+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn216] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn216] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.092+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn246] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn246] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:52.054+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn225] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn225] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.468+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn208] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn208] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.839+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn252] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn252] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.897+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn249] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn249] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.702+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn254] Got notified of new snapshot: { ts: Timestamp(1567578638, 1), t: 1 }, 2019-09-04T06:30:38.306+0000
2019-09-04T06:30:38.311+0000 D3 REPL [conn254] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.962+0000
2019-09-04T06:30:38.311+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578638, 1), t: 1 }, durableWallTime: new Date(1567578638306), appliedOpTime: { ts: Timestamp(1567578638, 1), t: 1 }, appliedWallTime: new Date(1567578638306), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:38.311+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 657 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:08.311+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578638, 1), t: 1 }, durableWallTime: new Date(1567578638306), appliedOpTime: { ts: Timestamp(1567578638, 1), t: 1 }, appliedWallTime: new Date(1567578638306), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:38.311+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.309+0000
2019-09-04T06:30:38.311+0000 D2 ASIO [RS] Request 657 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 1), $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 1) }
2019-09-04T06:30:38.311+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 1), $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:38.311+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:38.311+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.309+0000
2019-09-04T06:30:38.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:38.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:38.365+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:38.408+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578638, 1)
2019-09-04T06:30:38.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:38.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:38.465+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:38.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:38.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:38.501+0000 D2 ASIO [RS] Request 656 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578638, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578638490), o: { $v: 1, $set: { ping: new Date(1567578638490) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpApplied: { ts: Timestamp(1567578638, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 1), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) }
2019-09-04T06:30:38.502+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578638, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578638490), o: { $v: 1, $set: { ping: new Date(1567578638490) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpApplied: { ts: Timestamp(1567578638, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 1), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:38.502+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:38.502+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578638, 2) and ending at ts: Timestamp(1567578638, 2)
2019-09-04T06:30:38.502+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:48.786+0000
2019-09-04T06:30:38.502+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:49.489+0000
2019-09-04T06:30:38.502+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:38.502+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:06.839+0000
2019-09-04T06:30:38.502+0000 D2 REPL [replication-1] oplog buffer has 0 bytes
2019-09-04T06:30:38.502+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578638, 2), t: 1 }
2019-09-04T06:30:38.502+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:38.502+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:38.502+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578638, 1)
2019-09-04T06:30:38.502+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9501
2019-09-04T06:30:38.502+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:38.502+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:38.502+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9501
2019-09-04T06:30:38.502+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:38.502+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:38.502+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578638, 1)
2019-09-04T06:30:38.502+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:30:38.502+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9504
2019-09-04T06:30:38.502+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:38.502+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578638, 2) }
2019-09-04T06:30:38.502+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:38.502+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9504
2019-09-04T06:30:38.502+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9496
2019-09-04T06:30:38.502+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9496
2019-09-04T06:30:38.502+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9507
2019-09-04T06:30:38.502+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9507
2019-09-04T06:30:38.502+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:38.502+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 9509
2019-09-04T06:30:38.502+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578638, 2)
2019-09-04T06:30:38.502+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578638, 2)
2019-09-04T06:30:38.502+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 9509
2019-09-04T06:30:38.502+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:38.502+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:30:38.502+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9508
2019-09-04T06:30:38.502+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9508
2019-09-04T06:30:38.502+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9511
2019-09-04T06:30:38.502+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9511
2019-09-04T06:30:38.502+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578638, 2), t: 1 }({ ts: Timestamp(1567578638, 2), t: 1 })
2019-09-04T06:30:38.502+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578638, 2)
2019-09-04T06:30:38.502+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9512
2019-09-04T06:30:38.502+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578638, 2) } } ] } sort: {} projection: {}
2019-09-04T06:30:38.502+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:38.502+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:30:38.502+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and t $eq 1 ts $lt Timestamp(1567578638, 2) Sort: {} Proj: {} =============================
2019-09-04T06:30:38.502+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:38.502+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:38.502+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:38.502+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578638, 2) || First: notFirst: full path: ts
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578638, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $or $and t $eq 1 ts $lt Timestamp(1567578638, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578638, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578638, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:38.503+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9512
2019-09-04T06:30:38.503+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:38.503+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:38.503+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578638, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578638490), o: { $v: 1, $set: { ping: new Date(1567578638490) } } }, oplog application mode: Secondary
2019-09-04T06:30:38.503+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578638, 2)
2019-09-04T06:30:38.503+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 9514
2019-09-04T06:30:38.503+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }
2019-09-04T06:30:38.503+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:30:38.503+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 9514
2019-09-04T06:30:38.503+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:38.503+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578638, 2), t: 1 }({ ts: Timestamp(1567578638, 2), t: 1 })
2019-09-04T06:30:38.503+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578638, 2)
2019-09-04T06:30:38.503+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9513
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and Sort: {} Proj: {} =============================
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:38.503+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:30:38.503+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:38.503+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9513
2019-09-04T06:30:38.503+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578638, 2)
2019-09-04T06:30:38.503+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9518
2019-09-04T06:30:38.503+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9518
2019-09-04T06:30:38.503+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578638, 2), t: 1 }({ ts: Timestamp(1567578638, 2), t: 1 })
2019-09-04T06:30:38.503+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:38.503+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578638, 1), t: 1 }, durableWallTime: new Date(1567578638306), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:38.503+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 658 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:08.503+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578638, 1), t: 1 }, durableWallTime: new Date(1567578638306), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:38.503+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.503+0000
2019-09-04T06:30:38.503+0000 D2 ASIO [RS] Request 658 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 1), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) }
2019-09-04T06:30:38.503+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 1), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:38.503+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:38.503+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.503+0000
2019-09-04T06:30:38.504+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578638, 2), t: 1 }
2019-09-04T06:30:38.504+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 659 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:48.504+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578638, 1), t: 1 } }
2019-09-04T06:30:38.504+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.503+0000
2019-09-04T06:30:38.504+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:30:38.504+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:38.504+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:38.504+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 660 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:08.504+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts:
Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, durableWallTime: new Date(1567578635079), appliedOpTime: { ts: Timestamp(1567578635, 1), t: 1 }, appliedWallTime: new Date(1567578635079), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:38.504+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.503+0000 2019-09-04T06:30:38.504+0000 D2 ASIO [RS] Request 660 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 1), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:38.504+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 1), t: 1 }, lastCommittedWall: new Date(1567578638306), lastOpVisible: { ts: Timestamp(1567578638, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 1), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:38.504+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:38.504+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.503+0000 2019-09-04T06:30:38.505+0000 D2 ASIO [RS] Request 659 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpApplied: { ts: Timestamp(1567578638, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:38.505+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpApplied: { ts: Timestamp(1567578638, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:38.505+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:38.505+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:30:38.505+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.505+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.505+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578633, 2) 2019-09-04T06:30:38.505+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:49.489+0000 2019-09-04T06:30:38.505+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:48.510+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn251] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn251] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.763+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn235] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn235] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.840+0000 2019-09-04T06:30:38.505+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 661 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:48.505+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578638, 2), t: 1 } } 2019-09-04T06:30:38.505+0000 D3 REPL [conn249] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn249] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.702+0000 2019-09-04T06:30:38.505+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.503+0000 2019-09-04T06:30:38.505+0000 D3 EXECUTOR [replexec-3] 
Executing a task on behalf of pool replexec 2019-09-04T06:30:38.505+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:06.839+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn208] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn208] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.839+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn221] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn221] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.289+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn228] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn228] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:56.304+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn245] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn245] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.767+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn253] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn253] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.925+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn216] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn216] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:42.092+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn225] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn225] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.468+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn252] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn252] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.897+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn254] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn254] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.962+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn238] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn238] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.824+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn226] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn226] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.682+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn250] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.505+0000 D3 REPL [conn250] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.753+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn207] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 
1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn207] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.671+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn230] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn230] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:40.677+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn248] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn248] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.456+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn246] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn246] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:52.054+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn232] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn232] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.661+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn247] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn247] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:59.750+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn244] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn244] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.754+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn243] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn243] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.662+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn222] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn222] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.822+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn223] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn223] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.644+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn239] Got notified of new snapshot: { ts: Timestamp(1567578638, 2), t: 1 }, 2019-09-04T06:30:38.490+0000 2019-09-04T06:30:38.506+0000 D3 REPL [conn239] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.133+0000 2019-09-04T06:30:38.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:38.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:38.566+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:38.602+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578638, 2) 2019-09-04T06:30:38.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:38.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, 
$db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:38.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:38.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:38.666+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:38.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:38.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:38.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:38.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:38.766+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:38.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:38.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:38.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:38.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:38.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 662) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:38.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 662 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:48.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:38.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.838+0000 2019-09-04T06:30:38.838+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:30:37.061+0000 2019-09-04T06:30:38.838+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:30:38.232+0000 2019-09-04T06:30:38.838+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:30:37.061+0000 2019-09-04T06:30:38.838+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:30:47.061+0000 2019-09-04T06:30:38.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.838+0000 2019-09-04T06:30:38.838+0000 D2 ASIO [Replication] Request 662 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: 
Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:38.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:38.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:38.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 662) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:38.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:38.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:40.838Z 2019-09-04T06:30:38.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.838+0000 2019-09-04T06:30:38.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:38.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 663) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:38.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 663 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:48.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:38.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.838+0000 2019-09-04T06:30:38.839+0000 D2 ASIO [Replication] Request 663 finished 
with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:38.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:38.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:38.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 663) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:38.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:38.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:30:48.510+0000 2019-09-04T06:30:38.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:30:49.724+0000 2019-09-04T06:30:38.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member 
_id:MemberId(0) 2019-09-04T06:30:38.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:40.839Z 2019-09-04T06:30:38.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:38.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.839+0000 2019-09-04T06:30:38.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.839+0000 2019-09-04T06:30:38.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:38.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:38.866+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:38.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:38.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:38.966+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:38.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:38.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:39.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:39.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:39.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:39.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:39.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490) } 2019-09-04T06:30:39.061+0000 I COMMAND 
[conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.066+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:39.135+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.166+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:39.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:39.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:30:39.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.266+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:39.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.366+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:39.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.467+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:39.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.502+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:39.502+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:39.502+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578638, 2) 2019-09-04T06:30:39.502+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9545 2019-09-04T06:30:39.502+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:39.502+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:39.502+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9545 2019-09-04T06:30:39.503+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9548 2019-09-04T06:30:39.503+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9548 2019-09-04T06:30:39.503+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578638, 2), t: 1 }({ ts: Timestamp(1567578638, 2), t: 1 }) 2019-09-04T06:30:39.537+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $db: "config" } 2019-09-04T06:30:39.537+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } } } 2019-09-04T06:30:39.537+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:30:39.537+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578638, 2) 2019-09-04T06:30:39.537+0000 D2 QUERY [conn61] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:30:39.537+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:30:39.538+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $db: "config" } 2019-09-04T06:30:39.538+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } } } 2019-09-04T06:30:39.538+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:30:39.538+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, 
$configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578638, 2) 2019-09-04T06:30:39.538+0000 D2 QUERY [conn61] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:30:39.538+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:30:39.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.567+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:39.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.667+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:39.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.766+0000 D2 COMMAND [conn61] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $db: "config" } 2019-09-04T06:30:39.766+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { 
readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } } } 2019-09-04T06:30:39.766+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:30:39.766+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578638, 2) 2019-09-04T06:30:39.766+0000 D5 QUERY [conn61] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:39.766+0000 D5 QUERY [conn61] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:30:39.766+0000 D5 QUERY [conn61] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:30:39.766+0000 D5 QUERY [conn61] Rated tree: $and 2019-09-04T06:30:39.766+0000 D5 QUERY [conn61] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:39.766+0000 D5 QUERY [conn61] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:39.766+0000 D2 QUERY [conn61] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:39.766+0000 D3 STORAGE [conn61] WT begin_transaction for snapshot id 9558 2019-09-04T06:30:39.766+0000 D3 STORAGE [conn61] WT rollback_transaction for snapshot id 9558 2019-09-04T06:30:39.766+0000 I COMMAND [conn61] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:30:39.767+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:39.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:39.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:39.867+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
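[Editor's note] The conn61 reads above against config.settings and config.shards all carry readConcern { level: "majority", afterOpTime: ... }, which is why each one first logs "Waiting for 'committed' snapshot" and then runs against readTs Timestamp(1567578638, 2). The afterOpTime field is used internally between cluster components; from a shell the supported spelling of the same idea is afterClusterTime. A minimal sketch, reusing the optime from the entries above purely for illustration:

    // Sketch: a majority read against the config database, mirroring the
    // find on config.settings issued by conn61 above. Assumes a mongo
    // shell connected to this node; the afterClusterTime value is just
    // the clusterTime this client last observed.
    const cfg = db.getSiblingDB("config");
    cfg.runCommand({
        find: "settings",
        filter: { _id: "balancer" },
        limit: 1,
        maxTimeMS: 30000,
        readConcern: {
            level: "majority",
            afterClusterTime: Timestamp(1567578638, 2) // illustrative value
        }
    });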
2019-09-04T06:30:39.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:39.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:39.967+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:39.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:39.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:39.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:39.989+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:39.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:39.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:40.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:40.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:30:40.003+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:30:40.003+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms
2019-09-04T06:30:40.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:30:40.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:30:40.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:30:40.010+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:30:40.010+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456
2019-09-04T06:30:40.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:30:40.011+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:30:40.011+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:40.012+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:30:40.012+0000 D2 COMMAND [conn90] command: replSetGetStatus
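[Editor's note] The conn90 session beginning at 06:30:40.003 has the typical shape of a monitoring poll: a SCRAM-SHA-1 handshake (saslStart followed by two saslContinue round-trips), then serverStatus and replSetGetStatus. A rough shell equivalent, assuming the dba_root user shown in the log (the password is a placeholder):

    // Sketch: reproducing conn90's monitoring poll by hand. db.auth()
    // drives the same saslStart/saslContinue SCRAM conversation that
    // appears in the log entries above.
    const admin = db.getSiblingDB("admin");
    admin.auth("dba_root", "<password>");       // placeholder credentials
    admin.runCommand({ serverStatus: 1 });      // the reslen:35129 reply above
    admin.runCommand({ replSetGetStatus: 1 });  // member states and optimes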
2019-09-04T06:30:40.012+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:30:40.025+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:30:40.025+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:40.025+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:40.025+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:30:40.025+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:30:40.025+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:30:40.025+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:30:40.025+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:30:40.025+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:30:40.025+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:40.025+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:40.025+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:40.025+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578638, 2)
2019-09-04T06:30:40.025+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9572
2019-09-04T06:30:40.025+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9572
2019-09-04T06:30:40.025+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:40.031+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:30:40.031+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:30:40.040+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:40.040+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:40.041+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:30:40.041+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:40.041+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:30:40.041+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:30:40.041+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:30:40.041+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578638, 2)
2019-09-04T06:30:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9576
2019-09-04T06:30:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9576
2019-09-04T06:30:40.041+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:40.041+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:30:40.041+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:40.041+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:30:40.041+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:30:40.041+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:30:40.041+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578638, 2)
2019-09-04T06:30:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9578
2019-09-04T06:30:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9578
2019-09-04T06:30:40.041+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:30:40.041+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:30:40.041+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:30:40.041+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2019-09-04T06:30:40.041+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:30:40.041+0000 D2 COMMAND [conn90] command: listDatabases
2019-09-04T06:30:40.041+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:30:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9581
2019-09-04T06:30:40.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:30:40.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:40.041+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:30:40.041+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9581
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9582
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9582
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9583
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9583
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9584
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" }
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1,
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9584 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9585 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9585 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9586 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
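
Annotation: the D3 STORAGE traffic on either side of this point (the repeating "looking up metadata for: <ns> @ RecordId(n)" / "fetched CCE metadata" / "WT begin_transaction ... WT rollback_transaction" groups) is all fallout from the single listDatabases request logged above on conn90. The server walks its durable catalog entry by entry, opening a short-lived WiredTiger read transaction per collection to gather per-database sizing, and the command completes further down in about 1ms. A minimal reproduction from any mongo shell pointed at this node (connection details assumed, not shown in this excerpt):

    // One command accounts for every catalog lookup in this stretch of the log.
    db.adminCommand({ listDatabases: 1 })
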
2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9586 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9587 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9587 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9588 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, 
prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9588 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9589 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9589 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9590 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9590 2019-09-04T06:30:40.042+0000 D3 
STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9591 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:40.042+0000 D3 STORAGE 
[conn90] WT rollback_transaction for snapshot id 9591 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9592 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9592 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9593 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:30:40.042+0000 
D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9593 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9594 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9594 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9595 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] returning metadata: 
md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:40.042+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9595 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9596 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9596 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9597 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:30:40.043+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9597 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9598 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9598 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9599 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9599 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] looking up metadata 
for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9600 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9600 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9601 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9601 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9602 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
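
Annotation: each "fetched CCE metadata" document above also carries the mapping from a namespace to its on-disk WiredTiger tables: "ident" names the collection table (e.g. local.oplog.rs -> local/collection/16--6194257481163143499) and "idxIdent" names one table per index; these idents correspond to .wt files under the dbpath. To correlate a namespace with the ident seen in a log line, collStats exposes the underlying table URI. The exact field path below is an assumption for this build, so verify against the full stats() output:

    // Hedged: the URI is expected to end with the same ident string reported
    // in the catalog metadata above, e.g. "config/collection/58--6194257481163143499".
    db.getSiblingDB("config").chunks.stats().wiredTiger.uri
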
2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9602 2019-09-04T06:30:40.043+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:30:40.043+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:40.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9604 2019-09-04T06:30:40.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9604 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9606 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9606 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9607 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9607 2019-09-04T06:30:40.043+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:40.043+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9609 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9609 2019-09-04T06:30:40.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9610 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9610 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9611 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9611 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9612 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9612 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9613 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9613 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9614 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9614 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 
9615 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9615 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9616 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9616 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9617 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9617 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9618 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9618 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9619 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9619 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9620 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9620 2019-09-04T06:30:40.044+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:40.044+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9622 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9622 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9623 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9623 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9624 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9624 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9625 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9625 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9626 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9626 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 9627 2019-09-04T06:30:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 9627 2019-09-04T06:30:40.044+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:40.055+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.055+0000 I COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
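
Annotation: the conn90 burst above (a COLLSCAN count of jumbo-flagged chunks in config.chunks, shardConnPoolStats, first/last oplog entries including the legacy local.oplog.$main probe that produced the EOF plan, listDatabases, then dbStats for admin, config and local, all sent with secondaryPreferred) is a polling pattern typical of a monitoring client; which agent is behind conn90 is not visible in this excerpt. The oplog-window probe is easy to replay from a shell, and is essentially what the db.getReplicationInfo() helper issues. A sketch using the exact filters and sorts from the log (queryHash F5CE282E for the forward scan, 27434851 for the reverse):

    // Oldest and newest oplog entries by natural order, as in the log above.
    var oplog = db.getSiblingDB("local").oplog.rs;
    var first = oplog.find({ ts: { $exists: true } }).sort({ $natural: 1 }).limit(1).next();
    var last = oplog.find({ ts: { $exists: true } }).sort({ $natural: -1 }).limit(1).next();
    print("oplog window: " + tojson(first.ts) + " .. " + tojson(last.ts));
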
2019-09-04T06:30:40.067+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:40.134+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.167+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:40.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:40.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:30:40.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:40.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:40.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490) } 2019-09-04T06:30:40.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:40.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:30:40.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.267+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:40.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.368+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:40.436+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.436+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.468+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:40.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.489+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.502+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:40.502+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:40.502+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578638, 2) 2019-09-04T06:30:40.502+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9643 2019-09-04T06:30:40.502+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:40.502+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:40.502+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9643 2019-09-04T06:30:40.503+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9646 2019-09-04T06:30:40.503+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9646 2019-09-04T06:30:40.503+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578638, 2), t: 1 }({ ts: 
Timestamp(1567578638, 2), t: 1 }) 2019-09-04T06:30:40.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.568+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:40.634+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.668+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:40.673+0000 I COMMAND [conn207] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578602, 1), signature: { hash: BinData(0, B8E65EF8C3BE078375F8162EA3CD50E942460267), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:30:40.673+0000 D1 - [conn207] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:40.673+0000 W - [conn207] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:40.679+0000 I COMMAND [conn230] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 1A8F6415775A35EDF4B88EC006CD33118085876C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:30:40.679+0000 D1 - [conn230] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:40.679+0000 W - [conn230] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:40.685+0000 I COMMAND [conn226] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578601, 1), signature: { hash: BinData(0, CDC17BEE3BF53630BBD514A3979D01B672FD102E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:40.685+0000 D1 - [conn226] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:40.685+0000 W - [conn226] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:40.690+0000 I - [conn207] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:40.690+0000 D1 COMMAND [conn207] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578602, 1), signature: { hash: BinData(0, B8E65EF8C3BE078375F8162EA3CD50E942460267), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:40.690+0000 D1 - [conn207] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:40.690+0000 W - [conn207] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:40.706+0000 I - [conn230] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2
511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" 
: "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] 
----- END BACKTRACE -----
2019-09-04T06:30:40.706+0000 D1 COMMAND [conn230] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 1A8F6415775A35EDF4B88EC006CD33118085876C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:40.706+0000 D1 - [conn230] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:30:40.706+0000 W - [conn230] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:40.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:40.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:40.721+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $db: "config" }
2019-09-04T06:30:40.721+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } } }
2019-09-04T06:30:40.721+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:30:40.721+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578638, 2)
2019-09-04T06:30:40.721+0000 D2 QUERY [conn72] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1
2019-09-04T06:30:40.721+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:30:40.722+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $db: "config" }
2019-09-04T06:30:40.722+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } } }
2019-09-04T06:30:40.722+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:30:40.722+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578638, 2)
2019-09-04T06:30:40.722+0000 D2 QUERY [conn72] Collection config.settings does not exist.
Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:30:40.722+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578638, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:30:40.726+0000 I - [conn207] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19Servic
eStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : 
"799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:30:40.726+0000 W COMMAND [conn207] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:30:40.726+0000 I COMMAND [conn207] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578602, 1), signature: { hash: BinData(0, B8E65EF8C3BE078375F8162EA3CD50E942460267), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms
2019-09-04T06:30:40.726+0000 D2 NETWORK [conn207] Session from 10.108.2.73:52114 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:30:40.726+0000 I NETWORK [conn207] end connection 10.108.2.73:52114 (92 connections now open)
2019-09-04T06:30:40.744+0000 I - [conn226] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:40.744+0000 D1 COMMAND [conn226] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578601, 1), signature: { hash: BinData(0, CDC17BEE3BF53630BBD514A3979D01B672FD102E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:40.744+0000 D1 - [conn226] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:40.744+0000 W - [conn226] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:40.763+0000 I - [conn230] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:40.763+0000 W COMMAND [conn230] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:40.763+0000 I COMMAND [conn230] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578605, 1), signature: { hash: BinData(0, 1A8F6415775A35EDF4B88EC006CD33118085876C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30039ms 2019-09-04T06:30:40.763+0000 D2 NETWORK [conn230] Session from 10.108.2.74:51772 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:40.763+0000 I NETWORK [conn230] end connection 10.108.2.74:51772 (91 connections now open) 2019-09-04T06:30:40.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.768+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:40.783+0000 I - [conn226] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuard
E"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, 
"buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:40.783+0000 W COMMAND [conn226] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:40.783+0000 I COMMAND [conn226] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578601, 1), signature: { hash: BinData(0, CDC17BEE3BF53630BBD514A3979D01B672FD102E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30071ms 2019-09-04T06:30:40.783+0000 D2 NETWORK [conn226] Session from 10.108.2.56:35688 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:40.783+0000 I NETWORK [conn226] end connection 10.108.2.56:35688 (90 connections now open) 2019-09-04T06:30:40.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:40.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 664) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:40.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 664 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:50.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:40.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 
2019-09-04T06:31:08.839+0000 2019-09-04T06:30:40.838+0000 D2 ASIO [Replication] Request 664 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:40.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:40.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:40.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 664) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:40.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:40.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:42.838Z 2019-09-04T06:30:40.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 
2019-09-04T06:31:08.839+0000 2019-09-04T06:30:40.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:40.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 665) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:40.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 665 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:50.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:40.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.839+0000 2019-09-04T06:30:40.839+0000 D2 ASIO [Replication] Request 665 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:40.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:40.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:40.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 665) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, 
configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:40.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:40.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:30:49.724+0000 2019-09-04T06:30:40.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:30:51.324+0000 2019-09-04T06:30:40.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:40.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:42.839Z 2019-09-04T06:30:40.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:40.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:10.839+0000 2019-09-04T06:30:40.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:10.839+0000 2019-09-04T06:30:40.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.861+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.861+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.864+0000 D2 COMMAND [conn240] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578640, 1), signature: { hash: BinData(0, 9AA6EC49743F043950E16BB5631473231B19B5FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:40.864+0000 D1 REPL [conn240] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578638, 2), t: 1 } 2019-09-04T06:30:40.864+0000 D3 REPL [conn240] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.874+0000 2019-09-04T06:30:40.866+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36658 #255 (91 connections now open) 2019-09-04T06:30:40.866+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:40.867+0000 D2 COMMAND [conn255] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: 
"local.__system", $db: "admin" } 2019-09-04T06:30:40.867+0000 I NETWORK [conn255] received client metadata from 10.108.2.55:36658 conn255: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:40.867+0000 I COMMAND [conn255] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:40.867+0000 D2 COMMAND [conn255] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578631, 1), signature: { hash: BinData(0, 126D9B97A246E71CB7F886F7E917931B13D03AC4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:40.867+0000 D1 REPL [conn255] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578638, 2), t: 1 } 2019-09-04T06:30:40.867+0000 D3 REPL [conn255] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.877+0000 2019-09-04T06:30:40.868+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:40.869+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.869+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.869+0000 D2 COMMAND [conn241] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578632, 1), signature: { hash: BinData(0, 7B0D7364553F28B05AA42FCCC1A524375FA6A20E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:40.869+0000 D1 REPL [conn241] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578638, 2), t: 1 } 2019-09-04T06:30:40.869+0000 D3 REPL [conn241] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.879+0000 2019-09-04T06:30:40.878+0000 D2 COMMAND [conn227] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 
01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:40.878+0000 D1 REPL [conn227] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578638, 2), t: 1 } 2019-09-04T06:30:40.878+0000 D3 REPL [conn227] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.888+0000 2019-09-04T06:30:40.903+0000 I NETWORK [listener] connection accepted from 10.108.2.60:44860 #256 (92 connections now open) 2019-09-04T06:30:40.903+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:40.903+0000 D2 COMMAND [conn256] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:40.903+0000 I NETWORK [conn256] received client metadata from 10.108.2.60:44860 conn256: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:40.903+0000 I COMMAND [conn256] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:40.908+0000 D2 COMMAND [conn256] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 788D5538F6F1908EEC9B9DC20AF81546C8F832BC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:40.908+0000 D1 REPL [conn256] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578638, 2), t: 1 } 2019-09-04T06:30:40.908+0000 D3 REPL [conn256] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.918+0000 2019-09-04T06:30:40.914+0000 D2 COMMAND [conn224] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578632, 1), signature: { hash: BinData(0, 7B0D7364553F28B05AA42FCCC1A524375FA6A20E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), 
t: 92 } }, $db: "admin" } 2019-09-04T06:30:40.914+0000 D1 REPL [conn224] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578638, 2), t: 1 } 2019-09-04T06:30:40.914+0000 D3 REPL [conn224] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000 2019-09-04T06:30:40.914+0000 D2 COMMAND [conn229] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578631, 1), signature: { hash: BinData(0, 126D9B97A246E71CB7F886F7E917931B13D03AC4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:40.914+0000 D1 REPL [conn229] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578638, 2), t: 1 } 2019-09-04T06:30:40.914+0000 D3 REPL [conn229] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000 2019-09-04T06:30:40.936+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.936+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.944+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.944+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.968+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:40.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:40.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:40.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:41.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:41.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:41.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:41.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:41.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490) } 2019-09-04T06:30:41.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 2), signature: { hash: BinData(0, 3857A7497DF1FD387F62ABD3256D1068F5781668), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.068+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:41.135+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.168+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:41.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. 
Num: 0 2019-09-04T06:30:41.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:41.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.268+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:41.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.273+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.360+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.360+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.369+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.369+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:41.369+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.469+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:41.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.489+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.502+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:41.502+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:41.503+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578638, 2) 2019-09-04T06:30:41.503+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9693 2019-09-04T06:30:41.503+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:41.503+0000 D3 
STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:41.503+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9693 2019-09-04T06:30:41.504+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9696 2019-09-04T06:30:41.504+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9696 2019-09-04T06:30:41.504+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578638, 2), t: 1 }({ ts: Timestamp(1567578638, 2), t: 1 }) 2019-09-04T06:30:41.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.569+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:41.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.669+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:41.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.769+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:41.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.869+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:41.969+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:41.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 
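
The entries above record the failure pattern on this config server: the key-refresh reads on admin.system.keys (conn224, conn226, conn227, conn229, conn230, conn240, conn241, conn255, conn256) carry readConcern { level: "majority", afterOpTime: { ts: ..., t: 92 } }, but every heartbeat in this log reports the set at term 1, so waitUntilOpTime can never observe a majority-committed optime from term 92 and each read fails with MaxTimeMSExpired once its 30000ms maxTimeMS lapses. Below is a minimal mongo-shell sketch for confirming the term mismatch from a client session; the shell connection is an assumption, and sending afterOpTime from an external client may be rejected on some builds (it is normally an internal-client field), so treat this as an illustration of the stalled command's shape rather than a guaranteed reproduction:

// Compare the set's current term with the term the stalled reads wait on.
const admin = db.getSiblingDB("admin");
const s = rs.status();
print("current term: " + s.term);   // 1 in this log, vs. the awaited t: 92
printjson(s.optimes);               // applied/durable/majority optimes, all { ..., t: 1 }

// List any key-refresh reads currently parked in waitUntilOpTime.
printjson(db.currentOp({ "command.find": "system.keys" }));

// Re-issue the lookup logged for conn226/conn230 above; with an afterOpTime
// from stale term 92 this blocks for maxTimeMS and then returns
// { ok: 0, code: 50, codeName: "MaxTimeMSExpired" }, matching the log.
printjson(admin.runCommand({
    find: "system.keys",
    filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } },
    sort: { expiresAt: 1 },
    readConcern: {
        level: "majority",
        afterOpTime: { ts: Timestamp(1566459168, 1), t: NumberLong(92) }  // values copied from the log
    },
    maxTimeMS: 30000
}));

If this reading is right, the $configServerState opTime (term 92) that the routers attach to these reads predates the current incarnation of configrs (replicaSetId 5d5e459bac9313827bdd88e9, term 1), and the reads will keep timing out every 30 seconds until the clients holding that cached state are restarted or refreshed; that interpretation is an inference from the log, not something the log states directly.
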
2019-09-04T06:30:41.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:41.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:41.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:42.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.044+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.070+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:42.096+0000 I COMMAND [conn216] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 542F7D84F611AE5E9C7A33BCFBF929241984258E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:30:42.096+0000 D1 - [conn216] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:42.096+0000 W - [conn216] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:42.113+0000 I - [conn216] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:42.113+0000 D1 COMMAND [conn216] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 542F7D84F611AE5E9C7A33BCFBF929241984258E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:42.113+0000 D1 - [conn216] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:42.113+0000 W - [conn216] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:42.133+0000 I - [conn216] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:42.133+0000 W COMMAND [conn216] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:42.133+0000 I COMMAND [conn216] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 542F7D84F611AE5E9C7A33BCFBF929241984258E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:30:42.133+0000 D2 NETWORK [conn216] Session from 10.108.2.59:48316 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:42.133+0000 I NETWORK [conn216] end connection 10.108.2.59:48316 (91 connections now open) 2019-09-04T06:30:42.134+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.170+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:42.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578640, 1), signature: { hash: BinData(0, FFF304531115B48011BB1CE549551362579B3A30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:42.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:30:42.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578640, 1), signature: { hash: BinData(0, FFF304531115B48011BB1CE549551362579B3A30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:42.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578640, 1), signature: { hash: BinData(0, FFF304531115B48011BB1CE549551362579B3A30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:42.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: 
Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490) } 2019-09-04T06:30:42.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578640, 1), signature: { hash: BinData(0, FFF304531115B48011BB1CE549551362579B3A30), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:42.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:42.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.270+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:42.282+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.282+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.283+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48356 #257 (92 connections now open) 2019-09-04T06:30:42.283+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:42.283+0000 D2 COMMAND [conn257] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:42.283+0000 I NETWORK [conn257] received client metadata from 10.108.2.59:48356 conn257: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:42.283+0000 I COMMAND [conn257] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:42.283+0000 D2 COMMAND [conn257] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578641, 1), signature: { hash: BinData(0, 2D0FF26358BD656234793C721CD1E7FBC2D07432), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 
} }, $db: "admin" } 2019-09-04T06:30:42.284+0000 D1 REPL [conn257] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578638, 2), t: 1 } 2019-09-04T06:30:42.284+0000 D3 REPL [conn257] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.293+0000 2019-09-04T06:30:42.289+0000 I COMMAND [conn221] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578602, 1), signature: { hash: BinData(0, B8E65EF8C3BE078375F8162EA3CD50E942460267), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:42.290+0000 D1 - [conn221] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:42.290+0000 W - [conn221] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:42.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.306+0000 I - [conn221] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:42.306+0000 D1 COMMAND [conn221] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578602, 1), signature: { hash: BinData(0, B8E65EF8C3BE078375F8162EA3CD50E942460267), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:42.306+0000 D1 - [conn221] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:42.306+0000 W - [conn221] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:42.326+0000 I - [conn221] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:42.326+0000 W COMMAND [conn221] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:42.326+0000 I COMMAND [conn221] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578602, 1), signature: { hash: BinData(0, B8E65EF8C3BE078375F8162EA3CD50E942460267), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:30:42.326+0000 D2 NETWORK [conn221] Session from 10.108.2.52:47160 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:42.326+0000 I NETWORK [conn221] end connection 10.108.2.52:47160 (91 connections now open) 2019-09-04T06:30:42.354+0000 D2 COMMAND [conn143] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:30:42.354+0000 I COMMAND [conn143] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.363+0000 D2 COMMAND [conn206] run command config.$cmd { ping: 1.0, lsid: { id: UUID("2fef7d2a-ea06-44d7-a315-b0e911b7f5bf") }, $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:30:42.363+0000 I COMMAND [conn206] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("2fef7d2a-ea06-44d7-a315-b0e911b7f5bf") }, $clusterTime: { clusterTime: Timestamp(1567578580, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.370+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:42.470+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:42.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.497+0000 D2 COMMAND [conn242] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:42.497+0000 D1 REPL [conn242] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578638, 2), t: 1 } 2019-09-04T06:30:42.497+0000 D3 REPL [conn242] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.507+0000 2019-09-04T06:30:42.503+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:42.503+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:42.503+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578638, 2) 2019-09-04T06:30:42.503+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9729 2019-09-04T06:30:42.503+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:42.503+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:42.503+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9729 2019-09-04T06:30:42.504+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9732 2019-09-04T06:30:42.504+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9732 2019-09-04T06:30:42.504+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578638, 2), t: 1 }({ ts: Timestamp(1567578638, 2), t: 1 }) 2019-09-04T06:30:42.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.570+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:42.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.670+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:42.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.765+0000 D2 COMMAND 
[conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.770+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:42.782+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.782+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:42.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 666) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:42.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 666 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:52.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:42.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:10.839+0000 2019-09-04T06:30:42.838+0000 D2 ASIO [Replication] Request 666 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578641, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:42.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578641, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), 
keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:42.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:42.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 666) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578641, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:42.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:42.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:44.838Z 2019-09-04T06:30:42.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:10.839+0000 2019-09-04T06:30:42.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:42.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 667) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:42.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 667 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:52.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:42.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:10.839+0000 2019-09-04T06:30:42.839+0000 D2 ASIO [Replication] Request 667 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578641, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:42.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: 
"configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578641, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:42.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:42.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 667) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578641, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:42.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:42.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:30:51.324+0000 2019-09-04T06:30:42.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:30:53.774+0000 2019-09-04T06:30:42.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:42.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:44.839Z 2019-09-04T06:30:42.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:42.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:12.839+0000 2019-09-04T06:30:42.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:12.839+0000 2019-09-04T06:30:42.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.870+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:42.971+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:42.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.976+0000 I COMMAND [conn29] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:42.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:42.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:43.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578641, 1), signature: { hash: BinData(0, B7DAD48B03A629170F0F80E2E895224968863D45), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:43.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:43.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578641, 1), signature: { hash: BinData(0, B7DAD48B03A629170F0F80E2E895224968863D45), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:43.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578641, 1), signature: { hash: BinData(0, B7DAD48B03A629170F0F80E2E895224968863D45), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:43.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490) } 2019-09-04T06:30:43.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578641, 1), signature: { hash: BinData(0, B7DAD48B03A629170F0F80E2E895224968863D45), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.071+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:43.135+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 
1, $db: "admin" } 2019-09-04T06:30:43.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.171+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:43.180+0000 D2 COMMAND [conn231] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, D3B29E27081E353619E4BBAFF34512E8BADA791D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:43.180+0000 D1 REPL [conn231] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578638, 2), t: 1 } 2019-09-04T06:30:43.180+0000 D3 REPL [conn231] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.190+0000 2019-09-04T06:30:43.214+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.214+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:43.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:30:43.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.271+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:43.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.371+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:43.399+0000 D2 COMMAND [conn237] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:43.399+0000 D1 REPL [conn237] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578638, 2), t: 1 } 2019-09-04T06:30:43.399+0000 D3 REPL [conn237] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.409+0000 2019-09-04T06:30:43.471+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:43.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.503+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:43.503+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:43.503+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578638, 2) 2019-09-04T06:30:43.503+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9763 2019-09-04T06:30:43.503+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:43.503+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: 
"local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:43.503+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9763 2019-09-04T06:30:43.504+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:43.504+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:43.504+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 668 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:13.504+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:43.504+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.503+0000 2019-09-04T06:30:43.504+0000 D2 ASIO [RS] Request 668 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 
2019-09-04T06:30:43.504+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:43.504+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:43.504+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9766 2019-09-04T06:30:43.504+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.503+0000 2019-09-04T06:30:43.504+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9766 2019-09-04T06:30:43.504+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578638, 2), t: 1 }({ ts: Timestamp(1567578638, 2), t: 1 }) 2019-09-04T06:30:43.505+0000 D2 ASIO [RS] Request 661 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpApplied: { ts: Timestamp(1567578638, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:43.505+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpApplied: { ts: Timestamp(1567578638, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:43.505+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:43.505+0000 D2 REPL [replication-1] oplog 
fetcher read 0 operations from remote oplog 2019-09-04T06:30:43.505+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:53.774+0000 2019-09-04T06:30:43.505+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:54.056+0000 2019-09-04T06:30:43.505+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:43.505+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:12.839+0000 2019-09-04T06:30:43.505+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 669 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:53.505+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578638, 2), t: 1 } } 2019-09-04T06:30:43.505+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:08.503+0000 2019-09-04T06:30:43.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.571+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:43.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.672+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:43.714+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.714+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.772+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:43.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.872+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:43.972+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
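
The getMore above is the oplog fetcher's steady state: a tailable, awaitable cursor on local.oplog.rs (cursor id 2779728788818727477) that waits server-side up to maxTimeMS: 5000 for new entries, returns an empty batch, and is immediately rescheduled. Each round trip also postpones the election timeout, which is why the ELECTION lines track the fetcher. A short pymongo sketch of the same tailing pattern, an illustration only under the same assumptions as above (direct connection, authorization disabled); the await window mirrors the maxTimeMS seen in the scheduled getMore:

    from pymongo import MongoClient, CursorType

    client = MongoClient("cmodb804.togewa.com", 27019)
    oplog = client.local["oplog.rs"]

    # Start from the newest entry, then tail with an awaitable cursor that
    # blocks server-side for up to 5 seconds per empty batch, mirroring the
    # getMore { ..., maxTimeMS: 5000 } issued by the fetcher above.
    last = next(oplog.find().sort("$natural", -1).limit(1))["ts"]
    cursor = oplog.find(
        {"ts": {"$gt": last}},
        cursor_type=CursorType.TAILABLE_AWAIT,
    ).max_await_time_ms(5000)

    for op in cursor:
        print(op["ts"], op["op"], op["ns"])
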
2019-09-04T06:30:43.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:43.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:43.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:44.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.072+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:44.135+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.172+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:44.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 134292195DC19D15F05422F0C13C5747C33CF650), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:44.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:30:44.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 134292195DC19D15F05422F0C13C5747C33CF650), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:44.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 134292195DC19D15F05422F0C13C5747C33CF650), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:44.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: 
"configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490) } 2019-09-04T06:30:44.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 134292195DC19D15F05422F0C13C5747C33CF650), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:44.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:44.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.272+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:44.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.372+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:44.419+0000 I NETWORK [listener] connection accepted from 10.108.2.62:53452 #258 (92 connections now open) 2019-09-04T06:30:44.419+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:44.419+0000 D2 COMMAND [conn258] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:44.419+0000 I NETWORK [conn258] received client metadata from 10.108.2.62:53452 conn258: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:44.419+0000 I COMMAND [conn258] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:44.423+0000 D2 COMMAND [conn258] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, D3B29E27081E353619E4BBAFF34512E8BADA791D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:44.423+0000 D1 REPL [conn258] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578638, 2), t: 1 } 2019-09-04T06:30:44.423+0000 D3 REPL [conn258] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:14.433+0000 2019-09-04T06:30:44.473+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:44.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.503+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:44.503+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:44.503+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578638, 2) 2019-09-04T06:30:44.503+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9795 2019-09-04T06:30:44.503+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:44.503+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:44.503+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9795 2019-09-04T06:30:44.504+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9798 2019-09-04T06:30:44.504+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9798 2019-09-04T06:30:44.504+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578638, 2), t: 1 }({ ts: Timestamp(1567578638, 2), t: 1 }) 2019-09-04T06:30:44.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.573+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:44.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, 
$db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.673+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:44.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.773+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:44.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:44.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 670) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:44.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 670 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:54.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:44.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:12.839+0000 2019-09-04T06:30:44.838+0000 D2 ASIO [Replication] Request 670 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:44.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new 
Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:44.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:44.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 670) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:44.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:44.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:46.838Z 2019-09-04T06:30:44.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:12.839+0000 2019-09-04T06:30:44.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:44.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 671) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:44.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 671 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:54.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:44.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:12.839+0000 2019-09-04T06:30:44.839+0000 D2 ASIO [Replication] Request 671 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:44.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:44.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:44.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 671) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578638, 2) } 2019-09-04T06:30:44.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:44.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:30:54.056+0000 2019-09-04T06:30:44.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:30:56.057+0000 2019-09-04T06:30:44.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:44.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:46.839Z 2019-09-04T06:30:44.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:44.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:14.839+0000 2019-09-04T06:30:44.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:14.839+0000 2019-09-04T06:30:44.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.855+0000 I 
COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.873+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:44.973+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:44.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:44.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:44.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:45.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:45.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:45.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:45.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 134292195DC19D15F05422F0C13C5747C33CF650), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:45.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:45.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 134292195DC19D15F05422F0C13C5747C33CF650), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:45.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 134292195DC19D15F05422F0C13C5747C33CF650), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:45.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), opTime: { ts: Timestamp(1567578638, 2), t: 1 }, wallTime: new Date(1567578638490) } 2019-09-04T06:30:45.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, 134292195DC19D15F05422F0C13C5747C33CF650), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 
locks:{} protocol:op_msg 0ms 2019-09-04T06:30:45.073+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:45.096+0000 D2 ASIO [RS] Request 669 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578645, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578645094), o: { $v: 1, $set: { ping: new Date(1567578645091), up: 2545 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpApplied: { ts: Timestamp(1567578645, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578645, 1) } 2019-09-04T06:30:45.096+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578645, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578645094), o: { $v: 1, $set: { ping: new Date(1567578645091), up: 2545 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpApplied: { ts: Timestamp(1567578645, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578638, 2), $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578645, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:45.096+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:45.096+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578645, 1) and ending at ts: Timestamp(1567578645, 1) 2019-09-04T06:30:45.096+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:56.057+0000 2019-09-04T06:30:45.096+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:55.213+0000 2019-09-04T06:30:45.096+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:45.096+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578645, 1), t: 1 } 2019-09-04T06:30:45.096+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, 
provided timestamp: none 2019-09-04T06:30:45.096+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:45.096+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578638, 2) 2019-09-04T06:30:45.096+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9814 2019-09-04T06:30:45.096+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:45.096+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:45.096+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9814 2019-09-04T06:30:45.096+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:45.096+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:30:45.096+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:14.839+0000 2019-09-04T06:30:45.096+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:45.096+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578638, 2) 2019-09-04T06:30:45.096+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9817 2019-09-04T06:30:45.096+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578645, 1) } 2019-09-04T06:30:45.096+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:45.096+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:45.096+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9817 2019-09-04T06:30:45.096+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9799 2019-09-04T06:30:45.096+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9799 2019-09-04T06:30:45.096+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9820 2019-09-04T06:30:45.096+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9820 2019-09-04T06:30:45.096+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:45.096+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 9822 2019-09-04T06:30:45.096+0000 D4 STORAGE [repl-writer-worker-4] inserting record with timestamp Timestamp(1567578645, 1) 2019-09-04T06:30:45.096+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578645, 1) 2019-09-04T06:30:45.096+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 9822 2019-09-04T06:30:45.096+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads 
is 16 2019-09-04T06:30:45.096+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:30:45.096+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9821 2019-09-04T06:30:45.096+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9821 2019-09-04T06:30:45.096+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9824 2019-09-04T06:30:45.096+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9824 2019-09-04T06:30:45.096+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578645, 1), t: 1 }({ ts: Timestamp(1567578645, 1), t: 1 }) 2019-09-04T06:30:45.096+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578645, 1) 2019-09-04T06:30:45.096+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9825 2019-09-04T06:30:45.096+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578645, 1) } } ] } sort: {} projection: {} 2019-09-04T06:30:45.096+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:45.096+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:30:45.096+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578645, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:30:45.096+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:45.096+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:45.096+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:45.096+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578645, 1) || First: notFirst: full path: ts 2019-09-04T06:30:45.096+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:45.096+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578645, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:45.096+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:45.096+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:30:45.096+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:45.096+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:45.096+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:45.096+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:45.097+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
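
The D5 QUERY lines around this point show how "setting minvalid to at least { ts: Timestamp(1567578645, 1), t: 1 }" is carried out: the write is guarded by the rooted $or filter { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578645, 1) } } ] } on local.replset.minvalid, and the subplanner prices each branch separately. Only the _id index exists on that collection, so every branch yields zero indexed solutions and falls back to COLLSCAN (the planning continues into the lines that follow), which is harmless because the collection holds a single document. A hedged pymongo sketch that re-issues the same filter shape and inspects the plan via explain(), illustrative only, with the Timestamp literal copied from the log:

    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("cmodb803.togewa.com", 27019)
    minvalid = client.local["replset.minvalid"]

    # Same rooted $or the subplanner is pricing in the surrounding log lines.
    query = {"$or": [
        {"t": {"$lt": 1}},
        {"t": 1, "ts": {"$lt": Timestamp(1567578645, 1)}},
    ]}

    # With only the _id index available, the winning plan should bottom out
    # in collection scans, matching the "outputting a collscan" entries here.
    plan = minvalid.find(query).explain()
    print(plan["queryPlanner"]["winningPlan"])
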
2019-09-04T06:30:45.097+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:45.097+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:45.097+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578645, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:45.097+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:45.097+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:45.097+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:45.097+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578645, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:45.097+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:45.097+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578645, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:45.097+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9825 2019-09-04T06:30:45.097+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:45.097+0000 D3 STORAGE [repl-writer-worker-6] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:45.097+0000 D3 REPL [repl-writer-worker-6] applying op: { ts: Timestamp(1567578645, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578645094), o: { $v: 1, $set: { ping: new Date(1567578645091), up: 2545 } } }, oplog application mode: Secondary 2019-09-04T06:30:45.097+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578645, 1) 2019-09-04T06:30:45.097+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 9827 2019-09-04T06:30:45.097+0000 D2 QUERY [repl-writer-worker-6] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:30:45.097+0000 D4 WRITE [repl-writer-worker-6] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:30:45.097+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 9827 2019-09-04T06:30:45.097+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:45.097+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578645, 1), t: 1 }({ ts: Timestamp(1567578645, 1), t: 1 }) 2019-09-04T06:30:45.097+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578645, 1) 2019-09-04T06:30:45.097+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9826 2019-09-04T06:30:45.097+0000 D5 QUERY [rsSync-0] Beginning planning... 
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:45.097+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:45.097+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:30:45.097+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:45.097+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:45.097+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:45.097+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9826
2019-09-04T06:30:45.097+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578645, 1)
2019-09-04T06:30:45.097+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9831
2019-09-04T06:30:45.097+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9831
2019-09-04T06:30:45.097+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578645, 1), t: 1 }({ ts: Timestamp(1567578645, 1), t: 1 })
2019-09-04T06:30:45.097+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:45.097+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:45.097+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 672 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:15.097+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:45.097+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:15.097+0000
2019-09-04T06:30:45.097+0000 D2 ASIO [RS] Request 672 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpVisible: { ts: Timestamp(1567578645, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578645, 1), $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578645, 1) }
2019-09-04T06:30:45.097+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpVisible: { ts: Timestamp(1567578645, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578645, 1), $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578645, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:45.097+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:45.097+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:15.097+0000
2019-09-04T06:30:45.098+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:30:45.098+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:45.098+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:45.098+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 673 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:15.098+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, durableWallTime: new Date(1567578638490), appliedOpTime: { ts: Timestamp(1567578638, 2), t: 1 }, appliedWallTime: new Date(1567578638490), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578638, 2), t: 1 }, lastCommittedWall: new Date(1567578638490), lastOpVisible: { ts: Timestamp(1567578638, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:45.098+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:15.098+0000
2019-09-04T06:30:45.098+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578645, 1), t: 1 }
2019-09-04T06:30:45.098+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 674 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:55.098+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578638, 2), t: 1 } }
2019-09-04T06:30:45.098+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:15.098+0000
2019-09-04T06:30:45.098+0000 D2 ASIO [RS] Request 673 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpVisible: { ts: Timestamp(1567578645, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578645, 1), $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578645, 1) }
2019-09-04T06:30:45.098+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpVisible: { ts: Timestamp(1567578645, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578645, 1), $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578645, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:45.098+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:45.098+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:15.098+0000
2019-09-04T06:30:45.098+0000 D2 ASIO [RS] Request 674 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpVisible: { ts: Timestamp(1567578645, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpApplied: { ts: Timestamp(1567578645, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578645, 1), $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578645, 1) }
2019-09-04T06:30:45.098+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpVisible: { ts: Timestamp(1567578645, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpApplied: { ts: Timestamp(1567578645, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578645, 1), $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578645, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:45.098+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:45.098+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:45.098+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.098+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.098+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578640, 1)
2019-09-04T06:30:45.098+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:55.213+0000
2019-09-04T06:30:45.098+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:55.373+0000
2019-09-04T06:30:45.098+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 675 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:55.098+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578645, 1), t: 1 } }
2019-09-04T06:30:45.098+0000 D3 REPL [conn208] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.098+0000 D3 REPL [conn208] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.839+0000
2019-09-04T06:30:45.098+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:15.098+0000
2019-09-04T06:30:45.098+0000 D3 REPL [conn228] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.098+0000 D3 REPL [conn228] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:56.304+0000
2019-09-04T06:30:45.098+0000 D3 REPL [conn245] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.098+0000 D3 REPL [conn245] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.767+0000
2019-09-04T06:30:45.098+0000 D3 REPL [conn252] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.098+0000 D3 REPL [conn252] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.897+0000
2019-09-04T06:30:45.098+0000 D3 REPL [conn254] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.098+0000 D3 REPL [conn254] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.962+0000
2019-09-04T06:30:45.098+0000 D3 REPL [conn247] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.098+0000 D3 REPL [conn247] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:59.750+0000
2019-09-04T06:30:45.098+0000 D3 REPL [conn232] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.098+0000 D3 REPL [conn232] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.661+0000
2019-09-04T06:30:45.098+0000 D3 REPL [conn255] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.098+0000 D3 REPL [conn255] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.877+0000
2019-09-04T06:30:45.098+0000 D3 REPL [conn222] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn222] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.822+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn237] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn237] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.409+0000
2019-09-04T06:30:45.099+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:45.099+0000 D3 REPL [conn240] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:14.839+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn240] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.874+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn256] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn256] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.918+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn229] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn229] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn257] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn257] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.293+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn242] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn242] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.507+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn231] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn231] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.190+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn251] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn251] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.763+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn235] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn235] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.840+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn249] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn249] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.702+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn253] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn253] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.925+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn225] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn225] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.468+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn238] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn238] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:45.824+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn250] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn250] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.753+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn246] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn246] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:52.054+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn248] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn248] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.456+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn243] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn243] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.662+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn223] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn223] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.644+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn244] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn244] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.754+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn241] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn241] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.879+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn227] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn227] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.888+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn224] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn224] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn239] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn239] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.133+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn258] Got notified of new snapshot: { ts: Timestamp(1567578645, 1), t: 1 }, 2019-09-04T06:30:45.094+0000
2019-09-04T06:30:45.099+0000 D3 REPL [conn258] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:14.433+0000
2019-09-04T06:30:45.135+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:45.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:45.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:45.173+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:45.196+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578645, 1)
2019-09-04T06:30:45.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:45.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:45.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:45.274+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:45.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:45.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:45.374+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:45.474+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:45.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:45.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:45.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:45.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:45.574+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:45.634+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:45.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:45.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:45.674+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:45.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:45.774+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:45.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:45.814+0000 I NETWORK [listener] connection accepted from 10.108.2.46:40992 #259 (93 connections now open)
2019-09-04T06:30:45.814+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:45.814+0000 D2 COMMAND [conn259] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:45.814+0000 I NETWORK [conn259] received client metadata from 10.108.2.46:40992 conn259: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:45.814+0000 I COMMAND [conn259] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:45.822+0000 I COMMAND [conn222] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578609, 1), signature: { hash: BinData(0, CAAD09B6BD8A5CCC5E7CF668FD260233128308EF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:30:45.822+0000 D1 - [conn222] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:45.822+0000 W - [conn222] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:45.825+0000 I COMMAND [conn238] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 501D3C9598BB496C2DB69F206C3057FEAA271409), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:30:45.825+0000 D1 - [conn238] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:45.825+0000 W - [conn238] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:45.834+0000 I NETWORK [listener] connection accepted from 10.108.2.53:50710 #260 (94 connections now open)
2019-09-04T06:30:45.834+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:45.834+0000 D2 COMMAND [conn260] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:45.834+0000 I NETWORK [conn260] received client metadata from 10.108.2.53:50710 conn260: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:45.834+0000 I COMMAND [conn260] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:45.839+0000 I - [conn222] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:45.839+0000 D1 COMMAND [conn222] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578609, 1), signature: { hash: BinData(0, CAAD09B6BD8A5CCC5E7CF668FD260233128308EF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:45.839+0000 D1 - [conn222] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:45.839+0000 W - [conn222] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:45.840+0000 I COMMAND [conn208] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 55B04E6A9E4D06C4F65F23BA7FFE4919B6F8B920), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:45.840+0000 D1 - [conn208] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:45.840+0000 W - [conn208] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:45.840+0000 I COMMAND [conn235] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 382BE338B8E062C89DE3BDFB8F55724B12CF9B0F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:30:45.841+0000 D1 - [conn235] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:45.841+0000 W - [conn235] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:45.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:45.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:45.856+0000 I - [conn238] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:45.856+0000 D1 COMMAND [conn238] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 501D3C9598BB496C2DB69F206C3057FEAA271409), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:45.856+0000 D1 - [conn238] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:45.856+0000 W - [conn238] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:45.874+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:45.876+0000 I - [conn222] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 
0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : 
"9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" 
}, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:45.876+0000 W COMMAND [conn222] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:30:45.876+0000 I COMMAND [conn222]
command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578609, 1), signature: { hash: BinData(0, CAAD09B6BD8A5CCC5E7CF668FD260233128308EF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:30:45.876+0000 D2 NETWORK [conn222] Session from 10.108.2.50:50094 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:45.876+0000 I NETWORK [conn222] end connection 10.108.2.50:50094 (93 connections now open) 2019-09-04T06:30:45.896+0000 I - [conn238] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"
_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" 
: "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:45.896+0000 W COMMAND [conn238] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:30:45.896+0000 I COMMAND [conn238] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 501D3C9598BB496C2DB69F206C3057FEAA271409), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30041ms 2019-09-04T06:30:45.896+0000 D2 NETWORK [conn238] Session from 10.108.2.46:40972 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:45.896+0000 I NETWORK [conn238] end connection 10.108.2.46:40972 (92 connections now open) 2019-09-04T06:30:45.912+0000 I - [conn235] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:45.913+0000 D1 COMMAND [conn235] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 382BE338B8E062C89DE3BDFB8F55724B12CF9B0F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:45.913+0000 D1 - [conn235] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:45.913+0000 W - [conn235] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:45.929+0000 I - [conn208] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 
0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : 
"/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, 
"buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:45.929+0000 D1 COMMAND [conn208] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 55B04E6A9E4D06C4F65F23BA7FFE4919B6F8B920), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:45.929+0000 D1 - [conn208] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:45.929+0000 W - [conn208] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:45.949+0000 I - [conn235] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 
0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 
3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : 
"94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:45.949+0000 W COMMAND [conn235] Unable to gather storage statistics for a slow operation due to lock aquire timeout 
2019-09-04T06:30:45.949+0000 I COMMAND [conn235] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 382BE338B8E062C89DE3BDFB8F55724B12CF9B0F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30082ms 2019-09-04T06:30:45.949+0000 D2 NETWORK [conn235] Session from 10.108.2.53:50696 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:45.949+0000 I NETWORK [conn235] end connection 10.108.2.53:50696 (91 connections now open) 2019-09-04T06:30:45.969+0000 I - [conn208] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" 
: "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:30:45.969+0000 W COMMAND [conn208] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:30:45.970+0000 I COMMAND [conn208] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578610, 1), signature: { hash: BinData(0, 55B04E6A9E4D06C4F65F23BA7FFE4919B6F8B920), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30100ms
2019-09-04T06:30:45.970+0000 D2 NETWORK [conn208] Session from 10.108.2.49:53344 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:30:45.970+0000 I NETWORK [conn208] end connection 10.108.2.49:53344 (90 connections now open)
2019-09-04T06:30:45.974+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:45.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:45.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:45.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:45.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:46.012+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52150 #261 (91 connections now open)
2019-09-04T06:30:46.012+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
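The two slow-op entries above record the same internal key refresh: a find on admin.system.keys with readConcern { level: "majority", afterOpTime: ... } and maxTimeMS: 30000, failing server-side with MaxTimeMSExpired (errCode:50) because the requested optime never became majority-visible. A minimal pymongo sketch of that command shape, assuming direct access to this member (host/port from this log's startup banner) and the deployment's disabled authorization; the driver-internal fields ($replData, $configServerState, $clusterTime) are omitted:

```python
# Sketch: the command shape from the slow-op entries above, minus the
# driver-internal fields. ExecutionTimeout corresponds to the server's
# MaxTimeMSExpired (errCode:50).
from bson.timestamp import Timestamp
from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout
from pymongo.read_concern import ReadConcern

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
keys = client.admin.get_collection(
    "system.keys", read_concern=ReadConcern("majority"))

try:
    cursor = (keys
              .find({"purpose": "HMAC",
                     "expiresAt": {"$gt": Timestamp(1579858365, 0)}})
              .sort("expiresAt", 1)
              .max_time_ms(30000))  # matches maxTimeMS: 30000 above
    print(list(cursor))
except ExecutionTimeout:
    print("operation exceeded time limit")  # what conn235/conn208 hit
```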
2019-09-04T06:30:46.013+0000 D2 COMMAND [conn261] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:46.013+0000 I NETWORK [conn261] received client metadata from 10.108.2.58:52150 conn261: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:46.013+0000 I COMMAND [conn261] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:46.013+0000 D2 COMMAND [conn261] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578643, 1), signature: { hash: BinData(0, 77F4286AFD23F9458372B5E8BE90AECE0C1F6CA6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:30:46.013+0000 D1 REPL [conn261] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578645, 1), t: 1 }
2019-09-04T06:30:46.013+0000 D3 REPL [conn261] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.023+0000
2019-09-04T06:30:46.014+0000 D2 COMMAND [conn233] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578639, 1), signature: { hash: BinData(0, A2EE588ECDB33D4C640333310703F752DA8D0A68), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:30:46.014+0000 D1 REPL [conn233] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578645, 1), t: 1 }
2019-09-04T06:30:46.014+0000 D3 REPL [conn233] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.024+0000
2019-09-04T06:30:46.023+0000 D2 COMMAND [conn234] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 05433FFC1F19D7D8CE5BDF35FCE276DA0CD821FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:30:46.023+0000 D1 REPL [conn234] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578645, 1), t: 1 }
2019-09-04T06:30:46.023+0000 D3 REPL [conn234] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.033+0000
2019-09-04T06:30:46.030+0000 D2 COMMAND [conn236] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 05433FFC1F19D7D8CE5BDF35FCE276DA0CD821FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:30:46.030+0000 D1 REPL [conn236] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578645, 1), t: 1 }
2019-09-04T06:30:46.030+0000 D3 REPL [conn236] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.040+0000
2019-09-04T06:30:46.036+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36550 #262 (92 connections now open)
2019-09-04T06:30:46.036+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:46.036+0000 D2 COMMAND [conn262] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:46.036+0000 I NETWORK [conn262] received client metadata from 10.108.2.45:36550 conn262: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:46.036+0000 I COMMAND [conn262] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:46.041+0000 D2 COMMAND [conn262] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:30:46.041+0000 D1 REPL [conn262] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578645, 1), t: 1 }
2019-09-04T06:30:46.041+0000 D3 REPL [conn262] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.051+0000
2019-09-04T06:30:46.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.074+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:46.084+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.084+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.096+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:46.096+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:46.096+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578645, 1)
2019-09-04T06:30:46.096+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9865
2019-09-04T06:30:46.096+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:46.096+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:46.096+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9865
2019-09-04T06:30:46.097+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9868
2019-09-04T06:30:46.097+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9868
2019-09-04T06:30:46.097+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578645, 1), t: 1 }({ ts: Timestamp(1567578645, 1), t: 1 })
2019-09-04T06:30:46.134+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.175+0000 D4 STORAGE [WTJournalFlusher] flushed journal
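conn261 and conn262 above are intra-cluster clients (driver name NetworkInterfaceTL) opening sessions with an isMaster handshake plus client metadata. The same command can be issued from any driver; a sketch, assuming the same no-auth deployment and host as above:

```python
# Sketch: the isMaster handshake the internal clients above issue,
# run as a plain admin command.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
reply = client.admin.command("isMaster")

# The fields behind the reslen:907 replies: role and set topology.
print(reply["ismaster"], reply["secondary"], reply.get("setName"))
print(reply.get("hosts"))
```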
2019-09-04T06:30:46.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, A900B5AA37CD7FF0E04EF7EA782683DB9B504D66), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:46.233+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:30:46.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, A900B5AA37CD7FF0E04EF7EA782683DB9B504D66), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:46.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, A900B5AA37CD7FF0E04EF7EA782683DB9B504D66), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:46.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), opTime: { ts: Timestamp(1567578645, 1), t: 1 }, wallTime: new Date(1567578645094) }
2019-09-04T06:30:46.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, A900B5AA37CD7FF0E04EF7EA782683DB9B504D66), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:46.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:46.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.275+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:46.296+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.296+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.360+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.375+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:46.400+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34256 #263 (93 connections now open)
2019-09-04T06:30:46.400+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:46.401+0000 D2 COMMAND [conn263] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:46.401+0000 I NETWORK [conn263] received client metadata from 10.108.2.57:34256 conn263: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:46.401+0000 I COMMAND [conn263] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:46.405+0000 D2 COMMAND [conn263] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:30:46.405+0000 D1 REPL [conn263] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578645, 1), t: 1 }
2019-09-04T06:30:46.405+0000 D3 REPL [conn263] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.415+0000
2019-09-04T06:30:46.475+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:46.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.575+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:46.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.675+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:46.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.775+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:46.796+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.796+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:46.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:46.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
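The waitUntilOpTime lines show reads blocking until the requested afterOpTime is covered by a majority snapshot. afterOpTime itself is internal to the cluster; the driver-level analogue is a causally consistent session, which threads readConcern afterClusterTime through successive reads so a later read waits for an earlier read's point in time. A sketch, with host and namespace taken from the log:

```python
# Sketch: causally consistent session, the driver-level analogue of the
# afterOpTime wait above; the second read blocks until the first read's
# operationTime is visible, much like waitUntilOpTime.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
with client.start_session(causal_consistency=True) as session:
    client.config.shards.find_one({}, session=session)   # records operationTime
    doc = client.config.shards.find_one({}, session=session)  # waits if behind
    print(doc)
```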
"admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:46.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:46.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 676) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:46.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 676 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:56.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:46.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:14.839+0000 2019-09-04T06:30:46.838+0000 D2 ASIO [Replication] Request 676 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), opTime: { ts: Timestamp(1567578645, 1), t: 1 }, wallTime: new Date(1567578645094), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpVisible: { ts: Timestamp(1567578645, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578645, 1), $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578645, 1) } 2019-09-04T06:30:46.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), opTime: { ts: Timestamp(1567578645, 1), t: 1 }, wallTime: new Date(1567578645094), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpVisible: { ts: Timestamp(1567578645, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578645, 1), $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578645, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:46.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:46.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 676) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), opTime: { ts: Timestamp(1567578645, 1), t: 1 }, wallTime: new Date(1567578645094), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpVisible: { ts: Timestamp(1567578645, 1), t: 
1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578645, 1), $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578645, 1) } 2019-09-04T06:30:46.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:46.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:48.838Z 2019-09-04T06:30:46.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:14.839+0000 2019-09-04T06:30:46.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:46.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 677) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:46.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 677 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:56.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:46.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:14.839+0000 2019-09-04T06:30:46.839+0000 D2 ASIO [Replication] Request 677 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), opTime: { ts: Timestamp(1567578645, 1), t: 1 }, wallTime: new Date(1567578645094), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpVisible: { ts: Timestamp(1567578645, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578645, 1), $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578645, 1) } 2019-09-04T06:30:46.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), opTime: { ts: Timestamp(1567578645, 1), t: 1 }, wallTime: new Date(1567578645094), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpVisible: { ts: Timestamp(1567578645, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578645, 1), $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } 
}, operationTime: Timestamp(1567578645, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:46.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:46.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 677) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), opTime: { ts: Timestamp(1567578645, 1), t: 1 }, wallTime: new Date(1567578645094), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpVisible: { ts: Timestamp(1567578645, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578645, 1), $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578645, 1) } 2019-09-04T06:30:46.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:46.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:30:55.373+0000 2019-09-04T06:30:46.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:30:58.105+0000 2019-09-04T06:30:46.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:46.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:48.839Z 2019-09-04T06:30:46.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:46.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:16.839+0000 2019-09-04T06:30:46.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:16.839+0000 2019-09-04T06:30:46.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:46.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:46.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:46.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:46.875+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:46.975+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:46.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:46.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:46.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:46.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:46.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:46.996+0000 I COMMAND [conn15] 
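The heartbeat exchange above carries each member's state, term, opTime and sync source. replSetGetStatus reports the same fields in one document; a sketch, assuming direct access to a configrs member as elsewhere in this log:

```python
# Sketch: replSetGetStatus summarizes what the replSetHeartbeat traffic
# above exchanges member by member.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
status = client.admin.command("replSetGetStatus")

for m in status["members"]:
    # stateStr mirrors state: 1/2 in the heartbeat responses; optime.ts
    # mirrors the opTime/durableOpTime fields logged above.
    print(m["name"], m["stateStr"], m["optime"]["ts"])
```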
2019-09-04T06:30:47.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:47.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.061+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:47.061+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:30:46.839+0000
2019-09-04T06:30:47.061+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:30:46.838+0000
2019-09-04T06:30:47.061+0000 D3 REPL [replexec-3] stalest member MemberId(2) date: 2019-09-04T06:30:46.838+0000
2019-09-04T06:30:47.061+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:30:56.838+0000
2019-09-04T06:30:47.061+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:16.839+0000
2019-09-04T06:30:47.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, A900B5AA37CD7FF0E04EF7EA782683DB9B504D66), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:47.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:30:47.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, A900B5AA37CD7FF0E04EF7EA782683DB9B504D66), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:47.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, A900B5AA37CD7FF0E04EF7EA782683DB9B504D66), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:47.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), opTime: { ts: Timestamp(1567578645, 1), t: 1 }, wallTime: new Date(1567578645094) }
2019-09-04T06:30:47.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, A900B5AA37CD7FF0E04EF7EA782683DB9B504D66), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.076+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:47.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.096+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:47.096+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:47.096+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578645, 1)
2019-09-04T06:30:47.096+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9902
2019-09-04T06:30:47.096+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:47.096+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:47.096+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9902
2019-09-04T06:30:47.097+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9905
2019-09-04T06:30:47.097+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9905
2019-09-04T06:30:47.097+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578645, 1), t: 1 }({ ts: Timestamp(1567578645, 1), t: 1 })
2019-09-04T06:30:47.135+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.176+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:47.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:47.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:47.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.276+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:47.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.376+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:47.476+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:47.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.505+0000 D2 ASIO [RS] Request 675 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578647, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578647499), o: { $v: 1, $set: { ping: new Date(1567578647498) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpVisible: { ts: Timestamp(1567578645, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpApplied: { ts: Timestamp(1567578647, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578645, 1), $clusterTime: { clusterTime: Timestamp(1567578647, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 1) }
2019-09-04T06:30:47.505+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578647, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578647499), o: { $v: 1, $set: { ping: new Date(1567578647498) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpVisible: { ts: Timestamp(1567578645, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpApplied: { ts: Timestamp(1567578647, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578645, 1), $clusterTime: { clusterTime: Timestamp(1567578647, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:47.505+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:47.505+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578647, 1) and ending at ts: Timestamp(1567578647, 1)
2019-09-04T06:30:47.505+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:58.105+0000
2019-09-04T06:30:47.505+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:58.166+0000
2019-09-04T06:30:47.506+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:47.506+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:16.839+0000
2019-09-04T06:30:47.506+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:47.506+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:47.506+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578645, 1)
2019-09-04T06:30:47.506+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9920
2019-09-04T06:30:47.506+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:47.506+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:47.506+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9920
2019-09-04T06:30:47.506+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:47.506+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:47.506+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:30:47.506+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578645, 1)
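The fetcher batch above contains a single oplog entry, an op: "u" update of config.lockpings, which the batcher then hands to the applier threads. The same entry can be read back from the local oplog; a sketch, assuming direct member access:

```python
# Sketch: read back the config.lockpings update that the oplog fetcher
# batch above delivered (op: "u", ns: "config.lockpings").
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
oplog = client.local["oplog.rs"]

last = oplog.find({"ns": "config.lockpings"}).sort("$natural", -1).limit(1)
for entry in last:
    print(entry["ts"], entry["op"], entry["o"])
```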
2019-09-04T06:30:47.506+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9923
2019-09-04T06:30:47.506+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:47.506+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578647, 1) }
2019-09-04T06:30:47.506+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:47.506+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9923
2019-09-04T06:30:47.506+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578647, 1), t: 1 }
2019-09-04T06:30:47.506+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9906
2019-09-04T06:30:47.506+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9906
2019-09-04T06:30:47.506+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9926
2019-09-04T06:30:47.506+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9926
2019-09-04T06:30:47.506+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:47.506+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 9928
2019-09-04T06:30:47.506+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578647, 1)
2019-09-04T06:30:47.506+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578647, 1)
2019-09-04T06:30:47.506+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 9928
2019-09-04T06:30:47.506+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:47.506+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:30:47.506+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9927
2019-09-04T06:30:47.506+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9927
2019-09-04T06:30:47.506+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9930
2019-09-04T06:30:47.506+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9930
2019-09-04T06:30:47.506+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578647, 1), t: 1 }({ ts: Timestamp(1567578647, 1), t: 1 })
2019-09-04T06:30:47.506+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578647, 1)
2019-09-04T06:30:47.506+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9931
2019-09-04T06:30:47.506+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578647, 1) } } ] } sort: {} projection: {}
2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Beginning planning...
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578647, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578647, 1) || First: notFirst: full path: ts 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578647, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578647, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578647, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:30:47.506+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578647, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:47.507+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9931 2019-09-04T06:30:47.507+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:47.507+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:47.507+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578647, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578647499), o: { $v: 1, $set: { ping: new Date(1567578647498) } } }, oplog application mode: Secondary 2019-09-04T06:30:47.507+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578647, 1) 2019-09-04T06:30:47.507+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 9933 2019-09-04T06:30:47.507+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" } 2019-09-04T06:30:47.507+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:30:47.507+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 9933 2019-09-04T06:30:47.507+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:47.507+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578647, 1), t: 1 }({ ts: Timestamp(1567578647, 1), t: 1 }) 2019-09-04T06:30:47.507+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578647, 1) 2019-09-04T06:30:47.507+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9932 2019-09-04T06:30:47.507+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:47.507+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:47.507+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:30:47.507+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:47.507+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:47.507+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:47.507+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9932 2019-09-04T06:30:47.507+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578647, 1) 2019-09-04T06:30:47.507+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:47.507+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9936 2019-09-04T06:30:47.507+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578647, 1), t: 1 }, appliedWallTime: new Date(1567578647499), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpVisible: { ts: Timestamp(1567578645, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:47.507+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 678 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:17.507+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578647, 1), t: 1 }, appliedWallTime: new Date(1567578647499), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578645, 1), t: 1 }, lastCommittedWall: new Date(1567578645094), lastOpVisible: { ts: Timestamp(1567578645, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:47.507+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:17.507+0000 2019-09-04T06:30:47.507+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9936 2019-09-04T06:30:47.507+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578647, 1), t: 1 }({ ts: Timestamp(1567578647, 1), t: 1 }) 2019-09-04T06:30:47.507+0000 D2 ASIO [RS] Request 678 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new Date(1567578647499), lastOpVisible: { ts: Timestamp(1567578647, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 1), $clusterTime: { clusterTime: Timestamp(1567578647, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 1) } 2019-09-04T06:30:47.507+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new Date(1567578647499), lastOpVisible: { ts: Timestamp(1567578647, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 1), $clusterTime: { clusterTime: Timestamp(1567578647, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:47.507+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:47.507+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:17.507+0000 2019-09-04T06:30:47.508+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578647, 1), t: 1 } 2019-09-04T06:30:47.508+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 679 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:57.508+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578645, 1), t: 1 } } 2019-09-04T06:30:47.508+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:17.507+0000 2019-09-04T06:30:47.508+0000 D2 ASIO [RS] Request 679 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new Date(1567578647499), lastOpVisible: { ts: Timestamp(1567578647, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new Date(1567578647499), lastOpApplied: { ts: Timestamp(1567578647, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 1), $clusterTime: { clusterTime: Timestamp(1567578647, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 1) } 2019-09-04T06:30:47.508+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new Date(1567578647499), lastOpVisible: { ts: Timestamp(1567578647, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new 
Date(1567578647499), lastOpApplied: { ts: Timestamp(1567578647, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 1), $clusterTime: { clusterTime: Timestamp(1567578647, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:47.508+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:47.508+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:30:47.508+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.508+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.508+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578642, 1) 2019-09-04T06:30:47.508+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:30:58.166+0000 2019-09-04T06:30:47.508+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:57.652+0000 2019-09-04T06:30:47.508+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:47.508+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 680 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:57.508+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578647, 1), t: 1 } } 2019-09-04T06:30:47.508+0000 D3 REPL [conn245] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.508+0000 D3 REPL [conn245] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.767+0000 2019-09-04T06:30:47.508+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:17.507+0000 2019-09-04T06:30:47.508+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:16.839+0000 2019-09-04T06:30:47.508+0000 D3 REPL [conn247] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.508+0000 D3 REPL [conn247] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:59.750+0000 2019-09-04T06:30:47.508+0000 D3 REPL [conn232] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.508+0000 D3 REPL [conn232] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.661+0000 2019-09-04T06:30:47.508+0000 D3 REPL [conn240] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.508+0000 D3 REPL [conn240] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.874+0000 2019-09-04T06:30:47.508+0000 D3 REPL [conn256] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.508+0000 D3 REPL [conn256] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.918+0000 2019-09-04T06:30:47.508+0000 D3 REPL [conn249] Got notified of new snapshot: { ts: 
Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.508+0000 D3 REPL [conn249] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.702+0000 2019-09-04T06:30:47.508+0000 D3 REPL [conn261] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.508+0000 D3 REPL [conn261] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.023+0000 2019-09-04T06:30:47.508+0000 D3 REPL [conn223] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn223] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.644+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn246] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn246] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:52.054+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn243] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn243] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.662+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn244] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn244] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.754+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn227] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn227] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.888+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn239] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn239] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.133+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn236] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn236] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.040+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn262] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn262] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.051+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn263] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn263] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.415+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn228] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn228] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:56.304+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn252] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn252] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.897+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn254] Got notified of new snapshot: { ts: 
Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn254] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.962+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn255] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn255] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.877+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn237] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn237] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.409+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn229] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn229] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn242] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn242] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.507+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn251] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn251] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.763+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn257] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn257] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.293+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn234] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn234] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.033+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn233] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn233] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.024+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn225] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn225] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.468+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn250] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn250] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.753+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn248] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn248] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.456+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn258] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn258] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:14.433+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn224] Got notified of new snapshot: { ts: 
Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn224] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn231] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn231] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.190+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn253] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn253] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.925+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn241] Got notified of new snapshot: { ts: Timestamp(1567578647, 1), t: 1 }, 2019-09-04T06:30:47.499+0000 2019-09-04T06:30:47.509+0000 D3 REPL [conn241] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.879+0000 2019-09-04T06:30:47.512+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:30:47.512+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:47.512+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578647, 1), t: 1 }, durableWallTime: new Date(1567578647499), appliedOpTime: { ts: Timestamp(1567578647, 1), t: 1 }, appliedWallTime: new Date(1567578647499), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new Date(1567578647499), lastOpVisible: { ts: Timestamp(1567578647, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:47.512+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 681 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:17.512+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578647, 1), t: 1 }, durableWallTime: new Date(1567578647499), appliedOpTime: { ts: Timestamp(1567578647, 1), t: 1 }, appliedWallTime: new Date(1567578647499), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new Date(1567578647499), lastOpVisible: { ts: Timestamp(1567578647, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:47.512+0000 D3 
EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:17.507+0000 2019-09-04T06:30:47.513+0000 D2 ASIO [RS] Request 681 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new Date(1567578647499), lastOpVisible: { ts: Timestamp(1567578647, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 1), $clusterTime: { clusterTime: Timestamp(1567578647, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 1) } 2019-09-04T06:30:47.513+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new Date(1567578647499), lastOpVisible: { ts: Timestamp(1567578647, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 1), $clusterTime: { clusterTime: Timestamp(1567578647, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:47.513+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:47.513+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:17.507+0000 2019-09-04T06:30:47.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:47.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:47.576+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:47.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:47.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:47.606+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578647, 1) 2019-09-04T06:30:47.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:47.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:47.639+0000 D2 COMMAND [conn21] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578645, 1), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a1702d1a496712d7201'), operName: "", parentOperId: "5d6f5a1702d1a496712d7200" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, A900B5AA37CD7FF0E04EF7EA782683DB9B504D66), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578645, 1), t: 1 } }, $db: "config" } 2019-09-04T06:30:47.639+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 
5d6f5a1702d1a496712d7200|5d6f5a1702d1a496712d7201 2019-09-04T06:30:47.639+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578645, 1), t: 1 } } } 2019-09-04T06:30:47.639+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:30:47.639+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578645, 1), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a1702d1a496712d7201'), operName: "", parentOperId: "5d6f5a1702d1a496712d7200" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, A900B5AA37CD7FF0E04EF7EA782683DB9B504D66), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578645, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578647, 1) 2019-09-04T06:30:47.639+0000 D5 QUERY [conn21] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:47.639+0000 D5 QUERY [conn21] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:30:47.639+0000 D5 QUERY [conn21] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:30:47.639+0000 D5 QUERY [conn21] Rated tree: $and 2019-09-04T06:30:47.639+0000 D5 QUERY [conn21] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:47.639+0000 D5 QUERY [conn21] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:47.639+0000 D2 QUERY [conn21] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:47.639+0000 D3 STORAGE [conn21] WT begin_transaction for snapshot id 9942 2019-09-04T06:30:47.639+0000 D3 STORAGE [conn21] WT rollback_transaction for snapshot id 9942 2019-09-04T06:30:47.639+0000 I COMMAND [conn21] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578645, 1), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a1702d1a496712d7201'), operName: "", parentOperId: "5d6f5a1702d1a496712d7200" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, A900B5AA37CD7FF0E04EF7EA782683DB9B504D66), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578645, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:30:47.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:47.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:47.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:47.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:47.676+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:47.758+0000 D2 ASIO [RS] Request 680 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578647, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578647756), o: { $v: 1, $set: { ping: new Date(1567578647750) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new Date(1567578647499), lastOpVisible: { ts: Timestamp(1567578647, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new Date(1567578647499), lastOpApplied: { ts: Timestamp(1567578647, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 1), $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 2) } 2019-09-04T06:30:47.758+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578647, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578647756), o: { $v: 1, $set: { ping: new Date(1567578647750) } } } ], id: 
2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new Date(1567578647499), lastOpVisible: { ts: Timestamp(1567578647, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new Date(1567578647499), lastOpApplied: { ts: Timestamp(1567578647, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 1), $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:47.758+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:47.758+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578647, 2) and ending at ts: Timestamp(1567578647, 2) 2019-09-04T06:30:47.758+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:57.652+0000 2019-09-04T06:30:47.758+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:58.549+0000 2019-09-04T06:30:47.758+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:47.758+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:16.839+0000 2019-09-04T06:30:47.758+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:47.758+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:47.758+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578647, 1) 2019-09-04T06:30:47.758+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9947 2019-09-04T06:30:47.758+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:47.758+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:47.758+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9947 2019-09-04T06:30:47.758+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:47.758+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:30:47.758+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:47.758+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578647, 1) 2019-09-04T06:30:47.758+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578647, 2) } 2019-09-04T06:30:47.758+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9950 2019-09-04T06:30:47.758+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: 
"local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:47.758+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:47.758+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9950 2019-09-04T06:30:47.758+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9938 2019-09-04T06:30:47.758+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578647, 2), t: 1 } 2019-09-04T06:30:47.758+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9938 2019-09-04T06:30:47.758+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9953 2019-09-04T06:30:47.758+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9953 2019-09-04T06:30:47.758+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:47.759+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 9955 2019-09-04T06:30:47.759+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578647, 2) 2019-09-04T06:30:47.759+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578647, 2) 2019-09-04T06:30:47.759+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 9955 2019-09-04T06:30:47.759+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:47.759+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:30:47.759+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9954 2019-09-04T06:30:47.759+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9954 2019-09-04T06:30:47.759+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9957 2019-09-04T06:30:47.759+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9957 2019-09-04T06:30:47.759+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578647, 2), t: 1 }({ ts: Timestamp(1567578647, 2), t: 1 }) 2019-09-04T06:30:47.759+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578647, 2) 2019-09-04T06:30:47.759+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9958 2019-09-04T06:30:47.759+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578647, 2) } } ] } sort: {} projection: {} 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578647, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578647, 2) || First: notFirst: full path: ts 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578647, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578647, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578647, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578647, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:47.759+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9958 2019-09-04T06:30:47.759+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:47.759+0000 D3 STORAGE [repl-writer-worker-1] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:47.759+0000 D3 REPL [repl-writer-worker-1] applying op: { ts: Timestamp(1567578647, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578647756), o: { $v: 1, $set: { ping: new Date(1567578647750) } } }, oplog application mode: Secondary 2019-09-04T06:30:47.759+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578647, 2) 2019-09-04T06:30:47.759+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 9960 2019-09-04T06:30:47.759+0000 D2 QUERY [repl-writer-worker-1] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" } 2019-09-04T06:30:47.759+0000 D4 WRITE [repl-writer-worker-1] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:30:47.759+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 9960 2019-09-04T06:30:47.759+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:47.759+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578647, 2), t: 1 }({ ts: Timestamp(1567578647, 2), t: 1 }) 2019-09-04T06:30:47.759+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578647, 2) 2019-09-04T06:30:47.759+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9959 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:47.759+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:47.759+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:47.759+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 9959 2019-09-04T06:30:47.759+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578647, 2) 2019-09-04T06:30:47.759+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:47.759+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 9963 2019-09-04T06:30:47.759+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578647, 1), t: 1 }, durableWallTime: new Date(1567578647499), appliedOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, appliedWallTime: new Date(1567578647756), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new Date(1567578647499), lastOpVisible: { ts: Timestamp(1567578647, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:47.760+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 9963 2019-09-04T06:30:47.760+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578647, 2), t: 1 }({ ts: Timestamp(1567578647, 2), t: 1 }) 2019-09-04T06:30:47.760+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 682 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:17.760+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578647, 1), t: 1 }, durableWallTime: new Date(1567578647499), appliedOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, appliedWallTime: new Date(1567578647756), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new Date(1567578647499), lastOpVisible: { ts: Timestamp(1567578647, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:47.760+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:17.759+0000 2019-09-04T06:30:47.760+0000 D2 ASIO [RS] Request 682 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 2), $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 2) } 2019-09-04T06:30:47.760+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 2), $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:47.760+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:47.760+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:17.760+0000 2019-09-04T06:30:47.760+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:30:47.760+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:47.760+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), appliedOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, appliedWallTime: new Date(1567578647756), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new Date(1567578647499), lastOpVisible: { ts: Timestamp(1567578647, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:47.760+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 683 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:17.760+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), appliedOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, appliedWallTime: new Date(1567578647756), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, durableWallTime: new Date(1567578645094), appliedOpTime: { ts: Timestamp(1567578645, 1), t: 1 }, appliedWallTime: new Date(1567578645094), memberId: 2, cfgver: 2 } ], $replData: { 
term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 1), t: 1 }, lastCommittedWall: new Date(1567578647499), lastOpVisible: { ts: Timestamp(1567578647, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:47.760+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578647, 2), t: 1 } 2019-09-04T06:30:47.760+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:17.760+0000 2019-09-04T06:30:47.760+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 684 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:57.760+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578647, 1), t: 1 } } 2019-09-04T06:30:47.760+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:17.760+0000 2019-09-04T06:30:47.761+0000 D2 ASIO [RS] Request 683 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 2), $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 2) } 2019-09-04T06:30:47.761+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 2), $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:47.761+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:47.761+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:17.760+0000 2019-09-04T06:30:47.761+0000 D2 ASIO [RS] Request 684 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpApplied: { ts: Timestamp(1567578647, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
2019-09-04T06:30:47.761+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpApplied: { ts: Timestamp(1567578647, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 2), $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:47.761+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:47.761+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:47.761+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578642, 2)
2019-09-04T06:30:47.761+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:58.549+0000
2019-09-04T06:30:47.761+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:58.796+0000
2019-09-04T06:30:47.761+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:47.761+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:16.839+0000
2019-09-04T06:30:47.761+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 685 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:57.761+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578647, 2), t: 1 } }
2019-09-04T06:30:47.761+0000 D3 REPL [conn245] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn245] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.767+0000
2019-09-04T06:30:47.761+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:17.760+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn261] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn261] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.023+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn249] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn249] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.702+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn244] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn244] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.754+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn252] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn252] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.897+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn239] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn239] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.133+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn224] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn224] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn254] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn254] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.962+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn255] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn255] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.877+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn242] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn242] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.507+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn257] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn257] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.293+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn233] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn233] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.024+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn250] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn250] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.753+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn258] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn258] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:14.433+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn231] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn231] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.190+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn241] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn241] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.879+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn262] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn262] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.051+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn247] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.761+0000 D3 REPL [conn247] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:59.750+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn232] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn232] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.661+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn240] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn240] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.874+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn223] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn223] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.644+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn227] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn227] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.888+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn236] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn236] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.040+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn246] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn246] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:52.054+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn243] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn243] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.662+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn251] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn251] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.763+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn225] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn225] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.468+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn256] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn256] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.918+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn253] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn253] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.925+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn248] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn248] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.456+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn228] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn228] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:56.304+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn234] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn234] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.033+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn229] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn229] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn237] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn237] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.409+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn263] Got notified of new snapshot: { ts: Timestamp(1567578647, 2), t: 1 }, 2019-09-04T06:30:47.756+0000
2019-09-04T06:30:47.762+0000 D3 REPL [conn263] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.415+0000
2019-09-04T06:30:47.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.776+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:47.784+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.784+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.858+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578647, 2)
2019-09-04T06:30:47.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.877+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:47.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
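The burst of waitUntilOpTime lines above shows client operations parked until this secondary's snapshot reaches the optime they requested; one way to trigger exactly this wait from a driver is a causally consistent (afterClusterTime) read. A minimal sketch under that assumption, with an illustrative connection string:

    # Sketch: a causally consistent read pair; the second read carries
    # afterClusterTime, so the serving node waits (as logged above) until a
    # snapshot at that cluster time is available. Hosts are assumptions.
    from pymongo import MongoClient
    from pymongo.read_preferences import Secondary

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")
    coll = client.config.lockpings.with_options(read_preference=Secondary())
    with client.start_session(causal_consistency=True) as session:
        coll.find_one({}, session=session)  # records operationTime on the session
        coll.find_one({}, session=session)  # waits for that optime on a secondary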
2019-09-04T06:30:47.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.977+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:47.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:47.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:47.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:48.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.077+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:48.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.130+0000 D2 COMMAND [conn20] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.130+0000 I COMMAND [conn20] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.134+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.177+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:48.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, F1F754E8FB60493548060F46804288C811776BE0), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:48.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:30:48.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, F1F754E8FB60493548060F46804288C811776BE0), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:48.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, F1F754E8FB60493548060F46804288C811776BE0), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:48.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), opTime: { ts: Timestamp(1567578647, 2), t: 1 }, wallTime: new Date(1567578647756) }
2019-09-04T06:30:48.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, F1F754E8FB60493548060F46804288C811776BE0), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:48.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:48.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.277+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:48.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.377+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:48.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.477+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:48.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.513+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
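The replSetHeartbeat exchange on conn28 above carries each member's durable and applied optimes, set name, and sync source. The same view is available to an operator through replSetGetStatus; a sketch, with the host an illustrative assumption:

    # Sketch: print the member/optime state that the heartbeats above exchange.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        # name, state, last applied optime, and (in 4.2) the sync source
        print(m["name"], m["stateStr"], m["optimeDate"], m.get("syncingTo", ""))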
2019-09-04T06:30:48.513+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:30:48.513+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:30:48.513+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:30:48.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.577+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:48.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.678+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:48.758+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:48.759+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:48.759+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578647, 2)
2019-09-04T06:30:48.759+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 9997
2019-09-04T06:30:48.759+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:48.759+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:48.759+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 9997
2019-09-04T06:30:48.760+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10000
2019-09-04T06:30:48.760+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10000
2019-09-04T06:30:48.760+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578647, 2), t: 1 }({ ts: Timestamp(1567578647, 2), t: 1 })
2019-09-04T06:30:48.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.778+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:48.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:48.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 686) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:48.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 686 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:30:58.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:48.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:16.839+0000
2019-09-04T06:30:48.838+0000 D2 ASIO [Replication] Request 686 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), opTime: { ts: Timestamp(1567578647, 2), t: 1 }, wallTime: new Date(1567578647756), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 2), $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 2) }
2019-09-04T06:30:48.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), opTime: { ts: Timestamp(1567578647, 2), t: 1 }, wallTime: new Date(1567578647756), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 2), $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 2) } target: cmodb804.togewa.com:27019
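The steady isMaster traffic on conn14 through conn46, repeating roughly every 500 ms per connection, is consistent with driver/mongos server monitoring rather than application load. Any client can issue the same command to see what those monitors see; a sketch, host assumed:

    # Sketch: run the same isMaster the monitoring connections above run.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    hello = client.admin.command("isMaster")  # spelled 'hello' on newer servers
    print(hello["ismaster"], hello.get("secondary"), hello.get("setName"))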
2019-09-04T06:30:48.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:48.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 686) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), opTime: { ts: Timestamp(1567578647, 2), t: 1 }, wallTime: new Date(1567578647756), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 2), $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 2) }
2019-09-04T06:30:48.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:30:48.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:50.838Z
2019-09-04T06:30:48.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:16.839+0000
2019-09-04T06:30:48.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:48.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 687) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:48.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 687 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:30:58.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:48.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:16.839+0000
2019-09-04T06:30:48.839+0000 D2 ASIO [Replication] Request 687 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), opTime: { ts: Timestamp(1567578647, 2), t: 1 }, wallTime: new Date(1567578647756), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578647, 2), $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 2) }
2019-09-04T06:30:48.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), opTime: { ts: Timestamp(1567578647, 2), t: 1 }, wallTime: new Date(1567578647756), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578647, 2), $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:30:48.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:48.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 687) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), opTime: { ts: Timestamp(1567578647, 2), t: 1 }, wallTime: new Date(1567578647756), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578647, 2), $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 2) }
2019-09-04T06:30:48.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:30:48.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:30:58.796+0000
2019-09-04T06:30:48.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:31:00.321+0000
2019-09-04T06:30:48.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:30:48.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:50.839Z
2019-09-04T06:30:48.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:48.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:18.839+0000
2019-09-04T06:30:48.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:18.839+0000
2019-09-04T06:30:48.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.878+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:48.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.978+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:48.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:48.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:48.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:49.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:49.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:49.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:49.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, F1F754E8FB60493548060F46804288C811776BE0), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:49.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:30:49.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, F1F754E8FB60493548060F46804288C811776BE0), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:49.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, F1F754E8FB60493548060F46804288C811776BE0), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:49.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), opTime: { ts: Timestamp(1567578647, 2), t: 1 }, wallTime: new Date(1567578647756) }
2019-09-04T06:30:49.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578647, 2), signature: { hash: BinData(0, F1F754E8FB60493548060F46804288C811776BE0), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:49.078+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:49.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:49.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:49.135+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
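The election timeout callback above is cancelled and rescheduled roughly 10 to 11.5 seconds out each time a healthy heartbeat arrives from the primary; the base value is the replica-set default settings.electionTimeoutMillis of 10000 ms plus a randomized offset. The effective setting can be read back with replSetGetConfig; a sketch, host assumed:

    # Sketch: inspect the election timeout that drives the rescheduling above.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    cfg = client.admin.command("replSetGetConfig")["config"]
    print(cfg["settings"]["electionTimeoutMillis"])  # default: 10000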
2019-09-04T06:30:49.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:49.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:49.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:49.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:49.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:49.178+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:49.230+0000 D2 ASIO [RS] Request 685 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578649, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578649228), o: { $v: 1, $set: { ping: new Date(1567578649225) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpApplied: { ts: Timestamp(1567578649, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 2), $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) }
2019-09-04T06:30:49.230+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578649, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578649228), o: { $v: 1, $set: { ping: new Date(1567578649225) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpApplied: { ts: Timestamp(1567578649, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 2), $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:49.230+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:49.230+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578649, 1) and ending at ts: Timestamp(1567578649, 1)
2019-09-04T06:30:49.230+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:00.321+0000
2019-09-04T06:30:49.230+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:30:59.450+0000
2019-09-04T06:30:49.230+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:49.230+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:18.839+0000
2019-09-04T06:30:49.230+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578649, 1), t: 1 }
2019-09-04T06:30:49.230+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:49.230+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:49.230+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578647, 2)
2019-09-04T06:30:49.230+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10017
2019-09-04T06:30:49.230+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:49.230+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:49.230+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10017
2019-09-04T06:30:49.231+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:49.231+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:30:49.231+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:49.231+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578647, 2)
2019-09-04T06:30:49.231+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578649, 1) }
2019-09-04T06:30:49.231+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10020
2019-09-04T06:30:49.231+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:49.231+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:49.231+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10020
2019-09-04T06:30:49.231+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10001
2019-09-04T06:30:49.231+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10001
2019-09-04T06:30:49.231+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10023
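The single fetched oplog entry above (op: "u" on config.lockpings) is a lock pinger on cmodb801 refreshing its ping document on the primary. Its client-side shape would be roughly the update below; the _id value is taken from the oplog entry itself, the host is an assumption, and this is an illustrative reconstruction, not something to run against a live config server:

    # Sketch: the write whose oplog entry this secondary just fetched.
    from datetime import datetime, timezone
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb802.togewa.com:27019/?directConnection=true")
    client.config.lockpings.update_one(
        {"_id": "cmodb801.togewa.com:27017:1567576097:5449009950928943792"},
        {"$set": {"ping": datetime.now(timezone.utc)}},  # matches o: { $set: { ping: ... } }
    )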
2019-09-04T06:30:49.231+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10023
2019-09-04T06:30:49.231+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:49.231+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 10025
2019-09-04T06:30:49.231+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578649, 1)
2019-09-04T06:30:49.231+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578649, 1)
2019-09-04T06:30:49.231+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 10025
2019-09-04T06:30:49.231+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:49.231+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:30:49.231+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10024
2019-09-04T06:30:49.231+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10024
2019-09-04T06:30:49.231+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10027
2019-09-04T06:30:49.231+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10027
2019-09-04T06:30:49.231+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578649, 1), t: 1 }({ ts: Timestamp(1567578649, 1), t: 1 })
2019-09-04T06:30:49.231+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578649, 1)
2019-09-04T06:30:49.231+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10028
2019-09-04T06:30:49.231+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578649, 1) } } ] } sort: {} projection: {}
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578649, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Rated tree:
$and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578649, 1) || First: notFirst: full path: ts
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578649, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578649, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Rated tree:
$or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578649, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578649, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:49.231+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10028
2019-09-04T06:30:49.231+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:49.231+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:49.231+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578649, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578649228), o: { $v: 1, $set: { ping: new Date(1567578649225) } } }, oplog application mode: Secondary
2019-09-04T06:30:49.231+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578649, 1)
2019-09-04T06:30:49.231+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 10030
2019-09-04T06:30:49.231+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }
2019-09-04T06:30:49.231+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:30:49.231+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 10030
2019-09-04T06:30:49.231+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:49.231+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578649, 1), t: 1 }({ ts: Timestamp(1567578649, 1), t: 1 })
2019-09-04T06:30:49.231+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578649, 1)
2019-09-04T06:30:49.231+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10029
2019-09-04T06:30:49.231+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:49.232+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:49.232+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:30:49.232+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:49.232+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:49.232+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
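The planner output above shows why these minvalid reads fall back to a collection scan: the only index on local.replset.minvalid is { _id: 1 }, so neither branch of the $or has an indexed solution. The same conclusion can be reproduced with explain; a sketch, host assumed:

    # Sketch: explain the same $or predicate the planner above rejected.
    from bson.timestamp import Timestamp
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    cursor = client.local["replset.minvalid"].find(
        {"$or": [{"t": {"$lt": 1}},
                 {"t": 1, "ts": {"$lt": Timestamp(1567578649, 1)}}]}
    )
    # Expect a COLLSCAN stage in the winning plan, matching the log above.
    print(cursor.explain()["queryPlanner"]["winningPlan"])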
2019-09-04T06:30:49.232+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10029
2019-09-04T06:30:49.232+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578649, 1)
2019-09-04T06:30:49.232+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10033
2019-09-04T06:30:49.232+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10033
2019-09-04T06:30:49.232+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578649, 1), t: 1 }({ ts: Timestamp(1567578649, 1), t: 1 })
2019-09-04T06:30:49.232+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:49.232+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), appliedOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, appliedWallTime: new Date(1567578647756), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), appliedOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, appliedWallTime: new Date(1567578647756), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:49.232+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 688 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:19.232+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), appliedOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, appliedWallTime: new Date(1567578647756), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), appliedOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, appliedWallTime: new Date(1567578647756), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:49.232+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:19.232+0000
2019-09-04T06:30:49.232+0000 D2 ASIO [RS] Request 688 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 2), $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) }
2019-09-04T06:30:49.232+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 2), $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:49.232+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:49.232+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:19.232+0000
2019-09-04T06:30:49.232+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578649, 1), t: 1 }
2019-09-04T06:30:49.232+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 689 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:59.232+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578647, 2), t: 1 } }
2019-09-04T06:30:49.232+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:19.232+0000
2019-09-04T06:30:49.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:49.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:49.237+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:30:49.237+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:49.237+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), appliedOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, appliedWallTime: new Date(1567578647756), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), appliedOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, appliedWallTime: new Date(1567578647756), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:49.237+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 690 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:19.237+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), appliedOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, appliedWallTime: new Date(1567578647756), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, durableWallTime: new Date(1567578647756), appliedOpTime: { ts: Timestamp(1567578647, 2), t: 1 }, appliedWallTime: new Date(1567578647756), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:49.237+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:19.232+0000
2019-09-04T06:30:49.237+0000 D2 ASIO [RS] Request 690 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 2), $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) }
2019-09-04T06:30:49.237+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts:
Timestamp(1567578647, 2), t: 1 }, lastCommittedWall: new Date(1567578647756), lastOpVisible: { ts: Timestamp(1567578647, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 2), $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:49.237+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:49.237+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:19.232+0000 2019-09-04T06:30:49.237+0000 D2 ASIO [RS] Request 689 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpApplied: { ts: Timestamp(1567578649, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) } 2019-09-04T06:30:49.237+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpApplied: { ts: Timestamp(1567578649, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:49.237+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:30:49.237+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:30:49.237+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.237+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578644, 1) 
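The traffic above (replSetUpdatePosition acknowledgements sent upstream to cmodb804.togewa.com:27019, and getMore calls tailing local.oplog.rs) is the steady-state replication chatter of a healthy secondary: the oplog fetcher advances its last fetched optime to { ts: Timestamp(1567578649, 1), t: 1 } and the stable optime follows. For reference, the same per-member optimes can be inspected from a client via replSetGetStatus; the sketch below is an illustrative pymongo snippet, not part of the captured log. The hostname is taken from this log, and credentials should be added if the deployment requires them.

    from pymongo import MongoClient

    # Connect to the config server whose log this is (port 27019).
    client = MongoClient("mongodb://cmodb803.togewa.com:27019")

    # replSetGetStatus reports, per member, the applied/durable optimes that
    # the replSetUpdatePosition messages above keep synchronized.
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        print(member["name"], member["stateStr"], member.get("optime"))
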
2019-09-04T06:30:49.238+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:30:59.450+0000 2019-09-04T06:30:49.238+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:30:59.697+0000 2019-09-04T06:30:49.238+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 691 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:30:59.238+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578649, 1), t: 1 } } 2019-09-04T06:30:49.238+0000 D3 REPL [conn261] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn261] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.023+0000 2019-09-04T06:30:49.238+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:19.232+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn244] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn244] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.754+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn236] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn236] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.040+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn243] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn243] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.662+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn242] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn242] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.507+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn229] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn229] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn258] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn258] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:14.433+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn231] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn231] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.190+0000 2019-09-04T06:30:49.238+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:49.238+0000 D3 REPL [conn247] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn247] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:59.750+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn263] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn263] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:31:16.415+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn227] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn227] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.888+0000 2019-09-04T06:30:49.238+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:18.839+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn246] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn246] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:52.054+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn251] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn251] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.763+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn256] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn256] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.918+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn248] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn248] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.456+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn234] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn234] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.033+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn237] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn237] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.409+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn245] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn245] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.767+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn249] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn249] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.702+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn232] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn232] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.661+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn223] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn223] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.644+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn239] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn239] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:51.133+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn254] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 
2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn254] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.962+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn252] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn252] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.897+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn225] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn225] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.468+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn253] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn253] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.925+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn250] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn250] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.753+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn233] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn233] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.024+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn255] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn255] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.877+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn257] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn257] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.293+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn224] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn224] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn228] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn228] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:56.304+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn241] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn241] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.879+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn262] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn262] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.051+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn240] Got notified of new snapshot: { ts: Timestamp(1567578649, 1), t: 1 }, 2019-09-04T06:30:49.228+0000 2019-09-04T06:30:49.238+0000 D3 REPL [conn240] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.874+0000 2019-09-04T06:30:49.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.265+0000 I 
COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.278+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:49.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions. 2019-09-04T06:30:49.330+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578649, 1) 2019-09-04T06:30:49.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry 2019-09-04T06:30:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:30:49.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } 2019-09-04T06:30:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:49.370+0000 D5 QUERY [shard-registry-reload] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shards Tree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:49.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:30:49.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:30:49.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and 2019-09-04T06:30:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:49.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:49.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578649, 1) 2019-09-04T06:30:49.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 10043 2019-09-04T06:30:49.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 10043 2019-09-04T06:30:49.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:646 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:30:49.370+0000 D1 SHARDING [shard-registry-reload] found 3 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578649, 1), t: 1 } 2019-09-04T06:30:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:30:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:30:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:30:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:30:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:30:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:30:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS 2019-09-04T06:30:49.378+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002 2019-09-04T06:30:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 692 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:30:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:30:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 693 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:30:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:30:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000 2019-09-04T06:30:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 694 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:30:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:30:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 695 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:30:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:30:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001 2019-09-04T06:30:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling 
remote command request: RemoteCommand 696 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:30:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:30:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 697 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:30:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:30:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 695 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578643, 1), t: 1 }, lastWriteDate: new Date(1567578643000), majorityOpTime: { ts: Timestamp(1567578643, 1), t: 1 }, majorityWriteDate: new Date(1567578643000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578649386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578643, 1), $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578643, 1) } 2019-09-04T06:30:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578643, 1), t: 1 }, lastWriteDate: new Date(1567578643000), majorityOpTime: { ts: Timestamp(1567578643, 1), t: 1 }, majorityWriteDate: new Date(1567578643000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578649386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578643, 1), $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578643, 1) } target: cmodb807.togewa.com:27018 2019-09-04T06:30:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 692 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578639, 1), t: 1 }, lastWriteDate: new Date(1567578639000), majorityOpTime: { ts: Timestamp(1567578639, 1), t: 1 }, majorityWriteDate: new Date(1567578639000) }, 
maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578649386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578639, 1), $configServerState: { opTime: { ts: Timestamp(1567578645, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578639, 1) } 2019-09-04T06:30:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578639, 1), t: 1 }, lastWriteDate: new Date(1567578639000), majorityOpTime: { ts: Timestamp(1567578639, 1), t: 1 }, majorityWriteDate: new Date(1567578639000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578649386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578639, 1), $configServerState: { opTime: { ts: Timestamp(1567578645, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578639, 1) } target: cmodb810.togewa.com:27018 2019-09-04T06:30:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 694 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578643, 1), t: 1 }, lastWriteDate: new Date(1567578643000), majorityOpTime: { ts: Timestamp(1567578643, 1), t: 1 }, majorityWriteDate: new Date(1567578643000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578649385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578643, 1), $configServerState: { opTime: { ts: Timestamp(1567578645, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578643, 1) } 2019-09-04T06:30:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ 
"cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578643, 1), t: 1 }, lastWriteDate: new Date(1567578643000), majorityOpTime: { ts: Timestamp(1567578643, 1), t: 1 }, majorityWriteDate: new Date(1567578643000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578649385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578643, 1), $configServerState: { opTime: { ts: Timestamp(1567578645, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578643, 1) } target: cmodb806.togewa.com:27018 2019-09-04T06:30:49.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms 2019-09-04T06:30:49.386+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 697 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578647, 1), t: 1 }, lastWriteDate: new Date(1567578647000), majorityOpTime: { ts: Timestamp(1567578647, 1), t: 1 }, majorityWriteDate: new Date(1567578647000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578649386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 1), $configServerState: { opTime: { ts: Timestamp(1567578630, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578647, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 1) } 2019-09-04T06:30:49.386+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578647, 1), t: 1 }, lastWriteDate: new Date(1567578647000), majorityOpTime: { ts: Timestamp(1567578647, 1), t: 1 }, majorityWriteDate: new Date(1567578647000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578649386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578647, 1), $configServerState: { opTime: { ts: 
Timestamp(1567578630, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578647, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 1) } target: cmodb809.togewa.com:27018 2019-09-04T06:30:49.386+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 696 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578647, 1), t: 1 }, lastWriteDate: new Date(1567578647000), majorityOpTime: { ts: Timestamp(1567578647, 1), t: 1 }, majorityWriteDate: new Date(1567578647000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578649386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578647, 1), $configServerState: { opTime: { ts: Timestamp(1567578645, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578647, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 1) } 2019-09-04T06:30:49.386+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578647, 1), t: 1 }, lastWriteDate: new Date(1567578647000), majorityOpTime: { ts: Timestamp(1567578647, 1), t: 1 }, majorityWriteDate: new Date(1567578647000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578649386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578647, 1), $configServerState: { opTime: { ts: Timestamp(1567578645, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578647, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578647, 1) } target: cmodb808.togewa.com:27018 2019-09-04T06:30:49.386+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms 2019-09-04T06:30:49.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 693 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578639, 1), t: 1 }, lastWriteDate: new Date(1567578639000), majorityOpTime: { ts: Timestamp(1567578639, 1), t: 1 }, 
majorityWriteDate: new Date(1567578639000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578649386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578639, 1), $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578639, 1) } 2019-09-04T06:30:49.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578639, 1), t: 1 }, lastWriteDate: new Date(1567578639000), majorityOpTime: { ts: Timestamp(1567578639, 1), t: 1 }, majorityWriteDate: new Date(1567578639000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578649386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578639, 1), $configServerState: { opTime: { ts: Timestamp(1567578638, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578639, 1) } target: cmodb811.togewa.com:27018 2019-09-04T06:30:49.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms 2019-09-04T06:30:49.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.478+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:49.480+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578649480) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } 2019-09-04T06:30:49.480+0000 D4 - [replSetDistLockPinger] Taking ticket. 
Available: 1000000000 2019-09-04T06:30:49.480+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178 2019-09-04T06:30:49.480+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:30:49.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.499+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataE
NS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : 
"7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xADB043) [0x561749a63043] mongod(+0x13B2606) [0x56174a33a606] mongod(+0x13B3A55) [0x56174a33ba55] mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894] mongod(+0x10FA899) [0x56174a082899] mongod(+0x10FBF53) [0x56174a083f53] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(+0x1FBD2EE) [0x56174af452ee] mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa] mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2] mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b] mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e] mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc] mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1] 
mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:49.499+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. OpTime: { ts: Timestamp(1567578649, 1), t: 1 }, write concern: { w: "majority", wtimeout: 15000 } 2019-09-04T06:30:49.499+0000 D4 STORAGE [replSetDistLockPinger] flushed journal 2019-09-04T06:30:49.499+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578649480) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:30:49.499+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578649480) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 19ms 2019-09-04T06:30:49.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.578+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:49.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.679+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:49.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: 
"admin" } 2019-09-04T06:30:49.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.779+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:49.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.879+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:49.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.979+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:49.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.989+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:49.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:49.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:50.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:50.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:30:50.003+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:30:50.003+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:30:50.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:30:50.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:30:50.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:30:50.011+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:30:50.011+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:30:50.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 
numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:30:50.011+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:50.012+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:50.012+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:50.012+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:30:50.012+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:30:50.012+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:30:50.012+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:50.012+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:30:50.012+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:30:50.012+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:30:50.012+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:30:50.012+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:30:50.012+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:30:50.013+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:30:50.013+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:50.013+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:50.013+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:50.013+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578649, 1) 2019-09-04T06:30:50.013+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10068 2019-09-04T06:30:50.013+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10068 2019-09-04T06:30:50.013+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:50.013+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:50.013+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:30:50.013+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:30:50.013+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:50.013+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:30:50.013+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:30:50.013+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:30:50.013+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578649, 1) 2019-09-04T06:30:50.013+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10071 2019-09-04T06:30:50.013+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10071 2019-09-04T06:30:50.013+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:50.013+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:30:50.013+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:50.013+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:30:50.013+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:30:50.013+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:30:50.013+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578649, 1) 2019-09-04T06:30:50.013+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10073 2019-09-04T06:30:50.013+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10073 2019-09-04T06:30:50.013+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:50.013+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:30:50.013+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. 
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:30:50.013+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:30:50.014+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:50.014+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10076 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10076 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10077 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10077 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10078 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10078 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10079 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10079 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10080 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10080 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10081 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
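[Annotation] Everything conn90 runs after authenticating is a read-only monitoring sweep: serverStatus, replSetGetStatus, a count of jumbo chunks (planned as a COLLSCAN, since the planner outputted 0 indexed solutions for { jumbo: true }), shardConnPoolStats, the oldest and newest oplog entries via $natural sorts (plus a probe of the pre-replica-set local.oplog.$main, which gets an EOF plan), then listDatabases with per-database dbStats. A rough PyMongo equivalent of the core of that sweep; the host and the dba_root user appear in the log, while the password placeholder and variable names do not:

```python
# Rough client-side equivalent of conn90's probe sequence (illustrative;
# PASSWORD is a placeholder that must be supplied).
from pymongo import MongoClient

client = MongoClient("mongodb://dba_root:PASSWORD@cmodb803.togewa.com:27019/admin")

server_status = client.admin.command("serverStatus")
rs_status = client.admin.command("replSetGetStatus")

# The log's { count: "chunks", query: { jumbo: true } }: a COLLSCAN, since
# none of the config.chunks indexes covers the jumbo field.
jumbo_chunks = client.config.chunks.count_documents({"jumbo": True})

# First and last oplog entries; the $natural sort forces the table scan the
# planner reports ("Forcing a table scan due to hinted $natural").
oplog = client.local["oplog.rs"]
first = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
last = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])

print(rs_status["myState"], jumbo_chunks, first["ts"], last["ts"])
```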
2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10081 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10082 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10082 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10083 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10083 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10084 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10084 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10085 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10085 
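[Annotation] The D3 STORAGE lines in this stretch are listDatabases walking the durable catalog: for each collection it opens a WiredTiger snapshot, fetches the CCE metadata (options, index specs, idents), and rolls the read-only transaction back. The same index specs are visible to any client through listIndexes; a small illustrative sketch for the config.locks specs shown above (ts_1, state_1_process_1, _id_), with only the hostname taken from the log:

```python
# Lists the config.locks indexes that the CCE metadata above describes.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
for name, info in client.config.locks.index_information().items():
    print(name, dict(info["key"]), info.get("unique", False))
```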
2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10086 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
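[Annotation] config.chunks carries three unique compound indexes (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1) in addition to _id_, the same four indexes the planner enumerated for the jumbo count earlier. A hedged sketch of declaring a unique compound index of the same shape from a client; the database and collection names below are placeholders, not from the log:

```python
# Illustrative: a unique compound index shaped like ns_1_min_1 above.
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017/")
client.mydb.mycoll.create_index(
    [("ns", ASCENDING), ("min", ASCENDING)],
    name="ns_1_min_1",
    unique=True,
)
```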
2019-09-04T06:30:50.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10086 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10087 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10087 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10088 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10088 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10089 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10089 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10090 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
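[Annotation] The sharding metadata enumerated here (config.shards, config.tags, and before them config.collections and config.chunks) lives in ordinary collections, so a client connected to this config server can read it directly. An illustrative sketch; .get() hedges fields whose presence this log does not guarantee:

```python
# Illustrative: reading the shard registry and zone/tag ranges straight from
# the config database this server hosts.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
for shard in client.config.shards.find():
    print("shard:", shard["_id"], shard.get("host"))
for tag in client.config.tags.find():
    print("tag:", tag["ns"], tag.get("tag"), tag.get("min"), tag.get("max"))
```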
2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10090 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10091 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10091 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10092 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10092 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10093 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10093 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10094 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 10094 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10095 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10095 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10096 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10096 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10097 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10097 2019-09-04T06:30:50.015+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:30:50.015+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10099 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10099 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10100 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10100 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10101 2019-09-04T06:30:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10101 2019-09-04T06:30:50.016+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:50.016+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10103 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10103 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10104 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10104 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10105 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10105 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10106 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10106 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10107 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10107 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10108 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10108 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10109 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10109 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 10110 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10110 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10111 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10111 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10112 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10112 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10113 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10113 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10114 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10114 2019-09-04T06:30:50.016+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:50.016+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10116 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10116 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10117 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10117 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10118 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10118 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10119 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10119 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10120 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10120 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10121 2019-09-04T06:30:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10121 2019-09-04T06:30:50.016+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:30:50.047+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:50.047+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:50.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:50.067+0000 I 
2019-09-04T06:30:50.047+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.047+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.079+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:50.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.135+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.179+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:50.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.182+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.231+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:50.231+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:50.231+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578649, 1)
2019-09-04T06:30:50.231+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10131
2019-09-04T06:30:50.231+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:50.231+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:50.231+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10131
2019-09-04T06:30:50.232+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10134
2019-09-04T06:30:50.232+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10134
2019-09-04T06:30:50.232+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578649, 1), t: 1 }({ ts: Timestamp(1567578649, 1), t: 1 })
2019-09-04T06:30:50.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 03D4B24AFF9BA5C7888ADCCEC49B7DAACB48EDBA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:50.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:30:50.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 03D4B24AFF9BA5C7888ADCCEC49B7DAACB48EDBA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:50.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 03D4B24AFF9BA5C7888ADCCEC49B7DAACB48EDBA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:50.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228) }
2019-09-04T06:30:50.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 03D4B24AFF9BA5C7888ADCCEC49B7DAACB48EDBA), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
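Every entry in this log follows the same pre-4.4 plain-text shape visible above: ISO-8601 timestamp, severity (I, W, E, F, or D1 through D5), component (REPL_HB, STORAGE, "-" for none), the bracketed thread or connection context, then the free-form message. When a stretch like this needs to be sliced by component or connection, a rough parser is enough; the regex below is inferred from the lines in this section, not an official grammar:

    import re

    # "<timestamp> <severity> <component> [<context>] <message>", as seen above.
    LINE = re.compile(
        r"^(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}[+-]\d{4})\s+"
        r"(?P<sev>[IWEF]|D\d?)\s+"
        r"(?P<comp>[A-Z_]+|-)\s+"
        r"\[(?P<ctx>[^\]]+)\]\s"
        r"(?P<msg>.*)$"
    )

    def entries(path):
        """Yield one dict per parseable log entry in the file."""
        with open(path) as log:
            for raw in log:
                m = LINE.match(raw)
                if m:
                    yield m.groupdict()

For example, filtering on comp == "REPL_HB" and ctx == "conn28" isolates the heartbeat exchange just above.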
2019-09-04T06:30:50.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:50.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999999 Now: 1000000000
2019-09-04T06:30:50.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.279+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:50.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.379+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:50.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.479+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:50.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.489+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.580+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:50.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.680+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:50.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.780+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:50.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:50.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 698) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:50.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 698 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:00.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:50.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:18.839+0000
2019-09-04T06:30:50.838+0000 D2 ASIO [Replication] Request 698 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) }
2019-09-04T06:30:50.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:50.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:50.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 698) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) }
2019-09-04T06:30:50.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:30:50.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:52.838Z
2019-09-04T06:30:50.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:18.839+0000
2019-09-04T06:30:50.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:50.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 699) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:50.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 699 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:00.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:50.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:18.839+0000
2019-09-04T06:30:50.839+0000 D2 ASIO [Replication] Request 699 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) }
2019-09-04T06:30:50.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:30:50.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:50.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 699) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) }
2019-09-04T06:30:50.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:30:50.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:30:59.697+0000
2019-09-04T06:30:50.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:31:01.649+0000
2019-09-04T06:30:50.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:30:50.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:52.839Z
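Both heartbeat round trips above complete within the same millisecond (698 sent and answered at .838, 699 at .839), and because 699's response came from the primary (state: 1), the election timeout is moved from 06:30:59.697 out to 06:31:01.649, about 10.8 seconds ahead, which is consistent with the default electionTimeoutMillis of 10000 plus a randomized offset. To watch these round trips over a longer stretch of log, pairing Sending/Received lines by requestId is sufficient; a sketch reusing entries() from the earlier snippet (the field names are the ones printed in these REPL_HB messages):

    import re

    sent, rtts = {}, []
    REQ = re.compile(r"requestId: (\d+)")

    for e in entries("mongod.log"):  # entries() as defined above
        if e["comp"] != "REPL_HB":
            continue
        m = REQ.search(e["msg"])
        if not m:
            continue
        rid = m.group(1)
        if e["msg"].startswith("Sending heartbeat"):
            sent[rid] = e["ts"]
        elif e["msg"].startswith("Received response") and rid in sent:
            # Timestamps can be diffed with datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z").
            rtts.append((rid, sent.pop(rid), e["ts"]))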
2019-09-04T06:30:50.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:50.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:20.839+0000
2019-09-04T06:30:50.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:20.839+0000
2019-09-04T06:30:50.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.880+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:50.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.980+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:50.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.989+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:50.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:50.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:51.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:51.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:51.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:51.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 03D4B24AFF9BA5C7888ADCCEC49B7DAACB48EDBA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:51.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:30:51.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 03D4B24AFF9BA5C7888ADCCEC49B7DAACB48EDBA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:51.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 03D4B24AFF9BA5C7888ADCCEC49B7DAACB48EDBA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:51.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228) }
2019-09-04T06:30:51.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 03D4B24AFF9BA5C7888ADCCEC49B7DAACB48EDBA), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:51.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:51.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:51.080+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:51.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:51.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:51.118+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35710 #264 (94 connections now open)
2019-09-04T06:30:51.118+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:51.118+0000 D2 COMMAND [conn264] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:51.118+0000 I NETWORK [conn264] received client metadata from 10.108.2.56:35710 conn264: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:51.118+0000 I COMMAND [conn264] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:51.135+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:51.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:51.135+0000 I COMMAND [conn239] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 542F7D84F611AE5E9C7A33BCFBF929241984258E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:30:51.135+0000 D1 - [conn239] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:51.135+0000 W - [conn239] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:51.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:51.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:51.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:51.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:51.152+0000 I - [conn239] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
mongod(+0x10FBF24) [0x56174a083f24]
mongod(+0x10FCE0E) [0x56174a084e0e]
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:30:51.152+0000 D1 COMMAND [conn239] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 542F7D84F611AE5E9C7A33BCFBF929241984258E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:51.152+0000 D1 - [conn239] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:30:51.152+0000 W - [conn239] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:51.172+0000 I - [conn239] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
mongod(+0xC90D34) [0x561749c18d34]
mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:30:51.172+0000 W COMMAND [conn239] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:30:51.172+0000 I COMMAND [conn239] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 542F7D84F611AE5E9C7A33BCFBF929241984258E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms
2019-09-04T06:30:51.172+0000 D2 NETWORK [conn239] Session from 10.108.2.56:35694 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:30:51.172+0000 I NETWORK [conn239] end connection 10.108.2.56:35694 (93 connections now open)
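The 30-second failure above deserves a note. The incoming find on config.shards carried readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, an optime dated 2019-08-22 in term 92, while this replica set is in term 1 with lastCommittedOpTime Timestamp(1567578649, 1). The first backtrace shows the wait happening inside waitForReadConcern, blocking on an optime this set can never reach; that is consistent with the config server replica set having been re-initialized (its term restarting at 1) while the requesting mongos still holds pre-reset $configServerState, so every such read burns its full maxTimeMS of 30000 and fails with MaxTimeMSExpired (an identical find on "settings" times out below at 06:30:51.647, and the waitUntilOpTime entries at 06:30:51.650 show the stale optime explicitly). The second backtrace is only the slow-operation logger timing out on the global lock while gathering storage statistics for the 30028ms entry, hence the "Unable to gather storage statistics" warning. From a driver, only the readConcern level and the time limit are settable; afterOpTime, $replData and $configServerState are internal fields added by mongos, so the PyMongo sketch below (hosts taken from this log) reproduces the command's shape but not the stale-optime wait:

    # Sketch: client-side analogue of the failing find above.
    from pymongo import MongoClient, ReadPreference
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    shards = client.get_database(
        "config",
        read_concern=ReadConcern("majority"),
        read_preference=ReadPreference.NEAREST,
    ).get_collection("shards")

    try:
        print(list(shards.find(max_time_ms=30000)))
    except ExecutionTimeout:
        # Server-side MaxTimeMSExpired surfaces as ExecutionTimeout in PyMongo.
        print("operation exceeded time limit")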
ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:51.231+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:51.231+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10171 2019-09-04T06:30:51.232+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10174 2019-09-04T06:30:51.232+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10174 2019-09-04T06:30:51.232+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578649, 1), t: 1 }({ ts: Timestamp(1567578649, 1), t: 1 }) 2019-09-04T06:30:51.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:51.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:30:51.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.280+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:51.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.380+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:51.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.481+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:51.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.489+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.543+0000 D2 COMMAND 
[conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.581+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:51.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.634+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.647+0000 I COMMAND [conn223] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578618, 1), signature: { hash: BinData(0, 6D229A0253FA92070B3D485820FE05002AE9502B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:51.647+0000 D1 - [conn223] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:51.647+0000 W - [conn223] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:51.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42106 #265 (94 connections now open) 2019-09-04T06:30:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:51.650+0000 D2 COMMAND [conn265] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:51.650+0000 I NETWORK [conn265] received client metadata from 10.108.2.48:42106 conn265: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:51.650+0000 I COMMAND [conn265] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 
3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45752 #266 (95 connections now open) 2019-09-04T06:30:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49200 #267 (96 connections now open) 2019-09-04T06:30:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:30:51.650+0000 D2 COMMAND [conn265] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 31B748279854B89A75FE3C6C5E99D3DE120BDB4B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:51.650+0000 D1 REPL [conn265] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578649, 1), t: 1 } 2019-09-04T06:30:51.650+0000 D3 REPL [conn265] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.660+0000 2019-09-04T06:30:51.650+0000 D2 COMMAND [conn266] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:51.650+0000 I NETWORK [conn266] received client metadata from 10.108.2.72:45752 conn266: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:51.650+0000 I COMMAND [conn266] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:51.651+0000 D2 COMMAND [conn267] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:30:51.651+0000 I NETWORK [conn267] received client metadata from 10.108.2.54:49200 conn267: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" 
}, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:51.651+0000 I COMMAND [conn267] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:51.651+0000 D2 COMMAND [conn266] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, D3B29E27081E353619E4BBAFF34512E8BADA791D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:51.651+0000 D1 REPL [conn266] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578649, 1), t: 1 } 2019-09-04T06:30:51.651+0000 D3 REPL [conn266] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:30:51.651+0000 D2 COMMAND [conn267] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578648, 1), signature: { hash: BinData(0, F12E75CCCE0218CFE0AA364AB06192226B38C39A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:51.651+0000 D1 REPL [conn267] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578649, 1), t: 1 } 2019-09-04T06:30:51.651+0000 D3 REPL [conn267] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:30:51.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.663+0000 I - [conn223] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:51.664+0000 I COMMAND [conn232] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578613, 1), signature: { hash: BinData(0, F7238143B431FCDDC1120756CE12F6F6C16C6FEB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:51.664+0000 D1 - [conn232] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:51.664+0000 W - [conn232] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:51.664+0000 I COMMAND [conn243] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 382BE338B8E062C89DE3BDFB8F55724B12CF9B0F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:30:51.664+0000 D1 - [conn243] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:51.664+0000 W - [conn243] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:51.664+0000 D1 COMMAND [conn223] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578618, 1), signature: { hash: BinData(0, 6D229A0253FA92070B3D485820FE05002AE9502B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:51.664+0000 D1 - [conn223] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:30:51.664+0000 W - [conn223] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:51.681+0000 I - [conn232] ----- BEGIN BACKTRACE ----- [stack addresses, somap, and frames elided: byte-identical to the conn223 backtrace at 06:30:51.663 above] ----- END BACKTRACE -----
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:51.681+0000 D1 COMMAND [conn232] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578613, 1), signature: { hash: BinData(0, F7238143B431FCDDC1120756CE12F6F6C16C6FEB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:51.681+0000 D1 - [conn232] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:51.681+0000 W - [conn232] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:51.681+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:51.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.700+0000 I - [conn223] 
0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : 
"E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : 
"/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) 
[0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:30:51.701+0000 W COMMAND [conn223] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:30:51.701+0000 I COMMAND [conn223] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578618, 1), signature: { hash: BinData(0, 6D229A0253FA92070B3D485820FE05002AE9502B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms
2019-09-04T06:30:51.701+0000 D2 NETWORK [conn223] Session from 10.108.2.44:38654 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:30:51.701+0000 I NETWORK [conn223] end connection 10.108.2.44:38654 (95 connections now open)
2019-09-04T06:30:51.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:51.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:51.717+0000 I - [conn243] ----- BEGIN BACKTRACE ----- [stack addresses, somap, and frames elided: byte-identical to the conn223 backtrace at 06:30:51.663 above] ----- END BACKTRACE -----
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:51.717+0000 D1 COMMAND [conn243] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 382BE338B8E062C89DE3BDFB8F55724B12CF9B0F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:51.717+0000 D1 - [conn243] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:51.717+0000 W - [conn243] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:51.737+0000 I - [conn232] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:51.737+0000 W COMMAND [conn232] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:51.737+0000 I COMMAND [conn232] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578613, 1), signature: { hash: BinData(0, F7238143B431FCDDC1120756CE12F6F6C16C6FEB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:30:51.737+0000 D2 NETWORK [conn232] Session from 10.108.2.58:52130 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:51.737+0000 I NETWORK [conn232] end connection 10.108.2.58:52130 (94 connections now open) 2019-09-04T06:30:51.756+0000 I COMMAND [conn244] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 382BE338B8E062C89DE3BDFB8F55724B12CF9B0F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:51.756+0000 D1 - [conn244] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:51.756+0000 W - [conn244] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:51.757+0000 I - [conn243] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:51.757+0000 W COMMAND [conn243] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:51.757+0000 I COMMAND [conn243] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 382BE338B8E062C89DE3BDFB8F55724B12CF9B0F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30065ms 2019-09-04T06:30:51.757+0000 D2 NETWORK [conn243] Session from 10.108.2.73:52146 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:51.757+0000 I NETWORK [conn243] end connection 10.108.2.73:52146 (93 connections now open) 2019-09-04T06:30:51.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.767+0000 I COMMAND [conn245] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 542F7D84F611AE5E9C7A33BCFBF929241984258E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:30:51.768+0000 D1 - [conn245] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:51.768+0000 W - [conn245] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:51.774+0000 I - [conn244] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:51.774+0000 D1 COMMAND [conn244] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 382BE338B8E062C89DE3BDFB8F55724B12CF9B0F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:51.774+0000 D1 - [conn244] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:51.774+0000 W - [conn244] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:51.781+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:51.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.795+0000 I - [conn244] 
0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : 
"E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : 
"/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) 
[0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:51.795+0000 W COMMAND [conn244] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:30:51.796+0000 I COMMAND [conn244] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578612, 1), signature: { hash: BinData(0, 382BE338B8E062C89DE3BDFB8F55724B12CF9B0F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:30:51.796+0000 D2 NETWORK [conn244] Session from 10.108.2.52:47174 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:51.796+0000 I NETWORK [conn244] end connection 10.108.2.52:47174 (92 connections now open) 2019-09-04T06:30:51.810+0000 I - [conn245] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGua
rdE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", 
"elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:51.810+0000 D1 COMMAND [conn245] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 542F7D84F611AE5E9C7A33BCFBF929241984258E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:51.810+0000 D1 - [conn245] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:51.810+0000 W - [conn245] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:51.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.831+0000 I - [conn245] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:51.831+0000 W COMMAND [conn245] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:51.831+0000 I COMMAND [conn245] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578611, 1), signature: { hash: BinData(0, 542F7D84F611AE5E9C7A33BCFBF929241984258E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30053ms 2019-09-04T06:30:51.831+0000 D2 NETWORK [conn245] Session from 10.108.2.59:48336 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:51.831+0000 I NETWORK [conn245] end connection 10.108.2.59:48336 (91 connections now open) 2019-09-04T06:30:51.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.881+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:51.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.981+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:51.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.989+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:51.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:51.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:52.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.056+0000 I COMMAND [conn246] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 93BEEE1914896F4DB94877FF3B5D418106DCB32C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:52.057+0000 D1 - [conn246] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:52.057+0000 W - [conn246] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:52.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.073+0000 I - [conn246] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b"
:"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : 
"/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] 
mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:30:52.073+0000 D1 COMMAND [conn246] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 93BEEE1914896F4DB94877FF3B5D418106DCB32C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:52.073+0000 D1 - [conn246] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:30:52.073+0000 W - [conn246] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:52.081+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:52.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:52.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:52.093+0000 I - [conn246] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:52.093+0000 W COMMAND [conn246] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:30:52.093+0000 I COMMAND [conn246] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 93BEEE1914896F4DB94877FF3B5D418106DCB32C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:30:52.093+0000 D2 NETWORK [conn246] Session from 10.108.2.50:50110 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:52.093+0000 I NETWORK [conn246] end connection 10.108.2.50:50110 (90 connections now open) 2019-09-04T06:30:52.135+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.181+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:52.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.231+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:52.231+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:52.231+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578649, 1) 2019-09-04T06:30:52.231+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10217 2019-09-04T06:30:52.231+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:30:52.231+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:30:52.231+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10217 2019-09-04T06:30:52.232+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10220 2019-09-04T06:30:52.232+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10220 2019-09-04T06:30:52.232+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578649, 1), t: 1 }({ ts: 
Timestamp(1567578649, 1), t: 1 }) 2019-09-04T06:30:52.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 03D4B24AFF9BA5C7888ADCCEC49B7DAACB48EDBA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:52.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:30:52.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 03D4B24AFF9BA5C7888ADCCEC49B7DAACB48EDBA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:52.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 03D4B24AFF9BA5C7888ADCCEC49B7DAACB48EDBA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:52.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228) } 2019-09-04T06:30:52.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578649, 1), signature: { hash: BinData(0, 03D4B24AFF9BA5C7888ADCCEC49B7DAACB48EDBA), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:30:52.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:30:52.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.281+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:52.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.381+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:52.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.482+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:52.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.582+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:52.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.584+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.584+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms
2019-09-04T06:30:52.585+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51796 #268 (91 connections now open)
2019-09-04T06:30:52.585+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:52.585+0000 D2 COMMAND [conn268] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:52.585+0000 I NETWORK [conn268] received client metadata from 10.108.2.74:51796 conn268: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:52.585+0000 I COMMAND [conn268] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:52.585+0000 D2 COMMAND [conn268] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 05433FFC1F19D7D8CE5BDF35FCE276DA0CD821FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:30:52.585+0000 D1 REPL [conn268] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578649, 1), t: 1 }
2019-09-04T06:30:52.585+0000 D3 REPL [conn268] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:22.595+0000
2019-09-04T06:30:52.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:52.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:52.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:52.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:52.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:52.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:52.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:52.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:52.682+0000 D4 
STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:52.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.782+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:52.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:52.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 700) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:52.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 700 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:02.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:52.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:20.839+0000 2019-09-04T06:30:52.838+0000 D2 ASIO [Replication] Request 700 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) } 2019-09-04T06:30:52.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, 
replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:52.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:52.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 700) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) } 2019-09-04T06:30:52.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:52.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:54.838Z 2019-09-04T06:30:52.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:20.839+0000 2019-09-04T06:30:52.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:52.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 701) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:52.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 701 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:02.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:52.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:20.839+0000 2019-09-04T06:30:52.839+0000 D2 ASIO [Replication] Request 701 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578649, 1), 
$clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) } 2019-09-04T06:30:52.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:30:52.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:52.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 701) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) } 2019-09-04T06:30:52.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:30:52.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:31:01.649+0000 2019-09-04T06:30:52.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:31:03.874+0000 2019-09-04T06:30:52.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:30:52.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:54.839Z 2019-09-04T06:30:52.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:30:52.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:22.839+0000 2019-09-04T06:30:52.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:22.839+0000 2019-09-04T06:30:52.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.882+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:52.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.982+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:52.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.989+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:52.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:52.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:53.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:53.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.054+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:30:53.054+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:30:53.054+0000 D2 COMMAND [conn90] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:30:53.054+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:30:53.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 36929FD359BF68B276CD8735F588CA7A4DCB6F1B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:53.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:30:53.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 36929FD359BF68B276CD8735F588CA7A4DCB6F1B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:30:53.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 36929FD359BF68B276CD8735F588CA7A4DCB6F1B), keyId: 6727891476899954718 } }, $db: 
"admin" } 2019-09-04T06:30:53.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228) } 2019-09-04T06:30:53.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 36929FD359BF68B276CD8735F588CA7A4DCB6F1B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:53.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.082+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:53.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:53.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.084+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:53.084+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.135+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:53.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:53.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:53.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:53.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.182+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:53.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:53.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.232+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:30:53.232+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:30:53.232+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578649, 1) 2019-09-04T06:30:53.232+0000 D3 STORAGE [ReplBatcher] WT 
2019-09-04T06:30:53.232+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:53.232+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:53.232+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10264
2019-09-04T06:30:53.232+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10267
2019-09-04T06:30:53.232+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10267
2019-09-04T06:30:53.232+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578649, 1), t: 1 }({ ts: Timestamp(1567578649, 1), t: 1 })
2019-09-04T06:30:53.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:53.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:53.258+0000 D2 COMMAND [conn157] run command admin.$cmd { ping: 1, $db: "admin" }
2019-09-04T06:30:53.258+0000 I COMMAND [conn157] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:53.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.269+0000 D2 COMMAND [conn218] run command admin.$cmd { ping: 1.0, lsid: { id: UUID("ac8e303f-4e60-4a79-b9a4-f7cba7354076") }, $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" }
2019-09-04T06:30:53.269+0000 I COMMAND [conn218] command admin.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("ac8e303f-4e60-4a79-b9a4-f7cba7354076") }, $clusterTime: { clusterTime: Timestamp(1567578590, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.274+0000 D2 COMMAND [conn101] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:53.274+0000 I COMMAND [conn101] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.282+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:53.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:53.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:53.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:53.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:53.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.383+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:53.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:53.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.483+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:53.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:53.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:53.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:53.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:53.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.583+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:53.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:53.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:53.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:53.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:53.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:53.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:53.683+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:53.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:53.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
$db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:53.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.783+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:53.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:53.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:53.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:53.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:53.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.883+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:53.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:53.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.983+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:53.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:53.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:53.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:53.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:54.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:54.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:54.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:54.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:54.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:54.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:54.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:54.083+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:54.135+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 
2019-09-04T06:30:54.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.141+0000 D2 COMMAND [conn259] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 05433FFC1F19D7D8CE5BDF35FCE276DA0CD821FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:30:54.141+0000 D1 REPL [conn259] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578649, 1), t: 1 }
2019-09-04T06:30:54.141+0000 D3 REPL [conn259] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:24.151+0000
2019-09-04T06:30:54.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.183+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:54.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.232+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:54.232+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:54.232+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578649, 1)
2019-09-04T06:30:54.232+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10307
2019-09-04T06:30:54.232+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:54.232+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:54.232+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10307
2019-09-04T06:30:54.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 36929FD359BF68B276CD8735F588CA7A4DCB6F1B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:54.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:30:54.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 36929FD359BF68B276CD8735F588CA7A4DCB6F1B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:54.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 36929FD359BF68B276CD8735F588CA7A4DCB6F1B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:54.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228) }
2019-09-04T06:30:54.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 36929FD359BF68B276CD8735F588CA7A4DCB6F1B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.232+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10311
2019-09-04T06:30:54.232+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10311
2019-09-04T06:30:54.232+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578649, 1), t: 1 }({ ts: Timestamp(1567578649, 1), t: 1 })
2019-09-04T06:30:54.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:54.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:54.237+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:54.237+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:54.237+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 702 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:24.237+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:54.237+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:19.232+0000
2019-09-04T06:30:54.237+0000 D2 ASIO [RS] Request 702 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) }
2019-09-04T06:30:54.237+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:54.237+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:54.237+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:19.232+0000
2019-09-04T06:30:54.237+0000 D2 ASIO [RS] Request 691 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpApplied: { ts: Timestamp(1567578649, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) }
2019-09-04T06:30:54.237+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpApplied: { ts: Timestamp(1567578649, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:54.237+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:54.237+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:54.237+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:03.874+0000
2019-09-04T06:30:54.237+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:04.408+0000
2019-09-04T06:30:54.237+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:54.237+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 703 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:04.237+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578649, 1), t: 1 } }
2019-09-04T06:30:54.237+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:22.839+0000
2019-09-04T06:30:54.237+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:19.232+0000
2019-09-04T06:30:54.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.283+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:54.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.384+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:54.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.484+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:54.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.489+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:54.584+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:54.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:54.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:54.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:54.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:54.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:54.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:54.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:54.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:54.684+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:54.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:54.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:54.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:54.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:54.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:54.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:54.784+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:54.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:54.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:54.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:54.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 704) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:54.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 704 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:04.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:54.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:22.839+0000 2019-09-04T06:30:54.838+0000 D2 ASIO [Replication] Request 704 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 
2019-09-04T06:30:54.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:54.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:54.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 704) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) }
2019-09-04T06:30:54.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:30:54.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:56.838Z
2019-09-04T06:30:54.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:22.839+0000
2019-09-04T06:30:54.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:54.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 705) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:54.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 705 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:04.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:54.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:22.839+0000
2019-09-04T06:30:54.839+0000 D2 ASIO [Replication] Request 705 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) }
2019-09-04T06:30:54.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:30:54.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:54.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 705) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578649, 1) }
2019-09-04T06:30:54.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:30:54.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:31:04.408+0000
2019-09-04T06:30:54.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:31:05.182+0000
2019-09-04T06:30:54.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:30:54.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:56.839Z
2019-09-04T06:30:54.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:54.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:24.839+0000
2019-09-04T06:30:54.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:24.839+0000
2019-09-04T06:30:54.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.884+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:54.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.984+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:54.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:54.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:54.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:55.043+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.043+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.049+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36668 #269 (92 connections now open)
2019-09-04T06:30:55.049+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:55.049+0000 D2 COMMAND [conn269] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:55.049+0000 I NETWORK [conn269] received client metadata from 10.108.2.55:36668 conn269: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:30:55.049+0000 I COMMAND [conn269] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:30:55.050+0000 D2 COMMAND [conn269] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578651, 1), signature: { hash: BinData(0, BB862498148B907236F82EF2CD87FA263BF3C9C2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:30:55.050+0000 D1 REPL [conn269] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578649, 1), t: 1 }
2019-09-04T06:30:55.050+0000 D3 REPL [conn269] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:25.060+0000
2019-09-04T06:30:55.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 36929FD359BF68B276CD8735F588CA7A4DCB6F1B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:55.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:30:55.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 36929FD359BF68B276CD8735F588CA7A4DCB6F1B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:55.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 36929FD359BF68B276CD8735F588CA7A4DCB6F1B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:55.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), opTime: { ts: Timestamp(1567578649, 1), t: 1 }, wallTime: new Date(1567578649228) }
2019-09-04T06:30:55.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 36929FD359BF68B276CD8735F588CA7A4DCB6F1B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.084+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:55.108+0000 D2 ASIO [RS] Request 703 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578655, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578655100), o: { $v: 1, $set: { ping: new Date(1567578655097), up: 2555 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpApplied: { ts: Timestamp(1567578655, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578655, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 1) }
2019-09-04T06:30:55.108+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578655, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578655100), o: { $v: 1, $set: { ping: new Date(1567578655097), up: 2555 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpApplied: { ts: Timestamp(1567578655, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578649, 1), $clusterTime: { clusterTime: Timestamp(1567578655, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:55.108+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:55.108+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578655, 1) and ending at ts: Timestamp(1567578655, 1)
2019-09-04T06:30:55.108+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:05.182+0000
2019-09-04T06:30:55.108+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:06.310+0000
2019-09-04T06:30:55.108+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:55.108+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:24.839+0000
2019-09-04T06:30:55.108+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578655, 1), t: 1 }
2019-09-04T06:30:55.109+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:55.109+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:55.109+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578649, 1)
2019-09-04T06:30:55.109+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10347
2019-09-04T06:30:55.109+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:55.109+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:55.109+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10347
2019-09-04T06:30:55.109+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:55.109+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:30:55.109+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:55.109+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578649, 1)
2019-09-04T06:30:55.109+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10350
2019-09-04T06:30:55.109+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578655, 1) }
2019-09-04T06:30:55.109+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:55.109+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:55.109+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10350
2019-09-04T06:30:55.109+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10312
2019-09-04T06:30:55.109+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10312
2019-09-04T06:30:55.109+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10353
2019-09-04T06:30:55.109+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10353
2019-09-04T06:30:55.109+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:55.109+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 10355
2019-09-04T06:30:55.109+0000 D4 STORAGE [repl-writer-worker-7] inserting record with timestamp Timestamp(1567578655, 1)
2019-09-04T06:30:55.109+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578655, 1)
2019-09-04T06:30:55.109+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 10355
2019-09-04T06:30:55.109+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:55.109+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:30:55.109+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10354
2019-09-04T06:30:55.109+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10354
2019-09-04T06:30:55.109+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10357
2019-09-04T06:30:55.109+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10357
2019-09-04T06:30:55.109+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578655, 1), t: 1 }({ ts: Timestamp(1567578655, 1), t: 1 })
2019-09-04T06:30:55.109+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578655, 1)
2019-09-04T06:30:55.109+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10358
2019-09-04T06:30:55.109+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578655, 1) } } ] } sort: {} projection: {}
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578655, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Rated tree:
$and
    t $eq 1  || First: notFirst: full path: t
    ts $lt Timestamp(1567578655, 1)  || First: notFirst: full path: ts
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578655, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Rated tree:
t $lt 1  || First: notFirst: full path: t
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578655, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Rated tree:
$or
    $and
        t $eq 1  || First: notFirst: full path: t
        ts $lt Timestamp(1567578655, 1)  || First: notFirst: full path: ts
    t $lt 1  || First: notFirst: full path: t
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:55.109+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578655, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:55.109+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10358
2019-09-04T06:30:55.109+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:55.109+0000 D3 STORAGE [repl-writer-worker-11] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:55.109+0000 D3 REPL [repl-writer-worker-11] applying op: { ts: Timestamp(1567578655, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578655100), o: { $v: 1, $set: { ping: new Date(1567578655097), up: 2555 } } }, oplog application mode: Secondary
2019-09-04T06:30:55.109+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578655, 1)
2019-09-04T06:30:55.109+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 10360
2019-09-04T06:30:55.109+0000 D2 QUERY [repl-writer-worker-11] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:30:55.109+0000 D4 WRITE [repl-writer-worker-11] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:30:55.109+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 10360
2019-09-04T06:30:55.110+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:55.110+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578655, 1), t: 1 }({ ts: Timestamp(1567578655, 1), t: 1 })
2019-09-04T06:30:55.110+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578655, 1)
2019-09-04T06:30:55.110+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10359
2019-09-04T06:30:55.110+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:55.110+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:55.110+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:30:55.110+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:55.110+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:55.110+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:55.110+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10359
2019-09-04T06:30:55.110+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578655, 1)
2019-09-04T06:30:55.110+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10363
2019-09-04T06:30:55.110+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10363
2019-09-04T06:30:55.110+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578655, 1), t: 1 }({ ts: Timestamp(1567578655, 1), t: 1 })
2019-09-04T06:30:55.110+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:55.110+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578655, 1), t: 1 }, appliedWallTime: new Date(1567578655100), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:55.110+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 706 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:25.110+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578655, 1), t: 1 }, appliedWallTime: new Date(1567578655100), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578649, 1), t: 1 }, lastCommittedWall: new Date(1567578649228), lastOpVisible: { ts: Timestamp(1567578649, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:55.110+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:25.110+0000
2019-09-04T06:30:55.110+0000 D2 ASIO [RS] Request 706 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpVisible: { ts: Timestamp(1567578655, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 1), $clusterTime: { clusterTime: Timestamp(1567578655, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 1) }
2019-09-04T06:30:55.110+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpVisible: { ts: Timestamp(1567578655, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 1), $clusterTime: { clusterTime: Timestamp(1567578655, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:55.110+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:55.110+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:25.110+0000
2019-09-04T06:30:55.111+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578655, 1), t: 1 }
2019-09-04T06:30:55.111+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 707 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:05.111+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578649, 1), t: 1 } }
2019-09-04T06:30:55.111+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:25.110+0000
2019-09-04T06:30:55.111+0000 D2 ASIO [RS] Request 707 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpVisible: { ts: Timestamp(1567578655, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpApplied: { ts: Timestamp(1567578655, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 1), $clusterTime: { clusterTime: Timestamp(1567578655, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 1) }
2019-09-04T06:30:55.111+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpVisible: { ts: Timestamp(1567578655, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpApplied: { ts: Timestamp(1567578655, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 1), $clusterTime: { clusterTime: Timestamp(1567578655, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:55.111+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:55.111+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:30:55.111+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578650, 1)
2019-09-04T06:30:55.111+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:06.310+0000
2019-09-04T06:30:55.111+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:05.990+0000
2019-09-04T06:30:55.111+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 708 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:05.111+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578655, 1), t: 1 } }
2019-09-04T06:30:55.111+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:25.110+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn242] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn242] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.507+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn263] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:30:55.111+0000 D3 REPL [conn263] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.415+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn252] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn252] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.897+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn253] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn253] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.925+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn233] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn233] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.024+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn251] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn251] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.763+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn259] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn259] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:24.151+0000
2019-09-04T06:30:55.111+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:55.111+0000 D3 REPL [conn237] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:24.839+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn237] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.409+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn249] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn249] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.702+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn254] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn254] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.962+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn225] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn225] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.468+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn250] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn250] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.753+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn255] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn255] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.877+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn224] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn224] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn241] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn241] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.879+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn240] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn240] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.874+0000
2019-09-04T06:30:55.111+0000 D3 REPL [conn269] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn269] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:25.060+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn261] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn261] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.023+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn236] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn236] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.040+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn229] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn229] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn231] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn231] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.190+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn258] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn258] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:14.433+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn256] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn256] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.918+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn234] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn234] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.033+0000
2019-09-04T06:30:55.112+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:55.112+0000 D3 REPL [conn247] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn247] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:59.750+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn227] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn227] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.888+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn257] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn257] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.293+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn228] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn228] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:56.304+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn262] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn262] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.051+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn265] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn265] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.660+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn266] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn266] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn267] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn267] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn248] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn248] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.456+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn268] Got notified of new snapshot: { ts: Timestamp(1567578655, 1), t: 1 }, 2019-09-04T06:30:55.100+0000
2019-09-04T06:30:55.112+0000 D3 REPL [conn268] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:22.595+0000
2019-09-04T06:30:55.112+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578655, 1), t: 1 }, durableWallTime: new Date(1567578655100), appliedOpTime: { ts: Timestamp(1567578655, 1), t: 1 }, appliedWallTime: new Date(1567578655100), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpVisible: { ts: Timestamp(1567578655, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:55.112+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 709 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:25.112+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578655, 1), t: 1 }, durableWallTime: new Date(1567578655100), appliedOpTime: { ts: Timestamp(1567578655, 1), t: 1 }, appliedWallTime: new Date(1567578655100), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpVisible: { ts: Timestamp(1567578655, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:55.112+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:25.110+0000
2019-09-04T06:30:55.112+0000 D2 ASIO [RS] Request 709 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpVisible: { ts: Timestamp(1567578655, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 1), $clusterTime: { clusterTime: Timestamp(1567578655, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 1) }
2019-09-04T06:30:55.112+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpVisible: { ts: Timestamp(1567578655, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 1), $clusterTime: { clusterTime: Timestamp(1567578655, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:55.112+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:55.112+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:25.110+0000
2019-09-04T06:30:55.135+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.135+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.147+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.147+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.184+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:55.209+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578655, 1)
2019-09-04T06:30:55.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:55.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:55.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.284+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:55.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.384+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:55.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.485+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:55.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.543+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.543+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.585+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:55.635+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.635+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.647+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.647+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.675+0000 D2 ASIO [RS] Request 708 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578655, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578655664), o: { $v: 1, $set: { ping: new Date(1567578655663) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpVisible: { ts: Timestamp(1567578655, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpApplied: { ts: Timestamp(1567578655, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 1), $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) }
2019-09-04T06:30:55.675+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578655, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578655664), o: { $v: 1, $set: { ping: new Date(1567578655663) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpVisible: { ts: Timestamp(1567578655, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpApplied: { ts: Timestamp(1567578655, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 1), $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:55.675+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:55.675+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578655, 2) and ending at ts: Timestamp(1567578655, 2)
2019-09-04T06:30:55.675+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:05.990+0000
2019-09-04T06:30:55.675+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:06.852+0000
2019-09-04T06:30:55.675+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:55.675+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:24.839+0000
2019-09-04T06:30:55.675+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578655, 2), t: 1 }
2019-09-04T06:30:55.675+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:55.675+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:55.675+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578655, 1)
2019-09-04T06:30:55.675+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10388
2019-09-04T06:30:55.675+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:55.675+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:55.675+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10388
2019-09-04T06:30:55.675+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:55.675+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:30:55.675+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:55.675+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578655, 1)
2019-09-04T06:30:55.675+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578655, 2) }
2019-09-04T06:30:55.675+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10391
2019-09-04T06:30:55.675+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:55.675+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:55.675+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10391
2019-09-04T06:30:55.675+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10364
2019-09-04T06:30:55.675+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10364
2019-09-04T06:30:55.675+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10394
2019-09-04T06:30:55.675+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10394
2019-09-04T06:30:55.675+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:30:55.675+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 10396
2019-09-04T06:30:55.675+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578655, 2)
2019-09-04T06:30:55.675+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578655, 2)
2019-09-04T06:30:55.675+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 10396
2019-09-04T06:30:55.675+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:30:55.675+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:30:55.675+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10395
2019-09-04T06:30:55.675+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10395
2019-09-04T06:30:55.675+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10398
2019-09-04T06:30:55.675+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10398
2019-09-04T06:30:55.675+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578655, 2), t: 1 }({ ts: Timestamp(1567578655, 2), t: 1 })
2019-09-04T06:30:55.675+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578655, 2)
2019-09-04T06:30:55.675+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10399
2019-09-04T06:30:55.675+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578655, 2) } } ] } sort: {} projection: {}
2019-09-04T06:30:55.675+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:55.675+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:30:55.675+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578655, 2)
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:55.675+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:55.675+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:55.675+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:55.675+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578655, 2) || First: notFirst: full path: ts
2019-09-04T06:30:55.675+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:55.675+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578655, 2)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:55.675+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:55.675+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:30:55.675+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:55.675+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:55.675+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:55.675+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:55.675+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:55.675+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:55.676+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:30:55.676+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578655, 2)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:55.676+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:30:55.676+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:30:55.676+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:30:55.676+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578655, 2) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:30:55.676+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:55.676+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578655, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:55.676+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10399 2019-09-04T06:30:55.676+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:30:55.676+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:30:55.676+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578655, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578655664), o: { $v: 1, $set: { ping: new Date(1567578655663) } } }, oplog application mode: Secondary 2019-09-04T06:30:55.676+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578655, 2) 2019-09-04T06:30:55.676+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 10401 2019-09-04T06:30:55.676+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" } 2019-09-04T06:30:55.676+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:30:55.676+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 10401 2019-09-04T06:30:55.676+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:30:55.676+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578655, 2), t: 1 }({ ts: Timestamp(1567578655, 2), t: 1 }) 2019-09-04T06:30:55.676+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578655, 2) 2019-09-04T06:30:55.676+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10400 2019-09-04T06:30:55.676+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:30:55.676+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:30:55.676+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:30:55.676+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:30:55.676+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:30:55.676+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:30:55.676+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10400 2019-09-04T06:30:55.676+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578655, 2) 2019-09-04T06:30:55.676+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10405 2019-09-04T06:30:55.676+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10405 2019-09-04T06:30:55.676+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578655, 2), t: 1 }({ ts: Timestamp(1567578655, 2), t: 1 }) 2019-09-04T06:30:55.676+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:55.676+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578655, 1), t: 1 }, durableWallTime: new Date(1567578655100), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpVisible: { ts: Timestamp(1567578655, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:55.676+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 710 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:25.676+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578655, 1), t: 1 }, durableWallTime: new Date(1567578655100), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpVisible: { ts: Timestamp(1567578655, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:30:55.676+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:25.676+0000 2019-09-04T06:30:55.676+0000 D2 ASIO [RS] Request 710 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpVisible: { ts: Timestamp(1567578655, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 1), $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) } 2019-09-04T06:30:55.676+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 1), t: 1 }, lastCommittedWall: new Date(1567578655100), lastOpVisible: { ts: Timestamp(1567578655, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 1), $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:55.677+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:55.677+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:25.676+0000 2019-09-04T06:30:55.677+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578655, 2), t: 1 } 2019-09-04T06:30:55.677+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 711 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:05.677+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578655, 1), t: 1 } } 2019-09-04T06:30:55.677+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:25.676+0000 2019-09-04T06:30:55.677+0000 D2 ASIO [RS] Request 711 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpApplied: { ts: Timestamp(1567578655, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) } 2019-09-04T06:30:55.677+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new 
Date(1567578655664), lastOpApplied: { ts: Timestamp(1567578655, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:55.677+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:30:55.677+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:30:55.677+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000 2019-09-04T06:30:55.677+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000 2019-09-04T06:30:55.677+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578650, 2) 2019-09-04T06:30:55.677+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:06.852+0000 2019-09-04T06:30:55.677+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:06.238+0000 2019-09-04T06:30:55.677+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 712 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:05.677+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578655, 2), t: 1 } } 2019-09-04T06:30:55.677+0000 D3 REPL [conn251] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000 2019-09-04T06:30:55.677+0000 D3 REPL [conn251] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.763+0000 2019-09-04T06:30:55.677+0000 D3 REPL [conn252] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000 2019-09-04T06:30:55.677+0000 D3 REPL [conn252] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.897+0000 2019-09-04T06:30:55.677+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:25.676+0000 2019-09-04T06:30:55.677+0000 D3 REPL [conn253] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000 2019-09-04T06:30:55.677+0000 D3 REPL [conn253] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.925+0000 2019-09-04T06:30:55.677+0000 D3 REPL [conn228] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000 2019-09-04T06:30:55.677+0000 D3 REPL [conn228] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:56.304+0000 2019-09-04T06:30:55.678+0000 D3 REPL [conn224] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000 2019-09-04T06:30:55.678+0000 D3 REPL [conn224] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000 2019-09-04T06:30:55.678+0000 D3 REPL [conn256] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000 2019-09-04T06:30:55.678+0000 D3 REPL [conn256] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.918+0000 2019-09-04T06:30:55.678+0000 D3 REPL [conn261] 
Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:55.678+0000 D3 REPL [conn261] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.023+0000
2019-09-04T06:30:55.678+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:24.839+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn269] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn269] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:25.060+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn258] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn258] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:14.433+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn247] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn247] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:30:59.750+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn257] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn257] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.293+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn262] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn262] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.051+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn266] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn266] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn248] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn248] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.456+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn242] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn242] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.507+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn263] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn263] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.415+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn237] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn237] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.409+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn254] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn254] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.962+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn233] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn233] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.024+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn249] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn249] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.702+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn255] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn255] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.877+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn225] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn225] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.468+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn241] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn241] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.879+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn267] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn267] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn265] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn265] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.660+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn250] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn250] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.753+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn234] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn234] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.033+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn236] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn236] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.040+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn231] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn231] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.190+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn240] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn240] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.874+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn227] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn227] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.888+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn259] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn259] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:24.151+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn268] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn268] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:22.595+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn229] Got notified of new snapshot: { ts: Timestamp(1567578655, 2), t: 1 }, 2019-09-04T06:30:55.664+0000
2019-09-04T06:30:55.678+0000 D3 REPL [conn229] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000
2019-09-04T06:30:55.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.701+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:30:55.701+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:30:55.701+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:55.701+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:30:55.701+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 713 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:25.701+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, durableWallTime: new Date(1567578649228), appliedOpTime: { ts: Timestamp(1567578649, 1), t: 1 }, appliedWallTime: new Date(1567578649228), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
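[annotation] The two Reporter records above show this secondary pushing each member's durableOpTime/appliedOpTime upstream with replSetUpdatePosition. A minimal sketch (Python/pymongo; only the host name is taken from the log, the rest is illustrative) of reading the same progress data on demand via replSetGetStatus:

from pymongo import MongoClient

# Connect directly to the config-server node seen in this log (hypothetical URI).
client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    # optimeDate mirrors the appliedWallTime values the Reporter forwards upstream
    print(member["name"], member["stateStr"], member.get("optimeDate"))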
2019-09-04T06:30:55.701+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:25.676+0000
2019-09-04T06:30:55.701+0000 D2 ASIO [RS] Request 713 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) }
2019-09-04T06:30:55.701+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:55.701+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:30:55.701+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:25.676+0000
2019-09-04T06:30:55.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.775+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578655, 2)
2019-09-04T06:30:55.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.801+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:55.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.901+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:55.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:55.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:55.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:56.001+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:56.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.101+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:56.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.201+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:56.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 9BE8BE6035413B2F3A584AA192E6AFF2A563822F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:56.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:30:56.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 9BE8BE6035413B2F3A584AA192E6AFF2A563822F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:56.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 9BE8BE6035413B2F3A584AA192E6AFF2A563822F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:56.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), opTime: { ts: Timestamp(1567578655, 2), t: 1 }, wallTime: new Date(1567578655664) }
2019-09-04T06:30:56.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 9BE8BE6035413B2F3A584AA192E6AFF2A563822F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:56.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:56.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.302+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:56.304+0000 I COMMAND [conn228] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578623, 1), signature: { hash: BinData(0, 3A03B2C9D6C104013865981BCFBDB7C92D5EFDE9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
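[annotation] The conn228 record above is the start of the failure traced below: a config.shards read carrying readConcern { level: "majority", afterOpTime: ... } and maxTimeMS: 30000 that expired before the majority snapshot caught up to the requested opTime (note the afterOpTime term, t: 92, is far ahead of the set's current term 1). The afterOpTime field is injected by the requesting mongos, not set by drivers, but the client-visible half is easy to sketch (Python/pymongo; host name from the log, otherwise illustrative): a majority read with a server-side time budget surfaces as ExecutionTimeout when the server answers MaxTimeMSExpired.

from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout
from pymongo.read_concern import ReadConcern

client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
shards = client.get_database(
    "config", read_concern=ReadConcern("majority")  # wait for a committed snapshot
).get_collection("shards")
try:
    docs = list(shards.find({}, max_time_ms=30000))  # same 30 s budget as the log
except ExecutionTimeout:
    # pymongo's mapping of the server-side MaxTimeMSExpired (error code 50)
    print("read concern wait exceeded maxTimeMS")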
2019-09-04T06:30:56.304+0000 D1 - [conn228] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:30:56.304+0000 W - [conn228] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:30:56.321+0000 I - [conn228] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
 mongod(+0x10FBF24) [0x56174a083f24]
 mongod(+0x10FCE0E) [0x56174a084e0e]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:30:56.321+0000 D1 COMMAND [conn228] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority",
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578623, 1), signature: { hash: BinData(0, 3A03B2C9D6C104013865981BCFBDB7C92D5EFDE9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:56.321+0000 D1 - [conn228] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:56.321+0000 W - [conn228] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:56.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:56.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:56.341+0000 I - [conn228] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo
19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : 
"799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:30:56.341+0000 W COMMAND [conn228] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:30:56.341+0000 I COMMAND [conn228] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578623, 1), signature: { hash: BinData(0, 3A03B2C9D6C104013865981BCFBDB7C92D5EFDE9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms
2019-09-04T06:30:56.341+0000 D2 NETWORK [conn228] Session from 10.108.2.60:44842 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:30:56.341+0000 I NETWORK [conn228] end connection 10.108.2.60:44842 (91 connections now open)
2019-09-04T06:30:56.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.402+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:56.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
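[annotation] Both backtraces above are side effects of the same MaxTimeMSExpired failure, not crashes: traceAllExceptions: true makes mongod print a stack for every thrown DBException. The first is thrown from waitForReadConcern; the second (lock_state.cpp, Lock::GlobalLock in the frames) from the global-lock acquisition while logging the slow operation, which also produced the "Unable to gather storage statistics" warning. The BEGIN/END BACKTRACE blob is JSON, so it can be post-processed offline; a small sketch of that idea (the one-frame blob below is a stand-in for the full blob pasted from the log):

import json

# Stand-in for the full {"backtrace":[...]} blob between BEGIN/END BACKTRACE.
blob = '{"backtrace":[{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"}]}'
for frame in json.loads(blob)["backtrace"]:
    # "o" is the hex offset into the module whose base address is "b"; "s",
    # when present, is the mangled C++ symbol (c++filt can demangle it, and
    # addr2line -e mongod 0x<o> resolves file:line if debug info is installed).
    print(f"+0x{frame['o']}  {frame.get('s', '<no symbol>')}")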
2019-09-04T06:30:56.502+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:56.553+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.553+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.601+0000 D2 COMMAND [conn49] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578655, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 9BE8BE6035413B2F3A584AA192E6AFF2A563822F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578655, 2), t: 1 } }, $db: "config" }
2019-09-04T06:30:56.601+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578655, 2), t: 1 } } }
2019-09-04T06:30:56.601+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:30:56.601+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578655, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 9BE8BE6035413B2F3A584AA192E6AFF2A563822F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578655, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578655, 2)
2019-09-04T06:30:56.601+0000 D2 QUERY [conn49] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1
2019-09-04T06:30:56.602+0000 I COMMAND [conn49] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578655, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 9BE8BE6035413B2F3A584AA192E6AFF2A563822F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578655, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:30:56.602+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:56.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.675+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:56.675+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:56.675+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578655, 2)
2019-09-04T06:30:56.675+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10438
2019-09-04T06:30:56.675+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:56.675+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:56.675+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10438
2019-09-04T06:30:56.676+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10441
2019-09-04T06:30:56.676+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10441
2019-09-04T06:30:56.676+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578655, 2), t: 1 }({ ts: Timestamp(1567578655, 2), t: 1 })
2019-09-04T06:30:56.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.695+0000 D2 COMMAND [conn49] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578655, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 9BE8BE6035413B2F3A584AA192E6AFF2A563822F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578655, 2), t: 1 } }, $db: "config" }
2019-09-04T06:30:56.695+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578655, 2), t: 1 } } }
2019-09-04T06:30:56.695+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:30:56.695+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578655, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 9BE8BE6035413B2F3A584AA192E6AFF2A563822F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578655, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578655, 2)
2019-09-04T06:30:56.695+0000 D5 QUERY [conn49] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:30:56.695+0000 D5 QUERY [conn49] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:30:56.695+0000 D5 QUERY [conn49] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:30:56.695+0000 D5 QUERY [conn49] Rated tree: $and
2019-09-04T06:30:56.695+0000 D5 QUERY [conn49] Planner: outputted 0 indexed solutions.
2019-09-04T06:30:56.695+0000 D5 QUERY [conn49] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:30:56.695+0000 D2 QUERY [conn49] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:30:56.695+0000 D3 STORAGE [conn49] WT begin_transaction for snapshot id 10444
2019-09-04T06:30:56.695+0000 D3 STORAGE [conn49] WT rollback_transaction for snapshot id 10444
2019-09-04T06:30:56.695+0000 I COMMAND [conn49] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578655, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 9BE8BE6035413B2F3A584AA192E6AFF2A563822F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578655, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:30:56.702+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:56.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.802+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:56.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:56.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:56.838+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:30:55.061+0000
2019-09-04T06:30:56.838+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:30:56.232+0000
2019-09-04T06:30:56.838+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:30:55.061+0000
2019-09-04T06:30:56.838+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:31:05.061+0000
2019-09-04T06:30:56.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:26.838+0000
2019-09-04T06:30:56.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 714) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:56.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 714 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:06.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
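[annotation] The D5 QUERY planner trace above ends in planSummary: COLLSCAN: config.shards carries only the host_1 and _id_ indexes, the filter is empty, and only 3 documents were examined, so a collection scan is the expected plan rather than a problem. A sketch for confirming that from a client with explain (Python/pymongo; host name from the log, otherwise illustrative):

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
# Cursor.explain() returns the same winning plan the D5 QUERY records trace.
plan = client.config.shards.find({}).explain()
print(plan["queryPlanner"]["winningPlan"])  # expect a COLLSCAN stage for an empty filter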
2019-09-04T06:30:56.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:26.838+0000
2019-09-04T06:30:56.838+0000 D2 ASIO [Replication] Request 714 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), opTime: { ts: Timestamp(1567578655, 2), t: 1 }, wallTime: new Date(1567578655664), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) }
2019-09-04T06:30:56.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), opTime: { ts: Timestamp(1567578655, 2), t: 1 }, wallTime: new Date(1567578655664), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:30:56.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:56.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 714) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), opTime: { ts: Timestamp(1567578655, 2), t: 1 }, wallTime: new Date(1567578655664), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) }
2019-09-04T06:30:56.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:30:56.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:30:58.838Z
2019-09-04T06:30:56.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:26.838+0000
2019-09-04T06:30:56.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:56.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 715) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:56.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 715 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:06.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:56.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:26.838+0000
2019-09-04T06:30:56.839+0000 D2 ASIO [Replication] Request 715 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), opTime: { ts: Timestamp(1567578655, 2), t: 1 }, wallTime: new Date(1567578655664), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) }
2019-09-04T06:30:56.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), opTime: { ts: Timestamp(1567578655, 2), t: 1 }, wallTime: new Date(1567578655664), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:30:56.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:56.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 715) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), opTime: { ts: Timestamp(1567578655, 2), t: 1 }, wallTime: new Date(1567578655664), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) }
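[annotation] The ELECTION records just below show the steady-state pattern: each successful heartbeat from the primary (cmodb802, state: 1 above) cancels the pending election timeout and re-arms it roughly ten seconds out, while heartbeats themselves are rescheduled every two seconds. Both intervals come from the replica-set settings, which can be read back as sketched here (Python/pymongo; illustrative):

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
cfg = client.admin.command("replSetGetConfig")["config"]
settings = cfg.get("settings", {})
# electionTimeoutMillis (default 10000) bounds how long a secondary waits for a
# primary heartbeat before calling an election; heartbeats are sent every
# heartbeatIntervalMillis (default 2000), matching the 2 s cadence in the log.
print(settings.get("electionTimeoutMillis"), settings.get("heartbeatIntervalMillis"))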
2019-09-04T06:30:56.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:30:56.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:31:06.238+0000
2019-09-04T06:30:56.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:31:07.167+0000
2019-09-04T06:30:56.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:30:56.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:30:58.839Z
2019-09-04T06:30:56.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:56.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:26.839+0000
2019-09-04T06:30:56.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:26.839+0000
2019-09-04T06:30:56.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.902+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:56.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:56.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:56.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:57.003+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:57.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 9BE8BE6035413B2F3A584AA192E6AFF2A563822F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:57.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:30:57.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 9BE8BE6035413B2F3A584AA192E6AFF2A563822F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:57.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 9BE8BE6035413B2F3A584AA192E6AFF2A563822F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:57.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), opTime: { ts: Timestamp(1567578655, 2), t: 1 }, wallTime: new Date(1567578655664) }
2019-09-04T06:30:57.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 9BE8BE6035413B2F3A584AA192E6AFF2A563822F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.103+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:57.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.203+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:57.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.220+0000 D2 COMMAND [conn160] run command admin.$cmd { ping: 1, $db: "admin" }
2019-09-04T06:30:57.220+0000 I COMMAND [conn160] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.229+0000 D2 COMMAND [conn220] run command admin.$cmd { ping: 1.0, lsid: { id: UUID("23af97f8-66f0-4a27-b5f1-59167651ca5f") }, $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" }
2019-09-04T06:30:57.229+0000 I COMMAND [conn220] command admin.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("23af97f8-66f0-4a27-b5f1-59167651ca5f") }, $clusterTime: { clusterTime: Timestamp(1567578595, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:57.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:57.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.303+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:57.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.403+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:57.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.503+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:57.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.603+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:57.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.675+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:57.675+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:57.676+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578655, 2)
2019-09-04T06:30:57.676+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10480
2019-09-04T06:30:57.676+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:57.676+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:57.676+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10480
2019-09-04T06:30:57.676+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10483
2019-09-04T06:30:57.676+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10483
2019-09-04T06:30:57.676+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578655, 2), t: 1 }({ ts: Timestamp(1567578655, 2), t: 1 })
2019-09-04T06:30:57.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.703+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:57.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.803+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:57.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.904+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:57.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:57.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:57.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:58.004+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:58.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.104+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:58.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.204+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:58.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578657, 1), signature: { hash: BinData(0, 972D177E14CCA8D669730ED6AC9C9B84C50E26A3), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:58.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:30:58.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578657, 1), signature: { hash: BinData(0, 972D177E14CCA8D669730ED6AC9C9B84C50E26A3), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:58.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578657, 1), signature: { hash: BinData(0, 972D177E14CCA8D669730ED6AC9C9B84C50E26A3), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:58.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), opTime: { ts: Timestamp(1567578655, 2), t: 1 }, wallTime: new Date(1567578655664) }
2019-09-04T06:30:58.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578657, 1), signature: { hash: BinData(0, 972D177E14CCA8D669730ED6AC9C9B84C50E26A3), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:58.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:58.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.304+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:58.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.404+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:58.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.504+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:58.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.605+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:58.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.676+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:58.676+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:58.676+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578655, 2)
2019-09-04T06:30:58.676+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10516
2019-09-04T06:30:58.676+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:58.676+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:58.676+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10516
2019-09-04T06:30:58.677+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10519
2019-09-04T06:30:58.677+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10519
2019-09-04T06:30:58.677+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578655, 2), t: 1 }({ ts: Timestamp(1567578655, 2), t: 1 })
2019-09-04T06:30:58.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.705+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:58.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.805+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:58.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:58.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 716) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:58.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 716 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:08.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
"cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:30:58.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:26.839+0000 2019-09-04T06:30:58.838+0000 D2 ASIO [Replication] Request 716 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), opTime: { ts: Timestamp(1567578655, 2), t: 1 }, wallTime: new Date(1567578655664), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578657, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) } 2019-09-04T06:30:58.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), opTime: { ts: Timestamp(1567578655, 2), t: 1 }, wallTime: new Date(1567578655664), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578657, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:30:58.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:30:58.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 716) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), opTime: { ts: Timestamp(1567578655, 2), t: 1 }, wallTime: new Date(1567578655664), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578657, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) } 2019-09-04T06:30:58.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:30:58.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to 
2019-09-04T06:30:58.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:26.839+0000
2019-09-04T06:30:58.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:30:58.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 717) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:58.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 717 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:08.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:30:58.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:26.839+0000
2019-09-04T06:30:58.839+0000 D2 ASIO [Replication] Request 717 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), opTime: { ts: Timestamp(1567578655, 2), t: 1 }, wallTime: new Date(1567578655664), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578657, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) }
2019-09-04T06:30:58.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), opTime: { ts: Timestamp(1567578655, 2), t: 1 }, wallTime: new Date(1567578655664), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578657, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:30:58.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:58.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 717) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), opTime: { ts: Timestamp(1567578655, 2), t: 1 }, wallTime: new Date(1567578655664), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578657, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578655, 2) }
2019-09-04T06:30:58.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:30:58.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:31:07.167+0000
2019-09-04T06:30:58.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:31:10.020+0000
2019-09-04T06:30:58.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:30:58.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:00.839Z
2019-09-04T06:30:58.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:30:58.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:28.839+0000
2019-09-04T06:30:58.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:28.839+0000
2019-09-04T06:30:58.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.905+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:58.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:58.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:58.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:30:59.005+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:59.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578657, 1), signature: { hash: BinData(0, 972D177E14CCA8D669730ED6AC9C9B84C50E26A3), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:59.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:30:59.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578657, 1), signature: { hash: BinData(0, 972D177E14CCA8D669730ED6AC9C9B84C50E26A3), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:59.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578657, 1), signature: { hash: BinData(0, 972D177E14CCA8D669730ED6AC9C9B84C50E26A3), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:30:59.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), opTime: { ts: Timestamp(1567578655, 2), t: 1 }, wallTime: new Date(1567578655664) }
2019-09-04T06:30:59.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578657, 1), signature: { hash: BinData(0, 972D177E14CCA8D669730ED6AC9C9B84C50E26A3), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.105+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:59.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.205+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:59.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:30:59.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:30:59.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.306+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:59.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.406+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:59.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.506+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:59.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.606+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:59.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.676+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:30:59.676+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:30:59.676+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578655, 2)
2019-09-04T06:30:59.676+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10555
2019-09-04T06:30:59.676+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:30:59.676+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:30:59.676+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10555
2019-09-04T06:30:59.677+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10558
2019-09-04T06:30:59.677+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10558
2019-09-04T06:30:59.677+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578655, 2), t: 1 }({ ts: Timestamp(1567578655, 2), t: 1 })
2019-09-04T06:30:59.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.706+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:30:59.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:30:59.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:30:59.735+0000 I NETWORK [listener] connection accepted from 10.108.2.49:53394 #270 (92 connections now open)
2019-09-04T06:30:59.735+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:30:59.735+0000 D2 COMMAND [conn270] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:30:59.735+0000 I NETWORK [conn270] received client metadata from 10.108.2.49:53394 conn270: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
"NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:30:59.735+0000 I COMMAND [conn270] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:30:59.752+0000 I COMMAND [conn247] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, D896F3BE714D00815704D4FC4827545AB3DF55E0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:30:59.752+0000 D1 - [conn247] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:30:59.752+0000 W - [conn247] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:59.769+0000 I - [conn247] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:59.769+0000 D1 COMMAND [conn247] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, D896F3BE714D00815704D4FC4827545AB3DF55E0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:59.769+0000 D1 - [conn247] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:30:59.769+0000 W - [conn247] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:30:59.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:59.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:59.790+0000 I - [conn247] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 
0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : 
"88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" 
: "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:30:59.790+0000 W COMMAND [conn247] Unable to 
gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:30:59.790+0000 I COMMAND [conn247] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578620, 1), signature: { hash: BinData(0, D896F3BE714D00815704D4FC4827545AB3DF55E0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:30:59.790+0000 D2 NETWORK [conn247] Session from 10.108.2.49:53376 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:30:59.790+0000 I NETWORK [conn247] end connection 10.108.2.49:53376 (91 connections now open) 2019-09-04T06:30:59.806+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:59.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:59.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:59.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:59.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:59.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:59.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:59.906+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:30:59.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:59.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:30:59.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:30:59.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:00.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:00.005+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:31:00.005+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:31:00.005+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:31:00.010+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:00.017+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:31:00.018+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 
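The two BEGIN/END BACKTRACE blocks above encode each frame as base plus offset: "b" is the module's load address, "o" the offset into it, and "s" the mangled C++ symbol when one is known. A minimal sketch for reading such a block offline, assuming Python 3 with binutils' c++filt on PATH; "mongod-debug" is a hypothetical path to a mongod binary with debug symbols for the unnamed frames:

```python
import json
import subprocess

# Two frames copied from the backtrace above; in practice, paste the
# whole {"backtrace":[...]} object emitted between BEGIN/END BACKTRACE.
raw = '''{"backtrace":[
  {"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},
  {"b":"561748F88000","o":"10FBF24"}]}'''

for frame in json.loads(raw)["backtrace"]:
    offset = int(frame["o"], 16)   # offset into the mongod image
    sym = frame.get("s")           # mangled name; absent for static/local fns
    if sym:
        # c++filt demangles _ZN5mongo15printStackTraceERSo into
        # mongo::printStackTrace(std::ostream&)
        out = subprocess.run(["c++filt", sym], capture_output=True, text=True)
        print(f"+0x{offset:x}  {out.stdout.strip()}")
    else:
        # Nameless frames can be resolved against a symbol-rich binary.
        print(f"+0x{offset:x}  (addr2line -fC -e mongod-debug 0x{offset:x})")
```

Demangled, the two stacks line up with the assertions logged above: the first fired in ServiceEntryPointMongod::Hooks::waitForReadConcern while waiting for a majority snapshot, the second in CurOp::completeAndLogOperation when taking the GlobalLock for slow-op storage statistics timed out at lock_state.cpp 884.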
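conn247's failing operation is an ordinary driver read: a find on config.shards with readConcern level majority, readPreference nearest, and maxTimeMS 30000, abandoned by the server after 30029ms. A pymongo sketch of an equivalent read, with "cfg-host" as a placeholder for the config server address; pymongo surfaces MaxTimeMSExpired (errCode 50 in the slow-op line above) as ExecutionTimeout:

```python
from pymongo import MongoClient, ReadPreference
from pymongo.errors import ExecutionTimeout
from pymongo.read_concern import ReadConcern

client = MongoClient("mongodb://cfg-host:27019/")  # placeholder host

# Same shape as the logged command: majority read of config.shards,
# nearest read preference, bounded by a 30 s maxTimeMS.
shards = client.get_database(
    "config",
    read_concern=ReadConcern("majority"),
    read_preference=ReadPreference.NEAREST,
).shards

try:
    for doc in shards.find({}, max_time_ms=30000):
        print(doc["_id"], doc["host"])
except ExecutionTimeout:
    # Matches errName:MaxTimeMSExpired errCode:50 in the log: the node
    # could not serve a majority-committed snapshot within the limit.
    print("read timed out waiting for majority read concern")
```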
2019-09-04T06:31:00.020+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:31:00.020+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:31:00.020+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:31:00.020+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:31:00.035+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:00.036+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:00.048+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:00.048+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:31:00.048+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:31:00.059+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:00.059+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:00.059+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:31:00.059+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:00.059+0000 D5 QUERY [conn90] Beginning planning... 
============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:31:00.059+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:31:00.059+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:31:00.059+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:31:00.059+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:31:00.059+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:31:00.059+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:31:00.059+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:00.059+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:00.059+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:00.059+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578655, 2) 2019-09-04T06:31:00.059+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10577 2019-09-04T06:31:00.059+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10577 2019-09-04T06:31:00.059+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:00.065+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:00.066+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:31:00.066+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:31:00.066+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:00.066+0000 D5 QUERY [conn90] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:31:00.066+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:31:00.066+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:31:00.066+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578655, 2) 2019-09-04T06:31:00.066+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10580 2019-09-04T06:31:00.066+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10580 2019-09-04T06:31:00.066+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:00.066+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:31:00.066+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:00.066+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:31:00.066+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:31:00.066+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:31:00.066+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578655, 2) 2019-09-04T06:31:00.066+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10582 2019-09-04T06:31:00.066+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10582 2019-09-04T06:31:00.066+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:00.067+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:31:00.067+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:31:00.067+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:31:00.067+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:00.067+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10585 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:31:00.067+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10585 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10586 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10586 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10587 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10587 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10588 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10588 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10589 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 
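At 06:31:00.005 through 06:31:00.020 above, conn90 runs the standard three-message SCRAM handshake: one saslStart followed by two saslContinue rounds, ending in "Successfully authenticated as principal dba_root on admin" (the payloads appear as "xxx"). Drivers perform this conversation themselves when handed credentials; a pymongo sketch with placeholder host and password:

```python
from pymongo import MongoClient

client = MongoClient(
    "mongodb://cfg-host:27019/",   # placeholder host
    username="dba_root",
    password="<redacted>",         # placeholder password
    authSource="admin",            # the log authenticates against admin
    authMechanism="SCRAM-SHA-1",   # matches mechanism: "SCRAM-SHA-1"
)

# The first command on the connection triggers the saslStart/saslContinue
# exchange seen above; on success the server logs the ACCESS entry.
print(client.admin.command("ping"))
```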
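Once authenticated, conn90 behaves like a monitoring agent: serverStatus, replSetGetStatus, a count of jumbo chunks (necessarily a COLLSCAN, since the planner found no index on jumbo among ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1, and _id_), and a pair of $natural-sorted single-document finds that bracket local.oplog.rs to measure the oplog window. A sketch of the same pass; count_documents is the modern pymongo counterpart of the count command in the log:

```python
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://cfg-host:27019/")  # placeholder host

status = client.admin.command("serverStatus")
rs_status = client.admin.command("replSetGetStatus")
print(rs_status["myState"], status["connections"]["current"])

# COLLSCAN on config.chunks: no index covers {jumbo: true}.
jumbo = client.config.chunks.count_documents({"jumbo": True})

# First and last oplog entries, exactly as the two logged finds with
# sort: { $natural: 1 } and sort: { $natural: -1 } do.
oplog = client.local["oplog.rs"]
first = oplog.find_one(sort=[("$natural", ASCENDING)])
last = oplog.find_one(sort=[("$natural", DESCENDING)])

# ts is a BSON Timestamp; .time is its seconds component.
window = last["ts"].time - first["ts"].time
print(f"jumbo chunks: {jumbo}, oplog window: {window}s")
```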
2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10589 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:31:00.067+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10590 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10590 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10591 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10591 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10592 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10592 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10593 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10593 
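The long run of "looking up metadata for ... @ RecordId(n)" entries that starts after the oplog probes is the server-side cost of conn90's listDatabases: every collection recorded in _mdb_catalog is opened so each database can be sized. From the client it is a single command; a sketch that also echoes the per-collection UUIDs the fetched CCE metadata shows:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://cfg-host:27019/")  # placeholder host

# One listDatabases call drives all of the catalog lookups above.
for db_info in client.admin.command("listDatabases")["databases"]:
    name = db_info["name"]
    print(name, db_info["sizeOnDisk"])
    # listCollections reports the same options/UUID pairs recorded in
    # the _mdb_catalog entries (info.uuid on MongoDB 3.6+).
    for coll in client[name].list_collections():
        print("  ", coll["name"], coll.get("info", {}).get("uuid"))
```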
2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10594 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10594 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10595 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: 
BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10595 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10596 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10596 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10597 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: 
true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10597 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10598 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10598 
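Each "fetched CCE metadata" blob embeds the collection's full index specs, for example the unique host_1 index on config.shards just above and the three unique ns_* indexes on config.chunks earlier. The same specs are available to any client through index_information(); a short sketch:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://cfg-host:27019/")  # placeholder host

# index_information() returns one entry per index, keyed by name, with
# the key pattern and options seen inline in the catalog dumps above.
for ns in ("shards", "chunks", "tags"):
    for name, spec in client.config[ns].index_information().items():
        print(f"config.{ns:10s} {name:22s} key={spec['key']} "
              f"unique={spec.get('unique', False)}")
```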
2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10599 2019-09-04T06:31:00.068+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10599 2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10600 2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10600
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10601
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10601
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10602
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10602
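local.startup_log is reported above with options { capped: true, size: 10485760 }, i.e. a 10 MB capped collection that mongod rotates startup documents through. A hedged sketch of declaring a collection with the same options from a driver; the database and collection names below (test, demo_log) are hypothetical, and only the capped/size options come from the log:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    db = client["test"]  # hypothetical database for the example
    # Same options the catalog shows for local.startup_log: capped, 10485760 bytes.
    db.create_collection("demo_log", capped=True, size=10 * 1024 * 1024)
    print(db["demo_log"].options())  # should report {'capped': True, 'size': 10485760}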
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10603
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10603
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10604
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10604
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10605
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10605
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10606
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10606
2019-09-04T06:31:00.069+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 2ms
2019-09-04T06:31:00.069+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:31:00.069+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10608
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10608
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10609
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10609
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10610
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10610
2019-09-04T06:31:00.070+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:31:00.070+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10612
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10612
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10613
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10613
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10614
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10614
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10615
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10615
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10616
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10616
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10617
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10617
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10618
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10618
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10619
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10619
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10620
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10620
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10621
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10621
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10622
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10622
2019-09-04T06:31:00.070+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10623
2019-09-04T06:31:00.071+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10623
2019-09-04T06:31:00.071+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:31:00.071+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:31:00.071+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10625
2019-09-04T06:31:00.071+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10625
2019-09-04T06:31:00.071+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10626
2019-09-04T06:31:00.071+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10626
2019-09-04T06:31:00.071+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10627
2019-09-04T06:31:00.071+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10627
2019-09-04T06:31:00.071+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10628
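conn90's sequence above is a listDatabases followed by one dbStats per database (admin, config, local), every command tagged $readPreference: { mode: "secondaryPreferred" }, which is why this SECONDARY member answers them; each dbStats opens and rolls back a short read-only WT transaction per collection it sizes. A minimal pymongo sketch of the same client-side sequence; the command names and read preference come from the log, the connection details are illustrative:

    from pymongo import MongoClient
    from pymongo.read_preferences import ReadPreference

    client = MongoClient("mongodb://cmodb803.togewa.com:27019",
                         replicaSet="configrs")  # replSetName from the startup options

    # listDatabases, then dbStats per database, as conn90 does above.
    listing = client.admin.command(
        "listDatabases", read_preference=ReadPreference.SECONDARY_PREFERRED)
    for entry in listing["databases"]:
        stats = client[entry["name"]].command(
            "dbStats", read_preference=ReadPreference.SECONDARY_PREFERRED)
        print(entry["name"], stats["objects"], stats["dataSize"])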
2019-09-04T06:31:00.071+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10628
2019-09-04T06:31:00.071+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10629
2019-09-04T06:31:00.071+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10629
2019-09-04T06:31:00.071+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 10630
2019-09-04T06:31:00.071+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 10630
2019-09-04T06:31:00.071+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:31:00.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:00.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:00.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:00.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:00.110+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:00.158+0000 D2 ASIO [RS] Request 712 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578660, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578660126), o: { $v: 1, $set: { ping: new Date(1567578660126) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpApplied: { ts: Timestamp(1567578660, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 1) }
2019-09-04T06:31:00.158+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578660, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578660126), o: { $v: 1, $set: { ping: new Date(1567578660126) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpApplied: { ts: Timestamp(1567578660, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:00.158+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:00.158+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578660, 1) and ending at ts: Timestamp(1567578660, 1)
2019-09-04T06:31:00.158+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:10.020+0000
2019-09-04T06:31:00.159+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:10.941+0000
2019-09-04T06:31:00.159+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:00.159+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:28.839+0000
2019-09-04T06:31:00.159+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578660, 1), t: 1 }
2019-09-04T06:31:00.159+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:00.159+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:00.159+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578655, 2)
2019-09-04T06:31:00.159+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10635
2019-09-04T06:31:00.159+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:00.159+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:00.159+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10635
2019-09-04T06:31:00.159+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:00.159+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:31:00.159+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:00.159+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578655, 2)
2019-09-04T06:31:00.159+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10638
2019-09-04T06:31:00.159+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578660, 1) }
2019-09-04T06:31:00.159+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:00.159+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:00.159+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10638
2019-09-04T06:31:00.159+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10559
2019-09-04T06:31:00.159+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10559
2019-09-04T06:31:00.159+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10641
2019-09-04T06:31:00.159+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10641
2019-09-04T06:31:00.159+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:00.159+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 10643
2019-09-04T06:31:00.159+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578660, 1)
2019-09-04T06:31:00.159+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578660, 1)
2019-09-04T06:31:00.159+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 10643
2019-09-04T06:31:00.159+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:00.159+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:31:00.159+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10642
2019-09-04T06:31:00.159+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10642
2019-09-04T06:31:00.159+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10645
2019-09-04T06:31:00.159+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10645
2019-09-04T06:31:00.159+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578660, 1), t: 1 }({ ts: Timestamp(1567578660, 1), t: 1 })
2019-09-04T06:31:00.159+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578660, 1)
2019-09-04T06:31:00.159+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10646
2019-09-04T06:31:00.159+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578660, 1) } } ] } sort: {} projection: {}
2019-09-04T06:31:00.159+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:00.159+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:31:00.159+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578660, 1) Sort: {} Proj: {} =============================
2019-09-04T06:31:00.159+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:00.159+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:00.159+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:00.159+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578660, 1) || First: notFirst: full path: ts
2019-09-04T06:31:00.159+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:00.159+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578660, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578660, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578660, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
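The D5 QUERY trace above is the subplanner handling a rooted $or on local.replset.minvalid: the $or is split into two child queries, the only available index ({ _id: 1 }) covers neither the t nor the ts predicates, so each child is planned as a collection scan and so is the merged $or plan. The same conclusion can be reached from a driver with the explain command; the filter below is copied from the "Running query as sub-queries" entry, the rest is illustrative:

    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    # The predicate handed to the subplanner above; neither $or branch can use
    # the _id_ index, so the winning plan should be a COLLSCAN.
    flt = {"$or": [{"t": {"$lt": 1}},
                   {"t": 1, "ts": {"$lt": Timestamp(1567578660, 1)}}]}
    plan = client["local"].command("explain", {"find": "replset.minvalid", "filter": flt})
    print(plan["queryPlanner"]["winningPlan"])  # expect stage: COLLSCAN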
2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578660, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:00.160+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10646 2019-09-04T06:31:00.160+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:00.160+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:00.160+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578660, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578660126), o: { $v: 1, $set: { ping: new Date(1567578660126) } } }, oplog application mode: Secondary 2019-09-04T06:31:00.160+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578660, 1) 2019-09-04T06:31:00.160+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 10648 2019-09-04T06:31:00.160+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" } 2019-09-04T06:31:00.160+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:31:00.160+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 10648 2019-09-04T06:31:00.160+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:00.160+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578660, 1), t: 1 }({ ts: Timestamp(1567578660, 1), t: 1 }) 2019-09-04T06:31:00.160+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578660, 1) 2019-09-04T06:31:00.160+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10647 2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:00.160+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:00.160+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:00.160+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10647 2019-09-04T06:31:00.160+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578660, 1) 2019-09-04T06:31:00.160+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10652 2019-09-04T06:31:00.160+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10652 2019-09-04T06:31:00.160+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578660, 1), t: 1 }({ ts: Timestamp(1567578660, 1), t: 1 }) 2019-09-04T06:31:00.160+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:00.160+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578660, 1), t: 1 }, appliedWallTime: new Date(1567578660126), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:00.160+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 718 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:30.160+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578660, 1), t: 1 }, appliedWallTime: new Date(1567578660126), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:00.160+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:30.160+0000 2019-09-04T06:31:00.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:00.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:00.161+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: 
Timestamp(1567578660, 1), t: 1 } 2019-09-04T06:31:00.161+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 719 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:10.161+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578655, 2), t: 1 } } 2019-09-04T06:31:00.161+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:30.160+0000 2019-09-04T06:31:00.161+0000 D2 ASIO [RS] Request 718 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 1) } 2019-09-04T06:31:00.161+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578655, 2), t: 1 }, lastCommittedWall: new Date(1567578655664), lastOpVisible: { ts: Timestamp(1567578655, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578655, 2), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:00.161+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:00.161+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:30.160+0000 2019-09-04T06:31:00.162+0000 D2 ASIO [RS] Request 719 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpVisible: { ts: Timestamp(1567578660, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpApplied: { ts: Timestamp(1567578660, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 1), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 1) } 2019-09-04T06:31:00.162+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), 
lastOpVisible: { ts: Timestamp(1567578660, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpApplied: { ts: Timestamp(1567578660, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 1), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:00.162+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:00.162+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:31:00.162+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578655, 1) 2019-09-04T06:31:00.162+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:10.941+0000 2019-09-04T06:31:00.162+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:10.422+0000 2019-09-04T06:31:00.162+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:00.162+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 720 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:10.162+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578660, 1), t: 1 } } 2019-09-04T06:31:00.162+0000 D3 REPL [conn253] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn253] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.925+0000 2019-09-04T06:31:00.162+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:30.160+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn224] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn224] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000 2019-09-04T06:31:00.162+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:28.839+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn249] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn249] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.702+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn236] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn236] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.040+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn257] Got notified of new snapshot: { ts: 
Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn257] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.293+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn266] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn266] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn242] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn242] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.507+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn237] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn237] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.409+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn233] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn233] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.024+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn255] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn255] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.877+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn241] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn241] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.879+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn265] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn265] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.660+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn234] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn234] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.033+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn231] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn231] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.190+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn227] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn227] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.888+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn268] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn268] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:22.595+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn251] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn251] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.763+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn252] Got notified of new snapshot: { ts: 
Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn252] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.897+0000 2019-09-04T06:31:00.162+0000 D3 REPL [conn256] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn256] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.918+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn269] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn269] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:25.060+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn262] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn262] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.051+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn248] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn248] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.456+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn263] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn263] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.415+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn254] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn254] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.962+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn261] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn261] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.023+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn225] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn225] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.468+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn267] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn267] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn250] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn250] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.753+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn258] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn258] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:14.433+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn240] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn240] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.874+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn259] Got notified of new snapshot: { ts: 
Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn259] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:24.151+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn229] Got notified of new snapshot: { ts: Timestamp(1567578660, 1), t: 1 }, 2019-09-04T06:31:00.126+0000 2019-09-04T06:31:00.163+0000 D3 REPL [conn229] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000 2019-09-04T06:31:00.163+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:31:00.163+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:00.163+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578660, 1), t: 1 }, durableWallTime: new Date(1567578660126), appliedOpTime: { ts: Timestamp(1567578660, 1), t: 1 }, appliedWallTime: new Date(1567578660126), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpVisible: { ts: Timestamp(1567578660, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:00.163+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 721 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:30.163+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578660, 1), t: 1 }, durableWallTime: new Date(1567578660126), appliedOpTime: { ts: Timestamp(1567578660, 1), t: 1 }, appliedWallTime: new Date(1567578660126), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpVisible: { ts: Timestamp(1567578660, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:00.163+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:30.160+0000 2019-09-04T06:31:00.164+0000 D2 ASIO [RS] Request 721 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpVisible: { ts: Timestamp(1567578660, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 1), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 1) } 2019-09-04T06:31:00.164+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpVisible: { ts: Timestamp(1567578660, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 1), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:00.164+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:00.164+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:30.160+0000 2019-09-04T06:31:00.165+0000 D2 ASIO [RS] Request 720 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578660, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578660130), o: { $v: 1, $set: { ping: new Date(1567578660130) } } }, { ts: Timestamp(1567578660, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578660130), o: { $v: 1, $set: { ping: new Date(1567578660129) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpVisible: { ts: Timestamp(1567578660, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpApplied: { ts: Timestamp(1567578660, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 1), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) } 2019-09-04T06:31:00.165+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578660, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578660130), o: { $v: 1, $set: { ping: new Date(1567578660130) } } }, { ts: Timestamp(1567578660, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578660130), o: { $v: 1, $set: { ping: new 
Date(1567578660129) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpVisible: { ts: Timestamp(1567578660, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpApplied: { ts: Timestamp(1567578660, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 1), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:00.165+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:00.165+0000 D2 REPL [replication-0] oplog fetcher read 2 operations from remote oplog starting at ts: Timestamp(1567578660, 2) and ending at ts: Timestamp(1567578660, 3) 2019-09-04T06:31:00.165+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:10.422+0000 2019-09-04T06:31:00.165+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:10.235+0000 2019-09-04T06:31:00.165+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578660, 3), t: 1 } 2019-09-04T06:31:00.165+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:00.165+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:28.839+0000 2019-09-04T06:31:00.165+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:00.165+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:00.165+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578660, 1) 2019-09-04T06:31:00.165+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10656 2019-09-04T06:31:00.165+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:00.165+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:00.165+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10656 2019-09-04T06:31:00.165+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:00.165+0000 D2 REPL [rsSync-0] replication batch size is 2 2019-09-04T06:31:00.166+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:00.166+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578660, 2) } 2019-09-04T06:31:00.166+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578660, 1) 2019-09-04T06:31:00.166+0000 D3 STORAGE [ReplBatcher] WT 
begin_transaction for snapshot id 10659 2019-09-04T06:31:00.166+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10653 2019-09-04T06:31:00.166+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:00.166+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:00.166+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10659 2019-09-04T06:31:00.166+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10653 2019-09-04T06:31:00.166+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10662 2019-09-04T06:31:00.166+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10662 2019-09-04T06:31:00.166+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:00.166+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 10664 2019-09-04T06:31:00.166+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578660, 2) 2019-09-04T06:31:00.166+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578660, 2) 2019-09-04T06:31:00.166+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578660, 3) 2019-09-04T06:31:00.166+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578660, 3) 2019-09-04T06:31:00.166+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 10664 2019-09-04T06:31:00.166+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:00.166+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:31:00.166+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10663 2019-09-04T06:31:00.166+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10663 2019-09-04T06:31:00.166+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10666 2019-09-04T06:31:00.166+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10666 2019-09-04T06:31:00.166+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578660, 3), t: 1 }({ ts: Timestamp(1567578660, 3), t: 1 }) 2019-09-04T06:31:00.166+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578660, 3) 2019-09-04T06:31:00.166+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10667 2019-09-04T06:31:00.166+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578660, 3) } } ] } sort: {} projection: {} 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578660, 3) Sort: {} Proj: {} ============================= 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578660, 3) || First: notFirst: full path: ts 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578660, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578660, 3) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578660, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
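
Note on the planner entries above: the predicate being planned is the check run while setting minvalid to at least { ts: Timestamp(1567578660, 3), t: 1 } (see the preceding REPL entry), i.e. { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578660, 3) } } ] } against local.replset.minvalid. That collection carries only the default _id index, so every pass rates zero indexed solutions and falls back to the collection scan the next entries output. A minimal sketch (pymongo) of re-running the same predicate under explain; the host is an assumption (cmodb804.togewa.com:27019 is one member named in this log, and any member works):

    # Hedged sketch, not taken from the log: explain the minvalid predicate.
    from bson import Timestamp
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb804.togewa.com:27019")
    plan = client.local.command({
        "explain": {
            "find": "replset.minvalid",
            "filter": {"$or": [
                {"t": {"$lt": 1}},                                  # older term
                {"t": 1, "ts": {"$lt": Timestamp(1567578660, 3)}},  # same term, earlier ts
            ]},
        },
        "verbosity": "queryPlanner",
    })
    # With only the _id index present, the winning plan is a collection scan
    # with the $or attached as a filter, matching the COLLSCAN entries here.
    print(plan["queryPlanner"]["winningPlan"])
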
2019-09-04T06:31:00.166+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578660, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:00.166+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10667 2019-09-04T06:31:00.166+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:00.166+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:00.166+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578660, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578660130), o: { $v: 1, $set: { ping: new Date(1567578660130) } } }, oplog application mode: Secondary 2019-09-04T06:31:00.166+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578660, 2) 2019-09-04T06:31:00.166+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 10669 2019-09-04T06:31:00.166+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" } 2019-09-04T06:31:00.167+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:31:00.167+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 10669 2019-09-04T06:31:00.167+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:00.167+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:00.167+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578660, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578660130), o: { $v: 1, $set: { ping: new Date(1567578660129) } } }, oplog application mode: Secondary 2019-09-04T06:31:00.167+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578660, 3) 2019-09-04T06:31:00.167+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 10671 2019-09-04T06:31:00.167+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" } 2019-09-04T06:31:00.167+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:31:00.167+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 10671 2019-09-04T06:31:00.167+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:00.167+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:00.167+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578660, 3), t: 1 }({ ts: Timestamp(1567578660, 3), t: 1 }) 2019-09-04T06:31:00.167+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578660, 3) 2019-09-04T06:31:00.167+0000 D3 STORAGE [rsSync-0] WT begin_transaction for 
snapshot id 10668 2019-09-04T06:31:00.167+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:31:00.167+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:00.167+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:31:00.167+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:00.167+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:00.167+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:00.167+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10668 2019-09-04T06:31:00.167+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578660, 3) 2019-09-04T06:31:00.167+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10674 2019-09-04T06:31:00.167+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10674 2019-09-04T06:31:00.167+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578660, 3), t: 1 }({ ts: Timestamp(1567578660, 3), t: 1 }) 2019-09-04T06:31:00.167+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:00.167+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578660, 1), t: 1 }, durableWallTime: new Date(1567578660126), appliedOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, appliedWallTime: new Date(1567578660130), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpVisible: { ts: Timestamp(1567578660, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:00.167+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 722 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:30.167+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578660, 1), t: 1 }, durableWallTime: new Date(1567578660126), appliedOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, appliedWallTime: new Date(1567578660130), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: 
Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpVisible: { ts: Timestamp(1567578660, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:00.167+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:30.167+0000 2019-09-04T06:31:00.167+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578660, 3), t: 1 } 2019-09-04T06:31:00.167+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 723 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:10.167+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578660, 1), t: 1 } } 2019-09-04T06:31:00.167+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:30.167+0000 2019-09-04T06:31:00.168+0000 D2 ASIO [RS] Request 722 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpVisible: { ts: Timestamp(1567578660, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 1), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) } 2019-09-04T06:31:00.168+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpVisible: { ts: Timestamp(1567578660, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 1), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:00.168+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:00.168+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:30.167+0000 2019-09-04T06:31:00.172+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:31:00.172+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:00.173+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: 
new Date(1567578660130), appliedOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, appliedWallTime: new Date(1567578660130), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpVisible: { ts: Timestamp(1567578660, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:00.173+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 724 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:30.173+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), appliedOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, appliedWallTime: new Date(1567578660130), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, durableWallTime: new Date(1567578655664), appliedOpTime: { ts: Timestamp(1567578655, 2), t: 1 }, appliedWallTime: new Date(1567578655664), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpVisible: { ts: Timestamp(1567578660, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:00.173+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:30.167+0000 2019-09-04T06:31:00.173+0000 D2 ASIO [RS] Request 724 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpVisible: { ts: Timestamp(1567578660, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 1), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) } 2019-09-04T06:31:00.173+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 1), t: 1 }, lastCommittedWall: new Date(1567578660126), lastOpVisible: { ts: Timestamp(1567578660, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 1), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:00.173+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 
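
Note on RemoteCommand 722 and 724 above: these are this secondary's replSetUpdatePosition reports ("Reporter sending slave oplog progress"), carrying for each memberId the durable and applied optimes the node currently knows. The same numbers are visible per member via replSetGetStatus; a minimal sketch (pymongo, host again an assumption):

    # Hedged sketch: read the per-member applied/durable optimes that the
    # replSetUpdatePosition requests above forward upstream.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb804.togewa.com:27019")
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        # "optime" is the applied opTime ({ts, t}); "optimeDurable" is the
        # journaled one, mirroring appliedOpTime/durableOpTime in the reports.
        print(m["name"], m["stateStr"], m.get("optime"), m.get("optimeDurable"))
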
2019-09-04T06:31:00.173+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:30.167+0000 2019-09-04T06:31:00.173+0000 D2 ASIO [RS] Request 723 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpApplied: { ts: Timestamp(1567578660, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) } 2019-09-04T06:31:00.173+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpApplied: { ts: Timestamp(1567578660, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:00.173+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:00.173+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:31:00.173+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.173+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.173+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578655, 3) 2019-09-04T06:31:00.173+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:10.235+0000 2019-09-04T06:31:00.173+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:11.283+0000 2019-09-04T06:31:00.173+0000 D3 REPL [conn236] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.173+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:00.173+0000 D3 REPL [conn236] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.040+0000 2019-09-04T06:31:00.173+0000 D3 REPL [conn227] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), 
t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.173+0000 D3 REPL [conn227] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.888+0000 2019-09-04T06:31:00.173+0000 D3 REPL [conn255] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.173+0000 D3 REPL [conn255] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.877+0000 2019-09-04T06:31:00.173+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 725 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:10.173+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578660, 3), t: 1 } } 2019-09-04T06:31:00.173+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:30.167+0000 2019-09-04T06:31:00.173+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:28.839+0000 2019-09-04T06:31:00.173+0000 D3 REPL [conn257] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.173+0000 D3 REPL [conn257] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.293+0000 2019-09-04T06:31:00.173+0000 D3 REPL [conn231] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn231] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.190+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn268] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn268] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:22.595+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn265] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn265] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.660+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn248] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn248] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.456+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn254] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn254] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.962+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn225] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn225] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.468+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn250] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn250] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.753+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn240] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn240] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.874+0000 2019-09-04T06:31:00.174+0000 
D3 REPL [conn229] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn229] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn269] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn269] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:25.060+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn252] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn252] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.897+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn253] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn253] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.925+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn249] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn249] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.702+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn266] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn266] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn241] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn241] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.879+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn234] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn234] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.033+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn233] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn233] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.024+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn242] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn242] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.507+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn237] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn237] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.409+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn262] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn262] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.051+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn259] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn259] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:24.151+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn263] 
Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn263] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.415+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn256] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn256] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.918+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn258] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn258] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:14.433+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn224] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn224] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn261] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn261] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.023+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn267] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn267] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn251] Got notified of new snapshot: { ts: Timestamp(1567578660, 3), t: 1 }, 2019-09-04T06:31:00.130+0000 2019-09-04T06:31:00.174+0000 D3 REPL [conn251] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:00.763+0000 2019-09-04T06:31:00.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:00.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:00.210+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:00.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:00.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:00.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:00.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:00.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 5CB90FE3C9418BB04B17B0226F28D85E25BBB700), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:00.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:31:00.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 5CB90FE3C9418BB04B17B0226F28D85E25BBB700), 
keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:00.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 5CB90FE3C9418BB04B17B0226F28D85E25BBB700), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:00.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), opTime: { ts: Timestamp(1567578660, 3), t: 1 }, wallTime: new Date(1567578660130) } 2019-09-04T06:31:00.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 5CB90FE3C9418BB04B17B0226F28D85E25BBB700), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:00.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:00.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:00.259+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578660, 3) 2019-09-04T06:31:00.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:00.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:00.310+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:00.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:00.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:00.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:00.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:00.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:00.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:00.410+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:00.456+0000 I COMMAND [conn248] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578621, 1), signature: { hash: BinData(0, E82E5CEB98BE4FF0E89C81089D4E7D0BAD5C9FA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:31:00.456+0000 D1 - [conn248] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:00.456+0000 W - [conn248] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.468+0000 I COMMAND [conn225] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578621, 1), signature: { hash: BinData(0, E82E5CEB98BE4FF0E89C81089D4E7D0BAD5C9FA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:00.468+0000 D1 - [conn225] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:00.468+0000 W - [conn225] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.475+0000 I - [conn248] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:00.475+0000 D1 COMMAND [conn248] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578621, 1), signature: { hash: BinData(0, E82E5CEB98BE4FF0E89C81089D4E7D0BAD5C9FA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.475+0000 D1 - [conn248] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:00.475+0000 W - [conn248] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:00.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:00.492+0000 I - [conn225] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 
0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" 
: "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : 
"/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:00.492+0000 D1 COMMAND [conn225] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578621, 1), signature: { hash: BinData(0, E82E5CEB98BE4FF0E89C81089D4E7D0BAD5C9FA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.492+0000 D1 - [conn225] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:00.492+0000 W - [conn225] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.511+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:00.514+0000 I - [conn248] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 
0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", 
"elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : 
"B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 
2019-09-04T06:31:00.514+0000 W COMMAND [conn248] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:31:00.514+0000 I COMMAND [conn248] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578621, 1), signature: { hash: BinData(0, E82E5CEB98BE4FF0E89C81089D4E7D0BAD5C9FA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms
2019-09-04T06:31:00.514+0000 D2 NETWORK [conn248] Session from 10.108.2.59:48350 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:00.514+0000 I NETWORK [conn248] end connection 10.108.2.59:48350 (90 connections now open)
2019-09-04T06:31:00.534+0000 I - [conn225] [backtrace omitted: verbatim duplicate of the conn248 lock-acquisition backtrace above]
2019-09-04T06:31:00.534+0000 W COMMAND [conn225] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:31:00.534+0000 I COMMAND [conn225] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578621, 1), signature: { hash: BinData(0, E82E5CEB98BE4FF0E89C81089D4E7D0BAD5C9FA3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30034ms
2019-09-04T06:31:00.534+0000 D2 NETWORK [conn225] Session from 10.108.2.55:36638 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:00.534+0000 I NETWORK [conn225] end connection 10.108.2.55:36638 (89 connections now open)
2019-09-04T06:31:00.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:00.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:00.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:00.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:00.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:00.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:00.611+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:00.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:00.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:00.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
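
The failing reads logged here are majority reads of config.shards with a 30-second server-side budget (maxTimeMS: 30000), issued by internal cluster clients (the NetworkInterfaceTL driver seen in the connection metadata elsewhere in this log); the $replData, $configServerState, and afterOpTime fields are internal and cannot be set from an ordinary driver. As a rough external approximation only, a minimal sketch assuming pymongo, with host and port taken from this log:

    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    # Approximates the logged command: find config.shards,
    # readConcern { level: "majority" }, maxTimeMS 30000.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    config = client.get_database("config", read_concern=ReadConcern("majority"))
    try:
        shards = list(config.shards.find(max_time_ms=30000))
    except ExecutionTimeout:
        # Corresponds to errName:MaxTimeMSExpired errCode:50 in the log.
        print("operation exceeded time limit")
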
2019-09-04T06:31:00.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:00.703+0000 I COMMAND [conn249] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578625, 1), signature: { hash: BinData(0, 6E155B13C29F2856B6AEB543BF6F1BCB12BB4DE1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:31:00.703+0000 D1 - [conn249] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:31:00.703+0000 W - [conn249] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:00.711+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:00.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:00.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:00.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:00.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:00.719+0000 I - [conn249] [backtrace omitted: verbatim duplicate of the waitForReadConcern backtrace above]
2019-09-04T06:31:00.719+0000 D1 COMMAND [conn249] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578625, 1), signature: { hash: BinData(0, 6E155B13C29F2856B6AEB543BF6F1BCB12BB4DE1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:00.719+0000 D1 - [conn249] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:31:00.719+0000 W - [conn249] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:00.740+0000 I - [conn249] [backtrace omitted: verbatim duplicate of the conn248 lock-acquisition backtrace above]
2019-09-04T06:31:00.740+0000 W COMMAND [conn249] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:31:00.740+0000 I COMMAND [conn249] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000,
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578625, 1), signature: { hash: BinData(0, 6E155B13C29F2856B6AEB543BF6F1BCB12BB4DE1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:31:00.740+0000 D2 NETWORK [conn249] Session from 10.108.2.74:51784 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:00.740+0000 I NETWORK [conn249] end connection 10.108.2.74:51784 (88 connections now open) 2019-09-04T06:31:00.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47200 #271 (89 connections now open) 2019-09-04T06:31:00.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:00.743+0000 D2 COMMAND [conn271] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:00.743+0000 I NETWORK [conn271] received client metadata from 10.108.2.52:47200 conn271: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:00.743+0000 I COMMAND [conn271] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:00.753+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50134 #272 (90 connections now open) 2019-09-04T06:31:00.753+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:00.753+0000 D2 COMMAND [conn272] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:00.753+0000 I NETWORK [conn272] received client metadata from 10.108.2.50:50134 conn272: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:00.753+0000 I COMMAND [conn272] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: 
"Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:00.753+0000 I COMMAND [conn250] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578622, 1), signature: { hash: BinData(0, 57682BE5EAE1D8EF39CBA39B0418328ED3E0A567), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:00.753+0000 D1 - [conn250] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:00.753+0000 W - [conn250] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.764+0000 I COMMAND [conn251] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578629, 1), signature: { hash: BinData(0, C65C59F62612952069D57CCC38BA6C20E661C70E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:00.764+0000 D1 - [conn251] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:00.764+0000 W - [conn251] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.771+0000 I - [conn250] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:00.771+0000 D1 COMMAND [conn250] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578622, 1), signature: { hash: BinData(0, 57682BE5EAE1D8EF39CBA39B0418328ED3E0A567), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.771+0000 D1 - [conn250] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:00.771+0000 W - [conn250] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:00.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:00.789+0000 I - [conn251] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 
0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" 
: "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : 
"/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:00.789+0000 D1 COMMAND [conn251] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578629, 1), signature: { hash: BinData(0, C65C59F62612952069D57CCC38BA6C20E661C70E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.789+0000 D1 - [conn251] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:00.789+0000 W - [conn251] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.811+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:00.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 
2019-09-04T06:31:00.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:00.827+0000 I - [conn250] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { 
"sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:31:00.827+0000 W COMMAND [conn250] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:31:00.827+0000 I COMMAND [conn250] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578622, 1), signature: { hash: BinData(0, 57682BE5EAE1D8EF39CBA39B0418328ED3E0A567), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms
2019-09-04T06:31:00.827+0000 D2 NETWORK [conn250] Session from 10.108.2.52:47180 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:00.827+0000 I NETWORK [conn250] end connection 10.108.2.52:47180 (89 connections now open)
2019-09-04T06:31:00.832+0000 W COMMAND [conn251] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:31:00.832+0000 I COMMAND [conn251] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1,
$clusterTime: { clusterTime: Timestamp(1567578629, 1), signature: { hash: BinData(0, C65C59F62612952069D57CCC38BA6C20E661C70E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30035ms
2019-09-04T06:31:00.832+0000 D2 NETWORK [conn251] Session from 10.108.2.50:50114 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:00.832+0000 I NETWORK [conn251] end connection 10.108.2.50:50114 (88 connections now open)
2019-09-04T06:31:00.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:00.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 726) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:00.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 726 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:10.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:00.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:28.839+0000
2019-09-04T06:31:00.838+0000 D2 ASIO [Replication] Request 726 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), opTime: { ts: Timestamp(1567578660, 3), t: 1 }, wallTime: new Date(1567578660130), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) }
2019-09-04T06:31:00.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), opTime: { ts: Timestamp(1567578660, 3), t: 1 }, wallTime: new Date(1567578660130), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:00.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:00.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 726) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), opTime: { ts: Timestamp(1567578660, 3), t: 1 }, wallTime: new Date(1567578660130), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) }
2019-09-04T06:31:00.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:31:00.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:02.838Z
2019-09-04T06:31:00.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:28.839+0000
2019-09-04T06:31:00.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:00.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 727) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:00.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 727 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:10.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:00.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:28.839+0000
2019-09-04T06:31:00.839+0000 D2 ASIO [Replication] Request 727 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), opTime: { ts: Timestamp(1567578660, 3), t: 1 }, wallTime: new Date(1567578660130), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) }
2019-09-04T06:31:00.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), opTime: { ts: Timestamp(1567578660, 3), t: 1 }, wallTime: new Date(1567578660130), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) } target: cmodb802.togewa.com:27019
2019-09-04T06:31:00.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:00.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 727) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), opTime: { ts: Timestamp(1567578660, 3), t: 1 }, wallTime: new Date(1567578660130), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) }
2019-09-04T06:31:00.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:31:00.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:31:11.283+0000
2019-09-04T06:31:00.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:31:10.971+0000
2019-09-04T06:31:00.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:31:00.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:02.839Z
2019-09-04T06:31:00.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:00.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:30.839+0000
2019-09-04T06:31:00.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:30.839+0000
2019-09-04T06:31:00.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:00.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:00.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:00.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:00.886+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38698 #273 (89 connections now open)
2019-09-04T06:31:00.886+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
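The heartbeat round-trips above show the replica set itself is healthy: request 726 reports cmodb804 as a secondary (state: 2) syncing from cmodb802, request 727 reports cmodb802 as the primary (state: 1) in term 1, each member's next heartbeat is scheduled two seconds out, and every heartbeat from the primary pushes the election timeout roughly ten seconds into the future. That isolates the stalls to the stale term-92 read gates, not to replication or elections. When triaging a capture in this packed form it helps to split entries mechanically; below is a small parser sketch for this 4.2-era plain-text line layout (timestamp, severity, component, [context], message), with the regex inferred from the entries above rather than taken from any mongod specification:

import re

# Fields as they appear in this log: timestamp, severity (I/W/E/F or D1-D5),
# component (or "-"), [context], then the free-form message.
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2}T\S+)\s+"
    r"(?P<sev>[IWEF]|D\d)\s+"
    r"(?P<component>\S+)\s+"
    r"\[(?P<ctx>[^\]]+)\]\s"
    r"(?P<msg>.*)$"
)

def parse(line):
    """Return the line's fields as a dict, or None if it does not match."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None

# Example: surface slow commands such as the 30027 ms find logged earlier
# (the message body here is trimmed for illustration).
entry = parse('2019-09-04T06:31:00.827+0000 I COMMAND [conn250] '
              'command config.$cmd command: find protocol:op_msg 30027ms')
if entry and entry["msg"].endswith("ms"):
    duration_ms = int(entry["msg"].rsplit(" ", 1)[-1][:-2])
    print(entry["ctx"], duration_ms)  # -> conn250 30027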
2019-09-04T06:31:00.887+0000 D2 COMMAND [conn273] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:31:00.887+0000 I NETWORK [conn273] received client metadata from 10.108.2.44:38698 conn273: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:31:00.887+0000 I COMMAND [conn273] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:31:00.898+0000 I COMMAND [conn252] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578628, 1), signature: { hash: BinData(0, A3D197163D4DC2FD06580272DC5949AA9EB70946), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:31:00.898+0000 D1 - [conn252] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:31:00.898+0000 W - [conn252] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:00.911+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:00.915+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52170 #274 (90 connections now open)
2019-09-04T06:31:00.915+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:31:00.915+0000 D2 COMMAND [conn274] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:31:00.915+0000 I NETWORK [conn274] received client metadata from 10.108.2.73:52170 conn274: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:31:00.915+0000 I COMMAND [conn274] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel
3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:00.916+0000 I - [conn252] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : 
"9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" 
}, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:00.916+0000 D1 COMMAND [conn252] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578628, 1), signature: { hash: BinData(0, A3D197163D4DC2FD06580272DC5949AA9EB70946), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 
2019-09-04T06:31:00.916+0000 D1 - [conn252] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:00.916+0000 W - [conn252] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.926+0000 I COMMAND [conn253] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578622, 1), signature: { hash: BinData(0, 57682BE5EAE1D8EF39CBA39B0418328ED3E0A567), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:31:00.926+0000 D1 - [conn253] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:00.926+0000 W - [conn253] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.937+0000 I - [conn252] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynch
ronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" 
: 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:00.938+0000 W COMMAND [conn252] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:00.938+0000 I COMMAND [conn252] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578628, 1), signature: { hash: BinData(0, A3D197163D4DC2FD06580272DC5949AA9EB70946), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:31:00.938+0000 D2 NETWORK [conn252] Session from 10.108.2.44:38672 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:00.938+0000 I NETWORK [conn252] end connection 10.108.2.44:38672 (89 connections now open) 2019-09-04T06:31:00.954+0000 I - [conn253] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:00.954+0000 D1 COMMAND [conn253] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578622, 1), signature: { hash: BinData(0, 57682BE5EAE1D8EF39CBA39B0418328ED3E0A567), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.954+0000 D1 - [conn253] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:00.954+0000 W - [conn253] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.963+0000 I COMMAND [conn254] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578623, 1), signature: { hash: BinData(0, 3A03B2C9D6C104013865981BCFBDB7C92D5EFDE9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:00.963+0000 D1 - [conn254] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:00.963+0000 W - [conn254] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.974+0000 I - [conn253] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6Status
E"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" 
: "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:00.974+0000 W COMMAND [conn253] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:00.974+0000 I COMMAND [conn253] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578622, 1), signature: { hash: BinData(0, 57682BE5EAE1D8EF39CBA39B0418328ED3E0A567), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30038ms 2019-09-04T06:31:00.974+0000 D2 NETWORK [conn253] Session from 10.108.2.73:52152 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:00.974+0000 I NETWORK [conn253] end connection 10.108.2.73:52152 (88 connections now open) 2019-09-04T06:31:00.991+0000 I - [conn254] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:00.991+0000 D1 COMMAND [conn254] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578623, 1), signature: { hash: BinData(0, 3A03B2C9D6C104013865981BCFBDB7C92D5EFDE9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:00.991+0000 D1 - [conn254] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:00.991+0000 W - [conn254] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:01.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:01.011+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:01.011+0000 I - [conn254] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 
0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : 
"9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" 
}, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:01.012+0000 W COMMAND [conn254] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:01.012+0000 I COMMAND [conn254] 
command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578623, 1), signature: { hash: BinData(0, 3A03B2C9D6C104013865981BCFBDB7C92D5EFDE9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30038ms 2019-09-04T06:31:01.012+0000 D2 NETWORK [conn254] Session from 10.108.2.58:52140 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:01.012+0000 I NETWORK [conn254] end connection 10.108.2.58:52140 (87 connections now open) 2019-09-04T06:31:01.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 5CB90FE3C9418BB04B17B0226F28D85E25BBB700), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:01.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:01.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 5CB90FE3C9418BB04B17B0226F28D85E25BBB700), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:01.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 5CB90FE3C9418BB04B17B0226F28D85E25BBB700), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:01.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), opTime: { ts: Timestamp(1567578660, 3), t: 1 }, wallTime: new Date(1567578660130) } 2019-09-04T06:31:01.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 5CB90FE3C9418BB04B17B0226F28D85E25BBB700), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
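The 'find' on config.shards above was sent with maxTimeMS: 30000 and failed with MaxTimeMSExpired (errCode 50) after 30038ms while waiting on the global lock. A minimal pymongo sketch of what the client side of that exchange could look like; the host and replica-set name are taken from this log, but the connection string is illustrative and credentials/TLS are omitted:

    # Sketch only: a majority read of config.shards with the same 30s
    # maxTimeMS that expired in the logged command.
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")
    shards = client.get_database("config", read_concern=ReadConcern("majority"))["shards"]
    try:
        docs = list(shards.find({}, max_time_ms=30000))  # maxTimeMS: 30000
    except ExecutionTimeout:
        # The server reports this as MaxTimeMSExpired (code 50), as logged above.
        print("operation exceeded time limit")
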
2019-09-04T06:31:01.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.111+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:01.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.166+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:01.166+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:01.166+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578660, 3) 2019-09-04T06:31:01.166+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10708 2019-09-04T06:31:01.166+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:01.166+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:01.166+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10708 2019-09-04T06:31:01.167+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10711 2019-09-04T06:31:01.167+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10711 2019-09-04T06:31:01.167+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578660, 3), t: 1 }({ ts: Timestamp(1567578660, 3), t: 1 }) 2019-09-04T06:31:01.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.212+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:01.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:01.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:31:01.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.312+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:01.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.412+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:01.512+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:01.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.612+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:01.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.712+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:01.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.718+0000 I COMMAND [conn60] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.812+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:01.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:01.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:01.912+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:02.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:02.013+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:02.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.113+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:02.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.166+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:02.166+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:02.166+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578660, 3) 2019-09-04T06:31:02.166+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10739 2019-09-04T06:31:02.166+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:02.166+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: 
UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:02.166+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10739 2019-09-04T06:31:02.167+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10742 2019-09-04T06:31:02.167+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10742 2019-09-04T06:31:02.167+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578660, 3), t: 1 }({ ts: Timestamp(1567578660, 3), t: 1 }) 2019-09-04T06:31:02.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.213+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:02.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 5CB90FE3C9418BB04B17B0226F28D85E25BBB700), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:02.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:31:02.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 5CB90FE3C9418BB04B17B0226F28D85E25BBB700), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:02.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 5CB90FE3C9418BB04B17B0226F28D85E25BBB700), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:02.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), opTime: { ts: Timestamp(1567578660, 3), t: 1 }, wallTime: new Date(1567578660130) } 2019-09-04T06:31:02.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 5CB90FE3C9418BB04B17B0226F28D85E25BBB700), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:02.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:02.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.313+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:02.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.413+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:02.513+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:02.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.613+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:02.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.712+0000 D2 COMMAND [conn51] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.713+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:02.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.814+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:02.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:02.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 728) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:02.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 728 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:12.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:02.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:30.839+0000 2019-09-04T06:31:02.838+0000 D2 ASIO [Replication] Request 728 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), opTime: { ts: Timestamp(1567578660, 3), t: 1 }, wallTime: new Date(1567578660130), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) } 2019-09-04T06:31:02.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), opTime: { ts: Timestamp(1567578660, 3), t: 1 }, wallTime: new Date(1567578660130), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), 
primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:02.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:02.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 728) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), opTime: { ts: Timestamp(1567578660, 3), t: 1 }, wallTime: new Date(1567578660130), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) } 2019-09-04T06:31:02.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:31:02.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:04.838Z 2019-09-04T06:31:02.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:30.839+0000 2019-09-04T06:31:02.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:02.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 729) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:02.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 729 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:12.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:02.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:30.839+0000 2019-09-04T06:31:02.839+0000 D2 ASIO [Replication] Request 729 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), opTime: { ts: Timestamp(1567578660, 3), t: 1 }, wallTime: new Date(1567578660130), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578660, 3), 
signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) } 2019-09-04T06:31:02.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), opTime: { ts: Timestamp(1567578660, 3), t: 1 }, wallTime: new Date(1567578660130), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:31:02.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:02.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 729) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), opTime: { ts: Timestamp(1567578660, 3), t: 1 }, wallTime: new Date(1567578660130), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578660, 3) } 2019-09-04T06:31:02.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:31:02.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:31:10.971+0000 2019-09-04T06:31:02.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:31:13.238+0000 2019-09-04T06:31:02.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:31:02.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:04.839Z 2019-09-04T06:31:02.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:02.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:32.839+0000 2019-09-04T06:31:02.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:32.839+0000 2019-09-04T06:31:02.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
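Heartbeat requests 728 and 729 above are the periodic replSetHeartbeat exchange among the configrs members: cmodb802 answers as state 1 (primary) and cmodb804 as state 2 (secondary). The member view each node builds from these responses can be read back with the replSetGetStatus command; a small pymongo sketch, again with an illustrative connection string:

    # Sketch only: reads the member states that the heartbeats above maintain.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # state 1 = PRIMARY, 2 = SECONDARY, matching the heartbeat responses.
        print(member["name"], member["state"], member["stateStr"])
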
2019-09-04T06:31:02.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:02.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:02.914+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:03.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:03.014+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:03.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 5CB90FE3C9418BB04B17B0226F28D85E25BBB700), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:03.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:03.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 5CB90FE3C9418BB04B17B0226F28D85E25BBB700), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:03.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 5CB90FE3C9418BB04B17B0226F28D85E25BBB700), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:03.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), opTime: { ts: Timestamp(1567578660, 3), t: 1 }, wallTime: new Date(1567578660130) } 2019-09-04T06:31:03.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578660, 3), signature: { hash: BinData(0, 5CB90FE3C9418BB04B17B0226F28D85E25BBB700), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.114+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:03.160+0000 D2 
COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.166+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:03.166+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:03.166+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578660, 3) 2019-09-04T06:31:03.166+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10772 2019-09-04T06:31:03.166+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:03.166+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:03.166+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10772 2019-09-04T06:31:03.167+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10775 2019-09-04T06:31:03.167+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10775 2019-09-04T06:31:03.167+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578660, 3), t: 1 }({ ts: Timestamp(1567578660, 3), t: 1 }) 2019-09-04T06:31:03.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.214+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:03.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:03.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:31:03.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.314+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:03.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.414+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:03.453+0000 D2 ASIO [RS] Request 725 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578663, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578663451), o: { $v: 1, $set: { ping: new Date(1567578663446) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpApplied: { ts: Timestamp(1567578663, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578663, 1) } 2019-09-04T06:31:03.453+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578663, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578663451), o: { $v: 1, $set: { ping: new Date(1567578663446) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpApplied: { ts: Timestamp(1567578663, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578663, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:03.453+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:03.453+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578663, 1) and ending at ts: Timestamp(1567578663, 1) 2019-09-04T06:31:03.453+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:13.238+0000 2019-09-04T06:31:03.453+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:13.674+0000 2019-09-04T06:31:03.453+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:03.453+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578663, 1), t: 1 } 2019-09-04T06:31:03.453+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:03.453+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:03.453+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578660, 3) 2019-09-04T06:31:03.453+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10788 2019-09-04T06:31:03.453+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:03.453+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:03.453+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10788 2019-09-04T06:31:03.453+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:03.453+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:31:03.453+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:03.453+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578660, 3) 2019-09-04T06:31:03.453+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10791 2019-09-04T06:31:03.453+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578663, 1) } 2019-09-04T06:31:03.453+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:03.453+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:03.453+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10791 2019-09-04T06:31:03.453+0000 D3 
EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:32.839+0000 2019-09-04T06:31:03.453+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10776 2019-09-04T06:31:03.453+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10776 2019-09-04T06:31:03.453+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10794 2019-09-04T06:31:03.453+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10794 2019-09-04T06:31:03.453+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:03.453+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 10796 2019-09-04T06:31:03.453+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578663, 1) 2019-09-04T06:31:03.453+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578663, 1) 2019-09-04T06:31:03.453+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 10796 2019-09-04T06:31:03.453+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:03.453+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:31:03.453+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10795 2019-09-04T06:31:03.453+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10795 2019-09-04T06:31:03.453+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10798 2019-09-04T06:31:03.453+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10798 2019-09-04T06:31:03.453+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578663, 1), t: 1 }({ ts: Timestamp(1567578663, 1), t: 1 }) 2019-09-04T06:31:03.454+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578663, 1) 2019-09-04T06:31:03.454+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10799 2019-09-04T06:31:03.454+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578663, 1) } } ] } sort: {} projection: {} 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578663, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578663, 1) || First: notFirst: full path: ts 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578663, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578663, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578663, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
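
At D5 verbosity the planner prints its whole deliberation: the $or over local.replset.minvalid is split into sub-queries, each is rated against the only available index (_id_), neither the t nor the ts predicate is covered, so every branch degrades to a COLLSCAN ("Planner: outputted 0 indexed solutions"). The same decision can be reproduced with an explain command; a sketch with the filter values copied from the entries above, assuming a direct connection:

```python
# Reproduce the COLLSCAN decision logged above via explain.
# Assumptions: direct connection to this node; filter copied from the log.
from pymongo import MongoClient, ReadPreference
from bson.timestamp import Timestamp

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
explain = client.local.command(
    {
        "explain": {
            "find": "replset.minvalid",
            # "every optime strictly older than { ts: ..., t: 1 }" -- exactly
            # the $or the rsSync-0 thread subplans above
            "filter": {
                "$or": [
                    {"t": {"$lt": 1}},
                    {"t": 1, "ts": {"$lt": Timestamp(1567578663, 1)}},
                ]
            },
        },
        "verbosity": "queryPlanner",
    },
    read_preference=ReadPreference.SECONDARY_PREFERRED,
)
# With only {_id: 1} available, the winning plan is a collection scan.
print(explain["queryPlanner"]["winningPlan"]["stage"])  # -> "COLLSCAN"
```
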
2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578663, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:03.454+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10799 2019-09-04T06:31:03.454+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:03.454+0000 D3 STORAGE [repl-writer-worker-2] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:03.454+0000 D3 REPL [repl-writer-worker-2] applying op: { ts: Timestamp(1567578663, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578663451), o: { $v: 1, $set: { ping: new Date(1567578663446) } } }, oplog application mode: Secondary 2019-09-04T06:31:03.454+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578663, 1) 2019-09-04T06:31:03.454+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 10801 2019-09-04T06:31:03.454+0000 D2 QUERY [repl-writer-worker-2] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" } 2019-09-04T06:31:03.454+0000 D4 WRITE [repl-writer-worker-2] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:31:03.454+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 10801 2019-09-04T06:31:03.454+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:03.454+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578663, 1), t: 1 }({ ts: Timestamp(1567578663, 1), t: 1 }) 2019-09-04T06:31:03.454+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578663, 1) 2019-09-04T06:31:03.454+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10800 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:03.454+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:03.454+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:03.454+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10800 2019-09-04T06:31:03.454+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578663, 1) 2019-09-04T06:31:03.454+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:03.454+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10804 2019-09-04T06:31:03.454+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10804 2019-09-04T06:31:03.454+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578663, 1), t: 1 }({ ts: Timestamp(1567578663, 1), t: 1 }) 2019-09-04T06:31:03.454+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), appliedOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, appliedWallTime: new Date(1567578660130), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), appliedOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, appliedWallTime: new Date(1567578663451), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), appliedOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, appliedWallTime: new Date(1567578660130), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:03.454+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 730 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:33.454+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), appliedOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, appliedWallTime: new Date(1567578660130), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), appliedOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, appliedWallTime: new Date(1567578663451), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), appliedOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, appliedWallTime: new Date(1567578660130), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:03.454+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:33.454+0000 2019-09-04T06:31:03.455+0000 D2 ASIO [RS] Request 730 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578663, 1) } 2019-09-04T06:31:03.455+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578660, 3), t: 1 }, lastCommittedWall: new Date(1567578660130), lastOpVisible: { ts: Timestamp(1567578660, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578660, 3), $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578663, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:03.455+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:03.455+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:33.455+0000 2019-09-04T06:31:03.455+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578663, 1), t: 1 } 2019-09-04T06:31:03.455+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 731 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:13.455+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578660, 3), t: 1 } } 2019-09-04T06:31:03.455+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:33.455+0000 2019-09-04T06:31:03.457+0000 D2 ASIO [RS] Request 731 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpVisible: { ts: Timestamp(1567578663, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpApplied: { ts: Timestamp(1567578663, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578663, 1), $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578663, 1) } 2019-09-04T06:31:03.457+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpVisible: { ts: Timestamp(1567578663, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new 
Date(1567578663451), lastOpApplied: { ts: Timestamp(1567578663, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578663, 1), $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578663, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:03.457+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:03.457+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:31:03.457+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.457+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.457+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578658, 1) 2019-09-04T06:31:03.457+0000 D3 REPL [conn224] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.457+0000 D3 REPL [conn224] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn240] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn240] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.874+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn261] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn261] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.023+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn258] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn258] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:14.433+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn229] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn229] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn241] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn241] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.879+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn233] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn233] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.024+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn237] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn237] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.409+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn259] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn259] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:31:24.151+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn256] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn256] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.918+0000 2019-09-04T06:31:03.458+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:13.674+0000 2019-09-04T06:31:03.458+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:13.958+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn257] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn257] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.293+0000 2019-09-04T06:31:03.458+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:03.458+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 732 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:13.458+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578663, 1), t: 1 } } 2019-09-04T06:31:03.458+0000 D3 REPL [conn268] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn268] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:22.595+0000 2019-09-04T06:31:03.458+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:32.839+0000 2019-09-04T06:31:03.458+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:33.455+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn267] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn267] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn255] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn255] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.877+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn236] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn236] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.040+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn269] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn269] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:25.060+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn266] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn266] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn234] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn234] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.033+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn242] Got notified of new snapshot: { ts: 
Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn242] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.507+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn265] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn265] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.660+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn263] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn263] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.415+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn262] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn262] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.051+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn231] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn231] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.190+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn227] Got notified of new snapshot: { ts: Timestamp(1567578663, 1), t: 1 }, 2019-09-04T06:31:03.451+0000 2019-09-04T06:31:03.458+0000 D3 REPL [conn227] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.888+0000 2019-09-04T06:31:03.495+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:31:03.495+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:03.495+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), appliedOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, appliedWallTime: new Date(1567578660130), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), appliedOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, appliedWallTime: new Date(1567578663451), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), appliedOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, appliedWallTime: new Date(1567578660130), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpVisible: { ts: Timestamp(1567578663, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:03.495+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 733 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:33.495+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), appliedOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, appliedWallTime: new Date(1567578660130), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), appliedOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, appliedWallTime: new Date(1567578663451), memberId: 1, cfgver: 2 }, 
{ durableOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, durableWallTime: new Date(1567578660130), appliedOpTime: { ts: Timestamp(1567578660, 3), t: 1 }, appliedWallTime: new Date(1567578660130), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpVisible: { ts: Timestamp(1567578663, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:03.495+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:33.455+0000 2019-09-04T06:31:03.496+0000 D2 ASIO [RS] Request 733 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpVisible: { ts: Timestamp(1567578663, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578663, 1), $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578663, 1) } 2019-09-04T06:31:03.496+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpVisible: { ts: Timestamp(1567578663, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578663, 1), $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578663, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:03.496+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:03.496+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:33.455+0000 2019-09-04T06:31:03.513+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:31:03.513+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:31:03.513+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:03.513+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:31:03.514+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:03.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.553+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578663, 1) 2019-09-04T06:31:03.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 
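
Requests 730 and 733 above are the Reporter pushing this member's progress upstream with replSetUpdatePosition: memberId 1 (this node) advances its appliedOpTime to Timestamp(1567578663, 1) first, and its durableOpTime catches up only after the [ApplyBatchFinalizerForJournal] journal flush. The same optimes are visible from outside through replSetGetStatus; a sketch, host taken from this log:

```python
# Watch the optimes the Reporter sends upstream in requests 730/733 above.
# Assumption: direct connection to any configrs member; replSetGetStatus is
# a public admin command and runs fine on secondaries.
from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
status = client.admin.command("replSetGetStatus")
for m in status["members"]:
    # The applied/durable gap here mirrors the appliedOpTime/durableOpTime
    # split in the replSetUpdatePosition bodies logged above.
    print(m["name"], m["stateStr"], m["optime"]["ts"], m.get("optimeDurable"))
```
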
2019-09-04T06:31:03.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.614+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:03.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.715+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:03.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.815+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:03.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:03.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:03.915+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:04.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:04.015+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:04.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.115+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:04.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.215+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:04.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, CE425F8C895B5599C1ACDE9C56FEE55F29AF2437), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:04.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:31:04.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, CE425F8C895B5599C1ACDE9C56FEE55F29AF2437), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:04.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, CE425F8C895B5599C1ACDE9C56FEE55F29AF2437), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:04.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 
1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), opTime: { ts: Timestamp(1567578663, 1), t: 1 }, wallTime: new Date(1567578663451) } 2019-09-04T06:31:04.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, CE425F8C895B5599C1ACDE9C56FEE55F29AF2437), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:04.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:04.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.315+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:04.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.415+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:04.453+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:04.453+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:04.453+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578663, 1) 2019-09-04T06:31:04.453+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10837 2019-09-04T06:31:04.453+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:04.453+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:04.453+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10837 2019-09-04T06:31:04.454+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10840 2019-09-04T06:31:04.454+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10840 2019-09-04T06:31:04.454+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578663, 1), t: 1 }({ ts: Timestamp(1567578663, 1), t: 1 }) 2019-09-04T06:31:04.516+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:31:04.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.616+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:04.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.716+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:04.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.816+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:04.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:04.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 734) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:04.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 734 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:14.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:04.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:32.839+0000 2019-09-04T06:31:04.838+0000 D2 ASIO [Replication] Request 734 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), opTime: { ts: Timestamp(1567578663, 1), t: 1 }, wallTime: new Date(1567578663451), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpVisible: { ts: Timestamp(1567578663, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578663, 1), $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578663, 1) } 2019-09-04T06:31:04.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), opTime: { ts: Timestamp(1567578663, 1), t: 1 }, wallTime: new Date(1567578663451), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpVisible: { ts: Timestamp(1567578663, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578663, 1), $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578663, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:04.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:04.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 734) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), opTime: { ts: Timestamp(1567578663, 1), t: 1 }, wallTime: new Date(1567578663451), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpVisible: { ts: Timestamp(1567578663, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578663, 1), $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578663, 1) } 2019-09-04T06:31:04.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:31:04.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to 
cmodb804.togewa.com:27019 at 2019-09-04T06:31:06.838Z 2019-09-04T06:31:04.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:32.839+0000 2019-09-04T06:31:04.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:04.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 735) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:04.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 735 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:14.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:04.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:32.839+0000 2019-09-04T06:31:04.839+0000 D2 ASIO [Replication] Request 735 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), opTime: { ts: Timestamp(1567578663, 1), t: 1 }, wallTime: new Date(1567578663451), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpVisible: { ts: Timestamp(1567578663, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578663, 1), $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578663, 1) } 2019-09-04T06:31:04.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), opTime: { ts: Timestamp(1567578663, 1), t: 1 }, wallTime: new Date(1567578663451), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpVisible: { ts: Timestamp(1567578663, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578663, 1), $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578663, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:31:04.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:04.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 735) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), opTime: { ts: Timestamp(1567578663, 1), t: 1 }, wallTime: new Date(1567578663451), $replData: { term: 1, 
lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpVisible: { ts: Timestamp(1567578663, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578663, 1), $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578663, 1) } 2019-09-04T06:31:04.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:31:04.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:31:13.958+0000 2019-09-04T06:31:04.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:31:15.524+0000 2019-09-04T06:31:04.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:31:04.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:06.839Z 2019-09-04T06:31:04.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:04.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:34.839+0000 2019-09-04T06:31:04.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:34.839+0000 2019-09-04T06:31:04.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:04.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:04.916+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:05.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:05.016+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:05.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.061+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:05.061+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:31:04.839+0000 2019-09-04T06:31:05.061+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:31:04.838+0000 2019-09-04T06:31:05.061+0000 D3 REPL [replexec-3] stalest member MemberId(2) date: 2019-09-04T06:31:04.838+0000 2019-09-04T06:31:05.061+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:31:14.838+0000 2019-09-04T06:31:05.061+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:34.839+0000 2019-09-04T06:31:05.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: 
BinData(0, CE425F8C895B5599C1ACDE9C56FEE55F29AF2437), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:05.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:05.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, CE425F8C895B5599C1ACDE9C56FEE55F29AF2437), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:05.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, CE425F8C895B5599C1ACDE9C56FEE55F29AF2437), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:05.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), opTime: { ts: Timestamp(1567578663, 1), t: 1 }, wallTime: new Date(1567578663451) } 2019-09-04T06:31:05.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578663, 1), signature: { hash: BinData(0, CE425F8C895B5599C1ACDE9C56FEE55F29AF2437), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.114+0000 D2 ASIO [RS] Request 732 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578665, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578665112), o: { $v: 1, $set: { ping: new Date(1567578665109), up: 2565 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpVisible: { ts: Timestamp(1567578663, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpApplied: { ts: Timestamp(1567578665, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578663, 1), $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578665, 1) } 2019-09-04T06:31:05.114+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578665, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578665112), o: { $v: 1, $set: { ping: new Date(1567578665109), up: 2565 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpVisible: { ts: Timestamp(1567578663, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpApplied: { ts: Timestamp(1567578665, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578663, 1), $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578665, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:05.114+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:05.114+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578665, 1) and ending at ts: Timestamp(1567578665, 1) 2019-09-04T06:31:05.114+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:15.524+0000 2019-09-04T06:31:05.115+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:15.299+0000 2019-09-04T06:31:05.115+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:05.115+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578665, 1), t: 1 } 2019-09-04T06:31:05.115+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:34.839+0000 2019-09-04T06:31:05.115+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:05.115+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:05.115+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578663, 1) 2019-09-04T06:31:05.115+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10860 2019-09-04T06:31:05.115+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:05.115+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:05.115+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10860 2019-09-04T06:31:05.115+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 
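
Request 732 completes with one new entry, a config.mongos ping update at Timestamp(1567578665, 1), and the fetcher resets _lastOpTimeFetched accordingly. The oplog fetcher is, at heart, a tailable awaitData getMore loop over local.oplog.rs (batchSize 13981010, maxTimeMS 5000 in the requests above); a rough client-side analogue, assuming a direct connection to the sync source named in the log:

```python
# A client-side approximation of the oplog fetcher's getMore loop seen in
# requests 731/732. Assumptions: direct connection to the sync source; the
# starting optime is copied from _lastOpTimeFetched in the log. The server
# optimizes ts-bounded scans of the oplog on its own.
from pymongo import CursorType, MongoClient, ReadPreference
from bson.timestamp import Timestamp

client = MongoClient("cmodb804.togewa.com", 27019, directConnection=True)
local = client.get_database(
    "local", read_preference=ReadPreference.SECONDARY_PREFERRED
)
last = Timestamp(1567578663, 1)  # last fetched optime before request 732
cursor = local["oplog.rs"].find(
    {"ts": {"$gt": last}},
    cursor_type=CursorType.TAILABLE_AWAIT,  # block server-side for new entries
)
for op in cursor:
    # e.g. op "u" on config.mongos updating that mongos's ping document
    print(op["ts"], op["op"], op["ns"])
```
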
2019-09-04T06:31:05.115+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:05.115+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578663, 1) 2019-09-04T06:31:05.115+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10863 2019-09-04T06:31:05.115+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:31:05.115+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:05.115+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:05.115+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578665, 1) } 2019-09-04T06:31:05.115+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10863 2019-09-04T06:31:05.115+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10841 2019-09-04T06:31:05.115+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10841 2019-09-04T06:31:05.115+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10866 2019-09-04T06:31:05.115+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10866 2019-09-04T06:31:05.115+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:05.115+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 10868 2019-09-04T06:31:05.115+0000 D4 STORAGE [repl-writer-worker-0] inserting record with timestamp Timestamp(1567578665, 1) 2019-09-04T06:31:05.115+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578665, 1) 2019-09-04T06:31:05.115+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 10868 2019-09-04T06:31:05.115+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:05.115+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:31:05.115+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10867 2019-09-04T06:31:05.115+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10867 2019-09-04T06:31:05.115+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10870 2019-09-04T06:31:05.115+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10870 2019-09-04T06:31:05.115+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578665, 1), t: 1 }({ ts: Timestamp(1567578665, 1), t: 1 }) 2019-09-04T06:31:05.115+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578665, 1) 2019-09-04T06:31:05.115+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10871 2019-09-04T06:31:05.115+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578665, 1) } } ] } sort: {} projection: {} 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 
2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578665, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578665, 1) || First: notFirst: full path: ts 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578665, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:05.115+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578665, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a2902d1a496712d720d'), operName: "", parentOperId: "5d6f5a2902d1a496712d720b" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578665, 1), t: 1 } }, $db: "config" } 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:05.115+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f5a2902d1a496712d720b|5d6f5a2902d1a496712d720d 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578665, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:05.115+0000 D1 REPL [conn21] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1567578665, 1), t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578663, 1), t: 1 } 2019-09-04T06:31:05.116+0000 D3 REPL [conn21] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:35.125+0000 2019-09-04T06:31:05.115+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:05.116+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:05.116+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:05.116+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578665, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:05.116+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:05.116+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578665, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:05.116+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10871 2019-09-04T06:31:05.116+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:05.116+0000 D3 STORAGE [repl-writer-worker-15] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:05.116+0000 D3 REPL [repl-writer-worker-15] applying op: { ts: Timestamp(1567578665, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578665112), o: { $v: 1, $set: { ping: new Date(1567578665109), up: 2565 } } }, oplog application mode: Secondary 2019-09-04T06:31:05.116+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578665, 1) 2019-09-04T06:31:05.116+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 10874 2019-09-04T06:31:05.116+0000 D2 QUERY [repl-writer-worker-15] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:31:05.116+0000 D4 WRITE [repl-writer-worker-15] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:31:05.116+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 10874 2019-09-04T06:31:05.116+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:05.116+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578665, 1), t: 1 }({ ts: Timestamp(1567578665, 1), t: 1 }) 2019-09-04T06:31:05.116+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578665, 1) 2019-09-04T06:31:05.116+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10873 2019-09-04T06:31:05.116+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:31:05.116+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:05.116+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:31:05.116+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:05.116+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:05.116+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:05.116+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10873 2019-09-04T06:31:05.116+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578665, 1) 2019-09-04T06:31:05.116+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:05.116+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10877 2019-09-04T06:31:05.116+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), appliedOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, appliedWallTime: new Date(1567578663451), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), appliedOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, appliedWallTime: new Date(1567578663451), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpVisible: { ts: Timestamp(1567578663, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:05.116+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 736 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:35.116+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), appliedOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, appliedWallTime: new Date(1567578663451), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), appliedOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, appliedWallTime: new Date(1567578663451), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578663, 1), t: 1 }, lastCommittedWall: new Date(1567578663451), lastOpVisible: { ts: Timestamp(1567578663, 1), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:05.116+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:35.116+0000 2019-09-04T06:31:05.116+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10877 2019-09-04T06:31:05.116+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578665, 1), t: 1 }({ ts: Timestamp(1567578665, 1), t: 1 }) 2019-09-04T06:31:05.116+0000 D2 ASIO [RS] Request 736 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578665, 1), $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578665, 1) } 2019-09-04T06:31:05.116+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578665, 1), $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578665, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:05.116+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:05.116+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:35.116+0000 2019-09-04T06:31:05.117+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578665, 1), t: 1 } 2019-09-04T06:31:05.117+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 737 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:15.117+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578663, 1), t: 1 } } 2019-09-04T06:31:05.117+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:35.116+0000 2019-09-04T06:31:05.117+0000 D2 ASIO [RS] Request 737 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpApplied: { ts: Timestamp(1567578665, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578665, 1), $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578665, 1) } 2019-09-04T06:31:05.117+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpApplied: { ts: Timestamp(1567578665, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578665, 1), $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578665, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:05.117+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:05.117+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:31:05.117+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.117+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.117+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578660, 1) 2019-09-04T06:31:05.117+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:15.299+0000 2019-09-04T06:31:05.117+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:31:05.117+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:05.117+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:34.839+0000 2019-09-04T06:31:05.117+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:15.916+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn242] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn242] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.507+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn229] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn229] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn237] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn237] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.409+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn259] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn259] 
waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:24.151+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn268] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn268] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:22.595+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn255] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn255] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.877+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn269] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn269] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:25.060+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn234] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn234] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.033+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn265] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn265] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.660+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn262] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn262] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.051+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn227] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn227] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.888+0000 2019-09-04T06:31:05.117+0000 D3 REPL [conn21] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.117+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578665, 1), t: 1 } } } 2019-09-04T06:31:05.117+0000 D3 REPL [conn224] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.117+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:31:05.117+0000 D3 REPL [conn224] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000 2019-09-04T06:31:05.117+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578665, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a2902d1a496712d720d'), operName: "", parentOperId: "5d6f5a2902d1a496712d720b" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578665, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578665, 1) 2019-09-04T06:31:05.117+0000 D2 QUERY [conn21] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:31:05.117+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578665, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a2902d1a496712d720d'), operName: "", parentOperId: "5d6f5a2902d1a496712d720b" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578665, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 1ms 2019-09-04T06:31:05.118+0000 D3 REPL [conn258] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn258] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:14.433+0000 2019-09-04T06:31:05.118+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:05.118+0000 D3 REPL [conn267] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn267] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn266] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn266] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn261] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn261] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.023+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn263] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn263] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.415+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn256] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn256] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.918+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn240] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn240] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.874+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn241] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn241] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.879+0000 2019-09-04T06:31:05.118+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 738 -- target:[cmodb804.togewa.com:27019] db:local 
expDate:2019-09-04T06:31:15.118+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578665, 1), t: 1 } } 2019-09-04T06:31:05.118+0000 D3 REPL [conn231] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn231] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.190+0000 2019-09-04T06:31:05.118+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:35.118+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn257] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn257] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.293+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn233] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn233] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.024+0000 2019-09-04T06:31:05.118+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578665, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a2902d1a496712d720e'), operName: "", parentOperId: "5d6f5a2902d1a496712d720b" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578665, 1), t: 1 } }, $db: "config" } 2019-09-04T06:31:05.118+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f5a2902d1a496712d720b|5d6f5a2902d1a496712d720e 2019-09-04T06:31:05.118+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578665, 1), t: 1 } } } 2019-09-04T06:31:05.118+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:31:05.118+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578665, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a2902d1a496712d720e'), operName: "", parentOperId: "5d6f5a2902d1a496712d720b" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578665, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578665, 1) 2019-09-04T06:31:05.118+0000 D2 QUERY [conn21] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:31:05.118+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578665, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a2902d1a496712d720e'), operName: "", parentOperId: "5d6f5a2902d1a496712d720b" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578665, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:31:05.118+0000 D3 REPL [conn236] Got notified of new snapshot: { ts: Timestamp(1567578665, 1), t: 1 }, 2019-09-04T06:31:05.112+0000 2019-09-04T06:31:05.118+0000 D3 REPL [conn236] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.040+0000 2019-09-04T06:31:05.118+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:05.118+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), appliedOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, appliedWallTime: new Date(1567578663451), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), appliedOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, appliedWallTime: new Date(1567578663451), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:05.118+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 739 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:35.118+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), appliedOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, appliedWallTime: new Date(1567578663451), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, durableWallTime: new Date(1567578663451), appliedOpTime: { ts: Timestamp(1567578663, 1), t: 1 }, appliedWallTime: new Date(1567578663451), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, 
lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:05.118+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:35.118+0000 2019-09-04T06:31:05.118+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578665, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a2902d1a496712d720f'), operName: "", parentOperId: "5d6f5a2902d1a496712d720b" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578665, 1), t: 1 } }, $db: "config" } 2019-09-04T06:31:05.118+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f5a2902d1a496712d720b|5d6f5a2902d1a496712d720f 2019-09-04T06:31:05.118+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578665, 1), t: 1 } } } 2019-09-04T06:31:05.118+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:31:05.118+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578665, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a2902d1a496712d720f'), operName: "", parentOperId: "5d6f5a2902d1a496712d720b" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578665, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578665, 1) 2019-09-04T06:31:05.118+0000 D2 QUERY [conn21] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:31:05.118+0000 D2 ASIO [RS] Request 739 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578665, 1), $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578665, 1) } 2019-09-04T06:31:05.118+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578665, 1), $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578665, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:05.118+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578665, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a2902d1a496712d720f'), operName: "", parentOperId: "5d6f5a2902d1a496712d720b" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578665, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:31:05.118+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:05.119+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:35.118+0000 2019-09-04T06:31:05.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 
reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.215+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578665, 1) 2019-09-04T06:31:05.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.218+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:05.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:05.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:05.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.318+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:05.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.418+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:05.519+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:05.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.619+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:05.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 
2019-09-04T06:31:05.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.719+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:05.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.819+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:05.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:05.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:05.919+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:06.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:06.019+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:06.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.083+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.083+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.115+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:06.115+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:06.115+0000 D3 STORAGE [ReplBatcher] 
begin_transaction on local snapshot Timestamp(1567578665, 1) 2019-09-04T06:31:06.115+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10908 2019-09-04T06:31:06.115+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:06.115+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:06.115+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10908 2019-09-04T06:31:06.116+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10911 2019-09-04T06:31:06.116+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10911 2019-09-04T06:31:06.116+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578665, 1), t: 1 }({ ts: Timestamp(1567578665, 1), t: 1 }) 2019-09-04T06:31:06.119+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:06.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.219+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:06.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:06.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:31:06.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:06.232+0000 D2 
REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:06.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), opTime: { ts: Timestamp(1567578665, 1), t: 1 }, wallTime: new Date(1567578665112) } 2019-09-04T06:31:06.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:06.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:06.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.320+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:06.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.420+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:06.520+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:06.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.566+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.583+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.583+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:31:06.620+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:06.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.720+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:06.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.820+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:06.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:06.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 740) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:06.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 740 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:16.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:06.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:34.839+0000 2019-09-04T06:31:06.838+0000 D2 ASIO [Replication] Request 740 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), opTime: { ts: Timestamp(1567578665, 1), t: 1 }, wallTime: new Date(1567578665112), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578665, 1), $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578665, 1) } 2019-09-04T06:31:06.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), opTime: { ts: Timestamp(1567578665, 1), t: 1 }, wallTime: new Date(1567578665112), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578665, 1), $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578665, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:06.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:06.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 740) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), opTime: { ts: Timestamp(1567578665, 1), t: 1 }, wallTime: new Date(1567578665112), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578665, 1), $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578665, 1) } 2019-09-04T06:31:06.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:31:06.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:08.838Z 2019-09-04T06:31:06.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:34.839+0000 2019-09-04T06:31:06.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:06.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 741) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:06.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 741 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:16.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:06.839+0000 D3 EXECUTOR 
[replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:34.839+0000 2019-09-04T06:31:06.839+0000 D2 ASIO [Replication] Request 741 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), opTime: { ts: Timestamp(1567578665, 1), t: 1 }, wallTime: new Date(1567578665112), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578665, 1), $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578665, 1) } 2019-09-04T06:31:06.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), opTime: { ts: Timestamp(1567578665, 1), t: 1 }, wallTime: new Date(1567578665112), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578665, 1), $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578665, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:31:06.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:06.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 741) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), opTime: { ts: Timestamp(1567578665, 1), t: 1 }, wallTime: new Date(1567578665112), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578665, 1), $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578665, 1) } 2019-09-04T06:31:06.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:31:06.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:31:15.916+0000 2019-09-04T06:31:06.839+0000 D4 ELECTION [replexec-3] 
Scheduling election timeout callback at 2019-09-04T06:31:17.053+0000 2019-09-04T06:31:06.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:31:06.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:08.839Z 2019-09-04T06:31:06.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:06.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:36.839+0000 2019-09-04T06:31:06.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:36.839+0000 2019-09-04T06:31:06.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:06.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:06.920+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:07.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:07.020+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:07.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:07.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:07.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:07.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:07.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:07.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), opTime: { ts: Timestamp(1567578665, 1), t: 1 }, wallTime: new Date(1567578665112) }
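
The exchange above is one full heartbeat cycle seen from this node (cmodb803, fromId 1): it pings both peers, learns that cmodb802 is still primary (state: 1) and that cmodb804 is a secondary syncing from the primary (state: 2), schedules the next heartbeat 2 seconds out, and pushes its election timeout roughly 10 seconds into the future because the primary answered. Those intervals are consistent with the replica-set defaults (heartbeatIntervalMillis: 2000, electionTimeoutMillis: 10000 plus a randomized offset). replSetHeartbeat itself is an internal command, but the same member-state view is available through replSetGetStatus; a minimal PyMongo sketch, assuming the host name from this log is reachable and, as configured in this deployment, authorization is disabled:

    # Sketch: summarize the member states the heartbeats above are tracking.
    # replSetGetStatus is the supported, external equivalent of the internal
    # replSetHeartbeat traffic in this log.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        # e.g. cmodb802.togewa.com:27019 PRIMARY, cmodb803 ... SECONDARY
        print(m["name"], m["stateStr"], m.get("syncingTo", "-"))
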
2019-09-04T06:31:07.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.066+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:07.066+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.115+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:07.115+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:07.116+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578665, 1) 2019-09-04T06:31:07.116+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10940 2019-09-04T06:31:07.116+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:07.116+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:07.116+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10940 2019-09-04T06:31:07.116+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10943 2019-09-04T06:31:07.117+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10943 2019-09-04T06:31:07.117+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578665, 1), t: 1 }({ ts: Timestamp(1567578665, 1), t: 1 }) 2019-09-04T06:31:07.121+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:07.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:07.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:07.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:07.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:07.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:07.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.221+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:07.235+0000 D4 STORAGE
[FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:07.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:07.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:07.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.321+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:07.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:07.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.421+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:07.521+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:07.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:07.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:07.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.621+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:07.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:07.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:07.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:07.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:07.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:07.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.721+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:07.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:07.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.821+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:07.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
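
Nearly all of the remaining traffic in this window is the same two-record pattern: a client runs isMaster against admin.$cmd and gets a 907-byte reply back in under a millisecond. Each connection (conn22, conn31, conn33, ...) re-checks this member about every 500 ms, which is consistent with the periodic server checks the cluster's other mongos and mongod processes run against the config servers. The same command can be issued directly; a minimal PyMongo sketch under the same assumptions as above (host from this log, no auth):

    # Sketch: issue the same { isMaster: 1 } command the monitors above run.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    reply = client.admin.command("isMaster")
    # On a secondary config server, expect ismaster: False, secondary: True.
    print(reply["setName"], reply["ismaster"], reply["secondary"])
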
2019-09-04T06:31:07.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:07.922+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:08.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:08.022+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:08.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:08.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:08.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:08.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:08.116+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:08.116+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:08.116+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578665, 1) 2019-09-04T06:31:08.116+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10966 2019-09-04T06:31:08.116+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:08.116+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:08.116+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10966 2019-09-04T06:31:08.117+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10969 2019-09-04T06:31:08.117+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10969 2019-09-04T06:31:08.117+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578665, 1), t: 1 }({ ts: Timestamp(1567578665, 1), t: 1 }) 2019-09-04T06:31:08.122+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:08.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:08.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:08.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:08.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:08.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:08.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:08.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:08.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:08.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:08.218+0000 I COMMAND
[conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:08.222+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:08.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:08.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:31:08.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:08.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:08.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), opTime: { ts: Timestamp(1567578665, 1), t: 1 }, wallTime: new Date(1567578665112) } 2019-09-04T06:31:08.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578665, 1), signature: { hash: BinData(0, C89261ADBEBF71472476FDA4ADA89D9789614141), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:08.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:08.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:31:08.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:08.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:08.316+0000 D2 ASIO [RS] Request 738 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578668, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578668309), o: { $v: 1, $set: { ping: new Date(1567578668309) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpApplied: { ts: Timestamp(1567578668, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578665, 1), $clusterTime: { clusterTime: Timestamp(1567578668, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 1) } 2019-09-04T06:31:08.316+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578668, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578668309), o: { $v: 1, $set: { ping: new Date(1567578668309) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpApplied: { ts: Timestamp(1567578668, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578665, 1), $clusterTime: { clusterTime: Timestamp(1567578668, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:08.316+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:08.316+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578668, 1) and ending at ts: Timestamp(1567578668, 1) 2019-09-04T06:31:08.316+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:17.053+0000 2019-09-04T06:31:08.316+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:19.644+0000 2019-09-04T06:31:08.316+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
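
Request 738 and its successors show the oplog fetcher in steady state: this secondary holds cursor 2779728788818727477 open on its sync source's local.oplog.rs and loops on getMore (the awaitData timeout of maxTimeMS: 5000 is why an otherwise idle set still produces a round-trip every few seconds). Here a single update to config.lockpings arrives, resets _lastOpTimeFetched, and postpones the election timeout again. The same tailing technique is available to any client through a tailable awaitData cursor; a read-only PyMongo sketch against the sync source named in this log:

    # Sketch: tail the sync source's oplog the way the fetcher above does.
    from pymongo import CursorType, MongoClient

    client = MongoClient("cmodb804.togewa.com", 27019)
    oplog = client.local["oplog.rs"]
    # TAILABLE_AWAIT blocks on the server instead of busy-polling the client.
    for entry in oplog.find(cursor_type=CursorType.TAILABLE_AWAIT):
        print(entry["ts"], entry["op"], entry["ns"])  # e.g. ... u config.lockpings
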
2019-09-04T06:31:08.316+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:36.839+0000 2019-09-04T06:31:08.316+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578668, 1), t: 1 } 2019-09-04T06:31:08.316+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:08.316+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:08.316+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578665, 1) 2019-09-04T06:31:08.316+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10980 2019-09-04T06:31:08.316+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:08.316+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:08.316+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10980 2019-09-04T06:31:08.316+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:08.316+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:31:08.316+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:08.316+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578665, 1) 2019-09-04T06:31:08.316+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 10983 2019-09-04T06:31:08.316+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578668, 1) } 2019-09-04T06:31:08.316+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:08.316+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:08.316+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 10983 2019-09-04T06:31:08.316+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10970 2019-09-04T06:31:08.316+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10970 2019-09-04T06:31:08.316+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10986 2019-09-04T06:31:08.317+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10986 2019-09-04T06:31:08.317+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:08.317+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 10988 2019-09-04T06:31:08.317+0000 D4 STORAGE [repl-writer-worker-1] inserting record with timestamp Timestamp(1567578668, 1) 2019-09-04T06:31:08.317+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578668, 1) 2019-09-04T06:31:08.317+0000 D3 STORAGE [repl-writer-worker-1] WT
commit_transaction for snapshot id 10988 2019-09-04T06:31:08.317+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:08.317+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:31:08.317+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10987 2019-09-04T06:31:08.317+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10987 2019-09-04T06:31:08.317+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10990 2019-09-04T06:31:08.317+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10990 2019-09-04T06:31:08.317+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578668, 1), t: 1 }({ ts: Timestamp(1567578668, 1), t: 1 }) 2019-09-04T06:31:08.317+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578668, 1) 2019-09-04T06:31:08.317+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10991 2019-09-04T06:31:08.317+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578668, 1) } } ] } sort: {} projection: {} 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578668, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578668, 1) || First: notFirst: full path: ts 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578668, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
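
The D5 QUERY burst above is the plan enumerator working on the minvalid consistency check. The rooted $or is subplanned branch by branch; the only index on local.replset.minvalid is _id, neither branch constrains _id, so each branch is rated with zero indexed solutions and the planner falls back to a collection scan, which is harmless here because the collection effectively holds a single document. The same decision can be reproduced from a client with explain(); a sketch using the same predicate shape, with the timestamp value copied from this log:

    # Sketch: re-run the planner on the rooted-$or shape logged above.
    from bson.timestamp import Timestamp
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    minvalid = client.local["replset.minvalid"]
    query = {"$or": [{"t": {"$lt": 1}},
                     {"t": 1, "ts": {"$lt": Timestamp(1567578668, 1)}}]}
    plan = minvalid.find(query).explain()
    # Expect a collection scan, matching the COLLSCAN chosen above.
    print(plan["queryPlanner"]["winningPlan"])
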
2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578668, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578668, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:08.317+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578668, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:08.317+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10991 2019-09-04T06:31:08.317+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:08.317+0000 D3 STORAGE [repl-writer-worker-5] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:08.317+0000 D3 REPL [repl-writer-worker-5] applying op: { ts: Timestamp(1567578668, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578668309), o: { $v: 1, $set: { ping: new Date(1567578668309) } } }, oplog application mode: Secondary 2019-09-04T06:31:08.317+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578668, 1) 2019-09-04T06:31:08.317+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 10993 2019-09-04T06:31:08.317+0000 D2 QUERY [repl-writer-worker-5] Using idhack: { _id: "ConfigServer" } 2019-09-04T06:31:08.317+0000 D4 WRITE [repl-writer-worker-5] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:31:08.317+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 10993 2019-09-04T06:31:08.318+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:08.318+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578668, 1), t: 1 }({ ts: Timestamp(1567578668, 1), t: 1 }) 2019-09-04T06:31:08.318+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578668, 1) 2019-09-04T06:31:08.318+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10992 2019-09-04T06:31:08.318+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:31:08.318+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:08.318+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:31:08.318+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:08.318+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:08.318+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:08.318+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10992 2019-09-04T06:31:08.318+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578668, 1) 2019-09-04T06:31:08.318+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10997 2019-09-04T06:31:08.318+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 10997 2019-09-04T06:31:08.318+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578668, 1), t: 1 }({ ts: Timestamp(1567578668, 1), t: 1 }) 2019-09-04T06:31:08.318+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:08.318+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578668, 1), t: 1 }, appliedWallTime: new Date(1567578668309), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:08.318+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 742 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:38.318+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578668, 1), t: 1 }, appliedWallTime: new Date(1567578668309), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 2, cfgver: 2 } ], 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578665, 1), t: 1 }, lastCommittedWall: new Date(1567578665112), lastOpVisible: { ts: Timestamp(1567578665, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:08.318+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.318+0000 2019-09-04T06:31:08.318+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578668, 1), t: 1 } 2019-09-04T06:31:08.318+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 743 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:18.318+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578665, 1), t: 1 } } 2019-09-04T06:31:08.318+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.318+0000 2019-09-04T06:31:08.318+0000 D2 ASIO [RS] Request 742 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 1), $clusterTime: { clusterTime: Timestamp(1567578668, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 1) } 2019-09-04T06:31:08.318+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 1), $clusterTime: { clusterTime: Timestamp(1567578668, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:08.318+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:08.318+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.318+0000 2019-09-04T06:31:08.319+0000 D2 ASIO [RS] Request 743 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpApplied: { ts: Timestamp(1567578668, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578668, 1), $clusterTime: { clusterTime: Timestamp(1567578668, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 1) } 2019-09-04T06:31:08.319+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpApplied: { ts: Timestamp(1567578668, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 1), $clusterTime: { clusterTime: Timestamp(1567578668, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:08.319+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:08.319+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:31:08.319+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000 2019-09-04T06:31:08.319+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000 2019-09-04T06:31:08.319+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578663, 1) 2019-09-04T06:31:08.319+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:19.644+0000 2019-09-04T06:31:08.319+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:19.790+0000 2019-09-04T06:31:08.319+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:08.319+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:36.839+0000 2019-09-04T06:31:08.319+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 744 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:18.319+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578668, 1), t: 1 } } 2019-09-04T06:31:08.319+0000 D3 REPL [conn229] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn229] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000 2019-09-04T06:31:08.319+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.318+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn259] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn259] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:24.151+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn255] Got notified of new snapshot: { ts: 
Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn255] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.877+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn234] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn234] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.033+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn262] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn262] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.051+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn267] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn267] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn261] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn261] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.023+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn256] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn256] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.918+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn240] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn240] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.874+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn231] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn231] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.190+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn233] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn233] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.024+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn242] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn242] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.507+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn237] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn237] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.409+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn268] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn268] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:22.595+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn269] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn269] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:25.060+0000 2019-09-04T06:31:08.319+0000 D3 REPL [conn265] Got notified of new snapshot: { ts: 
Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000
2019-09-04T06:31:08.319+0000 D3 REPL [conn265] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.660+0000
2019-09-04T06:31:08.319+0000 D3 REPL [conn227] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000
2019-09-04T06:31:08.319+0000 D3 REPL [conn227] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.888+0000
2019-09-04T06:31:08.319+0000 D3 REPL [conn224] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000
2019-09-04T06:31:08.319+0000 D3 REPL [conn224] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000
2019-09-04T06:31:08.319+0000 D3 REPL [conn258] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000
2019-09-04T06:31:08.319+0000 D3 REPL [conn258] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:14.433+0000
2019-09-04T06:31:08.319+0000 D3 REPL [conn266] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000
2019-09-04T06:31:08.319+0000 D3 REPL [conn266] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000
2019-09-04T06:31:08.319+0000 D3 REPL [conn263] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000
2019-09-04T06:31:08.319+0000 D3 REPL [conn263] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.415+0000
2019-09-04T06:31:08.319+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:31:08.319+0000 D3 REPL [conn241] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000
2019-09-04T06:31:08.319+0000 D3 REPL [conn241] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.879+0000
2019-09-04T06:31:08.319+0000 D3 REPL [conn257] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000
2019-09-04T06:31:08.319+0000 D3 REPL [conn257] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.293+0000
2019-09-04T06:31:08.319+0000 D3 REPL [conn236] Got notified of new snapshot: { ts: Timestamp(1567578668, 1), t: 1 }, 2019-09-04T06:31:08.309+0000
2019-09-04T06:31:08.319+0000 D3 REPL [conn236] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.040+0000
2019-09-04T06:31:08.319+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:08.320+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578668, 1), t: 1 }, durableWallTime: new Date(1567578668309), appliedOpTime: { ts: Timestamp(1567578668, 1), t: 1 }, appliedWallTime: new Date(1567578668309), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:08.320+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 745 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:38.320+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578668, 1), t: 1 }, durableWallTime: new Date(1567578668309), appliedOpTime: { ts: Timestamp(1567578668, 1), t: 1 }, appliedWallTime: new Date(1567578668309), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:08.320+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.318+0000
2019-09-04T06:31:08.320+0000 D2 ASIO [RS] Request 745 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 1), $clusterTime: { clusterTime: Timestamp(1567578668, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 1) }
2019-09-04T06:31:08.320+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 1), $clusterTime: { clusterTime: Timestamp(1567578668, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:08.320+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:08.320+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.318+0000
2019-09-04T06:31:08.322+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:08.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:08.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:08.417+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578668, 1)
2019-09-04T06:31:08.422+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:08.522+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:08.524+0000 D2 ASIO [RS] Request 744 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578668, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578668506), o: { $v: 1, $set: { ping: new Date(1567578668506) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpApplied: { ts: Timestamp(1567578668, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 1), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) }
2019-09-04T06:31:08.524+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578668, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578668506), o: { $v: 1, $set: { ping: new Date(1567578668506) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpApplied: { ts: Timestamp(1567578668, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 1), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:08.524+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:08.524+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578668, 2) and ending at ts: Timestamp(1567578668, 2)
2019-09-04T06:31:08.524+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:19.790+0000
2019-09-04T06:31:08.524+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:19.491+0000
2019-09-04T06:31:08.524+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:08.524+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:36.839+0000
2019-09-04T06:31:08.524+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578668, 2), t: 1 }
2019-09-04T06:31:08.524+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:08.524+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:08.524+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578668, 1)
2019-09-04T06:31:08.524+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11001
2019-09-04T06:31:08.524+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:08.524+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:08.524+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11001
2019-09-04T06:31:08.524+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:08.524+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:31:08.524+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:08.524+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578668, 2) }
2019-09-04T06:31:08.524+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578668, 1)
2019-09-04T06:31:08.524+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11004
2019-09-04T06:31:08.524+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:08.524+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:08.524+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11004
2019-09-04T06:31:08.524+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 10998
2019-09-04T06:31:08.524+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 10998
2019-09-04T06:31:08.524+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11007
2019-09-04T06:31:08.524+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11007
2019-09-04T06:31:08.525+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:08.525+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 11009
2019-09-04T06:31:08.525+0000 D4 STORAGE [repl-writer-worker-9] inserting record with timestamp Timestamp(1567578668, 2)
2019-09-04T06:31:08.525+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578668, 2)
2019-09-04T06:31:08.525+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 11009
2019-09-04T06:31:08.525+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:08.525+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:31:08.525+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11008
2019-09-04T06:31:08.525+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11008
2019-09-04T06:31:08.525+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11011
2019-09-04T06:31:08.525+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11011
2019-09-04T06:31:08.525+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578668, 2), t: 1 }({ ts: Timestamp(1567578668, 2), t: 1 })
2019-09-04T06:31:08.525+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578668, 2)
2019-09-04T06:31:08.525+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11012
2019-09-04T06:31:08.525+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578668, 2) } } ] } sort: {} projection: {}
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578668, 2) Sort: {} Proj: {} =============================
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578668, 2) || First: notFirst: full path: ts
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578668, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
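The D2/D5 QUERY entries around this point show the batch applier consulting local.replset.minvalid with an $or of two optime predicates; because that collection carries only the _id index, each $or branch rates zero indexed solutions and the subplanner falls back to a collection scan (the trace for the remaining child and the merged $or continues below). A minimal pymongo sketch of the same read follows; the host, the direct connection, and the snippet itself are illustrative assumptions, not something mongod runs.

# Sketch: re-issue the minvalid read the ReplBatcher/rsSync-0 threads plan here.
# Assumes network access to this node; host/port are taken from the log context.
from pymongo import MongoClient
from bson.timestamp import Timestamp

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)

# Same $or shape the subplanner splits into two children above; with only
# the _id index available, a COLLSCAN is the only possible plan.
doc = client.local["replset.minvalid"].find_one({
    "$or": [
        {"t": {"$lt": 1}},
        {"t": 1, "ts": {"$lt": Timestamp(1567578668, 2)}},
    ]
})
print(doc)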
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578668, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578668, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578668, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:08.525+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11012
2019-09-04T06:31:08.525+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:08.525+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:08.525+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578668, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578668506), o: { $v: 1, $set: { ping: new Date(1567578668506) } } }, oplog application mode: Secondary
2019-09-04T06:31:08.525+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578668, 2)
2019-09-04T06:31:08.525+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 11014
2019-09-04T06:31:08.525+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }
2019-09-04T06:31:08.525+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:31:08.525+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 11014
2019-09-04T06:31:08.525+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:08.525+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578668, 2), t: 1 }({ ts: Timestamp(1567578668, 2), t: 1 })
2019-09-04T06:31:08.525+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578668, 2)
2019-09-04T06:31:08.525+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11013
2019-09-04T06:31:08.525+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:31:08.526+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:08.526+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:31:08.526+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:08.526+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:08.526+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:08.526+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11013
2019-09-04T06:31:08.526+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578668, 2)
2019-09-04T06:31:08.526+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11017
2019-09-04T06:31:08.526+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11017
2019-09-04T06:31:08.526+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578668, 2), t: 1 }({ ts: Timestamp(1567578668, 2), t: 1 })
2019-09-04T06:31:08.526+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:08.526+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578668, 1), t: 1 }, durableWallTime: new Date(1567578668309), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:08.526+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 746 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:38.526+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578668, 1), t: 1 }, durableWallTime: new Date(1567578668309), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:08.526+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.526+0000
2019-09-04T06:31:08.526+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578668, 2), t: 1 }
2019-09-04T06:31:08.526+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 747 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:18.526+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578668, 1), t: 1 } }
2019-09-04T06:31:08.526+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.526+0000
2019-09-04T06:31:08.526+0000 D2 ASIO [RS] Request 746 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 1), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) }
2019-09-04T06:31:08.526+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 1), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:08.526+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:08.526+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.526+0000
2019-09-04T06:31:08.530+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:31:08.530+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:08.530+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:08.530+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 748 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:38.530+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, durableWallTime: new Date(1567578665112), appliedOpTime: { ts: Timestamp(1567578665, 1), t: 1 }, appliedWallTime: new Date(1567578665112), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:08.530+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.526+0000
2019-09-04T06:31:08.530+0000 D2 ASIO [RS] Request 748 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 1), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) }
2019-09-04T06:31:08.530+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 1), t: 1 }, lastCommittedWall: new Date(1567578668309), lastOpVisible: { ts: Timestamp(1567578668, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 1), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:08.530+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:08.530+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.526+0000
2019-09-04T06:31:08.531+0000 D2 ASIO [RS] Request 747 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpApplied: { ts: Timestamp(1567578668, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) }
2019-09-04T06:31:08.531+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpApplied: { ts: Timestamp(1567578668, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:08.531+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:08.531+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:31:08.531+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578663, 2)
2019-09-04T06:31:08.531+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:19.491+0000
2019-09-04T06:31:08.531+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:19.893+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn233] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn233] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.024+0000
2019-09-04T06:31:08.531+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 749 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:18.531+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578668, 2), t: 1 } }
2019-09-04T06:31:08.531+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:08.531+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.526+0000
2019-09-04T06:31:08.531+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:36.839+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn256] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn256] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.918+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn269] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn269] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:25.060+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn224] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn224] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn258] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn258] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:14.433+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn242] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn242] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.507+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn257] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn257] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:12.293+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn229] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn229] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.924+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn267] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn267] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn234] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn234] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.033+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn237] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn237] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.409+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn231] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn231] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:13.190+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn255] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn255] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.877+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn259] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn259] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:24.151+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn261] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn261] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.023+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn236] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.531+0000 D3 REPL [conn236] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.040+0000
2019-09-04T06:31:08.532+0000 D3 REPL [conn266] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.532+0000 D3 REPL [conn266] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000
2019-09-04T06:31:08.532+0000 D3 REPL [conn227] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.532+0000 D3 REPL [conn227] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.888+0000
2019-09-04T06:31:08.532+0000 D3 REPL [conn240] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.532+0000 D3 REPL [conn240] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.874+0000
2019-09-04T06:31:08.532+0000 D3 REPL [conn265] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.532+0000 D3 REPL [conn265] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.660+0000
2019-09-04T06:31:08.532+0000 D3 REPL [conn263] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.532+0000 D3 REPL [conn263] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.415+0000
2019-09-04T06:31:08.532+0000 D3 REPL [conn262] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.532+0000 D3 REPL [conn262] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.051+0000
2019-09-04T06:31:08.532+0000 D3 REPL [conn241] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.532+0000 D3 REPL [conn241] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:10.879+0000
2019-09-04T06:31:08.532+0000 D3 REPL [conn268] Got notified of new snapshot: { ts: Timestamp(1567578668, 2), t: 1 }, 2019-09-04T06:31:08.506+0000
2019-09-04T06:31:08.532+0000 D3 REPL [conn268] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:22.595+0000
2019-09-04T06:31:08.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:08.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:08.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:08.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:08.622+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:08.624+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578668, 2)
2019-09-04T06:31:08.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:08.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:08.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:08.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:08.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:08.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:08.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:08.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:08.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:08.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:08.723+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:08.823+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:08.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:08.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 750) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:08.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 750 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:18.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:08.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:36.839+0000
2019-09-04T06:31:08.838+0000 D2 ASIO [Replication] Request 750 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) }
2019-09-04T06:31:08.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:08.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:08.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 750) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) }
2019-09-04T06:31:08.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:31:08.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:10.838Z
2019-09-04T06:31:08.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:36.839+0000
2019-09-04T06:31:08.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:08.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 751) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:08.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 751 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:18.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:08.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:36.839+0000
2019-09-04T06:31:08.839+0000 D2 ASIO [Replication] Request 751 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) }
2019-09-04T06:31:08.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:31:08.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:08.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 751) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) }
2019-09-04T06:31:08.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:31:08.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:31:19.893+0000
2019-09-04T06:31:08.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:31:19.489+0000
2019-09-04T06:31:08.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:08.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
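The REPL_HB exchange above (requests 750 and 751) is this secondary's heartbeat round against cmodb804 and the primary cmodb802; the "Scheduling heartbeat ... at" entries show the next round queued two seconds out, and the durable/applied optimes and member states carried in each response are exactly what replSetGetStatus reports per member. A hedged pymongo sketch of reading that same state externally; host and connection options are illustrative assumptions:

# Sketch: surface the heartbeat-derived member state the setUpValues
# entries above record after each good heartbeat response.
from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
status = client.admin.command("replSetGetStatus")

for member in status["members"]:
    # name/stateStr/optime mirror the per-member values exchanged in the
    # heartbeat responses logged above.
    print(member["name"], member["stateStr"], member.get("optime"))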
2019-09-04T06:31:08.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.839+0000
2019-09-04T06:31:08.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:10.839Z
2019-09-04T06:31:08.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.839+0000
2019-09-04T06:31:08.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:08.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:08.923+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:09.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:09.023+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:09.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:09.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:09.061+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:09.061+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:31:09.061+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:09.061+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:09.061+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506) }
2019-09-04T06:31:09.061+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:09.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:09.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:09.123+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:09.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:09.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:09.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:09.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:09.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:09.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:09.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:09.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:09.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:09.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:09.223+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:09.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:09.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:09.324+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:09.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:09.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:09.424+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:09.524+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:09.524+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:09.524+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:09.524+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578668, 2)
2019-09-04T06:31:09.525+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11040
2019-09-04T06:31:09.525+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:09.525+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:09.525+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11040
2019-09-04T06:31:09.526+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11043
2019-09-04T06:31:09.526+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11043
2019-09-04T06:31:09.526+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578668, 2), t: 1 }({ ts: Timestamp(1567578668, 2), t: 1 })
2019-09-04T06:31:09.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:09.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:09.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:09.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:09.624+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:09.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:09.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:09.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:09.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:09.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:09.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:09.712+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:09.712+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:09.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:09.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:09.724+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:09.766+0000 D2 COMMAND [conn61] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578668, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578668, 2), t: 1 } }, $db: "config" }
2019-09-04T06:31:09.766+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578668, 2), t: 1 } } }
2019-09-04T06:31:09.766+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:31:09.766+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578668, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578668, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578668, 2)
2019-09-04T06:31:09.766+0000 D5 QUERY [conn61] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:31:09.766+0000 D5 QUERY [conn61] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:31:09.766+0000 D5 QUERY [conn61] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:31:09.766+0000 D5 QUERY [conn61] Rated tree: $and
2019-09-04T06:31:09.766+0000 D5 QUERY [conn61] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:09.766+0000 D5 QUERY [conn61] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:09.766+0000 D2 QUERY [conn61] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:09.767+0000 D3 STORAGE [conn61] WT begin_transaction for snapshot id 11052
2019-09-04T06:31:09.767+0000 D3 STORAGE [conn61] WT rollback_transaction for snapshot id 11052
2019-09-04T06:31:09.767+0000 I COMMAND [conn61] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578668, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578668, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:31:09.824+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:09.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:09.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:09.925+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:10.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:10.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:31:10.003+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:31:10.003+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms
2019-09-04T06:31:10.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:31:10.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:31:10.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:31:10.011+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:31:10.011+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456
2019-09-04T06:31:10.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:31:10.011+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:31:10.012+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:31:10.012+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:31:10.012+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:31:10.013+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:31:10.014+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:31:10.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:10.014+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} =============================
2019-09-04T06:31:10.014+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:31:10.014+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:31:10.014+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:31:10.014+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:31:10.014+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:31:10.014+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:31:10.014+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:10.014+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:10.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:10.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578668, 2)
2019-09-04T06:31:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11061
2019-09-04T06:31:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11061
2019-09-04T06:31:10.014+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:31:10.014+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:31:10.014+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:31:10.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:31:10.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:10.015+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} =============================
2019-09-04T06:31:10.015+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:31:10.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:31:10.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578668, 2) 2019-09-04T06:31:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11064 2019-09-04T06:31:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11064 2019-09-04T06:31:10.015+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:10.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:31:10.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:10.015+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:31:10.015+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:31:10.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:31:10.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578668, 2) 2019-09-04T06:31:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11066 2019-09-04T06:31:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11066 2019-09-04T06:31:10.015+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:10.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:31:10.015+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. 
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:31:10.015+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:31:10.015+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:10.015+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:31:10.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:31:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11069 2019-09-04T06:31:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:31:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:31:10.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:31:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11069 2019-09-04T06:31:10.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:31:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11070 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11070 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11071 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11071 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11072 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11072 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11073 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11073 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11074 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
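The run of STORAGE lines above and below is a single listDatabases command on conn90 walking the durable catalog: for each collection the server opens a short-lived WiredTiger snapshot, fetches the cached catalog entry (namespace, UUID, options, index specs, and the idents naming the backing .wt files) from _mdb_catalog, and immediately rolls the read-only transaction back; the walk continues through config.mongos, config.locks, and the rest before the command's summary line is logged. A minimal sketch of the client side of that call, with a placeholder connection string rather than values from this deployment:

    # Client-side equivalent of the listDatabases command running on conn90.
    # The URI, user, and password are placeholders, not taken from this log.
    from pymongo import MongoClient

    client = MongoClient("mongodb://dba_root:PASSWORD@localhost:27019/?authSource=admin")

    # Sent on the wire as: admin.$cmd { listDatabases: 1, ... }
    result = client.admin.command("listDatabases")
    for db in result["databases"]:
        print(db["name"], db["sizeOnDisk"])

Each begin_transaction/rollback_transaction pair in this stretch is one collection visited by the walk, which is why the command's eventual summary line reports Collection: { acquireCount: { r: 21 } } for a catalog of this size.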
2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11074 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11075 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11075 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11076 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11076 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11077 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11077 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11078 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11078 
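Each catalog entry also pairs logical index names with their on-disk idents (the idxIdent map), for example _id_ -> config/index/51--6194257481163143499 for config.version; given the directoryPerDB and directoryForIndexes options this node was started with, those idents resolve to .wt files under per-database collection/ and index/ subdirectories of the dbpath. The embedded index specs correspond to what clients observe through listIndexes; a small sketch, again assuming a placeholder connection:

    # The "indexes" arrays in the catalog metadata correspond to what
    # listIndexes returns to clients. Placeholder connection string.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")
    for name, spec in client.config.locks.index_information().items():
        print(name, spec["key"])
    # Per the config.locks entry above, this should show ts_1,
    # state_1_process_1, and _id_ with their key patterns.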
2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11079 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
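The config.chunks entry just logged also explains the planner decision seen earlier on conn90: all three unique indexes (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1) lead with ns, so the count with query { jumbo: true } rated zero indexed solutions and fell back to a collection scan. That plan choice can be reproduced with explain; a sketch under the same placeholder connection assumptions:

    # Reproduce the COLLSCAN chosen for the jumbo-chunk count: no index on
    # config.chunks leads with the "jumbo" field. Placeholder connection.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")
    plan = client.config.command(
        "explain", {"count": "chunks", "query": {"jumbo": True}}
    )
    print(plan["queryPlanner"]["winningPlan"])  # expect a COLLSCAN input stage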
2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11079 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11080 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11080 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11081 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11081 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11082 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11082 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11083 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
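Stepping back, the conn90 session in this window reads like a routine monitoring sweep: SCRAM-SHA-1 authentication as dba_root, then serverStatus, replSetGetStatus, a jumbo-chunk count, the first and last oplog entries (forced $natural table scans, used to measure the replication window), listDatabases, and per-database dbStats. A condensed sketch of that sweep from a driver, with placeholder credentials:

    # Condensed version of the monitoring sweep visible on conn90.
    # URI, user, and password are placeholders, not values from this log.
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://dba_root:PASSWORD@localhost:27019/?authSource=admin",
        readPreference="secondaryPreferred",
    )

    server_status = client.admin.command("serverStatus")
    rs_status = client.admin.command("replSetGetStatus")
    jumbo_chunks = client.config.command("count", "chunks", query={"jumbo": True})

    # First and last oplog entries, mirroring the two $natural-sorted finds
    # on local.oplog.rs logged above.
    oplog = client.local["oplog.rs"]
    first = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
    last = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])
    window_secs = last["ts"].time - first["ts"].time  # replication window, seconds

    for name in client.list_database_names():
        print(name, client[name].command("dbStats")["dataSize"])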
2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11083 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11084 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:31:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11084 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11085 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11085 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11086 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11086 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11087 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 11087 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11088 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11088 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11089 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11089 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11090 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11090 2019-09-04T06:31:10.017+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:31:10.017+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11092 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11092 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11093 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11093 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11094 2019-09-04T06:31:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11094 2019-09-04T06:31:10.017+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:10.018+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11096 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11096 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11097 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11097 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11098 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11098 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11099 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11099 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11100 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11100 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11101 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11101 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11102 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11102 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 11103 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11103 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11104 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11104 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11105 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11105 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11106 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11106 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11107 2019-09-04T06:31:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11107 2019-09-04T06:31:10.018+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:10.028+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:31:10.028+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11109 2019-09-04T06:31:10.028+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11109 2019-09-04T06:31:10.028+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11110 2019-09-04T06:31:10.028+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11110 2019-09-04T06:31:10.028+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11111 2019-09-04T06:31:10.028+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11111 2019-09-04T06:31:10.028+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11112 2019-09-04T06:31:10.028+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11112 2019-09-04T06:31:10.028+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11113 2019-09-04T06:31:10.028+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11113 2019-09-04T06:31:10.028+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11114 2019-09-04T06:31:10.028+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11114 2019-09-04T06:31:10.028+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:10.029+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:10.040+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.040+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.052+0000 D2 COMMAND [conn58] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.055+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.056+0000 I COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.130+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:10.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.181+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.181+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.212+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.212+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.230+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:10.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:10.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:31:10.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:10.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:10.232+0000 D2 
REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506) } 2019-09-04T06:31:10.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:10.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:10.330+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:10.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.430+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:10.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.464+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578647, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578666, 1), signature: { hash: BinData(0, 14F54F2D50CB0B508C0011F65FD8CEF5CDB2EABA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578647, 1), t: 1 } }, $db: "config" } 2019-09-04T06:31:10.465+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578647, 1), t: 1 } } } 2019-09-04T06:31:10.465+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:31:10.465+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578647, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578666, 1), signature: { hash: BinData(0, 14F54F2D50CB0B508C0011F65FD8CEF5CDB2EABA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578647, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578668, 2) 2019-09-04T06:31:10.465+0000 D2 QUERY [conn71] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:31:10.465+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578647, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578666, 1), signature: { hash: BinData(0, 14F54F2D50CB0B508C0011F65FD8CEF5CDB2EABA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578647, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:31:10.525+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:10.525+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:10.525+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578668, 2) 2019-09-04T06:31:10.525+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11130 2019-09-04T06:31:10.525+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:10.525+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:10.525+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11130 2019-09-04T06:31:10.526+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11133 2019-09-04T06:31:10.526+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11133 2019-09-04T06:31:10.526+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578668, 2), t: 1 }({ ts: Timestamp(1567578668, 2), t: 1 }) 2019-09-04T06:31:10.530+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:10.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.567+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.630+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:10.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { 
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.681+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.681+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.730+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:10.830+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:10.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:10.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 752) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:10.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 752 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:20.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:10.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.839+0000 2019-09-04T06:31:10.838+0000 D2 ASIO [Replication] Request 752 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } 2019-09-04T06:31:10.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } target: 
cmodb804.togewa.com:27019 2019-09-04T06:31:10.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:10.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 752) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } 2019-09-04T06:31:10.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:31:10.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:12.838Z 2019-09-04T06:31:10.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.839+0000 2019-09-04T06:31:10.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:10.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 753) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:10.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 753 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:20.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:10.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.839+0000 2019-09-04T06:31:10.839+0000 D2 ASIO [Replication] Request 753 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } 2019-09-04T06:31:10.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: 
Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:31:10.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:10.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 753) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } 2019-09-04T06:31:10.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:31:10.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:31:19.489+0000 2019-09-04T06:31:10.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:31:21.491+0000 2019-09-04T06:31:10.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:31:10.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:12.839Z 2019-09-04T06:31:10.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:10.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:40.839+0000 2019-09-04T06:31:10.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:40.839+0000 2019-09-04T06:31:10.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.876+0000 I COMMAND [conn240] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578640, 1), signature: { hash: BinData(0, 9AA6EC49743F043950E16BB5631473231B19B5FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:10.876+0000 D1 - [conn240] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:10.876+0000 W - [conn240] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:10.880+0000 I COMMAND [conn255] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578631, 1), signature: { hash: BinData(0, 126D9B97A246E71CB7F886F7E917931B13D03AC4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:10.880+0000 D1 - [conn255] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:10.880+0000 W - [conn255] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:10.880+0000 I COMMAND [conn241] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578632, 1), signature: { hash: BinData(0, 7B0D7364553F28B05AA42FCCC1A524375FA6A20E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:10.880+0000 D1 - [conn241] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:10.880+0000 W - [conn241] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:10.890+0000 I COMMAND [conn227] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:10.890+0000 D1 - [conn227] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:10.890+0000 W - [conn227] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:10.894+0000 I - [conn240] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:10.894+0000 D1 COMMAND [conn240] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578640, 1), signature: { hash: BinData(0, 9AA6EC49743F043950E16BB5631473231B19B5FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:10.894+0000 D1 - [conn240] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:10.894+0000 W - [conn240] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:10.903+0000 I NETWORK [listener] connection accepted from 10.108.2.60:44880 #275 (88 connections now open) 2019-09-04T06:31:10.903+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:10.903+0000 D2 COMMAND [conn275] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:10.903+0000 I NETWORK [conn275] received client metadata from 10.108.2.60:44880 conn275: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:10.903+0000 I COMMAND [conn275] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:10.911+0000 D1 COMMAND [conn241] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578632, 1), signature: { hash: BinData(0, 7B0D7364553F28B05AA42FCCC1A524375FA6A20E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:10.911+0000 D1 - [conn241] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:10.911+0000 W - [conn241] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:10.916+0000 I NETWORK [listener] connection accepted from 10.108.2.61:37954 #276 (89 connections now open) 2019-09-04T06:31:10.916+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:10.916+0000 D2 COMMAND [conn276] run command admin.$cmd { isMaster: 1, client: { driver: {
name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:10.916+0000 I NETWORK [conn276] received client metadata from 10.108.2.61:37954 conn276: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:10.917+0000 I COMMAND [conn276] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:10.917+0000 I NETWORK [listener] connection accepted from 10.108.2.63:36324 #277 (90 connections now open) 2019-09-04T06:31:10.917+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:10.917+0000 D2 COMMAND [conn277] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:10.917+0000 I NETWORK [conn277] received client metadata from 10.108.2.63:36324 conn277: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:10.917+0000 I COMMAND [conn277] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:10.920+0000 I COMMAND [conn256] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 788D5538F6F1908EEC9B9DC20AF81546C8F832BC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:10.921+0000 D1 - [conn256] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:10.921+0000 W - [conn256] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:10.926+0000 I COMMAND [conn224] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578632, 1), signature: { hash: BinData(0, 7B0D7364553F28B05AA42FCCC1A524375FA6A20E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:10.926+0000 D1 - [conn224] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:10.926+0000 W - [conn224] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:10.926+0000 I COMMAND [conn229] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578631, 1), signature: { hash: BinData(0, 126D9B97A246E71CB7F886F7E917931B13D03AC4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:10.926+0000 D1 - [conn229] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:10.926+0000 W - [conn229] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:10.931+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:10.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.944+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:10.944+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:10.947+0000 I - [conn256] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:10.948+0000 D1 COMMAND [conn256] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 788D5538F6F1908EEC9B9DC20AF81546C8F832BC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:10.948+0000 D1 - [conn256] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:10.948+0000 W - [conn256] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:10.947+0000 I - [conn240] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:10.949+0000 W COMMAND [conn240] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:10.949+0000 I COMMAND [conn240] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578640, 1), signature: { hash: BinData(0, 9AA6EC49743F043950E16BB5631473231B19B5FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms
2019-09-04T06:31:10.949+0000 D2 NETWORK [conn240] Session from 10.108.2.48:42088 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:10.949+0000 I NETWORK [conn240] end connection 10.108.2.48:42088 (89 connections now open)
2019-09-04T06:31:10.967+0000 I - [conn241] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
[duplicate backtrace omitted: frames, processInfo, and somap are byte-for-byte identical to the conn240 backtrace above (GlobalLock enqueue from CurOp::completeAndLogOperation)]
----- END BACKTRACE -----
2019-09-04T06:31:10.967+0000 W COMMAND [conn241] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:31:10.967+0000 I COMMAND [conn241] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578632, 1), signature: { hash: BinData(0, 7B0D7364553F28B05AA42FCCC1A524375FA6A20E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30041ms
2019-09-04T06:31:10.968+0000 D2 NETWORK [conn241] Session from 10.108.2.72:45734 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:10.968+0000 I NETWORK [conn241] end connection 10.108.2.72:45734 (88 connections now open)
2019-09-04T06:31:10.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:10.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:10.984+0000 I - [conn227] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
[duplicate backtrace omitted: byte-for-byte identical to the waitForReadConcern backtrace at the start of this excerpt]
----- END BACKTRACE -----
2019-09-04T06:31:10.984+0000 D1 COMMAND [conn227] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:10.984+0000 D1 - [conn227] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:31:10.984+0000 W - [conn227] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:11.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:11.001+0000 I - [conn255] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
[duplicate backtrace omitted: byte-for-byte identical to the waitForReadConcern backtrace at the start of this excerpt]
----- END BACKTRACE -----
2019-09-04T06:31:11.001+0000 D1 COMMAND [conn255] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578631, 1), signature: { hash: BinData(0, 126D9B97A246E71CB7F886F7E917931B13D03AC4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:11.001+0000 D1 - [conn255] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:31:11.001+0000 W - [conn255] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:11.031+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:11.039+0000 I - [conn224] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
[duplicate backtrace omitted: byte-for-byte identical to the waitForReadConcern backtrace at the start of this excerpt]
----- END BACKTRACE -----
2019-09-04T06:31:11.039+0000 D1 COMMAND [conn224] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578632, 1), signature: { hash: BinData(0, 7B0D7364553F28B05AA42FCCC1A524375FA6A20E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:11.039+0000 D1 - [conn224] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:31:11.039+0000 W - [conn224] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:11.040+0000 D2 COMMAND [conn72] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578668, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578668, 2), t: 1 } }, $db: "config" }
2019-09-04T06:31:11.040+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578668, 2), t: 1 } } }
2019-09-04T06:31:11.040+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:31:11.040+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578668, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578668, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578668, 2)
2019-09-04T06:31:11.040+0000 D5 QUERY [conn72] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:31:11.040+0000 D5 QUERY [conn72] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:31:11.040+0000 D5 QUERY [conn72] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:31:11.040+0000 D5 QUERY [conn72] Rated tree: $and
2019-09-04T06:31:11.040+0000 D5 QUERY [conn72] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:11.040+0000 D5 QUERY [conn72] Planner: outputting a collscan: COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:31:11.040+0000 D2 QUERY [conn72] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:11.040+0000 D3 STORAGE [conn72] WT begin_transaction for snapshot id 11149 2019-09-04T06:31:11.040+0000 D3 STORAGE [conn72] WT rollback_transaction for snapshot id 11149 2019-09-04T06:31:11.040+0000 I COMMAND [conn72] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578668, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578668, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:31:11.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:11.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:11.058+0000 I - [conn256] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:11.058+0000 W COMMAND [conn256] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:11.058+0000 I COMMAND [conn256] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578633, 1), signature: { hash: BinData(0, 788D5538F6F1908EEC9B9DC20AF81546C8F832BC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30039ms 2019-09-04T06:31:11.058+0000 D2 NETWORK [conn256] Session from 10.108.2.60:44860 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:11.058+0000 I NETWORK [conn256] end connection 10.108.2.60:44860 (87 connections now open) 2019-09-04T06:31:11.061+0000 D2 COMMAND [conn274] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578662, 1), signature: { hash: BinData(0, 22544C1F58F00D2C7A5EB6A40A8F03FCCA24680D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:31:11.061+0000 D1 REPL [conn274] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578668, 2), t: 1 } 2019-09-04T06:31:11.061+0000 D3 REPL [conn274] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.071+0000 2019-09-04T06:31:11.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:11.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:11.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:11.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:11.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506) } 2019-09-04T06:31:11.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 2), signature: { hash: BinData(0, 8900DA819AAEAC07F156CF647ADB297848A9340B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:11.073+0000 D2 COMMAND [conn264] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578661, 1), signature: { hash: BinData(0, 514286DB74F9F77B0D4219622FBBDB9CC9396AD9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:11.073+0000 D1 REPL [conn264] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578668, 2), t: 1 } 2019-09-04T06:31:11.073+0000 D3 REPL [conn264] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.083+0000 2019-09-04T06:31:11.076+0000 I - [conn229] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":
"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : 
"BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:11.076+0000 D1 COMMAND [conn229] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578631, 1), signature: { hash: BinData(0, 126D9B97A246E71CB7F886F7E917931B13D03AC4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:11.076+0000 D1 - [conn229] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:11.076+0000 W - [conn229] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:11.095+0000 I - [conn224] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:11.095+0000 W COMMAND [conn224] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:11.095+0000 I COMMAND [conn224] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578632, 1), signature: { hash: BinData(0, 7B0D7364553F28B05AA42FCCC1A524375FA6A20E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30125ms
2019-09-04T06:31:11.095+0000 D2 NETWORK [conn224] Session from 10.108.2.63:36282 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:11.095+0000 I NETWORK [conn224] end connection 10.108.2.63:36282 (86 connections now open)
2019-09-04T06:31:11.114+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:11.114+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:11.115+0000 I - [conn229] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
2019-09-04T06:31:11.116+0000 W COMMAND [conn229] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:31:11.116+0000 I COMMAND [conn229] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578631, 1), signature: { hash: BinData(0, 126D9B97A246E71CB7F886F7E917931B13D03AC4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30161ms
2019-09-04T06:31:11.116+0000 D2 NETWORK [conn229] Session from 10.108.2.61:37918 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:11.116+0000 I NETWORK [conn229] end connection 10.108.2.61:37918 (85 connections now open)
2019-09-04T06:31:11.116+0000 I - [conn255] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
2019-09-04T06:31:11.117+0000 W COMMAND [conn255] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:31:11.117+0000 I COMMAND [conn255] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts:
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578631, 1), signature: { hash: BinData(0, 126D9B97A246E71CB7F886F7E917931B13D03AC4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30133ms 2019-09-04T06:31:11.117+0000 D2 NETWORK [conn255] Session from 10.108.2.55:36658 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:11.117+0000 I NETWORK [conn255] end connection 10.108.2.55:36658 (84 connections now open) 2019-09-04T06:31:11.131+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:11.134+0000 I - [conn227] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceE
[ duplicate backtrace elided -- byte-for-byte identical to the conn255 backtrace above ]
----- END BACKTRACE -----
2019-09-04T06:31:11.135+0000 W COMMAND [conn227] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:31:11.135+0000 I COMMAND [conn227] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30106ms
2019-09-04T06:31:11.135+0000 D2 NETWORK [conn227] Session from 10.108.2.57:34234 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:11.135+0000 I NETWORK [conn227] end connection 10.108.2.57:34234 (83 connections now open)
2019-09-04T06:31:11.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:11.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:11.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:11.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:11.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:11.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:11.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:11.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:11.231+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:11.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:11.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
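The slow operations above all share one shape: a find on admin.system.keys carrying maxTimeMS: 30000 and a majority readConcern with an afterOpTime at term 92, each failing with errName:MaxTimeMSExpired (errCode:50) after roughly 30 seconds. The time limit is requested by the client and enforced by the server. A minimal sketch of how a client sets it, assuming only a reachable mongod on localhost:27019 (the address is illustrative, not taken from this log's topology); pymongo surfaces server error code 50 as ExecutionTimeout:

# Illustrative sketch: issue the same shape of query the clients in this
# log send, with a 30 s server-side limit (maxTimeMS).
from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout
from bson.timestamp import Timestamp

client = MongoClient("localhost", 27019)  # assumed address, not from this log
keys = client.admin["system.keys"]

try:
    docs = list(
        keys.find({"purpose": "HMAC", "expiresAt": {"$gt": Timestamp(1579858365, 0)}})
            .sort("expiresAt", 1)
            .max_time_ms(30000)  # server aborts the operation after 30 000 ms
    )
except ExecutionTimeout as exc:
    # Raised for server error code 50 (MaxTimeMSExpired), matching errCode:50 above.
    print("operation exceeded time limit:", exc)

Note that in this log the 30 seconds are spent before the query proper ever runs: the server is waiting for the requested readConcern opTime to be majority-committed (see the waitUntilOpTime records further down).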
2019-09-04T06:31:11.257+0000 D2 COMMAND [conn113] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:11.257+0000 I COMMAND [conn113] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:11.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:11.273+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:11.331+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:11.348+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:11.348+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:11.431+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:11.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:11.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:11.525+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:11.525+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:11.525+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578668, 2)
2019-09-04T06:31:11.525+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11166
2019-09-04T06:31:11.525+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:11.525+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:11.525+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11166
2019-09-04T06:31:11.526+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11169
2019-09-04T06:31:11.526+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11169
2019-09-04T06:31:11.526+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578668, 2), t: 1 }({ ts: Timestamp(1567578668, 2), t: 1 })
2019-09-04T06:31:11.531+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:11.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:11.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:11.613+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:11.613+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:11.631+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:11.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:11.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:11.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:11.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:11.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:11.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:11.731+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:11.794+0000 D2 COMMAND [conn81] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:11.794+0000 I COMMAND [conn81] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:11.832+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:11.848+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:11.848+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:11.932+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:11.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:11.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:12.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:12.032+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:12.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:12.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:12.132+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:12.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:12.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:12.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:12.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:12.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:12.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:12.232+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:12.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578670, 1), signature: { hash: BinData(0, 3E0B77969EE6400C1E9373D051BC52C30F99C070), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:12.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
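Aside from the command payloads, every record in this pre-4.4 plain-text format carries the same five fields: timestamp, severity (I, W, E, F, or D1-D5), component, [context], message. A small parsing sketch, assuming one record per line as the records are re-broken here:

import re

# <timestamp> <severity> <component> [<context>] <message>
RECORD = re.compile(
    r"^(?P<ts>\S+)\s+"         # 2019-09-04T06:31:12.232+0000
    r"(?P<sev>[IWEFD]\d?)\s+"  # I, W, E, F, or D1-D5
    r"(?P<comp>\S+)\s+"        # COMMAND, NETWORK, REPL_HB, or - for none
    r"\[(?P<ctx>[^\]]+)\]\s+"  # conn28, listener, WTJournalFlusher, ...
    r"(?P<msg>.*)$"
)

def parse(line):
    m = RECORD.match(line)
    return m.groupdict() if m else None

print(parse("2019-09-04T06:31:12.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat"))
# {'ts': '2019-09-04T06:31:12.232+0000', 'sev': 'D2', 'comp': 'COMMAND',
#  'ctx': 'conn28', 'msg': 'command: replSetHeartbeat'}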
2019-09-04T06:31:12.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578670, 1), signature: { hash: BinData(0, 3E0B77969EE6400C1E9373D051BC52C30F99C070), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:12.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578670, 1), signature: { hash: BinData(0, 3E0B77969EE6400C1E9373D051BC52C30F99C070), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:12.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506) }
2019-09-04T06:31:12.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578670, 1), signature: { hash: BinData(0, 3E0B77969EE6400C1E9373D051BC52C30F99C070), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:12.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:12.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:12.283+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48372 #278 (84 connections now open)
2019-09-04T06:31:12.283+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:31:12.283+0000 D2 COMMAND [conn278] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:31:12.283+0000 I NETWORK [conn278] received client metadata from 10.108.2.59:48372 conn278: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:31:12.283+0000 I COMMAND [conn278] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:31:12.297+0000 I COMMAND [conn257] Command on database admin timed out waiting for read concern to be satisfied.
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578641, 1), signature: { hash: BinData(0, 2D0FF26358BD656234793C721CD1E7FBC2D07432), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:31:12.297+0000 D1 - [conn257] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:12.297+0000 W - [conn257] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:12.313+0000 I - [conn257] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:12.313+0000 D1 COMMAND [conn257] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578641, 1), signature: { hash: BinData(0, 2D0FF26358BD656234793C721CD1E7FBC2D07432), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:12.313+0000 D1 - [conn257] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:31:12.313+0000 W - [conn257] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:12.319+0000 D2 COMMAND [conn260] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578662, 1), signature: { hash: BinData(0, 22544C1F58F00D2C7A5EB6A40A8F03FCCA24680D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:31:12.319+0000 D1 REPL [conn260] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578668, 2), t: 1 }
2019-09-04T06:31:12.319+0000 D3 REPL [conn260] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.329+0000
2019-09-04T06:31:12.332+0000 D4 STORAGE [WTJournalFlusher] flushed journal
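The waitUntilOpTime records make the stall concrete: conn260 waits for { ts: Timestamp(1566459168, 1), t: 92 } to appear in a majority snapshot, while the newest snapshot on this node is { ts: Timestamp(1567578668, 2), t: 1 }. The first field of a BSON Timestamp is seconds since the Unix epoch, so the two are directly comparable: the clients are pinned to a config opTime from term 92 that is roughly 13 days old, while this config server's history is on term 1, which is consistent with every such read blocking until maxTimeMS expires. A quick check using only the standard library:

from datetime import datetime, timezone

# The first field of a BSON Timestamp(t, i) is seconds since the Unix epoch.
requested = datetime.fromtimestamp(1566459168, tz=timezone.utc)  # opTime the client waits for (term 92)
snapshot = datetime.fromtimestamp(1567578668, tz=timezone.utc)   # newest majority snapshot here (term 1)

print(requested.isoformat())  # 2019-08-22T07:32:48+00:00
print(snapshot.isoformat())   # 2019-09-04T06:31:08+00:00
print(snapshot - requested)   # 12 days, 22:58:20

The wallTime in the heartbeat response above, new Date(1567578668506), confirms the epoch reading: it is the same instant expressed in milliseconds.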
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:12.333+0000 W COMMAND [conn257] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:12.333+0000 I COMMAND [conn257] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578641, 1), signature: { hash: BinData(0, 2D0FF26358BD656234793C721CD1E7FBC2D07432), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:31:12.334+0000 D2 NETWORK [conn257] Session from 10.108.2.59:48356 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:12.334+0000 I NETWORK [conn257] end connection 10.108.2.59:48356 (83 connections now open) 2019-09-04T06:31:12.432+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:12.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:12.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:12.511+0000 I COMMAND [conn242] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:12.511+0000 D1 - [conn242] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:12.511+0000 W - [conn242] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:12.525+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:12.525+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:12.525+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578668, 2) 2019-09-04T06:31:12.525+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11189 2019-09-04T06:31:12.525+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:12.525+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:12.525+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11189 2019-09-04T06:31:12.526+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11192 2019-09-04T06:31:12.526+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11192 2019-09-04T06:31:12.526+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578668, 2), t: 1 }({ ts: Timestamp(1567578668, 2), t: 1 }) 2019-09-04T06:31:12.528+0000 I - [conn242] 
0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : 
"7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : 
"/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:12.528+0000 D1 COMMAND [conn242] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:12.528+0000 D1 - [conn242] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:12.528+0000 W - [conn242] DBException thrown :: 
caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:12.532+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:12.548+0000 I - [conn242] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : 
"3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, 
{ "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] 
mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:12.548+0000 W COMMAND [conn242] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:31:12.548+0000 I COMMAND [conn242] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:31:12.548+0000 D2 NETWORK [conn242] Session from 10.108.2.54:49178 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:12.548+0000 I NETWORK [conn242] end connection 10.108.2.54:49178 (82 connections now open) 2019-09-04T06:31:12.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:12.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:12.633+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:12.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:12.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:12.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:12.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:12.679+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:12.679+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:12.697+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:12.697+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:12.698+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49214 #279 (83 connections now open) 2019-09-04T06:31:12.698+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:12.698+0000 D2 COMMAND [conn279] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:12.698+0000 I NETWORK [conn279] 
received client metadata from 10.108.2.54:49214 conn279: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:12.698+0000 I COMMAND [conn279] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:12.699+0000 D2 COMMAND [conn279] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 1), signature: { hash: BinData(0, 6940E12D4AC1B4BCB13CFF3D9A7E2572F61E6255), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:12.699+0000 D1 REPL [conn279] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578668, 2), t: 1 } 2019-09-04T06:31:12.699+0000 D3 REPL [conn279] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.709+0000 2019-09-04T06:31:12.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:12.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:12.733+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:12.833+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:12.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:12.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 754) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:12.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 754 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:22.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:12.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:40.839+0000 2019-09-04T06:31:12.838+0000 D2 ASIO [Replication] Request 754 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), 
primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578671, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } 2019-09-04T06:31:12.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578671, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:12.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:12.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 754) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578671, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } 2019-09-04T06:31:12.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:31:12.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:14.838Z 2019-09-04T06:31:12.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:40.839+0000 2019-09-04T06:31:12.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:12.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 755) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:12.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 755 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:22.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 
2019-09-04T06:31:12.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:40.839+0000 2019-09-04T06:31:12.839+0000 D2 ASIO [Replication] Request 755 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578671, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } 2019-09-04T06:31:12.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578671, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:31:12.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:12.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 755) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578671, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } 2019-09-04T06:31:12.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:31:12.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:31:21.491+0000 
2019-09-04T06:31:12.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:31:23.693+0000 2019-09-04T06:31:12.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:31:12.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:14.839Z 2019-09-04T06:31:12.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:12.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:42.839+0000 2019-09-04T06:31:12.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:42.839+0000 2019-09-04T06:31:12.933+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:12.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:12.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:13.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:13.033+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:13.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:13.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:13.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578671, 1), signature: { hash: BinData(0, 992E3451CA81D7F71CF7225EF5B2E77DE69E63F2), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:13.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:13.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578671, 1), signature: { hash: BinData(0, 992E3451CA81D7F71CF7225EF5B2E77DE69E63F2), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:13.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578671, 1), signature: { hash: BinData(0, 992E3451CA81D7F71CF7225EF5B2E77DE69E63F2), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:13.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506) } 2019-09-04T06:31:13.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578671, 1), signature: { hash: BinData(0, 992E3451CA81D7F71CF7225EF5B2E77DE69E63F2), keyId: 
6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:13.133+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:13.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:13.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:13.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:13.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:13.178+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:13.178+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:13.195+0000 I COMMAND [conn231] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, D3B29E27081E353619E4BBAFF34512E8BADA791D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:13.195+0000 D1 - [conn231] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:13.195+0000 W - [conn231] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:13.197+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:13.197+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:13.212+0000 I - [conn231] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:13.212+0000 D1 COMMAND [conn231] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, D3B29E27081E353619E4BBAFF34512E8BADA791D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:13.212+0000 D1 - [conn231] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:13.212+0000 W - [conn231] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:13.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:13.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:13.219+0000 D2 COMMAND [conn5] run command 
admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:13.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:13.232+0000 I - [conn231] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", 
"compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", 
"elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:13.232+0000 W COMMAND [conn231] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:31:13.232+0000 I COMMAND [conn231] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, D3B29E27081E353619E4BBAFF34512E8BADA791D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:31:13.232+0000 D2 NETWORK [conn231] Session from 10.108.2.62:53426 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:13.232+0000 I NETWORK [conn231] end connection 10.108.2.62:53426 (82 connections now open) 2019-09-04T06:31:13.233+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:13.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:13.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:13.333+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:13.414+0000 I COMMAND [conn237] Command on database config timed out waiting for read concern to be satisfied.
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:13.414+0000 D1 - [conn237] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:13.414+0000 W - [conn237] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:13.431+0000 I - [conn237] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:13.431+0000 D1 COMMAND [conn237] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:13.431+0000 D1 - [conn237] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:13.431+0000 W - [conn237] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:13.434+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:13.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:13.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:13.452+0000 I - [conn237] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23
ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : 
"7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) 
[0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:13.452+0000 W COMMAND [conn237] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:31:13.452+0000 I COMMAND [conn237] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms 2019-09-04T06:31:13.452+0000 D2 NETWORK [conn237] Session from 10.108.2.45:36530 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:13.452+0000 I NETWORK [conn237] end connection 10.108.2.45:36530 (81 connections now open) 2019-09-04T06:31:13.526+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:13.526+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:13.526+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578668, 2) 2019-09-04T06:31:13.526+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11214 2019-09-04T06:31:13.526+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:13.526+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:13.526+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11214 2019-09-04T06:31:13.527+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11217 2019-09-04T06:31:13.527+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11217 2019-09-04T06:31:13.527+0000 D3 REPL [rsSync-0] returning minvalid: { ts:
Timestamp(1567578668, 2), t: 1 }({ ts: Timestamp(1567578668, 2), t: 1 }) 2019-09-04T06:31:13.530+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:13.530+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:13.530+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 756 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:43.530+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:13.530+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.526+0000 2019-09-04T06:31:13.530+0000 D2 ASIO [RS] Request 756 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } 2019-09-04T06:31:13.530+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new 
Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:13.530+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:13.530+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.526+0000 2019-09-04T06:31:13.531+0000 D2 ASIO [RS] Request 749 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpApplied: { ts: Timestamp(1567578668, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } 2019-09-04T06:31:13.531+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpApplied: { ts: Timestamp(1567578668, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:13.531+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:13.531+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:31:13.531+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:23.693+0000 2019-09-04T06:31:13.531+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:23.965+0000 2019-09-04T06:31:13.531+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:13.531+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 757 -- target:[cmodb804.togewa.com:27019] db:local 
expDate:2019-09-04T06:31:23.531+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578668, 2), t: 1 } } 2019-09-04T06:31:13.531+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:42.839+0000 2019-09-04T06:31:13.531+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:38.526+0000 2019-09-04T06:31:13.534+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:13.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:13.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:13.634+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:13.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:13.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:13.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:13.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:13.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:13.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:13.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:13.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:13.734+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:13.834+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:13.934+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:13.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:13.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:14.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:14.034+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:14.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:14.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:14.135+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:14.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:14.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:14.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:14.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
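The MaxTimeMSExpired failures recorded above for conn231 and conn237 (and again for conn258 below) share one detail: each find carries readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, sent by a client whose cached $configServerState still references term 92, while every $replData block this node emits reports term 1 and a majority-committed opTime of { ts: Timestamp(1567578668, 2), t: 1 }. Because optimes compare by term before timestamp, a wait for an afterOpTime in term 92 cannot complete while the set is in term 1, so each such read blocks for the full 30000 ms maxTimeMS and fails. A minimal sketch of how this could be confirmed from the mongo shell against this config server, using only values taken from the log (the 5000 ms timeout is an arbitrary choice to fail fast):

// Majority-committed opTime of this config replica set; per the $replData
// entries above it is { ts: Timestamp(1567578668, 2), t: 1 }, i.e. term 1.
printjson(rs.status().optimes.lastCommittedOpTime);

// Re-issue the logged read with the same afterOpTime (term 92). It is
// expected to keep failing with MaxTimeMSExpired for as long as the
// committed opTime's term remains below 92.
printjson(db.getSiblingDB("config").runCommand({
    find: "shards",
    readConcern: {
        level: "majority",
        afterOpTime: { ts: Timestamp(1566459168, 1), t: NumberLong(92) }
    },
    maxTimeMS: 5000
}));

A requested term that far ahead of the live set would be consistent with the requesters holding a config opTime cached before this config replica set was re-initialized, though that inference goes beyond what the log itself states.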
2019-09-04T06:31:14.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:14.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:14.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:14.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:14.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 1C8A642162D6DCA1AAB4CF0418EFC21D84314F82), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:14.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:31:14.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 1C8A642162D6DCA1AAB4CF0418EFC21D84314F82), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:14.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 1C8A642162D6DCA1AAB4CF0418EFC21D84314F82), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:14.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506) } 2019-09-04T06:31:14.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 1C8A642162D6DCA1AAB4CF0418EFC21D84314F82), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:14.235+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:14.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:14.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:31:14.335+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:14.419+0000 I NETWORK [listener] connection accepted from 10.108.2.62:53468 #280 (82 connections now open) 2019-09-04T06:31:14.419+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:14.419+0000 D2 COMMAND [conn280] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:14.419+0000 I NETWORK [conn280] received client metadata from 10.108.2.62:53468 conn280: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:14.419+0000 I COMMAND [conn280] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:14.434+0000 I COMMAND [conn258] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, D3B29E27081E353619E4BBAFF34512E8BADA791D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:14.434+0000 D1 - [conn258] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:14.434+0000 W - [conn258] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:14.435+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:14.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:14.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:14.451+0000 I - [conn258] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNex
tInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : 
"7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) 
[0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:31:14.451+0000 D1 COMMAND [conn258] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, D3B29E27081E353619E4BBAFF34512E8BADA791D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:14.451+0000 D1 - [conn258] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:31:14.451+0000 W - [conn258] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:14.471+0000 I - [conn258] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(+0xC90D34) [0x561749c18d34]
 mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
 mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
 mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
 mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
 mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:31:14.471+0000 W COMMAND [conn258] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:31:14.471+0000 I COMMAND [conn258] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, D3B29E27081E353619E4BBAFF34512E8BADA791D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms
2019-09-04T06:31:14.471+0000 D2 NETWORK [conn258] Session from 10.108.2.62:53452 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:14.471+0000 I NETWORK [conn258] end connection 10.108.2.62:53452 (81 connections now open)
2019-09-04T06:31:14.526+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:14.526+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:14.526+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578668, 2)
2019-09-04T06:31:14.526+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11236
2019-09-04T06:31:14.526+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:14.526+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:14.526+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11236
2019-09-04T06:31:14.527+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11239
2019-09-04T06:31:14.527+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11239
2019-09-04T06:31:14.527+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578668, 2), t: 1 }({ ts: Timestamp(1567578668, 2), t: 1 })
2019-09-04T06:31:14.535+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:14.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:14.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:14.635+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:14.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:14.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:14.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:14.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:14.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:14.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:14.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:14.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:14.735+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:14.835+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:14.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:14.838+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:31:13.063+0000
2019-09-04T06:31:14.838+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:31:14.232+0000
2019-09-04T06:31:14.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:14.838+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:31:13.063+0000
2019-09-04T06:31:14.838+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:31:23.063+0000
2019-09-04T06:31:14.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:44.838+0000
2019-09-04T06:31:14.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 758) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:14.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 758 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:24.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:14.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:44.838+0000
2019-09-04T06:31:14.838+0000 D2 ASIO [Replication] Request 758 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) }
2019-09-04T06:31:14.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:14.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:14.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 758) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) }
2019-09-04T06:31:14.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:31:14.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:16.838Z
2019-09-04T06:31:14.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:44.838+0000
2019-09-04T06:31:14.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:14.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 759) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:14.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 759 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:24.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:14.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:44.838+0000
2019-09-04T06:31:14.839+0000 D2 ASIO [Replication] Request 759 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) }
2019-09-04T06:31:14.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:31:14.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:14.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 759) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578668, 2) }
2019-09-04T06:31:14.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:31:14.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:31:23.965+0000
2019-09-04T06:31:14.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:31:25.037+0000
2019-09-04T06:31:14.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:31:14.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:16.839Z
2019-09-04T06:31:14.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:14.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:44.839+0000
2019-09-04T06:31:14.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:44.839+0000
2019-09-04T06:31:14.936+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:14.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:14.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:15.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:15.036+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:15.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:15.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:15.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 1C8A642162D6DCA1AAB4CF0418EFC21D84314F82), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:15.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:31:15.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 1C8A642162D6DCA1AAB4CF0418EFC21D84314F82), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:15.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 1C8A642162D6DCA1AAB4CF0418EFC21D84314F82), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:15.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), opTime: { ts: Timestamp(1567578668, 2), t: 1 }, wallTime: new Date(1567578668506) }
2019-09-04T06:31:15.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 1C8A642162D6DCA1AAB4CF0418EFC21D84314F82), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:15.125+0000 D2 ASIO [RS] Request 757 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578675, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578675120), o: { $v: 1, $set: { ping: new Date(1567578675117), up: 2575 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpApplied: { ts: Timestamp(1567578675, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578675, 1) }
2019-09-04T06:31:15.125+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578675, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578675120), o: { $v: 1, $set: { ping: new Date(1567578675117), up: 2575 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpApplied: { ts: Timestamp(1567578675, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578675, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:15.125+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:15.125+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578675, 1) and ending at ts: Timestamp(1567578675, 1)
2019-09-04T06:31:15.125+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:25.037+0000
2019-09-04T06:31:15.125+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:25.917+0000
2019-09-04T06:31:15.125+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:15.125+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:44.839+0000
2019-09-04T06:31:15.125+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578675, 1), t: 1 }
2019-09-04T06:31:15.125+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:15.125+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:15.125+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578668, 2)
2019-09-04T06:31:15.125+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11251
2019-09-04T06:31:15.125+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:15.125+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:15.125+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11251
2019-09-04T06:31:15.126+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:15.126+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:15.126+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:31:15.126+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578668, 2)
2019-09-04T06:31:15.126+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11254
2019-09-04T06:31:15.126+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:15.126+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:15.126+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578675, 1) }
2019-09-04T06:31:15.126+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11254
2019-09-04T06:31:15.126+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11240
2019-09-04T06:31:15.126+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11240
2019-09-04T06:31:15.126+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11257
2019-09-04T06:31:15.126+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11257
2019-09-04T06:31:15.126+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:15.126+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 11259
2019-09-04T06:31:15.126+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578675, 1)
2019-09-04T06:31:15.126+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578675, 1)
2019-09-04T06:31:15.126+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 11259
2019-09-04T06:31:15.126+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:15.126+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:31:15.126+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11258
2019-09-04T06:31:15.126+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11258
2019-09-04T06:31:15.126+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11261
2019-09-04T06:31:15.126+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11261
2019-09-04T06:31:15.126+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578675, 1), t: 1 }({ ts: Timestamp(1567578675, 1), t: 1 })
2019-09-04T06:31:15.126+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578675, 1)
2019-09-04T06:31:15.126+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11262
2019-09-04T06:31:15.126+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578675, 1) } } ] } sort: {} projection: {}
2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578675, 1) Sort: {} Proj: {} =============================
2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578675, 1) || First: notFirst: full path: ts
2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578675, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578675, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578675, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:15.126+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578675, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:15.126+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11262 2019-09-04T06:31:15.126+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:15.126+0000 D3 STORAGE [repl-writer-worker-13] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:15.126+0000 D3 REPL [repl-writer-worker-13] applying op: { ts: Timestamp(1567578675, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578675120), o: { $v: 1, $set: { ping: new Date(1567578675117), up: 2575 } } }, oplog application mode: Secondary 2019-09-04T06:31:15.126+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578675, 1) 2019-09-04T06:31:15.126+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 11264 2019-09-04T06:31:15.126+0000 D2 QUERY [repl-writer-worker-13] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:31:15.126+0000 D4 WRITE [repl-writer-worker-13] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:31:15.126+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 11264 2019-09-04T06:31:15.127+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:15.127+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578675, 1), t: 1 }({ ts: Timestamp(1567578675, 1), t: 1 }) 2019-09-04T06:31:15.127+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578675, 1) 2019-09-04T06:31:15.127+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11263 2019-09-04T06:31:15.127+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:31:15.127+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:15.127+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:31:15.127+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:15.127+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:15.127+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:15.127+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11263 2019-09-04T06:31:15.127+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578675, 1) 2019-09-04T06:31:15.127+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:15.127+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11267 2019-09-04T06:31:15.127+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:15.127+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 760 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:45.127+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:15.127+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:45.127+0000 2019-09-04T06:31:15.127+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11267 2019-09-04T06:31:15.127+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578675, 1), t: 1 }({ ts: Timestamp(1567578675, 1), t: 1 }) 2019-09-04T06:31:15.127+0000 D2 ASIO [RS] Request 760 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578675, 1) } 2019-09-04T06:31:15.127+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578675, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:15.127+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:15.127+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:45.127+0000 2019-09-04T06:31:15.127+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578675, 1), t: 1 } 2019-09-04T06:31:15.127+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 761 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:25.127+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578668, 2), t: 1 } } 2019-09-04T06:31:15.127+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:45.127+0000 2019-09-04T06:31:15.128+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:31:15.128+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:15.128+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), 
t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:15.128+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 762 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:45.128+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, durableWallTime: new Date(1567578668506), appliedOpTime: { ts: Timestamp(1567578668, 2), t: 1 }, appliedWallTime: new Date(1567578668506), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:15.128+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:45.127+0000 2019-09-04T06:31:15.128+0000 D2 ASIO [RS] Request 762 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578675, 1) } 2019-09-04T06:31:15.128+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578668, 2), t: 1 }, lastCommittedWall: new Date(1567578668506), lastOpVisible: { ts: Timestamp(1567578668, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578668, 2), $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578675, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:15.128+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:15.128+0000 D3 EXECUTOR [replication-0] Not reaping 
because the earliest retirement date is 2019-09-04T06:31:45.127+0000 2019-09-04T06:31:15.128+0000 D2 ASIO [RS] Request 761 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpApplied: { ts: Timestamp(1567578675, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578675, 1), $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578675, 1) } 2019-09-04T06:31:15.128+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpApplied: { ts: Timestamp(1567578675, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578675, 1), $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578675, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:15.128+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:15.128+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:31:15.128+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.128+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.129+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578670, 1) 2019-09-04T06:31:15.129+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:25.917+0000 2019-09-04T06:31:15.129+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:26.109+0000 2019-09-04T06:31:15.129+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 763 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:25.129+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578675, 1), t: 1 } } 2019-09-04T06:31:15.129+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:45.127+0000 2019-09-04T06:31:15.129+0000 D3 REPL 
[conn234] Got notified of new snapshot: { ts: Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn234] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.033+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn266] Got notified of new snapshot: { ts: Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn266] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn263] Got notified of new snapshot: { ts: Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn263] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.415+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn274] Got notified of new snapshot: { ts: Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn274] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.071+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn262] Got notified of new snapshot: { ts: Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn262] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.051+0000 2019-09-04T06:31:15.129+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:15.129+0000 D3 REPL [conn268] Got notified of new snapshot: { ts: Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn268] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:22.595+0000 2019-09-04T06:31:15.129+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:44.839+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn260] Got notified of new snapshot: { ts: Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn260] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.329+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn279] Got notified of new snapshot: { ts: Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn279] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.709+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn233] Got notified of new snapshot: { ts: Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn233] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.024+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn269] Got notified of new snapshot: { ts: Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn269] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:25.060+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn267] Got notified of new snapshot: { ts: Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn267] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn261] Got notified of new snapshot: { ts: Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn261] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.023+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn259] Got notified of new snapshot: { ts: 
Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn259] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:24.151+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn236] Got notified of new snapshot: { ts: Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn236] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:16.040+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn265] Got notified of new snapshot: { ts: Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn265] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.660+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn264] Got notified of new snapshot: { ts: Timestamp(1567578675, 1), t: 1 }, 2019-09-04T06:31:15.120+0000 2019-09-04T06:31:15.129+0000 D3 REPL [conn264] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.083+0000 2019-09-04T06:31:15.136+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:15.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:15.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:15.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:15.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:15.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:15.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:15.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:15.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:15.226+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578675, 1) 2019-09-04T06:31:15.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:15.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
2019-09-04T06:31:15.236+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:15.336+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:15.436+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:15.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:15.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:15.536+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:15.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:15.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:15.636+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:15.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:15.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:15.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:15.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:15.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:15.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:15.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:15.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:15.737+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:15.837+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:15.937+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:15.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:15.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:16.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:16.012+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52170 #281 (82 connections now open)
2019-09-04T06:31:16.012+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:31:16.012+0000 D2 COMMAND [conn281] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:31:16.012+0000 I NETWORK [conn281] received client metadata from 10.108.2.58:52170 conn281: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
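Connection #281 is an intra-cluster client: the driver name NetworkInterfaceTL is mongod/mongos's own networking layer, and the handshake metadata it sends is what the server echoes back in the "received client metadata" lines. Application drivers send the same document automatically during the handshake; the one field a caller normally controls is appname. A small PyMongo sketch (the appname value is hypothetical; host and port come from this log):

    from pymongo import MongoClient

    # The driver/os fields of the metadata document are filled in by PyMongo
    # itself; appname is the caller-supplied field that shows up in the
    # server's "received client metadata" log line.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019",
                         appname="log-inspection-demo")
    print(client.admin.command("isMaster")["ismaster"])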
", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:16.012+0000 I COMMAND [conn281] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:16.018+0000 I NETWORK [listener] connection accepted from 10.108.2.51:59182 #282 (83 connections now open) 2019-09-04T06:31:16.018+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:16.018+0000 D2 COMMAND [conn282] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:16.018+0000 I NETWORK [conn282] received client metadata from 10.108.2.51:59182 conn282: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:16.018+0000 I COMMAND [conn282] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:16.024+0000 I COMMAND [conn261] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578643, 1), signature: { hash: BinData(0, 77F4286AFD23F9458372B5E8BE90AECE0C1F6CA6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:16.024+0000 D1 - [conn261] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:16.024+0000 W - [conn261] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:16.025+0000 I COMMAND [conn233] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578639, 1), signature: { hash: BinData(0, A2EE588ECDB33D4C640333310703F752DA8D0A68), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:16.025+0000 D1 - [conn233] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:16.025+0000 W - [conn233] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:16.028+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46654 #283 (84 connections now open) 2019-09-04T06:31:16.028+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:16.028+0000 D2 COMMAND [conn283] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:16.029+0000 I NETWORK [conn283] received client metadata from 10.108.2.64:46654 conn283: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:16.029+0000 I COMMAND [conn283] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:16.032+0000 I NETWORK [listener] connection accepted from 10.108.2.47:56568 #284 (85 connections now open) 2019-09-04T06:31:16.032+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:16.032+0000 D2 COMMAND [conn284] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:16.032+0000 I NETWORK [conn284] received client metadata from 10.108.2.47:56568 conn284: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:16.032+0000 I COMMAND [conn284] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: 
{ type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:16.033+0000 I COMMAND [conn234] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 05433FFC1F19D7D8CE5BDF35FCE276DA0CD821FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:16.034+0000 D1 - [conn234] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:16.034+0000 W - [conn234] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:16.034+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36570 #285 (86 connections now open) 2019-09-04T06:31:16.034+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:16.034+0000 D2 COMMAND [conn285] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:16.034+0000 I NETWORK [conn285] received client metadata from 10.108.2.45:36570 conn285: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:16.034+0000 I COMMAND [conn285] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:16.037+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:16.041+0000 I COMMAND [conn236] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 05433FFC1F19D7D8CE5BDF35FCE276DA0CD821FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:16.041+0000 D1 - [conn236] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:16.041+0000 W - [conn236] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:16.044+0000 I - [conn261] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:16.044+0000 D1 COMMAND [conn261] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578643, 1), signature: { hash: BinData(0, 77F4286AFD23F9458372B5E8BE90AECE0C1F6CA6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:16.044+0000 D1 - [conn261] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:31:16.044+0000 W - [conn261] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:16.052+0000 I COMMAND [conn262] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:31:16.052+0000 D1 - [conn262] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:31:16.052+0000 W - [conn262] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:16.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:16.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:16.061+0000 I - [conn236] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:16.062+0000 D1 COMMAND [conn236] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 05433FFC1F19D7D8CE5BDF35FCE276DA0CD821FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:16.062+0000 D1 - [conn236] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:16.062+0000 W - [conn236] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:16.099+0000 I - [conn234] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 
2019-09-04T06:31:16.099+0000 D1 COMMAND [conn234] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 05433FFC1F19D7D8CE5BDF35FCE276DA0CD821FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:16.099+0000 D1 - [conn234] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:31:16.099+0000 W - [conn234] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:16.119+0000 I - [conn236] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa
0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 
3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : 
"94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:16.119+0000 W COMMAND [conn236] Unable to gather storage statistics for a slow operation due to lock aquire timeout 
2019-09-04T06:31:16.119+0000 I COMMAND [conn236] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 05433FFC1F19D7D8CE5BDF35FCE276DA0CD821FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms
2019-09-04T06:31:16.119+0000 D2 NETWORK [conn236] Session from 10.108.2.47:56538 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:16.119+0000 I NETWORK [conn236] end connection 10.108.2.47:56538 (85 connections now open)
2019-09-04T06:31:16.119+0000 I - [conn262] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
mongod(+0x10FBF24) [0x56174a083f24]
mongod(+0x10FCE0E) [0x56174a084e0e]
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:31:16.120+0000 D1 COMMAND [conn262] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:16.120+0000 D1 - [conn262] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:31:16.120+0000 W - [conn262] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:16.126+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:16.126+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:16.126+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578675, 1)
2019-09-04T06:31:16.126+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11290
2019-09-04T06:31:16.126+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:16.126+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:16.126+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11290
2019-09-04T06:31:16.127+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11293
2019-09-04T06:31:16.127+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11293
2019-09-04T06:31:16.127+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578675, 1), t: 1 }({ ts: Timestamp(1567578675, 1), t: 1 })
2019-09-04T06:31:16.137+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:16.141+0000 I - [conn234] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:16.141+0000 W COMMAND [conn234] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:16.141+0000 I COMMAND [conn234] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 05433FFC1F19D7D8CE5BDF35FCE276DA0CD821FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30075ms 2019-09-04T06:31:16.141+0000 D2 NETWORK [conn234] Session from 10.108.2.64:46614 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:16.141+0000 I NETWORK [conn234] end connection 10.108.2.64:46614 (84 connections now open) 2019-09-04T06:31:16.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.161+0000 I - [conn262] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23Servi
2019-09-04T06:31:16.162+0000 W COMMAND [conn262] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:31:16.162+0000 I COMMAND [conn262] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30078ms
2019-09-04T06:31:16.162+0000 D2 NETWORK [conn262] Session from 10.108.2.45:36550 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:16.162+0000 I NETWORK [conn262] end connection 10.108.2.45:36550 (83 connections now open)
2019-09-04T06:31:16.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:16.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:16.196+0000 I - [conn233] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:16.196+0000 D1 COMMAND [conn233] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578639, 1), signature: { hash: BinData(0, A2EE588ECDB33D4C640333310703F752DA8D0A68), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:16.196+0000 D1 - [conn233] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:16.196+0000 W - [conn233] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:16.200+0000 I - [conn261] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:16.200+0000 W COMMAND [conn261] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:16.200+0000 I COMMAND [conn261] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578643, 1), signature: { hash: BinData(0, 77F4286AFD23F9458372B5E8BE90AECE0C1F6CA6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:31:16.200+0000 D2 NETWORK [conn261] Session from 10.108.2.58:52150 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:16.200+0000 I NETWORK [conn261] end connection 10.108.2.58:52150 (82 connections now open) 2019-09-04T06:31:16.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.212+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.212+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.212+0000 D2 COMMAND [conn272] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578669, 1), signature: { hash: BinData(0, E8F838832282F30383549BB7AA2B977F74AA6897), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:16.212+0000 D1 REPL [conn272] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578675, 1), t: 1 } 2019-09-04T06:31:16.212+0000 D3 REPL [conn272] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.222+0000 2019-09-04T06:31:16.214+0000 I NETWORK [listener] connection accepted from 10.108.2.46:41012 #286 (83 connections now open) 2019-09-04T06:31:16.214+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:16.214+0000 D2 COMMAND [conn286] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:16.214+0000 I NETWORK [conn286] received client metadata from 10.108.2.46:41012 conn286: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:16.214+0000 I COMMAND [conn286] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { 
type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:16.214+0000 D2 COMMAND [conn286] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 571EA4AAB3958D7DD4D98129725B60C4FB7E61F0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:31:16.214+0000 D1 REPL [conn286] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578675, 1), t: 1 } 2019-09-04T06:31:16.214+0000 D3 REPL [conn286] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.224+0000 2019-09-04T06:31:16.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.221+0000 I - [conn233] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:16.221+0000 W COMMAND [conn233] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:16.221+0000 I COMMAND [conn233] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578639, 1), signature: { hash: BinData(0, A2EE588ECDB33D4C640333310703F752DA8D0A68), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30181ms 2019-09-04T06:31:16.221+0000 D2 NETWORK [conn233] Session from 10.108.2.51:59146 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:16.221+0000 I NETWORK [conn233] end connection 10.108.2.51:59146 (82 connections now open) 2019-09-04T06:31:16.223+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.223+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.228+0000 D2 COMMAND [conn283] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 571EA4AAB3958D7DD4D98129725B60C4FB7E61F0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:16.228+0000 D1 REPL [conn283] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578675, 1), t: 1 } 2019-09-04T06:31:16.228+0000 D3 REPL [conn283] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.238+0000 2019-09-04T06:31:16.229+0000 D2 COMMAND [conn270] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578670, 1), signature: { hash: BinData(0, 8177E92797B9354FFC128D337C838CA7406FA18D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:16.229+0000 D1 REPL [conn270] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578675, 1), t: 1 } 2019-09-04T06:31:16.229+0000 D3 REPL [conn270] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.239+0000 2019-09-04T06:31:16.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 9936C6191683BD44A9A979851B60780CEEAE8531), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:16.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:31:16.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 9936C6191683BD44A9A979851B60780CEEAE8531), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:16.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 9936C6191683BD44A9A979851B60780CEEAE8531), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:16.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), opTime: { ts: Timestamp(1567578675, 1), t: 1 }, wallTime: new Date(1567578675120) } 2019-09-04T06:31:16.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 9936C6191683BD44A9A979851B60780CEEAE8531), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.234+0000 I NETWORK [listener] connection accepted from 10.108.2.53:50732 #287 (83 connections now open) 2019-09-04T06:31:16.234+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:16.234+0000 D2 COMMAND [conn287] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:16.234+0000 I NETWORK [conn287] received client metadata from 10.108.2.53:50732 conn287: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:16.234+0000 I COMMAND [conn287] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:16.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:16.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:31:16.237+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:16.238+0000 D2 COMMAND [conn287] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 2D7FBB04A997F5BB61222D22B1E0478572811EE3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:16.238+0000 D1 REPL [conn287] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578675, 1), t: 1 } 2019-09-04T06:31:16.238+0000 D3 REPL [conn287] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.248+0000 2019-09-04T06:31:16.302+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.302+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.331+0000 D2 COMMAND [conn125] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:31:16.331+0000 I COMMAND [conn125] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.337+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:16.342+0000 D2 COMMAND [conn178] run command config.$cmd { ping: 1.0, lsid: { id: UUID("4ca3bc30-0f16-4335-a15f-3e7d48b5566e") }, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:31:16.342+0000 I COMMAND [conn178] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("4ca3bc30-0f16-4335-a15f-3e7d48b5566e") }, $clusterTime: { clusterTime: Timestamp(1567578615, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.401+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34278 #288 (84 connections now open) 2019-09-04T06:31:16.401+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:16.402+0000 D2 COMMAND [conn288] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:16.402+0000 I NETWORK [conn288] received client metadata from 10.108.2.57:34278 conn288: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:16.402+0000 I COMMAND [conn288] command admin.$cmd command: isMaster { 
isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:16.416+0000 I COMMAND [conn263] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:16.416+0000 D1 - [conn263] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:16.416+0000 W - [conn263] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:16.433+0000 I - [conn263] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mon
go19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:16.433+0000 D1 COMMAND [conn263] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:16.433+0000 D1 - [conn263] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:16.433+0000 W - [conn263] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:16.437+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:16.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.453+0000 I - [conn263] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:16.453+0000 W COMMAND [conn263] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:16.453+0000 I COMMAND [conn263] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578638, 1), signature: { hash: BinData(0, 01D129354ACF87BDDDE96A49066A029F1BB3D92A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:31:16.453+0000 D2 NETWORK [conn263] Session from 10.108.2.57:34256 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:16.453+0000 I NETWORK [conn263] end connection 10.108.2.57:34256 (83 connections now open) 2019-09-04T06:31:16.538+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:16.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.638+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:16.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.711+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.711+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.711+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.711+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.723+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.723+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.738+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:16.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:16.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:16.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 764) to cmodb804.togewa.com:27019, { replSetHeartbeat: 
"configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:16.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 764 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:26.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:16.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:44.839+0000 2019-09-04T06:31:16.838+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:16.838+0000 D2 ASIO [Replication] Request 764 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), opTime: { ts: Timestamp(1567578675, 1), t: 1 }, wallTime: new Date(1567578675120), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578675, 1), $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578675, 1) } 2019-09-04T06:31:16.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), opTime: { ts: Timestamp(1567578675, 1), t: 1 }, wallTime: new Date(1567578675120), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578675, 1), $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578675, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:16.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:16.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 764) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), opTime: { ts: Timestamp(1567578675, 1), t: 1 }, wallTime: new Date(1567578675120), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578675, 1), $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578675, 1) } 2019-09-04T06:31:16.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:31:16.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:18.838Z 2019-09-04T06:31:16.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:44.839+0000 2019-09-04T06:31:16.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:16.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 765) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:16.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 765 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:26.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:16.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:44.839+0000 2019-09-04T06:31:16.839+0000 D2 ASIO [Replication] Request 765 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), opTime: { ts: Timestamp(1567578675, 1), t: 1 }, wallTime: new Date(1567578675120), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578675, 1), $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578675, 1) } 2019-09-04T06:31:16.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), opTime: { ts: Timestamp(1567578675, 1), t: 1 }, wallTime: new Date(1567578675120), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578675, 1), $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578675, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:31:16.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:16.839+0000 D2 
REPL_HB [replexec-3] Received response to heartbeat (requestId: 765) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), opTime: { ts: Timestamp(1567578675, 1), t: 1 }, wallTime: new Date(1567578675120), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578675, 1), $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578675, 1) } 2019-09-04T06:31:16.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:31:16.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:31:26.109+0000 2019-09-04T06:31:16.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:31:28.006+0000 2019-09-04T06:31:16.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:31:16.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:18.839Z 2019-09-04T06:31:16.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:16.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:46.839+0000 2019-09-04T06:31:16.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:46.839+0000 2019-09-04T06:31:16.938+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:16.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:16.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:17.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:17.038+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:17.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:17.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:17.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 9936C6191683BD44A9A979851B60780CEEAE8531), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:17.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:17.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 
9936C6191683BD44A9A979851B60780CEEAE8531), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:17.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 9936C6191683BD44A9A979851B60780CEEAE8531), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:17.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), opTime: { ts: Timestamp(1567578675, 1), t: 1 }, wallTime: new Date(1567578675120) } 2019-09-04T06:31:17.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 9936C6191683BD44A9A979851B60780CEEAE8531), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:17.126+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:17.126+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:17.126+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578675, 1) 2019-09-04T06:31:17.126+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11329 2019-09-04T06:31:17.126+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:17.126+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:17.126+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11329 2019-09-04T06:31:17.127+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11332 2019-09-04T06:31:17.127+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11332 2019-09-04T06:31:17.127+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578675, 1), t: 1 }({ ts: Timestamp(1567578675, 1), t: 1 }) 2019-09-04T06:31:17.138+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:17.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:17.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:17.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:17.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:17.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:17.218+0000 I COMMAND [conn60] 
command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:17.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:17.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:17.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:17.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:17.239+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:17.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:17.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:17.339+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:17.356+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:17.356+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:17.439+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:17.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:17.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:17.516+0000 D2 ASIO [RS] Request 763 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578677, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578677507), o: { $v: 1, $set: { ping: new Date(1567578677507) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpApplied: { ts: Timestamp(1567578677, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578675, 1), $clusterTime: { clusterTime: Timestamp(1567578677, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 1) } 2019-09-04T06:31:17.516+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578677, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578677507), o: { $v: 1, $set: { ping: new Date(1567578677507) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), 
t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpApplied: { ts: Timestamp(1567578677, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578675, 1), $clusterTime: { clusterTime: Timestamp(1567578677, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:17.516+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:17.516+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578677, 1) and ending at ts: Timestamp(1567578677, 1) 2019-09-04T06:31:17.516+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:28.006+0000 2019-09-04T06:31:17.516+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:27.570+0000 2019-09-04T06:31:17.516+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:17.516+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:46.839+0000 2019-09-04T06:31:17.516+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578677, 1), t: 1 } 2019-09-04T06:31:17.516+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:17.516+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:17.516+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578675, 1) 2019-09-04T06:31:17.516+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11343 2019-09-04T06:31:17.516+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:17.516+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:17.516+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11343 2019-09-04T06:31:17.516+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:17.516+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:31:17.516+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:17.516+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578675, 1) 2019-09-04T06:31:17.516+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578677, 1) } 2019-09-04T06:31:17.516+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11346 2019-09-04T06:31:17.516+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11333 2019-09-04T06:31:17.516+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: 
"local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:17.516+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:17.516+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11346 2019-09-04T06:31:17.516+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11333 2019-09-04T06:31:17.516+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11349 2019-09-04T06:31:17.516+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11349 2019-09-04T06:31:17.516+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:17.516+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 11351 2019-09-04T06:31:17.516+0000 D4 STORAGE [repl-writer-worker-14] inserting record with timestamp Timestamp(1567578677, 1) 2019-09-04T06:31:17.516+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578677, 1) 2019-09-04T06:31:17.516+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 11351 2019-09-04T06:31:17.516+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:17.516+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:31:17.516+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11350 2019-09-04T06:31:17.516+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11350 2019-09-04T06:31:17.516+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11353 2019-09-04T06:31:17.516+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11353 2019-09-04T06:31:17.516+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578677, 1), t: 1 }({ ts: Timestamp(1567578677, 1), t: 1 }) 2019-09-04T06:31:17.517+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578677, 1) 2019-09-04T06:31:17.517+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11354 2019-09-04T06:31:17.517+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578677, 1) } } ] } sort: {} projection: {} 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578677, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578677, 1) || First: notFirst: full path: ts 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578677, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578677, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578677, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578677, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:17.517+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11354 2019-09-04T06:31:17.517+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:17.517+0000 D3 STORAGE [repl-writer-worker-3] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:17.517+0000 D3 REPL [repl-writer-worker-3] applying op: { ts: Timestamp(1567578677, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578677507), o: { $v: 1, $set: { ping: new Date(1567578677507) } } }, oplog application mode: Secondary 2019-09-04T06:31:17.517+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578677, 1) 2019-09-04T06:31:17.517+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 11356 2019-09-04T06:31:17.517+0000 D2 QUERY [repl-writer-worker-3] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" } 2019-09-04T06:31:17.517+0000 D4 WRITE [repl-writer-worker-3] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:31:17.517+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 11356 2019-09-04T06:31:17.517+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:17.517+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578677, 1), t: 1 }({ ts: Timestamp(1567578677, 1), t: 1 }) 2019-09-04T06:31:17.517+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578677, 1) 2019-09-04T06:31:17.517+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11355 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:17.517+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:17.517+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:17.517+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11355 2019-09-04T06:31:17.517+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578677, 1) 2019-09-04T06:31:17.517+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:17.517+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11359 2019-09-04T06:31:17.517+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578677, 1), t: 1 }, appliedWallTime: new Date(1567578677507), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:17.517+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11359 2019-09-04T06:31:17.517+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 766 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:47.517+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578677, 1), t: 1 }, appliedWallTime: new Date(1567578677507), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:17.517+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578677, 1), t: 1 }({ ts: Timestamp(1567578677, 1), t: 1 }) 2019-09-04T06:31:17.517+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:47.517+0000 2019-09-04T06:31:17.518+0000 D2 ASIO [RS] Request 766 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578675, 1), $clusterTime: { clusterTime: Timestamp(1567578677, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 1) } 2019-09-04T06:31:17.518+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578675, 1), $clusterTime: { clusterTime: Timestamp(1567578677, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:17.518+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:17.518+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:47.518+0000 2019-09-04T06:31:17.518+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578677, 1), t: 1 } 2019-09-04T06:31:17.518+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 767 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:27.518+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578675, 1), t: 1 } } 2019-09-04T06:31:17.518+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:47.518+0000 2019-09-04T06:31:17.518+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:31:17.518+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:17.518+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578677, 1), t: 1 }, durableWallTime: new Date(1567578677507), appliedOpTime: { ts: Timestamp(1567578677, 1), t: 1 }, appliedWallTime: new Date(1567578677507), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:17.518+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 768 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:47.518+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: 
Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578677, 1), t: 1 }, durableWallTime: new Date(1567578677507), appliedOpTime: { ts: Timestamp(1567578677, 1), t: 1 }, appliedWallTime: new Date(1567578677507), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:17.518+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:47.518+0000 2019-09-04T06:31:17.519+0000 D2 ASIO [RS] Request 768 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578675, 1), $clusterTime: { clusterTime: Timestamp(1567578677, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 1) } 2019-09-04T06:31:17.519+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578675, 1), t: 1 }, lastCommittedWall: new Date(1567578675120), lastOpVisible: { ts: Timestamp(1567578675, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578675, 1), $clusterTime: { clusterTime: Timestamp(1567578677, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:17.519+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:17.519+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:47.518+0000 2019-09-04T06:31:17.519+0000 D2 ASIO [RS] Request 767 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 1), t: 1 }, lastCommittedWall: new Date(1567578677507), lastOpVisible: { ts: Timestamp(1567578677, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578677, 1), t: 1 }, lastCommittedWall: new Date(1567578677507), lastOpApplied: { ts: Timestamp(1567578677, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578677, 1), $clusterTime: { clusterTime: Timestamp(1567578677, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 1) } 2019-09-04T06:31:17.519+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 1), t: 1 }, lastCommittedWall: new Date(1567578677507), lastOpVisible: { ts: Timestamp(1567578677, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578677, 1), t: 1 }, lastCommittedWall: new Date(1567578677507), lastOpApplied: { ts: Timestamp(1567578677, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 1), $clusterTime: { clusterTime: Timestamp(1567578677, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:17.519+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:17.519+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:31:17.519+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578677, 1), t: 1 }, 2019-09-04T06:31:17.507+0000 2019-09-04T06:31:17.519+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578677, 1), t: 1 }, 2019-09-04T06:31:17.507+0000 2019-09-04T06:31:17.519+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578672, 1) 2019-09-04T06:31:17.519+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:27.570+0000 2019-09-04T06:31:17.519+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:28.850+0000 2019-09-04T06:31:17.519+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:17.519+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 769 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:27.519+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578677, 1), t: 1 } } 2019-09-04T06:31:17.519+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:46.839+0000 2019-09-04T06:31:17.519+0000 D3 REPL [conn268] Got notified of new snapshot: { ts: Timestamp(1567578677, 1), t: 1 }, 2019-09-04T06:31:17.507+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn268] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:22.595+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn274] Got notified of new snapshot: { ts: Timestamp(1567578677, 1), t: 1 }, 2019-09-04T06:31:17.507+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn274] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.071+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn279] Got notified of new snapshot: { ts: Timestamp(1567578677, 1), t: 1 }, 2019-09-04T06:31:17.507+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn279] waitUntilOpTime: waiting for a new snapshot 
until 2019-09-04T06:31:42.709+0000 2019-09-04T06:31:17.520+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:47.518+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn269] Got notified of new snapshot: { ts: Timestamp(1567578677, 1), t: 1 }, 2019-09-04T06:31:17.507+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn269] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:25.060+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn265] Got notified of new snapshot: { ts: Timestamp(1567578677, 1), t: 1 }, 2019-09-04T06:31:17.507+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn265] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.660+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn283] Got notified of new snapshot: { ts: Timestamp(1567578677, 1), t: 1 }, 2019-09-04T06:31:17.507+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn283] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.238+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn270] Got notified of new snapshot: { ts: Timestamp(1567578677, 1), t: 1 }, 2019-09-04T06:31:17.507+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn270] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.239+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn287] Got notified of new snapshot: { ts: Timestamp(1567578677, 1), t: 1 }, 2019-09-04T06:31:17.507+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn287] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.248+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn266] Got notified of new snapshot: { ts: Timestamp(1567578677, 1), t: 1 }, 2019-09-04T06:31:17.507+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn266] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn260] Got notified of new snapshot: { ts: Timestamp(1567578677, 1), t: 1 }, 2019-09-04T06:31:17.507+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn260] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.329+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn267] Got notified of new snapshot: { ts: Timestamp(1567578677, 1), t: 1 }, 2019-09-04T06:31:17.507+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn267] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn259] Got notified of new snapshot: { ts: Timestamp(1567578677, 1), t: 1 }, 2019-09-04T06:31:17.507+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn259] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:24.151+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn264] Got notified of new snapshot: { ts: Timestamp(1567578677, 1), t: 1 }, 2019-09-04T06:31:17.507+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn264] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.083+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn272] Got notified of new snapshot: { ts: Timestamp(1567578677, 1), t: 1 }, 2019-09-04T06:31:17.507+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn272] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.222+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn286] Got notified of new snapshot: { ts: Timestamp(1567578677, 1), t: 1 }, 2019-09-04T06:31:17.507+0000 2019-09-04T06:31:17.520+0000 D3 REPL [conn286] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.224+0000 2019-09-04T06:31:17.539+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:17.616+0000 D2 STORAGE 
[WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578677, 1) 2019-09-04T06:31:17.639+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:17.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:17.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:17.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:17.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:17.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:17.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:17.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:17.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:17.739+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:17.791+0000 D2 ASIO [RS] Request 769 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578677, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578677764), o: { $v: 1, $set: { ping: new Date(1567578677757) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 1), t: 1 }, lastCommittedWall: new Date(1567578677507), lastOpVisible: { ts: Timestamp(1567578677, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578677, 1), t: 1 }, lastCommittedWall: new Date(1567578677507), lastOpApplied: { ts: Timestamp(1567578677, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 1), $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 2) } 2019-09-04T06:31:17.791+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578677, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578677764), o: { $v: 1, $set: { ping: new Date(1567578677757) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 1), t: 1 }, lastCommittedWall: new Date(1567578677507), lastOpVisible: { ts: Timestamp(1567578677, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578677, 1), t: 1 }, lastCommittedWall: new Date(1567578677507), lastOpApplied: { ts: Timestamp(1567578677, 2), t: 1 }, rbid: 1, 
primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 1), $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:17.791+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:17.791+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578677, 2) and ending at ts: Timestamp(1567578677, 2) 2019-09-04T06:31:17.791+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:28.850+0000 2019-09-04T06:31:17.791+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:28.380+0000 2019-09-04T06:31:17.791+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:17.791+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:46.839+0000 2019-09-04T06:31:17.791+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:17.791+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:17.791+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578677, 1) 2019-09-04T06:31:17.791+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11367 2019-09-04T06:31:17.791+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:17.791+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:17.791+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11367 2019-09-04T06:31:17.791+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:17.791+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:31:17.791+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:17.791+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578677, 2) } 2019-09-04T06:31:17.791+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578677, 2), t: 1 } 2019-09-04T06:31:17.791+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578677, 1) 2019-09-04T06:31:17.791+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11370 2019-09-04T06:31:17.791+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:17.791+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 
1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:17.791+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11370 2019-09-04T06:31:17.791+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11360 2019-09-04T06:31:17.792+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11360 2019-09-04T06:31:17.792+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11373 2019-09-04T06:31:17.792+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11373 2019-09-04T06:31:17.792+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:17.792+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 11375 2019-09-04T06:31:17.792+0000 D4 STORAGE [repl-writer-worker-12] inserting record with timestamp Timestamp(1567578677, 2) 2019-09-04T06:31:17.792+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578677, 2) 2019-09-04T06:31:17.792+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 11375 2019-09-04T06:31:17.792+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:17.792+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:31:17.792+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11374 2019-09-04T06:31:17.792+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11374 2019-09-04T06:31:17.792+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11377 2019-09-04T06:31:17.792+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11377 2019-09-04T06:31:17.792+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578677, 2), t: 1 }({ ts: Timestamp(1567578677, 2), t: 1 }) 2019-09-04T06:31:17.792+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578677, 2) 2019-09-04T06:31:17.792+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11378 2019-09-04T06:31:17.792+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578677, 2) } } ] } sort: {} projection: {} 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578677, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578677, 2) || First: notFirst: full path: ts 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578677, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578677, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578677, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578677, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:17.792+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11378 2019-09-04T06:31:17.792+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:17.792+0000 D3 STORAGE [repl-writer-worker-10] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:17.792+0000 D3 REPL [repl-writer-worker-10] applying op: { ts: Timestamp(1567578677, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578677764), o: { $v: 1, $set: { ping: new Date(1567578677757) } } }, oplog application mode: Secondary 2019-09-04T06:31:17.792+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578677, 2) 2019-09-04T06:31:17.792+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 11380 2019-09-04T06:31:17.792+0000 D2 QUERY [repl-writer-worker-10] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" } 2019-09-04T06:31:17.792+0000 D4 WRITE [repl-writer-worker-10] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:31:17.792+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 11380 2019-09-04T06:31:17.792+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:17.792+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578677, 2), t: 1 }({ ts: Timestamp(1567578677, 2), t: 1 }) 2019-09-04T06:31:17.792+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578677, 2) 2019-09-04T06:31:17.792+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11379 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:17.792+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:17.792+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:17.792+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11379 2019-09-04T06:31:17.792+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578677, 2) 2019-09-04T06:31:17.792+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11384 2019-09-04T06:31:17.792+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11384 2019-09-04T06:31:17.793+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578677, 2), t: 1 }({ ts: Timestamp(1567578677, 2), t: 1 }) 2019-09-04T06:31:17.793+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:17.793+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578677, 1), t: 1 }, durableWallTime: new Date(1567578677507), appliedOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, appliedWallTime: new Date(1567578677764), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 1), t: 1 }, lastCommittedWall: new Date(1567578677507), lastOpVisible: { ts: Timestamp(1567578677, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:17.793+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 770 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:47.793+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578677, 1), t: 1 }, durableWallTime: new Date(1567578677507), appliedOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, appliedWallTime: new Date(1567578677764), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 1), t: 1 }, lastCommittedWall: new Date(1567578677507), lastOpVisible: { ts: Timestamp(1567578677, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:17.793+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:47.792+0000 2019-09-04T06:31:17.793+0000 D2 ASIO [RS] Request 770 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 1), t: 1 }, lastCommittedWall: new Date(1567578677507), lastOpVisible: { ts: Timestamp(1567578677, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 1), $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 2) } 2019-09-04T06:31:17.793+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 1), t: 1 }, lastCommittedWall: new Date(1567578677507), lastOpVisible: { ts: Timestamp(1567578677, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 1), $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:17.793+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:17.793+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:47.793+0000 2019-09-04T06:31:17.793+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578677, 2), t: 1 } 2019-09-04T06:31:17.793+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 771 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:27.793+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578677, 1), t: 1 } } 2019-09-04T06:31:17.794+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:47.793+0000 2019-09-04T06:31:17.794+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:31:17.794+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:17.794+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), appliedOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, appliedWallTime: new Date(1567578677764), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 1), t: 1 }, lastCommittedWall: new Date(1567578677507), lastOpVisible: { ts: Timestamp(1567578677, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:17.794+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 772 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:47.794+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: 
Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), appliedOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, appliedWallTime: new Date(1567578677764), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, durableWallTime: new Date(1567578675120), appliedOpTime: { ts: Timestamp(1567578675, 1), t: 1 }, appliedWallTime: new Date(1567578675120), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 1), t: 1 }, lastCommittedWall: new Date(1567578677507), lastOpVisible: { ts: Timestamp(1567578677, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:17.794+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:47.793+0000 2019-09-04T06:31:17.794+0000 D2 ASIO [RS] Request 772 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 1), t: 1 }, lastCommittedWall: new Date(1567578677507), lastOpVisible: { ts: Timestamp(1567578677, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 1), $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 2) } 2019-09-04T06:31:17.794+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 1), t: 1 }, lastCommittedWall: new Date(1567578677507), lastOpVisible: { ts: Timestamp(1567578677, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 1), $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:17.794+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:17.794+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:47.793+0000 2019-09-04T06:31:17.794+0000 D2 ASIO [RS] Request 771 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpApplied: { ts: Timestamp(1567578677, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578677, 2), $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 2) } 2019-09-04T06:31:17.795+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpApplied: { ts: Timestamp(1567578677, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 2), $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:17.795+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:17.795+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:31:17.795+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578677, 2), t: 1 }, 2019-09-04T06:31:17.764+0000 2019-09-04T06:31:17.795+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578677, 2), t: 1 }, 2019-09-04T06:31:17.764+0000 2019-09-04T06:31:17.795+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578672, 2) 2019-09-04T06:31:17.795+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:28.380+0000 2019-09-04T06:31:17.795+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:29.245+0000 2019-09-04T06:31:17.795+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:17.795+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 773 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:27.795+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578677, 2), t: 1 } } 2019-09-04T06:31:17.795+0000 D3 REPL [conn265] Got notified of new snapshot: { ts: Timestamp(1567578677, 2), t: 1 }, 2019-09-04T06:31:17.764+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn265] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.660+0000 2019-09-04T06:31:17.795+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:47.793+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn270] Got notified of new snapshot: { ts: Timestamp(1567578677, 2), t: 1 }, 2019-09-04T06:31:17.764+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn270] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.239+0000 2019-09-04T06:31:17.795+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:46.839+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn279] Got notified of new snapshot: { ts: 
Timestamp(1567578677, 2), t: 1 }, 2019-09-04T06:31:17.764+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn279] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.709+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn287] Got notified of new snapshot: { ts: Timestamp(1567578677, 2), t: 1 }, 2019-09-04T06:31:17.764+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn287] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.248+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn264] Got notified of new snapshot: { ts: Timestamp(1567578677, 2), t: 1 }, 2019-09-04T06:31:17.764+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn264] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.083+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn286] Got notified of new snapshot: { ts: Timestamp(1567578677, 2), t: 1 }, 2019-09-04T06:31:17.764+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn286] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.224+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn268] Got notified of new snapshot: { ts: Timestamp(1567578677, 2), t: 1 }, 2019-09-04T06:31:17.764+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn268] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:22.595+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn269] Got notified of new snapshot: { ts: Timestamp(1567578677, 2), t: 1 }, 2019-09-04T06:31:17.764+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn269] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:25.060+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn266] Got notified of new snapshot: { ts: Timestamp(1567578677, 2), t: 1 }, 2019-09-04T06:31:17.764+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn266] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn272] Got notified of new snapshot: { ts: Timestamp(1567578677, 2), t: 1 }, 2019-09-04T06:31:17.764+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn272] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.222+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn260] Got notified of new snapshot: { ts: Timestamp(1567578677, 2), t: 1 }, 2019-09-04T06:31:17.764+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn260] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.329+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn259] Got notified of new snapshot: { ts: Timestamp(1567578677, 2), t: 1 }, 2019-09-04T06:31:17.764+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn259] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:24.151+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn283] Got notified of new snapshot: { ts: Timestamp(1567578677, 2), t: 1 }, 2019-09-04T06:31:17.764+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn283] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.238+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn267] Got notified of new snapshot: { ts: Timestamp(1567578677, 2), t: 1 }, 2019-09-04T06:31:17.764+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn267] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn274] Got notified of new snapshot: { ts: Timestamp(1567578677, 2), t: 1 }, 2019-09-04T06:31:17.764+0000 2019-09-04T06:31:17.795+0000 D3 REPL [conn274] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.071+0000 2019-09-04T06:31:17.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 
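The burst of waitUntilOpTime entries above shows a pool of client connections (conn259 through conn287) parked until a snapshot at or beyond Timestamp(1567578677, 2) becomes visible. One plausible client-side counterpart, sketched with pymongo: a causally consistent session whose reads carry afterClusterTime, combined with the secondaryPreferred read preference that also appears in the ping at 06:31:18.514. The URI, the routing, and the blocking behavior described in the comment are assumptions drawn from this log, not a statement of what these particular connections were doing:

    from pymongo import MongoClient, ReadPreference
    from pymongo.read_concern import ReadConcern

    uri = ("mongodb://cmodb802.togewa.com:27019,cmodb803.togewa.com:27019,"
           "cmodb804.togewa.com:27019/?replicaSet=configrs")
    client = MongoClient(uri)

    with client.start_session(causal_consistency=True) as sess:
        pings = client.config.get_collection(
            "lockpings",
            read_concern=ReadConcern("majority"),
            read_preference=ReadPreference.SECONDARY_PREFERRED)
        # Once the session has observed a cluster time, a read routed to a
        # secondary waits (waitUntilOpTime) until that optime is visible.
        print(pings.find_one({}, session=sess))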
2019-09-04T06:31:17.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:17.839+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:17.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:17.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:17.891+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578677, 2) 2019-09-04T06:31:17.940+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:17.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:17.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:18.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:18.040+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:18.130+0000 D2 COMMAND [conn20] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:18.130+0000 I COMMAND [conn20] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:18.140+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:18.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:18.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:18.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:18.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:18.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:18.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:18.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:18.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:18.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 60082BBADB2BEB060B2252CA3120841B0151D35E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:18.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:31:18.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 60082BBADB2BEB060B2252CA3120841B0151D35E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:18.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { 
replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 60082BBADB2BEB060B2252CA3120841B0151D35E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:18.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), opTime: { ts: Timestamp(1567578677, 2), t: 1 }, wallTime: new Date(1567578677764) } 2019-09-04T06:31:18.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 60082BBADB2BEB060B2252CA3120841B0151D35E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:18.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:18.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:18.240+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:18.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:18.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:18.340+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:18.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:18.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:18.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:18.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:18.440+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:18.514+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:31:18.514+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:31:18.514+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:18.514+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:31:18.540+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:18.640+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:18.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:18.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:18.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:18.673+0000 I COMMAND 
[conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:18.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:18.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:18.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:18.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:18.740+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:18.792+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:18.792+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:18.792+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578677, 2) 2019-09-04T06:31:18.792+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11406 2019-09-04T06:31:18.792+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:18.792+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:18.792+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11406 2019-09-04T06:31:18.792+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:local/collection/16--6194257481163143499 -> { numRecords: 1467, dataSize: 330788 } 2019-09-04T06:31:18.792+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/28--6194257481163143499 -> { numRecords: 22, dataSize: 1828 } 2019-09-04T06:31:18.792+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer flush took 49 µs 2019-09-04T06:31:18.793+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11409 2019-09-04T06:31:18.793+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11409 2019-09-04T06:31:18.793+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578677, 2), t: 1 }({ ts: Timestamp(1567578677, 2), t: 1 }) 2019-09-04T06:31:18.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:18.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:18.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:18.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 774) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:18.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 774 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:28.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:18.838+0000 D3 
EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:46.839+0000 2019-09-04T06:31:18.838+0000 D2 ASIO [Replication] Request 774 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), opTime: { ts: Timestamp(1567578677, 2), t: 1 }, wallTime: new Date(1567578677764), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 2), $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 2) } 2019-09-04T06:31:18.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), opTime: { ts: Timestamp(1567578677, 2), t: 1 }, wallTime: new Date(1567578677764), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 2), $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:18.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:18.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 774) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), opTime: { ts: Timestamp(1567578677, 2), t: 1 }, wallTime: new Date(1567578677764), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 2), $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 2) } 2019-09-04T06:31:18.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:31:18.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:20.838Z 2019-09-04T06:31:18.838+0000 D3 EXECUTOR 
[replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:46.839+0000 2019-09-04T06:31:18.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:18.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 775) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:18.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 775 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:28.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:18.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:46.839+0000 2019-09-04T06:31:18.839+0000 D2 ASIO [Replication] Request 775 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), opTime: { ts: Timestamp(1567578677, 2), t: 1 }, wallTime: new Date(1567578677764), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578677, 2), $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 2) } 2019-09-04T06:31:18.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), opTime: { ts: Timestamp(1567578677, 2), t: 1 }, wallTime: new Date(1567578677764), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578677, 2), $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:31:18.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:18.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 775) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), opTime: { ts: Timestamp(1567578677, 2), t: 1 }, wallTime: new Date(1567578677764), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), 
lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578677, 2), $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 2) } 2019-09-04T06:31:18.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:31:18.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:31:29.245+0000 2019-09-04T06:31:18.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:31:29.076+0000 2019-09-04T06:31:18.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:31:18.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:20.839Z 2019-09-04T06:31:18.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:18.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:48.839+0000 2019-09-04T06:31:18.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:48.839+0000 2019-09-04T06:31:18.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:18.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:18.864+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:18.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:18.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:18.964+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:19.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:19.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 60082BBADB2BEB060B2252CA3120841B0151D35E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:19.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:19.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 60082BBADB2BEB060B2252CA3120841B0151D35E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:19.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 60082BBADB2BEB060B2252CA3120841B0151D35E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:19.063+0000 D2 
REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), opTime: { ts: Timestamp(1567578677, 2), t: 1 }, wallTime: new Date(1567578677764) } 2019-09-04T06:31:19.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578677, 2), signature: { hash: BinData(0, 60082BBADB2BEB060B2252CA3120841B0151D35E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:19.064+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:19.164+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:19.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:19.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:19.218+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:19.218+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:19.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:19.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:19.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:19.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:31:19.241+0000 D2 ASIO [RS] Request 773 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578679, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578679239), o: { $v: 1, $set: { ping: new Date(1567578679235) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpApplied: { ts: Timestamp(1567578679, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 2), $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } 2019-09-04T06:31:19.241+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578679, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578679239), o: { $v: 1, $set: { ping: new Date(1567578679235) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpApplied: { ts: Timestamp(1567578679, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 2), $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:19.241+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:19.241+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578679, 1) and ending at ts: Timestamp(1567578679, 1) 2019-09-04T06:31:19.241+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:29.076+0000 2019-09-04T06:31:19.241+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:30.186+0000 2019-09-04T06:31:19.241+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:19.241+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:48.839+0000 2019-09-04T06:31:19.241+0000 D2 REPL [replication-0] oplog buffer has 0 bytes 2019-09-04T06:31:19.241+0000 D3 
REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578679, 1), t: 1 } 2019-09-04T06:31:19.241+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:19.241+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:19.241+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578677, 2) 2019-09-04T06:31:19.241+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11420 2019-09-04T06:31:19.242+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:19.242+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:19.242+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11420 2019-09-04T06:31:19.242+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:19.242+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:31:19.242+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:19.242+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578677, 2) 2019-09-04T06:31:19.242+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11423 2019-09-04T06:31:19.242+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578679, 1) } 2019-09-04T06:31:19.242+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:19.242+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:19.242+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11423 2019-09-04T06:31:19.242+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11410 2019-09-04T06:31:19.242+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11410 2019-09-04T06:31:19.242+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11426 2019-09-04T06:31:19.242+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11426 2019-09-04T06:31:19.242+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:19.242+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 11428 2019-09-04T06:31:19.242+0000 D4 STORAGE [repl-writer-worker-8] inserting record with timestamp Timestamp(1567578679, 1) 2019-09-04T06:31:19.242+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578679, 1) 2019-09-04T06:31:19.242+0000 D2 STORAGE [repl-writer-worker-8] WiredTigerSizeStorer::store Marking table:local/collection/16--6194257481163143499 dirty, numRecords: 1468, dataSize: 
331024, use_count: 3 2019-09-04T06:31:19.242+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 11428 2019-09-04T06:31:19.242+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:19.242+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:31:19.242+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11427 2019-09-04T06:31:19.242+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11427 2019-09-04T06:31:19.242+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11430 2019-09-04T06:31:19.242+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11430 2019-09-04T06:31:19.242+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578679, 1), t: 1 }({ ts: Timestamp(1567578679, 1), t: 1 }) 2019-09-04T06:31:19.242+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578679, 1) 2019-09-04T06:31:19.242+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11431 2019-09-04T06:31:19.242+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578679, 1) } } ] } sort: {} projection: {} 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578679, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578679, 1) || First: notFirst: full path: ts 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578679, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
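The fetch half of the cycle running through these entries is a tailable, awaitable getMore loop against the sync source's oplog (requests 771 and 773 above, each with maxTimeMS: 5000). A rough client-side imitation with pymongo, assuming the sync source is reachable as logged; a driver-side tailable cursor approximates but does not replicate the internal fetcher (there is no term or lastKnownCommittedOpTime handshake here):

    from pymongo import MongoClient
    from pymongo.cursor import CursorType
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb804.togewa.com:27019",
                         directConnection=True)
    last_fetched = Timestamp(1567578677, 2)  # last fetched optime in the log

    cursor = client.local["oplog.rs"].find(
        {"ts": {"$gt": last_fetched}},
        cursor_type=CursorType.TAILABLE_AWAIT)
    for op in cursor:
        # e.g. ts=Timestamp(1567578679, 1) op=u ns=config.lockpings
        print(op["ts"], op["op"], op["ns"], op.get("o2"), op["o"])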
2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578679, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578679, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578679, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:19.242+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11431 2019-09-04T06:31:19.242+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:19.242+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:19.242+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578679, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578679239), o: { $v: 1, $set: { ping: new Date(1567578679235) } } }, oplog application mode: Secondary 2019-09-04T06:31:19.242+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578679, 1) 2019-09-04T06:31:19.242+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 11433 2019-09-04T06:31:19.242+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" } 2019-09-04T06:31:19.242+0000 D2 STORAGE [repl-writer-worker-4] WiredTigerSizeStorer::store Marking table:config/collection/28--6194257481163143499 dirty, numRecords: 22, dataSize: 1828, use_count: 3 2019-09-04T06:31:19.242+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:31:19.242+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 11433 2019-09-04T06:31:19.242+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:19.242+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578679, 1), t: 1 }({ ts: Timestamp(1567578679, 1), t: 1 }) 2019-09-04T06:31:19.242+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578679, 1) 2019-09-04T06:31:19.242+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11432 
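The op applied above is an idhack update: an exact _id match that skips query planning entirely, followed by a $set of the ping field, with the UpdateResult line (numMatched: 1, numDocsModified: 1) confirming one document changed. Purely to decode the entry's o/o2 fields, here is the equivalent write as a client would issue it against the primary (cmodb802, per the heartbeat states above). This is a hypothetical re-issue for illustration only — the secondary applies the op through the oplog machinery, not through a user update:

    from datetime import datetime, timezone
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb802.togewa.com:27019",
                         directConnection=True)

    # o2 carries the exact _id; o carries { $set: { ping: <wall time> } }.
    res = client.config.lockpings.update_one(
        {"_id": "cmodb801.togewa.com:27017:1567576097:5449009950928943792"},
        {"$set": {"ping": datetime.fromtimestamp(1567578679.235,
                                                 tz=timezone.utc)}})
    print(res.matched_count, res.modified_count)  # 1 1, as in the log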
2019-09-04T06:31:19.242+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:31:19.243+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:19.243+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:31:19.243+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:19.243+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:19.243+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:19.243+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11432 2019-09-04T06:31:19.243+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578679, 1) 2019-09-04T06:31:19.243+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11436 2019-09-04T06:31:19.243+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11436 2019-09-04T06:31:19.243+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578679, 1), t: 1 }({ ts: Timestamp(1567578679, 1), t: 1 }) 2019-09-04T06:31:19.243+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:19.243+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), appliedOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, appliedWallTime: new Date(1567578677764), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), appliedOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, appliedWallTime: new Date(1567578677764), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:19.243+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 776 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:49.243+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), appliedOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, appliedWallTime: new Date(1567578677764), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), appliedOpTime: { ts: Timestamp(1567578677, 2), t: 1 
}, appliedWallTime: new Date(1567578677764), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:19.243+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:49.243+0000 2019-09-04T06:31:19.243+0000 D2 ASIO [RS] Request 776 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 2), $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } 2019-09-04T06:31:19.243+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 2), $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:19.243+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:19.243+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:49.243+0000 2019-09-04T06:31:19.243+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578679, 1), t: 1 } 2019-09-04T06:31:19.244+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 777 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:29.243+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578677, 2), t: 1 } } 2019-09-04T06:31:19.244+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:49.243+0000 2019-09-04T06:31:19.255+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:31:19.255+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:19.255+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), appliedOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, appliedWallTime: new Date(1567578677764), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), 
appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), appliedOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, appliedWallTime: new Date(1567578677764), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:19.255+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 778 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:49.255+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), appliedOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, appliedWallTime: new Date(1567578677764), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, durableWallTime: new Date(1567578677764), appliedOpTime: { ts: Timestamp(1567578677, 2), t: 1 }, appliedWallTime: new Date(1567578677764), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:19.255+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:49.243+0000 2019-09-04T06:31:19.255+0000 D2 ASIO [RS] Request 778 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 2), $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } 2019-09-04T06:31:19.255+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578677, 2), t: 1 }, lastCommittedWall: new Date(1567578677764), lastOpVisible: { ts: Timestamp(1567578677, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 2), $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:19.255+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 
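Requests 776 and 778 above are the secondary's internal replSetUpdatePosition reports to its sync source, cmodb804.togewa.com:27019. That command is member-to-member traffic, not something a client should issue; the same per-member durable/applied optimes it carries are observable through replSetGetStatus. A small pymongo sketch, reusing the hypothetical `client` from earlier:

```python
# Observable counterpart of the replSetUpdatePosition exchange above:
# replSetGetStatus reports each member's applied and durable optimes.
status = client.admin.command("replSetGetStatus")
for m in status["members"]:
    print(m["name"], m["stateStr"], m.get("optime"), m.get("optimeDurable"))
```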
2019-09-04T06:31:19.255+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:49.243+0000 2019-09-04T06:31:19.255+0000 D2 ASIO [RS] Request 777 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpApplied: { ts: Timestamp(1567578679, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } 2019-09-04T06:31:19.255+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpApplied: { ts: Timestamp(1567578679, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:19.255+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:19.255+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:31:19.255+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578679, 1), t: 1 }, 2019-09-04T06:31:19.239+0000 2019-09-04T06:31:19.255+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578679, 1), t: 1 }, 2019-09-04T06:31:19.239+0000 2019-09-04T06:31:19.255+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578674, 1) 2019-09-04T06:31:19.256+0000 D3 REPL [conn269] Got notified of new snapshot: { ts: Timestamp(1567578679, 1), t: 1 }, 2019-09-04T06:31:19.239+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn269] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:25.060+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn270] Got notified of new snapshot: { ts: Timestamp(1567578679, 1), t: 1 }, 2019-09-04T06:31:19.239+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn270] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.239+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn279] Got notified of new snapshot: { ts: Timestamp(1567578679, 1), t: 1 }, 2019-09-04T06:31:19.239+0000 2019-09-04T06:31:19.256+0000 D3 REPL 
[conn279] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.709+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn268] Got notified of new snapshot: { ts: Timestamp(1567578679, 1), t: 1 }, 2019-09-04T06:31:19.239+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn268] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:22.595+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn266] Got notified of new snapshot: { ts: Timestamp(1567578679, 1), t: 1 }, 2019-09-04T06:31:19.239+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn266] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn260] Got notified of new snapshot: { ts: Timestamp(1567578679, 1), t: 1 }, 2019-09-04T06:31:19.239+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn260] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.329+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn283] Got notified of new snapshot: { ts: Timestamp(1567578679, 1), t: 1 }, 2019-09-04T06:31:19.239+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn283] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.238+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn274] Got notified of new snapshot: { ts: Timestamp(1567578679, 1), t: 1 }, 2019-09-04T06:31:19.239+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn274] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.071+0000 2019-09-04T06:31:19.256+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:30.186+0000 2019-09-04T06:31:19.256+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:30.305+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn287] Got notified of new snapshot: { ts: Timestamp(1567578679, 1), t: 1 }, 2019-09-04T06:31:19.239+0000 2019-09-04T06:31:19.256+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:19.256+0000 D3 REPL [conn287] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.248+0000 2019-09-04T06:31:19.256+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:48.839+0000 2019-09-04T06:31:19.256+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 779 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:29.256+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578679, 1), t: 1 } } 2019-09-04T06:31:19.256+0000 D3 REPL [conn286] Got notified of new snapshot: { ts: Timestamp(1567578679, 1), t: 1 }, 2019-09-04T06:31:19.239+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn286] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.224+0000 2019-09-04T06:31:19.256+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:49.243+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn272] Got notified of new snapshot: { ts: Timestamp(1567578679, 1), t: 1 }, 2019-09-04T06:31:19.239+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn272] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.222+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn259] Got notified of new snapshot: { ts: Timestamp(1567578679, 1), t: 1 }, 2019-09-04T06:31:19.239+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn259] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:24.151+0000 2019-09-04T06:31:19.256+0000 D3 
REPL [conn267] Got notified of new snapshot: { ts: Timestamp(1567578679, 1), t: 1 }, 2019-09-04T06:31:19.239+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn267] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.661+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn265] Got notified of new snapshot: { ts: Timestamp(1567578679, 1), t: 1 }, 2019-09-04T06:31:19.239+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn265] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:21.660+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn264] Got notified of new snapshot: { ts: Timestamp(1567578679, 1), t: 1 }, 2019-09-04T06:31:19.239+0000 2019-09-04T06:31:19.256+0000 D3 REPL [conn264] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.083+0000 2019-09-04T06:31:19.264+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:19.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:19.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:19.317+0000 D3 STORAGE [FreeMonProcessor] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:19.321+0000 D3 INDEX [TTLMonitor] thread awake 2019-09-04T06:31:19.321+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms 2019-09-04T06:31:19.321+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms 2019-09-04T06:31:19.321+0000 D2 - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager 2019-09-04T06:31:19.321+0000 D3 COMMAND [PeriodicTaskRunner] task: UnusedLockCleaner took: 0ms 2019-09-04T06:31:19.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions. 2019-09-04T06:31:19.341+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578679, 1) 2019-09-04T06:31:19.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:19.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:19.364+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:19.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry 2019-09-04T06:31:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:31:19.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } 2019-09-04T06:31:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:19.370+0000 D5 QUERY [shard-registry-reload] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:31:19.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:31:19.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:31:19.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and 2019-09-04T06:31:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:19.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:19.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578679, 1) 2019-09-04T06:31:19.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 11447 2019-09-04T06:31:19.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 11447 2019-09-04T06:31:19.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:646 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:31:19.370+0000 D1 SHARDING [shard-registry-reload] found 3 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578679, 1), t: 1 } 2019-09-04T06:31:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:31:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:31:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:31:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:31:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:31:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:31:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS 2019-09-04T06:31:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000 2019-09-04T06:31:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 780 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:31:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:31:19.385+0000 D3 EXECUTOR 
[ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 781 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:31:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:31:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001 2019-09-04T06:31:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 782 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:31:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:31:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 783 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:31:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:31:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002 2019-09-04T06:31:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 784 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:31:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:31:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 785 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:31:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:31:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 780 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578673, 1), t: 1 }, lastWriteDate: new Date(1567578673000), majorityOpTime: { ts: Timestamp(1567578673, 1), t: 1 }, majorityWriteDate: new Date(1567578673000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578679385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578673, 1), $configServerState: { opTime: { ts: Timestamp(1567578675, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578673, 1) } 2019-09-04T06:31:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578673, 1), t: 1 }, lastWriteDate: new Date(1567578673000), majorityOpTime: { ts: Timestamp(1567578673, 1), t: 1 }, majorityWriteDate: new Date(1567578673000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578679385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", 
"zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578673, 1), $configServerState: { opTime: { ts: Timestamp(1567578675, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578673, 1) } target: cmodb806.togewa.com:27018 2019-09-04T06:31:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 781 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578673, 1), t: 1 }, lastWriteDate: new Date(1567578673000), majorityOpTime: { ts: Timestamp(1567578673, 1), t: 1 }, majorityWriteDate: new Date(1567578673000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578679386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578673, 1), $configServerState: { opTime: { ts: Timestamp(1567578668, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578673, 1) } 2019-09-04T06:31:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578673, 1), t: 1 }, lastWriteDate: new Date(1567578673000), majorityOpTime: { ts: Timestamp(1567578673, 1), t: 1 }, majorityWriteDate: new Date(1567578673000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578679386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578673, 1), $configServerState: { opTime: { ts: Timestamp(1567578668, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578673, 1) } target: cmodb807.togewa.com:27018 2019-09-04T06:31:19.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms 2019-09-04T06:31:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 783 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578677, 1), t: 1 }, lastWriteDate: 
new Date(1567578677000), majorityOpTime: { ts: Timestamp(1567578677, 1), t: 1 }, majorityWriteDate: new Date(1567578677000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578679386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 1), $configServerState: { opTime: { ts: Timestamp(1567578660, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578677, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 1) } 2019-09-04T06:31:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578677, 1), t: 1 }, lastWriteDate: new Date(1567578677000), majorityOpTime: { ts: Timestamp(1567578677, 1), t: 1 }, majorityWriteDate: new Date(1567578677000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578679386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578677, 1), $configServerState: { opTime: { ts: Timestamp(1567578660, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578677, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 1) } target: cmodb809.togewa.com:27018 2019-09-04T06:31:19.386+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 784 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578669, 1), t: 1 }, lastWriteDate: new Date(1567578669000), majorityOpTime: { ts: Timestamp(1567578669, 1), t: 1 }, majorityWriteDate: new Date(1567578669000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578679386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578669, 1), $configServerState: { opTime: { ts: Timestamp(1567578675, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578669, 1) } 2019-09-04T06:31:19.386+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ 
"cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578669, 1), t: 1 }, lastWriteDate: new Date(1567578669000), majorityOpTime: { ts: Timestamp(1567578669, 1), t: 1 }, majorityWriteDate: new Date(1567578669000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578679386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578669, 1), $configServerState: { opTime: { ts: Timestamp(1567578675, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578669, 1) } target: cmodb810.togewa.com:27018 2019-09-04T06:31:19.386+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 782 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578677, 1), t: 1 }, lastWriteDate: new Date(1567578677000), majorityOpTime: { ts: Timestamp(1567578677, 1), t: 1 }, majorityWriteDate: new Date(1567578677000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578679386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578677, 1), $configServerState: { opTime: { ts: Timestamp(1567578675, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578677, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 1) } 2019-09-04T06:31:19.386+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578677, 1), t: 1 }, lastWriteDate: new Date(1567578677000), majorityOpTime: { ts: Timestamp(1567578677, 1), t: 1 }, majorityWriteDate: new Date(1567578677000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578679386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1567578677, 1), $configServerState: { opTime: { ts: Timestamp(1567578675, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578677, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578677, 1) } target: cmodb808.togewa.com:27018 2019-09-04T06:31:19.386+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms 2019-09-04T06:31:19.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 785 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578669, 1), t: 1 }, lastWriteDate: new Date(1567578669000), majorityOpTime: { ts: Timestamp(1567578669, 1), t: 1 }, majorityWriteDate: new Date(1567578669000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578679386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578669, 1), $configServerState: { opTime: { ts: Timestamp(1567578668, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578669, 1) } 2019-09-04T06:31:19.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578669, 1), t: 1 }, lastWriteDate: new Date(1567578669000), majorityOpTime: { ts: Timestamp(1567578669, 1), t: 1 }, majorityWriteDate: new Date(1567578669000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578679386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578669, 1), $configServerState: { opTime: { ts: Timestamp(1567578668, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578669, 1) } target: cmodb811.togewa.com:27018 2019-09-04T06:31:19.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms 2019-09-04T06:31:19.427+0000 D3 STORAGE [WTCheckpointThread] setting timestamp read source: 6, provided timestamp: Timestamp(1567578679, 1) 2019-09-04T06:31:19.427+0000 D3 STORAGE [WTCheckpointThread] WT begin_transaction for snapshot id 11449 2019-09-04T06:31:19.427+0000 D3 STORAGE [WTCheckpointThread] WT rollback_transaction for snapshot id 11449 2019-09-04T06:31:19.427+0000 D2 RECOVERY [WTCheckpointThread] Performing stable checkpoint. 
StableTimestamp: Timestamp(1567578679, 1), OplogNeededForRollback: Timestamp(1567578679, 1) 2019-09-04T06:31:19.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:19.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:19.466+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:19.500+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578679500) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } 2019-09-04T06:31:19.500+0000 D4 - [replSetDistLockPinger] Taking ticket. Available: 1000000000 2019-09-04T06:31:19.500+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178 2019-09-04T06:31:19.500+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:31:19.521+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPre
ferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", 
"path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xADB043) [0x561749a63043] mongod(+0x13B2606) [0x56174a33a606] mongod(+0x13B3A55) [0x56174a33ba55] mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894] mongod(+0x10FA899) [0x56174a082899] mongod(+0x10FBF53) [0x56174a083f53] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(+0x1FBD2EE) [0x56174af452ee] mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa] mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2] mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b] mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e] 
mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc]
mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1]
mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a]
mongod(+0x28A5BBF) [0x56174b82dbbf]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
-----  END BACKTRACE  -----
2019-09-04T06:31:19.521+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. OpTime: { ts: Timestamp(1567578679, 1), t: 1 }, write concern: { w: "majority", wtimeout: 15000 }
2019-09-04T06:31:19.521+0000 D4 STORAGE [replSetDistLockPinger] flushed journal
2019-09-04T06:31:19.521+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578679500) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:31:19.522+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578679500) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 21ms
2019-09-04T06:31:19.566+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:19.666+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:19.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:19.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:19.718+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:19.718+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:19.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:19.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:19.766+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:19.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:19.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:19.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:19.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster {
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:19.867+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:19.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:19.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:19.967+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:20.000+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:31:20.001+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:31:20.001+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:31:20.001+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:20.012+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:31:20.012+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:31:20.014+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:31:20.014+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:31:20.014+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:31:20.014+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:31:20.014+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:20.015+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:20.016+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:20.016+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:31:20.016+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:31:20.016+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:31:20.016+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:20.016+0000 D5 QUERY [conn90] Beginning planning... 
============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:31:20.016+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:31:20.016+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:31:20.016+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:31:20.016+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:31:20.016+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:31:20.016+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:31:20.016+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:20.016+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:20.016+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:20.016+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578679, 1) 2019-09-04T06:31:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11465 2019-09-04T06:31:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11465 2019-09-04T06:31:20.016+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:20.017+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:20.017+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:31:20.017+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:31:20.017+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:20.017+0000 D5 QUERY [conn90] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:31:20.017+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:31:20.017+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:31:20.017+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578679, 1) 2019-09-04T06:31:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11468 2019-09-04T06:31:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11468 2019-09-04T06:31:20.017+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:20.018+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:31:20.018+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:20.018+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:31:20.018+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:31:20.018+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:31:20.018+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578679, 1) 2019-09-04T06:31:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11470 2019-09-04T06:31:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11470 2019-09-04T06:31:20.018+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:20.018+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:31:20.018+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:31:20.018+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:31:20.018+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:20.018+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:31:20.018+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:31:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11473 2019-09-04T06:31:20.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:31:20.018+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11473 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11474 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11474 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11475 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11475 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11476 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11476 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11477 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11477
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11478
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11478
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11479
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11479
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11480
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11480
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11481
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11481
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11482
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11482
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11483
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11483
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11484
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11484
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:31:20.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11485
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11485
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11486
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11486
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11487
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11487
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11488
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11488 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11489 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11489 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11490 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11490 
2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11491 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11491 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11492 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11492 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11493 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] 
looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11493 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11494 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:31:20.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11494 2019-09-04T06:31:20.020+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:31:20.034+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11496 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11496 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11497 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11497 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11498 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11498 2019-09-04T06:31:20.035+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:20.035+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, 
$db: "config" } 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11500 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11500 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11501 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11501 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11502 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11502 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11503 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11503 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11504 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11504 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11505 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11505 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11506 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11506 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11507 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11507 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11508 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11508 2019-09-04T06:31:20.035+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11509 2019-09-04T06:31:20.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11509 2019-09-04T06:31:20.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11510 2019-09-04T06:31:20.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11510 2019-09-04T06:31:20.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11511 2019-09-04T06:31:20.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11511 2019-09-04T06:31:20.036+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:20.036+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:31:20.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11513 2019-09-04T06:31:20.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11513 2019-09-04T06:31:20.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11514 2019-09-04T06:31:20.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11514 2019-09-04T06:31:20.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11515 2019-09-04T06:31:20.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11515 2019-09-04T06:31:20.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11516 
2019-09-04T06:31:20.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11516 2019-09-04T06:31:20.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11517 2019-09-04T06:31:20.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11517 2019-09-04T06:31:20.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11518 2019-09-04T06:31:20.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11518 2019-09-04T06:31:20.036+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:20.067+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:20.167+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:20.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:20.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:20.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:20.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:20.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 9561EE202ACEAFF4123AAD3F044CD30AE92F7B44), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:20.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:31:20.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 9561EE202ACEAFF4123AAD3F044CD30AE92F7B44), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:20.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 9561EE202ACEAFF4123AAD3F044CD30AE92F7B44), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:20.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239) } 2019-09-04T06:31:20.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 
1, $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 9561EE202ACEAFF4123AAD3F044CD30AE92F7B44), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:20.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:20.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999999 Now: 1000000000 2019-09-04T06:31:20.242+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:20.242+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:20.242+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578679, 1) 2019-09-04T06:31:20.242+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11523 2019-09-04T06:31:20.242+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:20.242+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:20.242+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11523 2019-09-04T06:31:20.243+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11526 2019-09-04T06:31:20.243+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11526 2019-09-04T06:31:20.243+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578679, 1), t: 1 }({ ts: Timestamp(1567578679, 1), t: 1 }) 2019-09-04T06:31:20.267+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:20.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:20.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:20.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:20.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:20.367+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:20.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:20.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:20.468+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:20.568+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:20.668+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:20.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:20.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:20.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:20.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 
2019-09-04T06:31:20.768+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:20.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:20.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:20.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:20.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 786) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:20.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 786 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:30.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:20.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:48.839+0000
2019-09-04T06:31:20.838+0000 D2 ASIO [Replication] Request 786 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) }
2019-09-04T06:31:20.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:20.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:20.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 786) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) }
2019-09-04T06:31:20.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:31:20.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:22.838Z
2019-09-04T06:31:20.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:48.839+0000
2019-09-04T06:31:20.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:20.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 787) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:20.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 787 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:30.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:20.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:48.839+0000
2019-09-04T06:31:20.839+0000 D2 ASIO [Replication] Request 787 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) }
2019-09-04T06:31:20.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:31:20.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:20.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 787) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) }
2019-09-04T06:31:20.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:31:20.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:31:30.305+0000
2019-09-04T06:31:20.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:31:30.968+0000
2019-09-04T06:31:20.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:31:20.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:22.839Z
2019-09-04T06:31:20.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:20.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:50.839+0000
2019-09-04T06:31:20.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:50.839+0000
2019-09-04T06:31:20.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:20.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:20.868+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:20.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:20.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:20.968+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:21.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
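The exchange above is one full heartbeat round: requests 786 and 787 go to the other two members, the reply from the primary (state: 1, cmodb802) postpones the election timeout, and the next heartbeats are scheduled two seconds out (06:31:22.838Z and 06:31:22.839Z) while the election timeout is pushed roughly ten seconds ahead (06:31:30.968). Those intervals match the 4.2 defaults and can be read back from the replica set configuration; a small shell sketch (the field names are the real settings, the commented values are assumptions consistent with this log's timing):

// Check the timing knobs behind the heartbeat and election-timeout entries.
var cfg = rs.conf();
printjson({
    heartbeatIntervalMillis: cfg.settings.heartbeatIntervalMillis, // default 2000
    electionTimeoutMillis: cfg.settings.electionTimeoutMillis      // default 10000
});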
2019-09-04T06:31:21.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 9561EE202ACEAFF4123AAD3F044CD30AE92F7B44), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:21.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:31:21.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 9561EE202ACEAFF4123AAD3F044CD30AE92F7B44), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:21.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 9561EE202ACEAFF4123AAD3F044CD30AE92F7B44), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:21.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239) }
2019-09-04T06:31:21.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 9561EE202ACEAFF4123AAD3F044CD30AE92F7B44), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:21.068+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:21.168+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:21.173+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:21.173+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:21.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:21.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:21.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:21.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:21.242+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:21.242+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:21.242+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578679, 1)
2019-09-04T06:31:21.242+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11541
2019-09-04T06:31:21.242+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:21.242+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:21.242+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11541
2019-09-04T06:31:21.243+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11544
2019-09-04T06:31:21.243+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11544
2019-09-04T06:31:21.243+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578679, 1), t: 1 }({ ts: Timestamp(1567578679, 1), t: 1 })
2019-09-04T06:31:21.269+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:21.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:21.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:21.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:21.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:21.369+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:21.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:21.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:21.469+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:21.569+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:21.590+0000 D2 COMMAND [conn126] run command admin.$cmd { ping: 1, $db: "admin" }
2019-09-04T06:31:21.590+0000 I COMMAND [conn126] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:21.601+0000 D2 COMMAND [conn182] run command config.$cmd { ping: 1.0, lsid: { id: UUID("02492cc9-cb3a-4cd4-9c2e-0d7430e82ce2") }, $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:31:21.601+0000 I COMMAND [conn182] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("02492cc9-cb3a-4cd4-9c2e-0d7430e82ce2") }, $clusterTime: { clusterTime: Timestamp(1567578619, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
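The ReplBatcher metadata lines describe local.oplog.rs as a capped collection of 1073741824 bytes, which is exactly the oplogSizeMB: 1024 configured at startup, with autoIndexId: false as is usual for the oplog. The same options are visible to a client; a hedged shell sketch (getCollectionInfos is the real 4.2 helper, the connection itself is assumed):

// Confirm the oplog options that the ReplBatcher metadata lines report.
var infos = db.getSiblingDB("local").getCollectionInfos({ name: "oplog.rs" });
printjson(infos[0].options); // expect { capped: true, size: 1073741824, autoIndexId: false }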
$readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:21.634+0000 D2 COMMAND [conn273] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578678, 1), signature: { hash: BinData(0, BB6464574AD1B916A93154DCD4D10DFFEF24752B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:21.634+0000 D1 REPL [conn273] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578679, 1), t: 1 } 2019-09-04T06:31:21.634+0000 D3 REPL [conn273] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.644+0000 2019-09-04T06:31:21.650+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45770 #289 (84 connections now open) 2019-09-04T06:31:21.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:21.650+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42132 #290 (85 connections now open) 2019-09-04T06:31:21.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:21.650+0000 D2 COMMAND [conn289] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:21.650+0000 D2 COMMAND [conn290] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:21.650+0000 I NETWORK [conn289] received client metadata from 10.108.2.72:45770 conn289: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:21.650+0000 I NETWORK [conn290] received client metadata from 10.108.2.48:42132 conn290: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:21.650+0000 I COMMAND [conn289] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 
2019-09-04T06:31:21.650+0000 I COMMAND [conn290] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:21.651+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:21.651+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:21.651+0000 D2 COMMAND [conn281] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578673, 1), signature: { hash: BinData(0, 9F3CDA4E8B339EB390337989064218FBF78EF14F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:21.651+0000 D1 REPL [conn281] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578679, 1), t: 1 } 2019-09-04T06:31:21.651+0000 D3 REPL [conn281] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.661+0000 2019-09-04T06:31:21.652+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52186 #291 (86 connections now open) 2019-09-04T06:31:21.652+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:21.652+0000 D2 COMMAND [conn291] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:21.652+0000 I NETWORK [conn291] received client metadata from 10.108.2.73:52186 conn291: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:21.652+0000 I COMMAND [conn291] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:21.652+0000 D2 COMMAND [conn291] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { 
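These config.settings reads are the crux of this log. Each one carries readConcern { level: "majority", afterOpTime: { ..., t: 92 } } because the requesting router last saw the config server replica set in term 92, while every snapshot this node can serve is from term 1 (current snapshot: { ts: Timestamp(1567578679, 1), t: 1 }). OpTimes compare term first, then timestamp, which is why the wait does not complete even though the snapshot's timestamp is newer than the awaited one; waitUntilOpTime therefore blocks for the full maxTimeMS of 30000 ms, producing the MaxTimeMSExpired assertions that follow. (A plausible reading, not stated by the log itself: the routers hold stale $configServerState from before this config server replica set was rebuilt at term 1.) A shell sketch that mirrors the blocked read; afterOpTime is normally injected by mongos and is spelled out here only to reproduce the logged command:

// Re-issue the logged balancer-settings read. With t: 92 ahead of any
// term-1 snapshot this node can produce, the wait runs out maxTimeMS.
var res = db.getSiblingDB("config").runCommand({
    find: "settings",
    filter: { _id: "balancer" },
    limit: 1,
    readConcern: { level: "majority",
                   afterOpTime: { ts: Timestamp(1566459168, 1), t: NumberLong(92) } },
    maxTimeMS: 30000
});
printjson(res); // expect ok: 0, code 50 (MaxTimeMSExpired) after roughly 30 s, as in the 30045ms entry below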
2019-09-04T06:31:21.652+0000 D2 COMMAND [conn291] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 2D7FBB04A997F5BB61222D22B1E0478572811EE3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:31:21.652+0000 D1 REPL [conn291] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578679, 1), t: 1 }
2019-09-04T06:31:21.652+0000 D3 REPL [conn291] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.662+0000
2019-09-04T06:31:21.662+0000 I COMMAND [conn265] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 31B748279854B89A75FE3C6C5E99D3DE120BDB4B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:31:21.662+0000 I COMMAND [conn266] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, D3B29E27081E353619E4BBAFF34512E8BADA791D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:31:21.662+0000 D1 - [conn265] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:31:21.662+0000 D1 - [conn266] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:31:21.662+0000 W - [conn265] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:21.662+0000 W - [conn266] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:21.663+0000 I COMMAND [conn267] Command on database config timed out waiting for read concern to be satisfied.
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578648, 1), signature: { hash: BinData(0, F12E75CCCE0218CFE0AA364AB06192226B38C39A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:21.663+0000 D1 - [conn267] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:21.663+0000 W - [conn267] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:21.669+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:21.673+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:21.673+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:21.679+0000 I - [conn266] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s"
:"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : 
"45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] 
mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:21.679+0000 D1 COMMAND [conn266] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, D3B29E27081E353619E4BBAFF34512E8BADA791D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:21.679+0000 D1 - [conn266] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:21.679+0000 W - [conn266] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:21.696+0000 I - [conn267] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},
{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", 
"elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:21.696+0000 D1 COMMAND [conn267] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578648, 1), signature: { hash: BinData(0, F12E75CCCE0218CFE0AA364AB06192226B38C39A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:21.696+0000 D1 - [conn267] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:21.696+0000 W - [conn267] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:21.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:21.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:21.737+0000 I - [conn267] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:21.737+0000 W COMMAND [conn267] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:21.738+0000 I COMMAND [conn267] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578648, 1), signature: { hash: BinData(0, F12E75CCCE0218CFE0AA364AB06192226B38C39A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30045ms 2019-09-04T06:31:21.738+0000 D2 NETWORK [conn267] Session from 10.108.2.54:49200 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:21.738+0000 I NETWORK [conn267] end connection 10.108.2.54:49200 (85 connections now open) 2019-09-04T06:31:21.738+0000 I - [conn265] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:21.738+0000 D1 COMMAND [conn265] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { 
level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 31B748279854B89A75FE3C6C5E99D3DE120BDB4B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:21.738+0000 D1 - [conn265] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:21.738+0000 W - [conn265] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:21.742+0000 D2 COMMAND [conn271] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 2D7FBB04A997F5BB61222D22B1E0478572811EE3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:21.743+0000 D1 REPL [conn271] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578679, 1), t: 1 } 2019-09-04T06:31:21.743+0000 D3 REPL [conn271] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.752+0000 2019-09-04T06:31:21.753+0000 I - [conn266] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:21.753+0000 W COMMAND [conn266] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:21.753+0000 I COMMAND [conn266] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578642, 1), signature: { hash: BinData(0, D3B29E27081E353619E4BBAFF34512E8BADA791D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:31:21.753+0000 D2 NETWORK [conn266] Session from 10.108.2.72:45752 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:21.753+0000 I NETWORK [conn266] end connection 10.108.2.72:45752 (84 connections now open) 2019-09-04T06:31:21.756+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:21.756+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:21.769+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:21.773+0000 I - [conn265] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13Sche
duleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" 
}, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:21.773+0000 W COMMAND [conn265] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:31:21.773+0000 I COMMAND [conn265] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 31B748279854B89A75FE3C6C5E99D3DE120BDB4B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30087ms 2019-09-04T06:31:21.773+0000 D2 NETWORK [conn265] Session from 10.108.2.48:42106 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:21.773+0000 I NETWORK [conn265] end connection 10.108.2.48:42106 (83 connections now open) 2019-09-04T06:31:21.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:21.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:21.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:21.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:21.869+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:21.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:21.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:21.969+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:22.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:22.043+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50150 #292 (84 connections now open) 2019-09-04T06:31:22.043+0000 D3 EXECUTOR
[listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:22.043+0000 D2 COMMAND [conn292] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:22.043+0000 I NETWORK [conn292] received client metadata from 10.108.2.50:50150 conn292: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:22.043+0000 I COMMAND [conn292] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:22.044+0000 D2 COMMAND [conn292] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 488931F6E677F94ED5295475628B5C49496549EB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:22.044+0000 D1 REPL [conn292] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578679, 1), t: 1 } 2019-09-04T06:31:22.044+0000 D3 REPL [conn292] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:52.054+0000 2019-09-04T06:31:22.070+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:22.150+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:22.150+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:22.170+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:22.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:22.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:22.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 6B7973242EEB8928AAB5AE704BBFD7D498CB72C8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:22.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:31:22.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, 
hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 6B7973242EEB8928AAB5AE704BBFD7D498CB72C8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:22.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 6B7973242EEB8928AAB5AE704BBFD7D498CB72C8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:22.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239) } 2019-09-04T06:31:22.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 6B7973242EEB8928AAB5AE704BBFD7D498CB72C8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:22.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:22.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:22.242+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:22.242+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:22.242+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578679, 1) 2019-09-04T06:31:22.242+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11572 2019-09-04T06:31:22.242+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:22.242+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:22.242+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11572 2019-09-04T06:31:22.243+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11575 2019-09-04T06:31:22.243+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11575 2019-09-04T06:31:22.243+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578679, 1), t: 1 }({ ts: Timestamp(1567578679, 1), t: 1 }) 2019-09-04T06:31:22.256+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:22.256+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:22.270+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:22.301+0000 D2 COMMAND [conn26] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:22.302+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:22.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:22.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:22.370+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:22.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:22.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:22.470+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:22.570+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:22.584+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51818 #293 (85 connections now open) 2019-09-04T06:31:22.584+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:22.585+0000 D2 COMMAND [conn293] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:22.585+0000 I NETWORK [conn293] received client metadata from 10.108.2.74:51818 conn293: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:22.585+0000 I COMMAND [conn293] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:22.598+0000 I COMMAND [conn268] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 05433FFC1F19D7D8CE5BDF35FCE276DA0CD821FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:31:22.598+0000 D1 - [conn268] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:22.598+0000 W - [conn268] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:22.615+0000 I - [conn268] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:22.615+0000 D1 COMMAND [conn268] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 05433FFC1F19D7D8CE5BDF35FCE276DA0CD821FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:22.615+0000 D1 - [conn268] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:22.615+0000 W - [conn268] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:22.635+0000 I - [conn268] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"
b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : 
"DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) 
[0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:22.635+0000 W COMMAND [conn268] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:31:22.635+0000 I COMMAND [conn268] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 05433FFC1F19D7D8CE5BDF35FCE276DA0CD821FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:31:22.635+0000 D2 NETWORK [conn268] Session from 10.108.2.74:51796 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:22.635+0000 I NETWORK [conn268] end connection 10.108.2.74:51796 (84 connections now open) 2019-09-04T06:31:22.670+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:22.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:22.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:22.771+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:22.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:22.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:22.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:22.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 788) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:22.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 788 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:32.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:22.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:50.839+0000
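
Note on the failure above: the timed-out operation is the router's periodic read of the balancer document (config.settings, _id: "balancer"). It asks for readConcern level "majority" with afterOpTime { ts: Timestamp(1566459161, 3), t: 92 }, yet every heartbeat in this log shows the config replica set running in term 1 with lastOpCommitted in term 1. OpTimes compare by term first, so a majority wait on a term-92 opTime cannot complete while the set stays in term 1; the 30000ms maxTimeMS expires instead and the command fails with MaxTimeMSExpired (errCode 50) after roughly 30 seconds. The sketch below reproduces the same read directly against this config server; it is illustrative only, and the connection target is an assumption taken from the host and port in this log.

    // Minimal reproduction sketch (mongo shell). Assumes direct access to
    // cmodb803.togewa.com:27019, the node that wrote this log.
    db = connect("mongodb://cmodb803.togewa.com:27019/config");
    var res = db.runCommand({
        find: "settings",
        filter: { _id: "balancer" },
        limit: 1,
        maxTimeMS: 30000,  // same 30s budget as the logged command
        readConcern: {
            level: "majority",
            // Stale opTime copied from the log; its term (92) exceeds the
            // live set's term (1), so the wait cannot finish in time.
            afterOpTime: { ts: Timestamp(1566459161, 3), t: NumberLong(92) }
        }
    });
    printjson(res);  // expect ok: 0, code: 50 (MaxTimeMSExpired)

Run without the afterOpTime clause, the same command should return the balancer document immediately, which separates a stale-opTime wait from a genuinely slow read.
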
2019-09-04T06:31:22.838+0000 D2 ASIO [Replication] Request 788 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } 2019-09-04T06:31:22.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:22.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:22.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 788) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } 2019-09-04T06:31:22.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:31:22.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:24.838Z 2019-09-04T06:31:22.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:50.839+0000 2019-09-04T06:31:22.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf
of pool replexec 2019-09-04T06:31:22.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 789) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:22.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 789 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:32.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:22.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:50.839+0000 2019-09-04T06:31:22.839+0000 D2 ASIO [Replication] Request 789 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } 2019-09-04T06:31:22.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:31:22.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:22.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 789) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } 2019-09-04T06:31:22.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:31:22.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:31:30.968+0000 2019-09-04T06:31:22.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:31:33.867+0000 2019-09-04T06:31:22.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:31:22.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:24.839Z 2019-09-04T06:31:22.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:22.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:52.839+0000 2019-09-04T06:31:22.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:52.839+0000 2019-09-04T06:31:22.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:22.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:22.871+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:22.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:22.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:22.971+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:23.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:23.063+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:23.063+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:31:22.839+0000 2019-09-04T06:31:23.063+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:31:22.838+0000 2019-09-04T06:31:23.063+0000 D3 REPL [replexec-0] stalest member MemberId(2) date: 2019-09-04T06:31:22.838+0000 2019-09-04T06:31:23.063+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:31:32.838+0000 2019-09-04T06:31:23.063+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:52.839+0000 2019-09-04T06:31:23.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 6B7973242EEB8928AAB5AE704BBFD7D498CB72C8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:23.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:23.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 6B7973242EEB8928AAB5AE704BBFD7D498CB72C8), keyId: 
6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:23.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 6B7973242EEB8928AAB5AE704BBFD7D498CB72C8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:23.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239) } 2019-09-04T06:31:23.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 6B7973242EEB8928AAB5AE704BBFD7D498CB72C8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:23.071+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:23.171+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:23.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:23.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:23.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:23.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:31:23.243+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:23.243+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:23.243+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578679, 1) 2019-09-04T06:31:23.243+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11590 2019-09-04T06:31:23.243+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:23.243+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:23.243+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11590 2019-09-04T06:31:23.243+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11593 2019-09-04T06:31:23.243+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11593 2019-09-04T06:31:23.243+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578679, 1), t: 1 }({ ts: Timestamp(1567578679, 1), t: 1 }) 2019-09-04T06:31:23.271+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:23.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:23.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:23.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:23.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:23.371+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:23.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:23.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:23.471+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:23.555+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:31:23.555+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:31:23.555+0000 D2 COMMAND [conn90] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:23.555+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:31:23.571+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:23.671+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:23.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:23.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:31:23.772+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:23.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:23.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:23.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:23.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:23.872+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:23.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:23.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:23.972+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:24.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:24.072+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:24.141+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:24.141+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:24.156+0000 I COMMAND [conn259] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 05433FFC1F19D7D8CE5BDF35FCE276DA0CD821FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:31:24.156+0000 D1 - [conn259] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:24.156+0000 W - [conn259] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:24.172+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:24.174+0000 I - [conn259] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:24.174+0000 D1 COMMAND [conn259] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 05433FFC1F19D7D8CE5BDF35FCE276DA0CD821FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:24.174+0000 D1 - [conn259] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:24.174+0000 W - [conn259] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:24.196+0000 I - [conn259] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:24.196+0000 W COMMAND [conn259] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:31:24.196+0000 I COMMAND [conn259] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts:
Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578645, 1), signature: { hash: BinData(0, 05433FFC1F19D7D8CE5BDF35FCE276DA0CD821FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30033ms 2019-09-04T06:31:24.197+0000 D2 NETWORK [conn259] Session from 10.108.2.46:40992 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:24.197+0000 I NETWORK [conn259] end connection 10.108.2.46:40992 (83 connections now open) 2019-09-04T06:31:24.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:24.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:24.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 6B7973242EEB8928AAB5AE704BBFD7D498CB72C8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:24.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:31:24.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 6B7973242EEB8928AAB5AE704BBFD7D498CB72C8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:24.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 6B7973242EEB8928AAB5AE704BBFD7D498CB72C8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:24.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239) } 2019-09-04T06:31:24.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 6B7973242EEB8928AAB5AE704BBFD7D498CB72C8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:24.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:24.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
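
At this point the same find has failed twice in a row (conn268, then conn259), each time in about 30 seconds and each time followed by the client closing the connection, while replication itself looks healthy: heartbeats return ok and lastCommittedOpTime keeps advancing in term 1. The requested afterOpTime carries term 92 against a set in term 1 with replicaSetId 5d5e459bac9313827bdd88e9, which suggests the router is holding config-server state from before this replica set was re-initialized. The diagnostic sketch below checks that mismatch; the field paths are standard replSetGetStatus output, and the hostname is again an assumption from the log.

    // Compare the live set's term with the term the routers are waiting on.
    db = connect("mongodb://cmodb803.togewa.com:27019/admin");
    var s = db.adminCommand({ replSetGetStatus: 1 });
    print("current term: " + s.term);          // 1, per the heartbeats above
    printjson(s.optimes.lastCommittedOpTime);  // { ts: ..., t: 1 }
    // The failing reads wait for { ts: Timestamp(1566459161, 3), t: 92 }.
    // A committed term far below the requested term means the majority wait
    // can never finish before maxTimeMS expires.

If that is the situation here, restarting the affected mongos processes so they rediscover the config servers' current opTime is the usual remedy; that step is an inference, not something this log records.
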
Before: 1000000000 Now: 1000000000 2019-09-04T06:31:24.243+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:24.243+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:24.243+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578679, 1) 2019-09-04T06:31:24.243+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11610 2019-09-04T06:31:24.243+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:24.243+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:24.243+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11610 2019-09-04T06:31:24.243+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11613 2019-09-04T06:31:24.244+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11613 2019-09-04T06:31:24.244+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578679, 1), t: 1 }({ ts: Timestamp(1567578679, 1), t: 1 }) 2019-09-04T06:31:24.255+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:24.255+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:24.255+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 790 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:54.255+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, 
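[Note] The replSetUpdatePosition payload above is how this secondary pushes its applied/durable optimes to its sync source whenever progress changes. A hedged mongo-shell sketch (assuming a shell open against any configrs member) that surfaces the same per-member optimes interactively:

    // replSetGetStatus reports the same optime bookkeeping that
    // replSetUpdatePosition carries upstream.
    var st = db.adminCommand({ replSetGetStatus: 1 });
    st.members.forEach(function (m) {
        print(m.name, m.stateStr, "applied:", tojson(m.optime), "durable:", tojson(m.optimeDurable || null));
    });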
2019-09-04T06:31:24.255+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 790 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:54.255+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:24.255+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:49.243+0000
2019-09-04T06:31:24.255+0000 D2 ASIO [RS] Request 790 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) }
2019-09-04T06:31:24.255+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:24.255+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:24.255+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:49.243+0000
2019-09-04T06:31:24.255+0000 D2 ASIO [RS] Request 779 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpApplied: { ts: Timestamp(1567578679, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) }
2019-09-04T06:31:24.255+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpApplied: { ts: Timestamp(1567578679, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:24.255+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:24.255+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:31:24.255+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:33.867+0000
2019-09-04T06:31:24.255+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:35.423+0000
2019-09-04T06:31:24.255+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:24.255+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:52.839+0000
2019-09-04T06:31:24.255+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 791 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:34.255+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578679, 1), t: 1 } }
2019-09-04T06:31:24.255+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:49.243+0000
2019-09-04T06:31:24.272+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:24.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:24.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:24.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:24.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:24.372+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:24.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:24.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:24.472+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:24.573+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:24.641+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:24.641+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:24.673+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:24.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
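[Note] Every pooled client connection re-polls isMaster roughly twice a second (conn5 answers at 06:31:24.219 and again at 06:31:24.719 above), which is what fills a verbose log with these 0ms, lock-free entries. The same request can be issued by hand; a minimal mongo-shell sketch:

    // isMaster answers from memory and takes no locks, hence the 0ms entries.
    db.adminCommand({ isMaster: 1 })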
2019-09-04T06:31:24.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:24.773+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:24.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:24.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:24.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:24.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 792) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:24.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 792 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:34.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:24.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:52.839+0000
2019-09-04T06:31:24.838+0000 D2 ASIO [Replication] Request 792 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) }
2019-09-04T06:31:24.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:24.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:24.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 792) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) }
2019-09-04T06:31:24.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:31:24.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:26.838Z
2019-09-04T06:31:24.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:52.839+0000
2019-09-04T06:31:24.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:24.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 793) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:24.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 793 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:34.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:24.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:52.839+0000
2019-09-04T06:31:24.839+0000 D2 ASIO [Replication] Request 793 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) }
2019-09-04T06:31:24.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:31:24.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:24.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 793) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578679, 1) }
2019-09-04T06:31:24.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:31:24.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:31:35.423+0000
2019-09-04T06:31:24.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:31:35.065+0000
2019-09-04T06:31:24.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:31:24.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:26.839Z
2019-09-04T06:31:24.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:24.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:54.839+0000
2019-09-04T06:31:24.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:54.839+0000
2019-09-04T06:31:24.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:24.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:24.873+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:24.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:24.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:24.973+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:25.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:25.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:25.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:25.049+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36692 #294 (84 connections now open)
2019-09-04T06:31:25.049+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:31:25.049+0000 D2 COMMAND [conn294] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:31:25.049+0000 I NETWORK [conn294] received client metadata from 10.108.2.55:36692 conn294: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:31:25.049+0000 I COMMAND [conn294] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:31:25.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 6B7973242EEB8928AAB5AE704BBFD7D498CB72C8), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:25.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:31:25.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 6B7973242EEB8928AAB5AE704BBFD7D498CB72C8), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:25.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 6B7973242EEB8928AAB5AE704BBFD7D498CB72C8), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:25.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), opTime: { ts: Timestamp(1567578679, 1), t: 1 }, wallTime: new Date(1567578679239) }
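[Note] The replSetHeartbeat exchanges above run on the default two-second cadence (each response is immediately followed by a "Scheduling heartbeat ... at" entry two seconds out), and the responses drive the election-timeout rescheduling seen nearby. A hedged shell sketch to inspect the settings behind that cadence:

    // heartbeatIntervalMillis and electionTimeoutMillis live in the
    // replica-set config; these are the values the scheduler is honouring.
    printjson(rs.conf().settings)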
2019-09-04T06:31:25.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578680, 1), signature: { hash: BinData(0, 6B7973242EEB8928AAB5AE704BBFD7D498CB72C8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:25.065+0000 I COMMAND [conn269] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578651, 1), signature: { hash: BinData(0, BB862498148B907236F82EF2CD87FA263BF3C9C2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:31:25.065+0000 D1 - [conn269] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:31:25.065+0000 W - [conn269] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:25.073+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:25.082+0000 I - [conn269] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
mongod(+0x10FBF24) [0x56174a083f24]
mongod(+0x10FCE0E) [0x56174a084e0e]
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
-----  END BACKTRACE  -----
2019-09-04T06:31:25.082+0000 D1 COMMAND [conn269] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578651, 1), signature: { hash: BinData(0, BB862498148B907236F82EF2CD87FA263BF3C9C2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:25.082+0000 D1 - [conn269] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:31:25.082+0000 W - [conn269] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
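[Note] These backtraces are not crashes: each is printed by DBException::traceIfNeeded (the third frame in every trace), which fires when the server is configured to trace thrown exceptions, so even a routine MaxTimeMSExpired user assertion emits a full stack. If that tracing is too noisy it can be toggled at runtime; a hedged sketch, assuming the traceExceptions server parameter (the runtime counterpart of systemLog.traceAllExceptions) and an admin connection:

    // Stop printing a backtrace for every thrown DBException.
    db.adminCommand({ setParameter: 1, traceExceptions: false })
    // Optionally also return the global log verbosity to its default.
    db.setLogLevel(0)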
2019-09-04T06:31:25.102+0000 I - [conn269] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
mongod(+0xC90D34) [0x561749c18d34]
mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
-----  END BACKTRACE  -----
2019-09-04T06:31:25.102+0000 W COMMAND [conn269] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:31:25.102+0000 I COMMAND [conn269] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578651, 1), signature: { hash: BinData(0, BB862498148B907236F82EF2CD87FA263BF3C9C2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms
2019-09-04T06:31:25.102+0000 D2 NETWORK [conn269] Session from 10.108.2.55:36668 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:25.102+0000 I NETWORK [conn269] end connection 10.108.2.55:36668 (83 connections now open)
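[Note] This 30032ms failure is the same shape as the 30033ms one on conn259 earlier: a mongos balancer read of config.settings { _id: "balancer" } at readConcern majority with an afterOpTime from term 92, which this set (now in term 1) appears unable to ever satisfy, so the wait consumes the whole 30000ms maxTimeMS budget before failing with MaxTimeMSExpired and the client dropping the connection. A hedged shell sketch of the same read without the stale afterOpTime, which should return promptly:

    // The balancer document read, minus the term-92 afterOpTime that
    // cannot be satisfied here.
    db.getSiblingDB("config").runCommand({
        find: "settings",
        filter: { _id: "balancer" },
        limit: 1,
        readConcern: { level: "majority" },
        maxTimeMS: 30000
    })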
2019-09-04T06:31:25.134+0000 D2 ASIO [RS] Request 791 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578685, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578685131), o: { $v: 1, $set: { ping: new Date(1567578685128), up: 2585 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpApplied: { ts: Timestamp(1567578685, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578685, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 1) }
2019-09-04T06:31:25.134+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578685, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578685131), o: { $v: 1, $set: { ping: new Date(1567578685128), up: 2585 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpApplied: { ts: Timestamp(1567578685, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578685, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:25.134+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:25.134+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578685, 1) and ending at ts: Timestamp(1567578685, 1)
2019-09-04T06:31:25.134+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:35.065+0000
2019-09-04T06:31:25.134+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:35.864+0000
2019-09-04T06:31:25.134+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:25.134+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:54.839+0000
2019-09-04T06:31:25.134+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578685, 1), t: 1 }
2019-09-04T06:31:25.134+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:25.134+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:25.134+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578679, 1)
2019-09-04T06:31:25.134+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11630
2019-09-04T06:31:25.134+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:25.134+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:25.134+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11630
2019-09-04T06:31:25.134+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:25.134+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:31:25.134+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:25.134+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578679, 1)
2019-09-04T06:31:25.134+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11633
2019-09-04T06:31:25.134+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578685, 1) }
2019-09-04T06:31:25.134+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:25.134+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:25.134+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11633
2019-09-04T06:31:25.134+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11614
2019-09-04T06:31:25.134+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11614
2019-09-04T06:31:25.134+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11636
2019-09-04T06:31:25.134+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11636
2019-09-04T06:31:25.134+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:25.134+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 11638
2019-09-04T06:31:25.134+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578685, 1)
2019-09-04T06:31:25.135+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578685, 1)
2019-09-04T06:31:25.135+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 11638
2019-09-04T06:31:25.135+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:25.135+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:31:25.135+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11637
2019-09-04T06:31:25.135+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11637
2019-09-04T06:31:25.135+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11640
2019-09-04T06:31:25.135+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11640
2019-09-04T06:31:25.135+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578685, 1), t: 1 }({ ts: Timestamp(1567578685, 1), t: 1 })
2019-09-04T06:31:25.135+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578685, 1)
2019-09-04T06:31:25.135+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11641
2019-09-04T06:31:25.135+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578685, 1) } } ] } sort: {} projection: {}
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
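[Note] The single-operation batch being fetched and applied here is an update to config.mongos: mongos cmodb801 refreshing its ping document (the minvalid and truncate-point bookkeeping around it is part of committing that batch). Once applied, the same entry can be read back from the local oplog; a hedged shell sketch:

    // Most recent oplog entry touching config.mongos (op: "u" is an update).
    db.getSiblingDB("local").getCollection("oplog.rs")
        .find({ ns: "config.mongos" })
        .sort({ $natural: -1 })
        .limit(1)
        .forEach(printjson)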
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and t $eq 1 ts $lt Timestamp(1567578685, 1) Sort: {} Proj: {} =============================
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578685, 1) || First: notFirst: full path: ts
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578685, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $or $and t $eq 1 ts $lt Timestamp(1567578685, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578685, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
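[Note] At D5 verbosity the subplanner logs each branch of this $or against local.replset.minvalid separately; with only the _id index available, every branch rates zero indexed solutions and falls back to a collection scan, which is harmless for a collection holding a single bookkeeping document. The same decision can be confirmed with explain; a hedged sketch:

    // Expect a COLLSCAN winning plan, matching the planner entries above.
    db.getSiblingDB("local").getCollection("replset.minvalid").find({
        $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578685, 1) } } ]
    }).explain()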
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578685, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:25.135+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11641
2019-09-04T06:31:25.135+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:25.135+0000 D3 STORAGE [repl-writer-worker-2] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:25.135+0000 D3 REPL [repl-writer-worker-2] applying op: { ts: Timestamp(1567578685, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578685131), o: { $v: 1, $set: { ping: new Date(1567578685128), up: 2585 } } }, oplog application mode: Secondary
2019-09-04T06:31:25.135+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578685, 1)
2019-09-04T06:31:25.135+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 11643
2019-09-04T06:31:25.135+0000 D2 QUERY [repl-writer-worker-2] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:31:25.135+0000 D4 WRITE [repl-writer-worker-2] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:31:25.135+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 11643
2019-09-04T06:31:25.135+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:25.135+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578685, 1), t: 1 }({ ts: Timestamp(1567578685, 1), t: 1 })
2019-09-04T06:31:25.135+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578685, 1)
2019-09-04T06:31:25.135+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11642
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:25.135+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:25.135+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
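
The minvalid bookkeeping above is planned as two sub-queries because local.replset.minvalid carries only the _id index, so every branch of the $or falls back to a COLLSCAN. The same plan can be reproduced from a client with an explain. A minimal sketch, assuming direct access to this member (hostname/port taken from this log, and assuming nothing blocks a direct connection, since authorization is disabled here):

    from bson.timestamp import Timestamp
    from pymongo import MongoClient
    from pymongo.read_preferences import Nearest

    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)

    # Same $or filter rsSync-0 plans above; with only the _id index on
    # local.replset.minvalid, the winning plan is a collection scan.
    explain = client.local.command(
        "explain",
        {
            "find": "replset.minvalid",
            "filter": {
                "$or": [
                    {"t": {"$lt": 1}},
                    {"t": 1, "ts": {"$lt": Timestamp(1567578685, 1)}},
                ]
            },
        },
        read_preference=Nearest(),  # allow running on a secondary
        verbosity="queryPlanner",
    )
    print(explain["queryPlanner"]["winningPlan"])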
2019-09-04T06:31:25.135+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11642
2019-09-04T06:31:25.135+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578685, 1)
2019-09-04T06:31:25.135+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11646
2019-09-04T06:31:25.135+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11646
2019-09-04T06:31:25.135+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578685, 1), t: 1 }({ ts: Timestamp(1567578685, 1), t: 1 })
2019-09-04T06:31:25.136+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:25.136+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578685, 1), t: 1 }, appliedWallTime: new Date(1567578685131), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:25.136+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 794 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:55.136+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578685, 1), t: 1 }, appliedWallTime: new Date(1567578685131), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:25.136+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:55.135+0000
2019-09-04T06:31:25.136+0000 D2 ASIO [RS] Request 794 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578685, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 1) }
2019-09-04T06:31:25.136+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578679, 1), $clusterTime: { clusterTime: Timestamp(1567578685, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:25.136+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:25.136+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:55.136+0000
2019-09-04T06:31:25.136+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578685, 1), t: 1 }
2019-09-04T06:31:25.136+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 795 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:35.136+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578679, 1), t: 1 } }
2019-09-04T06:31:25.136+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:55.136+0000
2019-09-04T06:31:25.139+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:31:25.139+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:25.139+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 1), t: 1 }, durableWallTime: new Date(1567578685131), appliedOpTime: { ts: Timestamp(1567578685, 1), t: 1 }, appliedWallTime: new Date(1567578685131), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:25.139+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 796 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:55.139+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 1), t: 1 }, durableWallTime: new Date(1567578685131), appliedOpTime: { ts: Timestamp(1567578685, 1), t: 1 }, appliedWallTime: new Date(1567578685131), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578679, 1), t: 1 }, lastCommittedWall: new Date(1567578679239), lastOpVisible: { ts: Timestamp(1567578679, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:25.139+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:55.136+0000
2019-09-04T06:31:25.139+0000 D2 ASIO [RS] Request 796 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 1), t: 1 }, lastCommittedWall: new Date(1567578685131), lastOpVisible: { ts: Timestamp(1567578685, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 1), $clusterTime: { clusterTime: Timestamp(1567578685, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 1) }
2019-09-04T06:31:25.139+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:25.139+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 1), t: 1 }, lastCommittedWall: new Date(1567578685131), lastOpVisible: { ts: Timestamp(1567578685, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 1), $clusterTime: { clusterTime: Timestamp(1567578685, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:25.139+0000 D2 ASIO [RS] Request 795 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 1), t: 1 }, lastCommittedWall: new Date(1567578685131), lastOpVisible: { ts: Timestamp(1567578685, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578685, 1), t: 1 }, lastCommittedWall: new Date(1567578685131), lastOpApplied: { ts: Timestamp(1567578685, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 1), $clusterTime: { clusterTime: Timestamp(1567578685, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 1) }
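
Requests 795/797 here are this secondary's oplog fetcher issuing awaitData getMore calls against the sync source's local.oplog.rs; an empty nextBatch just means nothing new arrived within maxTimeMS. A rough driver-level analogue of that loop, as a sketch only (the real fetcher adds internal fields such as term and lastKnownCommittedOpTime that drivers do not send; host and values taken from the log):

    from bson.timestamp import Timestamp
    from pymongo import CursorType, MongoClient

    client = MongoClient("cmodb804.togewa.com", 27019, directConnection=True)
    oplog = client.local["oplog.rs"]

    # Tailable, awaitData cursor over the oplog, resuming after the last
    # applied optime; max_await_time_ms mirrors the maxTimeMS: 5000 above.
    cursor = oplog.find(
        {"ts": {"$gt": Timestamp(1567578685, 1)}},
        cursor_type=CursorType.TAILABLE_AWAIT,
        max_await_time_ms=5000,
    )
    for op in cursor:
        print(op["ts"], op["op"], op["ns"])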
2019-09-04T06:31:25.139+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 1), t: 1 }, lastCommittedWall: new Date(1567578685131), lastOpVisible: { ts: Timestamp(1567578685, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578685, 1), t: 1 }, lastCommittedWall: new Date(1567578685131), lastOpApplied: { ts: Timestamp(1567578685, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 1), $clusterTime: { clusterTime: Timestamp(1567578685, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:25.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:25.139+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578685, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a3d02d1a496712d721d'), operName: "", parentOperId: "5d6f5a3d02d1a496712d721b" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 1), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578685, 1), t: 1 } }, $db: "config" }
2019-09-04T06:31:25.139+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:25.139+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:25.139+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f5a3d02d1a496712d721b|5d6f5a3d02d1a496712d721d
2019-09-04T06:31:25.139+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:31:25.139+0000 D1 REPL [conn21] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1567578685, 1), t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578679, 1), t: 1 }
2019-09-04T06:31:25.139+0000 D3 REPL [conn21] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:55.149+0000
2019-09-04T06:31:25.139+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:55.139+0000
2019-09-04T06:31:25.139+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578685, 1), t: 1 }, 2019-09-04T06:31:25.131+0000
2019-09-04T06:31:25.139+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578685, 1), t: 1 }, 2019-09-04T06:31:25.131+0000
2019-09-04T06:31:25.139+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578680, 1)
2019-09-04T06:31:25.139+0000 D3 REPL [conn21] Got notified of new snapshot: { ts: Timestamp(1567578685, 1), t: 1 }, 2019-09-04T06:31:25.131+0000
2019-09-04T06:31:25.139+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578685, 1), t: 1 } } }
2019-09-04T06:31:25.139+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:31:25.139+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578685, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a3d02d1a496712d721d'), operName: "", parentOperId: "5d6f5a3d02d1a496712d721b" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 1), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578685, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578685, 1)
2019-09-04T06:31:25.139+0000 D2 QUERY [conn21] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1
2019-09-04T06:31:25.139+0000 D3 REPL [conn274] Got notified of new snapshot: { ts: Timestamp(1567578685, 1), t: 1 }, 2019-09-04T06:31:25.131+0000
2019-09-04T06:31:25.139+0000 D3 REPL [conn274] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.071+0000
2019-09-04T06:31:25.139+0000 D3 REPL [conn283] Got notified of new snapshot: { ts: Timestamp(1567578685, 1), t: 1 }, 2019-09-04T06:31:25.131+0000
2019-09-04T06:31:25.139+0000 D3 REPL [conn283] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.238+0000
2019-09-04T06:31:25.139+0000 D3 REPL [conn272] Got notified of new snapshot: { ts: Timestamp(1567578685, 1), t: 1 }, 2019-09-04T06:31:25.131+0000
2019-09-04T06:31:25.139+0000 D3 REPL [conn272] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.222+0000
2019-09-04T06:31:25.139+0000 D3 REPL [conn281] Got notified of new snapshot: { ts: Timestamp(1567578685, 1), t: 1 }, 2019-09-04T06:31:25.131+0000
2019-09-04T06:31:25.139+0000 D3 REPL [conn281] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.661+0000
2019-09-04T06:31:25.139+0000 D3 REPL [conn291] Got notified of new snapshot: { ts: Timestamp(1567578685, 1), t: 1 }, 2019-09-04T06:31:25.131+0000
2019-09-04T06:31:25.139+0000 D3 REPL [conn291] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.662+0000
2019-09-04T06:31:25.139+0000 D3 REPL [conn271] Got notified of new snapshot: { ts: Timestamp(1567578685, 1), t: 1 }, 2019-09-04T06:31:25.131+0000
2019-09-04T06:31:25.139+0000 D3 REPL [conn271] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.752+0000
2019-09-04T06:31:25.139+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:35.864+0000
2019-09-04T06:31:25.140+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:36.184+0000
2019-09-04T06:31:25.140+0000 D3 REPL [conn260] Got notified of new snapshot: { ts: Timestamp(1567578685, 1), t: 1 }, 2019-09-04T06:31:25.131+0000
2019-09-04T06:31:25.140+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 797 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:35.140+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578685, 1), t: 1 } }
2019-09-04T06:31:25.140+0000 D3 REPL [conn260] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.329+0000
2019-09-04T06:31:25.140+0000 D3 REPL [conn287] Got notified of new snapshot: { ts: Timestamp(1567578685, 1), t: 1 }, 2019-09-04T06:31:25.131+0000
2019-09-04T06:31:25.140+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:25.140+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:54.839+0000
2019-09-04T06:31:25.140+0000 D3 REPL [conn287] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.248+0000
2019-09-04T06:31:25.140+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:55.139+0000
2019-09-04T06:31:25.140+0000 D3 REPL [conn286] Got notified of new snapshot: { ts: Timestamp(1567578685, 1), t: 1 }, 2019-09-04T06:31:25.131+0000
2019-09-04T06:31:25.140+0000 D3 REPL [conn286] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.224+0000
2019-09-04T06:31:25.140+0000 D3 REPL [conn264] Got notified of new snapshot: { ts: Timestamp(1567578685, 1), t: 1 }, 2019-09-04T06:31:25.131+0000
2019-09-04T06:31:25.140+0000 D3 REPL [conn264] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.083+0000
2019-09-04T06:31:25.140+0000 D3 REPL [conn273] Got notified of new snapshot: { ts: Timestamp(1567578685, 1), t: 1 }, 2019-09-04T06:31:25.131+0000
2019-09-04T06:31:25.140+0000 D3 REPL [conn273] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.644+0000
2019-09-04T06:31:25.140+0000 D3 REPL [conn292] Got notified of new snapshot: { ts: Timestamp(1567578685, 1), t: 1 }, 2019-09-04T06:31:25.131+0000
2019-09-04T06:31:25.140+0000 D3 REPL [conn292] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:52.054+0000
2019-09-04T06:31:25.140+0000 D3 REPL [conn279] Got notified of new snapshot: { ts: Timestamp(1567578685, 1), t: 1 }, 2019-09-04T06:31:25.131+0000
2019-09-04T06:31:25.140+0000 D3 REPL [conn279] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.709+0000
2019-09-04T06:31:25.140+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578685, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a3d02d1a496712d721d'), operName: "", parentOperId: "5d6f5a3d02d1a496712d721b" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 1), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578685, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:31:25.140+0000 D3 REPL [conn270] Got notified of new snapshot: { ts: Timestamp(1567578685, 1), t: 1 }, 2019-09-04T06:31:25.131+0000
2019-09-04T06:31:25.140+0000 D3 REPL [conn270] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.239+0000
2019-09-04T06:31:25.153+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:25.153+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
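
The conn21 trace above is a mongos-originated read of the balancer settings document with readConcern "majority" plus an afterOpTime barrier: the server blocks in waitUntilOpTime until the committed snapshot reaches that optime, then answers with an EOF plan because config.settings does not exist yet. The driver-visible part of that request looks roughly like the sketch below; the afterOpTime, tracking_info, $replData, and $configServerState fields are internal mongos/config-server additions a normal driver never sends.

    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern
    from pymongo.read_preferences import Nearest

    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
    settings = client.config.get_collection(
        "settings",
        read_concern=ReadConcern("majority"),
        read_preference=Nearest(),  # matches $readPreference: { mode: "nearest" }
    )
    # Returns None here: the log shows config.settings does not exist yet.
    print(settings.find_one({"_id": "balancer"}))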
2019-09-04T06:31:25.174+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:25.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:25.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:25.234+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578685, 1)
2019-09-04T06:31:25.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:25.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:25.274+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:25.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:25.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:25.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:25.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:25.374+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:25.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:25.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:25.474+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:25.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:25.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:25.574+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:25.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:25.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:25.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:25.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:25.674+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:25.685+0000 D2 ASIO [RS] Request 797 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578685, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578685678), o: { $v: 1, $set: { ping: new Date(1567578685677) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 1), t: 1 }, lastCommittedWall: new Date(1567578685131), lastOpVisible: { ts: Timestamp(1567578685, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578685, 1), t: 1 }, lastCommittedWall: new Date(1567578685131), lastOpApplied: { ts: Timestamp(1567578685, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 1), $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) }
2019-09-04T06:31:25.685+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578685, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578685678), o: { $v: 1, $set: { ping: new Date(1567578685677) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 1), t: 1 }, lastCommittedWall: new Date(1567578685131), lastOpVisible: { ts: Timestamp(1567578685, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578685, 1), t: 1 }, lastCommittedWall: new Date(1567578685131), lastOpApplied: { ts: Timestamp(1567578685, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 1), $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:25.685+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:25.685+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578685, 2) and ending at ts: Timestamp(1567578685, 2)
2019-09-04T06:31:25.685+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:36.184+0000
2019-09-04T06:31:25.685+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:36.645+0000
2019-09-04T06:31:25.685+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:25.685+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:54.839+0000
2019-09-04T06:31:25.685+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:25.685+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:25.685+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578685, 1)
2019-09-04T06:31:25.685+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11661
2019-09-04T06:31:25.685+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:25.685+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:25.685+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11661
2019-09-04T06:31:25.686+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:25.686+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:25.686+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:31:25.686+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578685, 1)
2019-09-04T06:31:25.686+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11664
2019-09-04T06:31:25.686+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578685, 2) }
2019-09-04T06:31:25.686+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:25.686+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:25.686+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11664
2019-09-04T06:31:25.685+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578685, 2), t: 1 }
2019-09-04T06:31:25.686+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11647
2019-09-04T06:31:25.686+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11647
2019-09-04T06:31:25.686+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11667
2019-09-04T06:31:25.686+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11667
2019-09-04T06:31:25.686+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:25.686+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 11669
2019-09-04T06:31:25.686+0000 D4 STORAGE [repl-writer-worker-0] inserting record with timestamp Timestamp(1567578685, 2)
2019-09-04T06:31:25.686+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578685, 2)
2019-09-04T06:31:25.686+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 11669
2019-09-04T06:31:25.686+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:25.686+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:31:25.686+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11668
2019-09-04T06:31:25.686+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11668
2019-09-04T06:31:25.686+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11671
2019-09-04T06:31:25.686+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11671
2019-09-04T06:31:25.686+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578685, 2), t: 1 }({ ts: Timestamp(1567578685, 2), t: 1 })
2019-09-04T06:31:25.686+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578685, 2)
2019-09-04T06:31:25.686+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11672
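
The truncate-after-point and minvalid writes above are the crash-recovery bookkeeping a secondary keeps in tiny single-document collections in the local database: the truncate point is non-zero while a batch is mid-flight, and minvalid is the optime the node must replay through before it is consistent. Both can be inspected directly; a sketch, assuming the 4.2 collection names and the same connection assumptions as earlier:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
    local = client.local

    # Batch boundary marker (assumed name as in 4.x): non-zero ts means a
    # replication batch was in progress when the document was last written.
    print(local["replset.oplogTruncateAfterPoint"].find_one())
    # Consistency watermark matching the "setting minvalid" entries above.
    print(local["replset.minvalid"].find_one())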
2019-09-04T06:31:25.686+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578685, 2) } } ] } sort: {} projection: {}
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578685, 2) Sort: {} Proj: {} =============================
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578685, 2) || First: notFirst: full path: ts
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578685, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578685, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578685, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:25.686+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578685, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:25.686+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11672
2019-09-04T06:31:25.686+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:25.686+0000 D3 STORAGE [repl-writer-worker-15] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:25.686+0000 D3 REPL [repl-writer-worker-15] applying op: { ts: Timestamp(1567578685, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578685678), o: { $v: 1, $set: { ping: new Date(1567578685677) } } }, oplog application mode: Secondary
2019-09-04T06:31:25.686+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578685, 2)
2019-09-04T06:31:25.686+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 11674
2019-09-04T06:31:25.686+0000 D2 QUERY [repl-writer-worker-15] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }
2019-09-04T06:31:25.687+0000 D4 WRITE [repl-writer-worker-15] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:31:25.687+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 11674
2019-09-04T06:31:25.687+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:25.687+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578685, 2), t: 1 }({ ts: Timestamp(1567578685, 2), t: 1 })
2019-09-04T06:31:25.687+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578685, 2)
2019-09-04T06:31:25.687+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11673
2019-09-04T06:31:25.687+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
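
In the applied update op above, o2 carries the target document's _id and o carries the modifier, and "Using idhack" means the _id lookup bypasses the query planner entirely. On the write's origin, the operation that produced this oplog entry is roughly equivalent to the driver call sketched below (illustrative only: secondaries replay the op internally, never through a driver, and the originating node here is assumed):

    from datetime import datetime, timezone
    from pymongo import MongoClient

    # Assumed origin: a node writing through the cluster, e.g. the mongos at
    # cmodb801.togewa.com:27017 seen elsewhere in this log.
    client = MongoClient("cmodb801.togewa.com", 27017)
    client.config.lockpings.update_one(
        {"_id": "cmodb807.togewa.com:27018:1566460180:7657529699693886924"},  # o2
        # o: { $set: { ping: new Date(1567578685677) } }
        {"$set": {"ping": datetime(2019, 9, 4, 6, 31, 25, 677000, tzinfo=timezone.utc)}},
    )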
2019-09-04T06:31:25.687+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:25.687+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:31:25.687+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:25.687+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:25.687+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:25.687+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11673
2019-09-04T06:31:25.687+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578685, 2)
2019-09-04T06:31:25.687+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:25.687+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11677
2019-09-04T06:31:25.687+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 1), t: 1 }, durableWallTime: new Date(1567578685131), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 1), t: 1 }, lastCommittedWall: new Date(1567578685131), lastOpVisible: { ts: Timestamp(1567578685, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:25.687+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11677
2019-09-04T06:31:25.687+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578685, 2), t: 1 }({ ts: Timestamp(1567578685, 2), t: 1 })
2019-09-04T06:31:25.687+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 798 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:55.687+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 1), t: 1 }, durableWallTime: new Date(1567578685131), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 1), t: 1 }, lastCommittedWall: new Date(1567578685131), lastOpVisible: { ts: Timestamp(1567578685, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:25.687+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:55.687+0000
2019-09-04T06:31:25.687+0000 D2 ASIO [RS] Request 798 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 1), t: 1 }, lastCommittedWall: new Date(1567578685131), lastOpVisible: { ts: Timestamp(1567578685, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 1), $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) }
2019-09-04T06:31:25.687+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 1), t: 1 }, lastCommittedWall: new Date(1567578685131), lastOpVisible: { ts: Timestamp(1567578685, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 1), $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:25.687+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:25.687+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:55.687+0000
2019-09-04T06:31:25.688+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578685, 2), t: 1 }
2019-09-04T06:31:25.688+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 799 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:35.688+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578685, 1), t: 1 } }
2019-09-04T06:31:25.688+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:55.687+0000
2019-09-04T06:31:25.703+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:31:25.703+0000 D2 ASIO [RS] Request 799 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpApplied: { ts: Timestamp(1567578685, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) }
2019-09-04T06:31:25.703+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:25.703+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpApplied: { ts: Timestamp(1567578685, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:25.703+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 1), t: 1 }, lastCommittedWall: new Date(1567578685131), lastOpVisible: { ts: Timestamp(1567578685, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:25.703+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 800 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:55.703+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, durableWallTime: new Date(1567578679239), appliedOpTime: { ts: Timestamp(1567578679, 1), t: 1 }, appliedWallTime: new Date(1567578679239), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 1), t: 1 }, lastCommittedWall: new Date(1567578685131), lastOpVisible: { ts: Timestamp(1567578685, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:25.703+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:25.703+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:31:25.703+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:55.703+0000
2019-09-04T06:31:25.703+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578685, 2), t: 1 }, 2019-09-04T06:31:25.678+0000
2019-09-04T06:31:25.703+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578685, 2), t: 1 }, 2019-09-04T06:31:25.678+0000
2019-09-04T06:31:25.703+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578680, 2)
2019-09-04T06:31:25.703+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:36.645+0000
2019-09-04T06:31:25.703+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:36.621+0000
2019-09-04T06:31:25.703+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:25.703+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:54.839+0000
2019-09-04T06:31:25.703+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 801 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:35.703+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578685, 2), t: 1 } }
2019-09-04T06:31:25.703+0000 D3 REPL [conn272] Got notified of new snapshot: { ts: Timestamp(1567578685, 2), t: 1 }, 2019-09-04T06:31:25.678+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn272] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.222+0000
2019-09-04T06:31:25.703+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:31:55.703+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn291] Got notified of new snapshot: { ts: Timestamp(1567578685, 2), t: 1 }, 2019-09-04T06:31:25.678+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn291] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.662+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn264] Got notified of new snapshot: { ts: Timestamp(1567578685, 2), t: 1 }, 2019-09-04T06:31:25.678+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn264] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.083+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn292] Got notified of new snapshot: { ts: Timestamp(1567578685, 2), t: 1 }, 2019-09-04T06:31:25.678+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn292] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:52.054+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn279] Got notified of new snapshot: { ts: Timestamp(1567578685, 2), t: 1 }, 2019-09-04T06:31:25.678+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn279] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.709+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn270] Got notified of new snapshot: { ts: Timestamp(1567578685, 2), t: 1 }, 2019-09-04T06:31:25.678+0000
2019-09-04T06:31:25.703+0000 D2 ASIO [RS] Request 800 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) }
2019-09-04T06:31:25.703+0000 D3 REPL [conn270] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.239+0000
2019-09-04T06:31:25.703+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:25.703+0000 D3 REPL [conn286] Got notified of new snapshot: { ts: Timestamp(1567578685, 2), t: 1 }, 2019-09-04T06:31:25.678+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn286] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.224+0000
2019-09-04T06:31:25.703+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:25.703+0000 D3 REPL [conn273] Got notified of new snapshot: { ts: Timestamp(1567578685, 2), t: 1 }, 2019-09-04T06:31:25.678+0000
2019-09-04T06:31:25.703+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:55.703+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn273] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.644+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn281] Got notified of new snapshot: { ts: Timestamp(1567578685, 2), t: 1 }, 2019-09-04T06:31:25.678+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn281] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.661+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn287] Got notified of new snapshot: { ts: Timestamp(1567578685, 2), t: 1 }, 2019-09-04T06:31:25.678+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn287] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.248+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn283] Got notified of new snapshot: { ts: Timestamp(1567578685, 2), t: 1 }, 2019-09-04T06:31:25.678+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn283] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.238+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn271] Got notified of new snapshot: { ts: Timestamp(1567578685, 2), t: 1 }, 2019-09-04T06:31:25.678+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn271] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.752+0000
2019-09-04T06:31:25.703+0000 D3 REPL [conn260] Got notified of new snapshot: { ts: Timestamp(1567578685, 2), t: 1 }, 2019-09-04T06:31:25.678+0000
Timestamp(1567578685, 2), t: 1 }, 2019-09-04T06:31:25.678+0000 2019-09-04T06:31:25.703+0000 D3 REPL [conn260] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.329+0000 2019-09-04T06:31:25.703+0000 D3 REPL [conn274] Got notified of new snapshot: { ts: Timestamp(1567578685, 2), t: 1 }, 2019-09-04T06:31:25.678+0000 2019-09-04T06:31:25.704+0000 D3 REPL [conn274] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.071+0000 2019-09-04T06:31:25.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:25.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:25.774+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:25.786+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578685, 2) 2019-09-04T06:31:25.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:25.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:25.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:25.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:25.875+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:25.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:25.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:25.975+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:26.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:26.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:26.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:26.075+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:26.139+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:26.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:26.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:26.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:26.175+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:26.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:26.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:26.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 
6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:26.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:31:26.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:26.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:26.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), opTime: { ts: Timestamp(1567578685, 2), t: 1 }, wallTime: new Date(1567578685678) } 2019-09-04T06:31:26.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:26.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:26.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:31:26.275+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:26.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:26.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:26.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:26.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:26.375+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:26.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:26.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:26.475+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:26.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:26.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:26.576+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:26.602+0000 D2 COMMAND [conn49] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578685, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578685, 2), t: 1 } }, $db: "config" } 2019-09-04T06:31:26.602+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578685, 2), t: 1 } } } 2019-09-04T06:31:26.602+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:31:26.602+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578685, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578685, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578685, 2) 2019-09-04T06:31:26.602+0000 D2 QUERY [conn49] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:31:26.602+0000 I COMMAND [conn49] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578685, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578685, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:31:26.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:26.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:26.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:26.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:26.676+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:26.686+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:26.686+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:26.686+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578685, 2) 2019-09-04T06:31:26.686+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11698 2019-09-04T06:31:26.686+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:26.686+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:26.686+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11698 2019-09-04T06:31:26.687+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11701 2019-09-04T06:31:26.687+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11701 2019-09-04T06:31:26.687+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578685, 2), t: 1 }({ ts: Timestamp(1567578685, 2), t: 1 }) 2019-09-04T06:31:26.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:26.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:26.776+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:26.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:26.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { 
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:26.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:26.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 802) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:26.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 802 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:36.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:26.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:54.839+0000 2019-09-04T06:31:26.838+0000 D2 ASIO [Replication] Request 802 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), opTime: { ts: Timestamp(1567578685, 2), t: 1 }, wallTime: new Date(1567578685678), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) } 2019-09-04T06:31:26.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), opTime: { ts: Timestamp(1567578685, 2), t: 1 }, wallTime: new Date(1567578685678), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:26.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:26.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 802) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), opTime: { ts: Timestamp(1567578685, 2), t: 1 }, wallTime: new Date(1567578685678), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: 
Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) } 2019-09-04T06:31:26.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:31:26.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:28.838Z 2019-09-04T06:31:26.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:54.839+0000 2019-09-04T06:31:26.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:26.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 803) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:26.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 803 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:36.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:26.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:54.839+0000 2019-09-04T06:31:26.839+0000 D2 ASIO [Replication] Request 803 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), opTime: { ts: Timestamp(1567578685, 2), t: 1 }, wallTime: new Date(1567578685678), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) } 2019-09-04T06:31:26.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), opTime: { ts: Timestamp(1567578685, 2), t: 1 }, wallTime: new Date(1567578685678), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:31:26.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:26.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 803) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), opTime: { ts: Timestamp(1567578685, 2), t: 1 }, wallTime: new Date(1567578685678), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) } 2019-09-04T06:31:26.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:31:26.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:31:36.621+0000 2019-09-04T06:31:26.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:31:36.963+0000 2019-09-04T06:31:26.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:31:26.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:28.839Z 2019-09-04T06:31:26.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:26.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:56.839+0000 2019-09-04T06:31:26.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:56.839+0000 2019-09-04T06:31:26.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:26.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:26.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:26.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:26.876+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:26.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:26.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:26.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:26.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:26.976+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:26.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: 
"admin" } 2019-09-04T06:31:26.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:27.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:27.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:27.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:27.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:27.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:27.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:27.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:27.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), opTime: { ts: Timestamp(1567578685, 2), t: 1 }, wallTime: new Date(1567578685678) } 2019-09-04T06:31:27.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:27.076+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:27.139+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:27.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:27.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:27.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:27.176+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:27.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:27.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, 
$db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:27.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:27.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:27.276+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:27.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:27.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:27.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:27.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:27.377+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:27.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:27.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:27.477+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:27.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:27.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:27.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:27.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:27.577+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:27.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:27.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:27.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:27.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:27.677+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:27.686+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:27.686+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:27.686+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578685, 2) 2019-09-04T06:31:27.686+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11725 2019-09-04T06:31:27.686+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:27.686+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 
2019-09-04T06:31:27.686+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11725
2019-09-04T06:31:27.687+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11728
2019-09-04T06:31:27.687+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11728
2019-09-04T06:31:27.687+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578685, 2), t: 1 }({ ts: Timestamp(1567578685, 2), t: 1 })
2019-09-04T06:31:27.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:27.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:27.777+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:27.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:27.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:27.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:27.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:27.877+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:27.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:27.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:27.977+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:27.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:27.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:28.021+0000 D2 COMMAND [conn50] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.021+0000 I COMMAND [conn50] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.022+0000 D2 COMMAND [conn50] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578660, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578687, 1), signature: { hash: BinData(0, D328DD27EFCBFB295D6C5CB5897E52F4DE0C31DE), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578660, 3), t: 1 } }, $db: "config" }
2019-09-04T06:31:28.022+0000 D1 COMMAND [conn50] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578660, 3), t: 1 } } }
2019-09-04T06:31:28.022+0000 D3 STORAGE [conn50] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:31:28.022+0000 D1 COMMAND [conn50] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578660, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578687, 1), signature: { hash: BinData(0, D328DD27EFCBFB295D6C5CB5897E52F4DE0C31DE), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578660, 3), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578685, 2)
2019-09-04T06:31:28.022+0000 D5 QUERY [conn50] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:31:28.022+0000 D5 QUERY [conn50] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:31:28.022+0000 D5 QUERY [conn50] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:31:28.022+0000 D5 QUERY [conn50] Rated tree: $and
2019-09-04T06:31:28.022+0000 D5 QUERY [conn50] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:28.022+0000 D5 QUERY [conn50] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:31:28.022+0000 D2 QUERY [conn50] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:28.022+0000 D3 STORAGE [conn50] WT begin_transaction for snapshot id 11737
2019-09-04T06:31:28.022+0000 D3 STORAGE [conn50] WT rollback_transaction for snapshot id 11737
2019-09-04T06:31:28.022+0000 I COMMAND [conn50] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578660, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578687, 1), signature: { hash: BinData(0, D328DD27EFCBFB295D6C5CB5897E52F4DE0C31DE), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578660, 3), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:31:28.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.077+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:28.139+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.178+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:28.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:28.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:31:28.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:28.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:28.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), opTime: { ts: Timestamp(1567578685, 2), t: 1 }, wallTime: new Date(1567578685678) }
2019-09-04T06:31:28.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 2), signature: { hash: BinData(0, F73CCE80BCA482483698CCC7E3BB37D653AA5CF5), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:28.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:28.278+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:28.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.378+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:28.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.478+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:28.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.578+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:28.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.678+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:28.686+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:28.686+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:28.686+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578685, 2)
2019-09-04T06:31:28.686+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11752
2019-09-04T06:31:28.686+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:28.686+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:28.686+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11752
2019-09-04T06:31:28.687+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11755
2019-09-04T06:31:28.687+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11755
2019-09-04T06:31:28.687+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578685, 2), t: 1 }({ ts: Timestamp(1567578685, 2), t: 1 })
2019-09-04T06:31:28.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.778+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:28.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:28.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 804) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:28.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 804 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:38.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:28.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:56.839+0000
2019-09-04T06:31:28.838+0000 D2 ASIO [Replication] Request 804 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), opTime: { ts: Timestamp(1567578685, 2), t: 1 }, wallTime: new Date(1567578685678), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578687, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) }
2019-09-04T06:31:28.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), opTime: { ts: Timestamp(1567578685, 2), t: 1 }, wallTime: new Date(1567578685678), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578687, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:28.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:28.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 804) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), opTime: { ts: Timestamp(1567578685, 2), t: 1 }, wallTime: new Date(1567578685678), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578687, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) }
2019-09-04T06:31:28.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:31:28.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:30.838Z
2019-09-04T06:31:28.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:56.839+0000
2019-09-04T06:31:28.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:28.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 805) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:28.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 805 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:38.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:28.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:56.839+0000
2019-09-04T06:31:28.839+0000 D2 ASIO [Replication] Request 805 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), opTime: { ts: Timestamp(1567578685, 2), t: 1 }, wallTime: new Date(1567578685678), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578687, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) }
2019-09-04T06:31:28.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), opTime: { ts: Timestamp(1567578685, 2), t: 1 }, wallTime: new Date(1567578685678), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578687, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:31:28.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:28.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 805) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), opTime: { ts: Timestamp(1567578685, 2), t: 1 }, wallTime: new Date(1567578685678), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578687, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578685, 2) }
2019-09-04T06:31:28.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:31:28.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:31:36.963+0000
2019-09-04T06:31:28.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:31:39.909+0000
2019-09-04T06:31:28.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:31:28.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:30.839Z
2019-09-04T06:31:28.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:28.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:58.839+0000
2019-09-04T06:31:28.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:58.839+0000
2019-09-04T06:31:28.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.878+0000 D4 STORAGE [WTJournalFlusher] flushed journal
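Annotation: requests 804 and 805 are one full heartbeat round, sent two seconds after 802/803; each good response schedules the next heartbeat two seconds out, and the response from the primary pushes the election timeout roughly ten seconds into the future (06:31:36.963 becomes 06:31:39.909, with a randomized offset). Both intervals come from the replica set configuration; a hedged way to read them from the shell (the field names are real replSetConfig settings, and the commented values are the 4.2 defaults):

    var cfg = rs.conf();
    printjson({
      heartbeatIntervalMillis: cfg.settings.heartbeatIntervalMillis,  // default 2000
      electionTimeoutMillis: cfg.settings.electionTimeoutMillis       // default 10000
    });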
2019-09-04T06:31:28.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:28.979+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:28.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:28.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:29.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578687, 1), signature: { hash: BinData(0, D328DD27EFCBFB295D6C5CB5897E52F4DE0C31DE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:29.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:31:29.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578687, 1), signature: { hash: BinData(0, D328DD27EFCBFB295D6C5CB5897E52F4DE0C31DE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:29.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578687, 1), signature: { hash: BinData(0, D328DD27EFCBFB295D6C5CB5897E52F4DE0C31DE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:29.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), opTime: { ts: Timestamp(1567578685, 2), t: 1 }, wallTime: new Date(1567578685678) }
2019-09-04T06:31:29.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578687, 1), signature: { hash: BinData(0, D328DD27EFCBFB295D6C5CB5897E52F4DE0C31DE), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.079+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:29.138+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
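Annotation: the steady once-per-second isMaster on conn5/conn6/conn13/conn14/conn17/conn18/conn19/... is driver and mongos topology monitoring; every reply is the same 907-byte document describing this node's view of the set. The same probe from the shell (isMaster is the pre-4.4 name; later versions rename it hello):

    var r = db.getSiblingDB("admin").runCommand({ isMaster: 1 });
    printjson({ ismaster: r.ismaster, secondary: r.secondary,
                setName: r.setName, primary: r.primary });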
2019-09-04T06:31:29.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.179+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:29.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:29.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:29.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.279+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:29.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.379+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:29.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.479+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:29.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.579+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:29.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
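Annotation: FlowControlRefresher runs once a second; "Before: 1000000000 Now: 1000000000" means flow control is effectively idle, leaving the write-ticket pool at its ceiling because the majority commit point is keeping up. A hedged way to observe the same mechanism from a client (the flowControl serverStatus section and the enableFlowControl parameter exist in 4.2, though the exact sub-field names can vary by version):

    printjson(db.serverStatus().flowControl);                         // enabled, isLagged, targetRateLimit, ...
    printjson(db.adminCommand({ getParameter: 1, enableFlowControl: 1 }));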
2019-09-04T06:31:29.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.679+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:29.687+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:29.687+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:29.687+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578685, 2)
2019-09-04T06:31:29.687+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11782
2019-09-04T06:31:29.687+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:29.687+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:29.687+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11782
2019-09-04T06:31:29.687+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11785
2019-09-04T06:31:29.687+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11785
2019-09-04T06:31:29.687+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578685, 2), t: 1 }({ ts: Timestamp(1567578685, 2), t: 1 })
2019-09-04T06:31:29.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.780+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:29.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.880+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:29.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
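[Annotation] The ReplBatcher/rsSync-0 entries are the secondary's oplog applier consulting local.replset.minvalid, the document recording how far the node must replay before its data is consistent. A sketch of peeking at the same document directly, assuming a direct, read-only connection to this member:

    # Sketch: read the minvalid document the rsSync-0 thread reports above.
    # local.replset.minvalid is internal replication state; look, don't touch.
    minvalid = client.local["replset.minvalid"].find_one()
    print(minvalid)  # contains the ts/t pair matching "returning minvalid"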
2019-09-04T06:31:29.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.980+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:29.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:29.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:29.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:30.004+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:31:30.004+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:31:30.004+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms
2019-09-04T06:31:30.019+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:31:30.019+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:31:30.026+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:31:30.026+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:31:30.026+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456
2019-09-04T06:31:30.026+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:31:30.036+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:31:30.037+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:31:30.038+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:31:30.038+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:31:30.038+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
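[Annotation] conn90 is a monitoring client: a three-leg SCRAM-SHA-1 handshake (one saslStart, two saslContinue rounds) as dba_root, then serverStatus and replSetGetStatus. A sketch of the equivalent pymongo session; the driver performs the saslStart/saslContinue legs internally, and the password here is a placeholder, not the real credential (which the log redacts as "xxx"):

    # Sketch: authenticate the way conn90 does and run the same two
    # monitoring commands against this member.
    from pymongo import MongoClient

    mon = MongoClient(
        "mongodb://cfgsvr.example.net:27019/",  # placeholder host
        username="dba_root",
        password="<placeholder>",               # real credential not in the log
        authSource="admin",
        authMechanism="SCRAM-SHA-1",
        readPreference="secondaryPreferred",
    )
    print(mon.admin.command("serverStatus")["connections"])
    print(mon.admin.command("replSetGetStatus")["myState"])  # 2 = SECONDARY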
"secondaryPreferred" }, $db: "config" } 2019-09-04T06:31:30.038+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:30.038+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:31:30.038+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:31:30.038+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:31:30.038+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:31:30.038+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:31:30.038+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:31:30.038+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:31:30.038+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:30.038+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:30.038+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
2019-09-04T06:31:30.038+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578685, 2)
2019-09-04T06:31:30.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11801
2019-09-04T06:31:30.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11801
2019-09-04T06:31:30.038+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
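[Annotation] The monitor is counting jumbo-flagged chunks in config.chunks. None of the four indexes covers { jumbo: 1 }, so the planner outputs zero indexed solutions and falls back to a COLLSCAN, which is cheap here (docsExamined:1). A sketch of the same check, reusing the authenticated `mon` client from the SCRAM sketch:

    # Sketch: the jumbo-chunk check conn90 issues. db.command("count", ...)
    # sends the same {count: "chunks", query: {jumbo: true}} document as the
    # log; count_documents({"jumbo": True}) is the usual driver API (it runs
    # an equivalent aggregation instead of the count command).
    reply = mon["config"].command("count", "chunks", query={"jumbo": True})
    print("jumbo chunks:", reply["n"])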
2019-09-04T06:31:30.038+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:31:30.038+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:31:30.039+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:30.039+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} =============================
2019-09-04T06:31:30.039+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:31:30.039+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578685, 2)
2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11804
2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11804
2019-09-04T06:31:30.039+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:31:30.039+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:30.039+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} =============================
2019-09-04T06:31:30.039+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:31:30.039+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578685, 2)
2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11806
2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11806
2019-09-04T06:31:30.039+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
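[Annotation] The paired first/last oplog probes ($natural ascending, then descending, limit 1) are how monitors compute the oplog window; $natural deliberately forces a collection scan, which is cheap on a capped collection read from either end. A sketch of the same pair of queries, under the same `mon` client assumption and assuming a non-empty oplog:

    # Sketch: fetch the oldest and newest oplog entries, as conn90 does.
    oplog = mon.local["oplog.rs"]
    first = oplog.find({"ts": {"$exists": True}}).sort("$natural", 1).limit(1).next()
    last = oplog.find({"ts": {"$exists": True}}).sort("$natural", -1).limit(1).next()
    window = last["ts"].time - first["ts"].time  # bson.Timestamp exposes .time
    print("oplog window:", window, "seconds")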
2019-09-04T06:31:30.039+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:31:30.039+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:31:30.039+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2019-09-04T06:31:30.039+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:31:30.039+0000 D2 COMMAND [conn90] command: listDatabases
2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11809
2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11809
2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11810
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:31:30.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11810 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11811 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11811 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11812 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11812 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11813 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11813 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11814 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11814 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11815 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11815 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11816 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11816
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11817
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11817
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11818
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11818
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11819
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11819
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11820
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11820
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11821
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11821
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11822
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11822
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11823
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11823
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11824
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:30.040+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11824
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11825
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11825
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11826
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11826
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11827
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11827
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11828
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11828
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11829
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11829
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11830
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:31:30.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11830
2019-09-04T06:31:30.041+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
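[Annotation] listDatabases on this member walks the whole catalog, taking a brief WT read transaction per collection to size each database, which is why the D3 STORAGE stream above touches every namespace in admin, config, and local before the command completes in 1ms. A sketch of the same call:

    # Sketch: the listDatabases call that produced the catalog walk above.
    dbs = mon.admin.command("listDatabases")
    for d in dbs["databases"]:
        print(d["name"], d["sizeOnDisk"])
    # Equivalent shorthand for just the names: mon.list_database_names()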
begin_transaction for snapshot id 11843 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11843 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11844 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11844 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11845 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11845 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11846 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11846 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11847 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11847 2019-09-04T06:31:30.042+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:30.042+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11849 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11849 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11850 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11850 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11851 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11851 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11852 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11852 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11853 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11853 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 11854 2019-09-04T06:31:30.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 11854 2019-09-04T06:31:30.042+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:30.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:30.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:30.080+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:30.139+0000 D2 COMMAND [conn19] run 
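
The conn90 sequence above looks like a monitoring-style probe: one listDatabases followed by a dbStats per database, all tagged with readPreference secondaryPreferred, and each sizing pass opening and immediately rolling back a short read-only WiredTiger snapshot per collection. A minimal sketch of an equivalent probe, assuming a Python client with pymongo available (host and port come from this log; the connection options and client object are assumptions, not taken from the log):

    # Hypothetical reconstruction of the conn90 probe seen above.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         readPreference="secondaryPreferred",
                         directConnection=True)
    client.admin.command("listDatabases")      # the reslen:459 reply above
    for db in ("admin", "config", "local"):    # one dbStats per database, as logged
        client[db].command("dbStats")
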
2019-09-04T06:31:30.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.080+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:30.139+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.180+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:30.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578687, 1), signature: { hash: BinData(0, D328DD27EFCBFB295D6C5CB5897E52F4DE0C31DE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:30.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:31:30.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578687, 1), signature: { hash: BinData(0, D328DD27EFCBFB295D6C5CB5897E52F4DE0C31DE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:30.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578687, 1), signature: { hash: BinData(0, D328DD27EFCBFB295D6C5CB5897E52F4DE0C31DE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:30.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), opTime: { ts: Timestamp(1567578685, 2), t: 1 }, wallTime: new Date(1567578685678) }
2019-09-04T06:31:30.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578687, 1), signature: { hash: BinData(0, D328DD27EFCBFB295D6C5CB5897E52F4DE0C31DE), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
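
The replSetHeartbeat from cmodb804.togewa.com:27019 (fromId: 2) and its generated response show this node in state: 2 (SECONDARY), syncing from cmodb804, with durable and applied optimes both at Timestamp(1567578685, 2). Roughly the same picture is available client-side via replSetGetStatus; a hedged sketch, reusing the client object assumed in the previous snippet (field names vary slightly across server versions, so the .get() fallbacks are defensive):

    # Summarize member state the way the heartbeat response does.
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        # stateStr is e.g. "PRIMARY"/"SECONDARY"; syncSourceHost may be empty.
        print(m["name"], m["stateStr"], m.get("optimeDate"), m.get("syncSourceHost"))
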
2019-09-04T06:31:30.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:30.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:30.249+0000 D2 ASIO [RS] Request 801 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578690, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578690166), o: { $v: 1, $set: { ping: new Date(1567578690166) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpApplied: { ts: Timestamp(1567578690, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 1) }
2019-09-04T06:31:30.249+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578690, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578690166), o: { $v: 1, $set: { ping: new Date(1567578690166) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpApplied: { ts: Timestamp(1567578690, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:30.249+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:30.249+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578690, 1) and ending at ts: Timestamp(1567578690, 1)
2019-09-04T06:31:30.249+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:39.909+0000
2019-09-04T06:31:30.249+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:41.215+0000
2019-09-04T06:31:30.249+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:30.249+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:58.839+0000
2019-09-04T06:31:30.249+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578690, 1), t: 1 }
2019-09-04T06:31:30.249+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:30.249+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:30.249+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578685, 2)
2019-09-04T06:31:30.249+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11863
2019-09-04T06:31:30.249+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:30.249+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:30.249+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11863
2019-09-04T06:31:30.249+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:30.249+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:31:30.249+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:30.249+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578685, 2)
2019-09-04T06:31:30.249+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578690, 1) }
2019-09-04T06:31:30.249+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11866
2019-09-04T06:31:30.249+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:30.249+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:30.249+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11866
2019-09-04T06:31:30.249+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11786
2019-09-04T06:31:30.249+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11786
2019-09-04T06:31:30.249+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11869
2019-09-04T06:31:30.249+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11869
2019-09-04T06:31:30.249+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:30.249+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 11871
2019-09-04T06:31:30.249+0000 D4 STORAGE [repl-writer-worker-1] inserting record with timestamp Timestamp(1567578690, 1)
2019-09-04T06:31:30.249+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578690, 1)
2019-09-04T06:31:30.249+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 11871
2019-09-04T06:31:30.249+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:30.249+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:31:30.249+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11870
2019-09-04T06:31:30.249+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11870
2019-09-04T06:31:30.249+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11873
2019-09-04T06:31:30.249+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11873
2019-09-04T06:31:30.249+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578690, 1), t: 1 }({ ts: Timestamp(1567578690, 1), t: 1 })
2019-09-04T06:31:30.249+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578690, 1)
2019-09-04T06:31:30.249+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11874
2019-09-04T06:31:30.250+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578690, 1) } } ] } sort: {} projection: {}
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578690, 1) Sort: {} Proj: {} =============================
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578690, 1) || First: notFirst: full path: ts
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578690, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
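
The single-operation batch above traces the secondary's write-then-apply sequence in order: oplogTruncateAfterPoint is set to the batch's first timestamp, the raw entry is inserted into local.oplog.rs by a repl-writer worker, the truncate point is cleared back to Timestamp(0, 0), minvalid is advanced to the batch's last optime, the operation itself is applied, and appliedThrough is set. Restated schematically below; this is a paraphrase of the logged order, not server code, and every function name is invented:

    # Order of the durability markers, as traced by the rsSync-0 / repl-writer lines.
    def apply_batch(batch):
        set_truncate_after_point(batch[0]["ts"])  # crash here => truncate partial writes
        insert_into_local_oplog(batch)            # repl-writer-worker inserts raw entries
        set_truncate_after_point(None)            # logged as { : Timestamp(0, 0) }
        set_minvalid(batch[-1]["ts"])             # inconsistent until fully applied
        for op in batch:
            apply_op(op)                          # here: the idhack update on config.lockpings
        set_applied_through(batch[-1]["ts"])      # batch boundary reached; consistent again
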
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578690, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578690, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578690, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:30.250+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11874
2019-09-04T06:31:30.250+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:30.250+0000 D3 STORAGE [repl-writer-worker-5] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:30.250+0000 D3 REPL [repl-writer-worker-5] applying op: { ts: Timestamp(1567578690, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578690166), o: { $v: 1, $set: { ping: new Date(1567578690166) } } }, oplog application mode: Secondary
2019-09-04T06:31:30.250+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578690, 1)
2019-09-04T06:31:30.250+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 11876
2019-09-04T06:31:30.250+0000 D2 QUERY [repl-writer-worker-5] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }
2019-09-04T06:31:30.250+0000 D4 WRITE [repl-writer-worker-5] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:31:30.250+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 11876
2019-09-04T06:31:30.250+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:30.250+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578690, 1), t: 1 }({ ts: Timestamp(1567578690, 1), t: 1 })
2019-09-04T06:31:30.250+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578690, 1)
2019-09-04T06:31:30.250+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11875
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:30.250+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:30.250+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:30.250+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11875
2019-09-04T06:31:30.250+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578690, 1)
2019-09-04T06:31:30.250+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:30.250+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11879
2019-09-04T06:31:30.250+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578690, 1), t: 1 }, appliedWallTime: new Date(1567578690166), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:30.250+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11879
2019-09-04T06:31:30.250+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 806 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:00.250+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578690, 1), t: 1 }, appliedWallTime: new Date(1567578690166), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:30.250+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578690, 1), t: 1 }({ ts: Timestamp(1567578690, 1), t: 1 })
2019-09-04T06:31:30.250+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:00.250+0000
2019-09-04T06:31:30.251+0000 D2 ASIO [RS] Request 806 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 1) }
2019-09-04T06:31:30.251+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:30.251+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:30.251+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:00.251+0000
2019-09-04T06:31:30.251+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578690, 1), t: 1 }
2019-09-04T06:31:30.251+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 807 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:40.251+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578685, 2), t: 1 } }
2019-09-04T06:31:30.251+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:00.251+0000
2019-09-04T06:31:30.256+0000 D2 ASIO [RS] Request 807 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578690, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578690174), o: { $v: 1, $set: { ping: new Date(1567578690174) } } }, { ts: Timestamp(1567578690, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578690174), o: { $v: 1, $set: { ping: new Date(1567578690173) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpApplied: { ts: Timestamp(1567578690, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) }
2019-09-04T06:31:30.256+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578690, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578690174), o: { $v: 1, $set: { ping: new Date(1567578690174) } } }, { ts: Timestamp(1567578690, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578690174), o: { $v: 1, $set: { ping: new Date(1567578690173) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpApplied: { ts: Timestamp(1567578690, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:30.256+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:30.256+0000 D2 REPL [replication-0] oplog fetcher read 2 operations from remote oplog starting at ts: Timestamp(1567578690, 2) and ending at ts: Timestamp(1567578690, 3)
2019-09-04T06:31:30.256+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:41.215+0000
2019-09-04T06:31:30.256+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:40.782+0000
2019-09-04T06:31:30.256+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:30.256+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578690, 3), t: 1 }
2019-09-04T06:31:30.256+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:30.256+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:30.256+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578690, 1)
2019-09-04T06:31:30.256+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11884
2019-09-04T06:31:30.256+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:30.256+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:30.256+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11884
2019-09-04T06:31:30.256+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:30.256+0000 D2 REPL [rsSync-0] replication batch size is 2
2019-09-04T06:31:30.256+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:30.256+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578690, 2) }
2019-09-04T06:31:30.256+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578690, 1)
2019-09-04T06:31:30.256+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11887
2019-09-04T06:31:30.256+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:30.256+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:30.256+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11887
2019-09-04T06:31:30.256+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11880
2019-09-04T06:31:30.256+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:58.839+0000
2019-09-04T06:31:30.256+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11880
2019-09-04T06:31:30.256+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11890
2019-09-04T06:31:30.256+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11890
2019-09-04T06:31:30.256+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:30.256+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 11892
2019-09-04T06:31:30.256+0000 D4 STORAGE [repl-writer-worker-9] inserting record with timestamp Timestamp(1567578690, 2)
2019-09-04T06:31:30.256+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578690, 2)
2019-09-04T06:31:30.256+0000 D4 STORAGE [repl-writer-worker-9] inserting record with timestamp Timestamp(1567578690, 3)
2019-09-04T06:31:30.256+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578690, 3)
2019-09-04T06:31:30.256+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 11892
2019-09-04T06:31:30.256+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:30.256+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:31:30.256+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11891
2019-09-04T06:31:30.256+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11891
2019-09-04T06:31:30.256+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11894
2019-09-04T06:31:30.256+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11894
2019-09-04T06:31:30.256+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578690, 3), t: 1 }({ ts: Timestamp(1567578690, 3), t: 1 })
2019-09-04T06:31:30.256+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578690, 3)
2019-09-04T06:31:30.256+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11895
2019-09-04T06:31:30.257+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578690, 3) } } ] } sort: {} projection: {}
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578690, 3) Sort: {} Proj: {} =============================
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578690, 3) || First: notFirst: full path: ts
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578690, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
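
Each minvalid update runs the $or predicate { t: { $lt: 1 } } OR { t: 1, ts: { $lt: ... } } through the subplanner, and because local.replset.minvalid carries only the _id index, every branch rates zero indexed solutions and falls back to a COLLSCAN; on this single-document collection that is the expected, cheap outcome, repeated once per applied batch. The same plan choice can be observed with explain(); a hedged, read-only example reusing the client assumed earlier (querying a system collection in local is for illustration only, and explain output shapes differ somewhat across versions):

    from bson import Timestamp

    q = {"$or": [{"t": {"$lt": 1}},
                 {"t": 1, "ts": {"$lt": Timestamp(1567578690, 3)}}]}
    plan = client.local["replset.minvalid"].find(q).explain()
    # Expect a SUBPLAN stage whose branches bottom out in COLLSCAN, not index scans.
    print(plan["queryPlanner"]["winningPlan"])
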
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578690, 3) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578690, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578690, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:30.257+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11895
2019-09-04T06:31:30.257+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:30.257+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:30.257+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578690, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578690174), o: { $v: 1, $set: { ping: new Date(1567578690174) } } }, oplog application mode: Secondary
2019-09-04T06:31:30.257+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578690, 2)
2019-09-04T06:31:30.257+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 11897
2019-09-04T06:31:30.257+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }
2019-09-04T06:31:30.257+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:31:30.257+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 11897
2019-09-04T06:31:30.257+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:30.257+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:30.257+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578690, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578690174), o: { $v: 1, $set: { ping: new Date(1567578690173) } } }, oplog application mode: Secondary
2019-09-04T06:31:30.257+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578690, 3)
2019-09-04T06:31:30.257+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 11899
2019-09-04T06:31:30.257+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }
2019-09-04T06:31:30.257+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:31:30.257+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 11899
2019-09-04T06:31:30.257+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:30.257+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:30.257+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578690, 3), t: 1 }({ ts: Timestamp(1567578690, 3), t: 1 })
2019-09-04T06:31:30.257+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578690, 3)
2019-09-04T06:31:30.257+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11896
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:30.257+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:30.257+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:30.257+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 11896
2019-09-04T06:31:30.257+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578690, 3)
2019-09-04T06:31:30.257+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11902
2019-09-04T06:31:30.257+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11902
2019-09-04T06:31:30.257+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578690, 3), t: 1 }({ ts: Timestamp(1567578690, 3), t: 1 })
2019-09-04T06:31:30.257+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:30.257+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, appliedWallTime: new Date(1567578690174), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:30.257+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 808 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:00.257+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, appliedWallTime: new Date(1567578690174), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:30.257+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:00.257+0000
2019-09-04T06:31:30.258+0000 D2 ASIO [RS] Request 808 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) }
2019-09-04T06:31:30.258+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578685, 2), t: 1 }, lastCommittedWall: new Date(1567578685678), lastOpVisible: { ts: Timestamp(1567578685, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578685, 2), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:30.258+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:30.258+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:00.258+0000
2019-09-04T06:31:30.258+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578690, 3), t: 1 }
2019-09-04T06:31:30.258+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 809 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:40.258+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578685, 2), t: 1 } }
2019-09-04T06:31:30.258+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:00.258+0000
2019-09-04T06:31:30.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.277+0000 D2 ASIO [RS] Request 809 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpApplied: { ts: Timestamp(1567578690, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) }
2019-09-04T06:31:30.277+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpApplied: { ts: Timestamp(1567578690, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:30.277+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:30.277+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:31:30.277+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578690, 3), t: 1 }, 2019-09-04T06:31:30.174+0000
2019-09-04T06:31:30.277+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578690, 3), t: 1 }, 2019-09-04T06:31:30.174+0000
2019-09-04T06:31:30.277+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578685, 3)
2019-09-04T06:31:30.277+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:40.782+0000
2019-09-04T06:31:30.277+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:41.641+0000
2019-09-04T06:31:30.277+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:30.277+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:58.839+0000
2019-09-04T06:31:30.277+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 810 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:40.277+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578690, 3), t: 1 } }
2019-09-04T06:31:30.277+0000 D3 REPL [conn270] Got notified of new snapshot: { ts: Timestamp(1567578690, 3), t: 1 }, 2019-09-04T06:31:30.174+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn270] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.239+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn279] Got notified of new snapshot: { ts: Timestamp(1567578690, 3), t: 1 }, 2019-09-04T06:31:30.174+0000
2019-09-04T06:31:30.277+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:00.258+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn279] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.709+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn292] Got notified of new snapshot: { ts: Timestamp(1567578690, 3), t: 1 }, 2019-09-04T06:31:30.174+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn292] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:52.054+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn273] Got notified of new snapshot: { ts: Timestamp(1567578690, 3), t: 1 }, 2019-09-04T06:31:30.174+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn273] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.644+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn287] Got notified of new snapshot: { ts: Timestamp(1567578690, 3), t: 1 }, 2019-09-04T06:31:30.174+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn287] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.248+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn271] Got notified of new snapshot: { ts: Timestamp(1567578690, 3), t: 1 }, 2019-09-04T06:31:30.174+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn271] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.752+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn274] Got notified of new snapshot: { ts: Timestamp(1567578690, 3), t: 1 }, 2019-09-04T06:31:30.174+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn274] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.071+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn260] Got notified of new snapshot: { ts: Timestamp(1567578690, 3), t: 1 }, 2019-09-04T06:31:30.174+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn260] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.329+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn283] Got notified of new snapshot: { ts: Timestamp(1567578690, 3), t: 1 }, 2019-09-04T06:31:30.174+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn283] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.238+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn281] Got notified of new snapshot: { ts: Timestamp(1567578690, 3), t: 1 }, 2019-09-04T06:31:30.174+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn281] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.661+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn272] Got notified of new snapshot: { ts: Timestamp(1567578690, 3), t: 1 }, 2019-09-04T06:31:30.174+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn272] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.222+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn286] Got notified of new snapshot: { ts: Timestamp(1567578690, 3), t: 1 }, 2019-09-04T06:31:30.174+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn286] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.224+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn264] Got notified of new snapshot: { ts: Timestamp(1567578690, 3), t: 1 }, 2019-09-04T06:31:30.174+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn264] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.083+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn291] Got notified of new snapshot: { ts: Timestamp(1567578690, 3), t: 1 }, 2019-09-04T06:31:30.174+0000
2019-09-04T06:31:30.277+0000 D3 REPL [conn291] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.662+0000
2019-09-04T06:31:30.283+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:31:30.283+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
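
The burst of "Got notified of new snapshot" / "waitUntilOpTime" pairs is roughly fifteen client connections (conn260 through conn292) parked in optime waits, all woken when the committed snapshot advances to { ts: Timestamp(1567578690, 3), t: 1 }; the replSetUpdatePosition traffic that follows carries this member's durable and applied optimes upstream to the primary. Those embedded optime documents are enough to compute per-member apply lag from the log text alone; a small hedged helper, with the regex tailored to the exact line format above (not a general mongod log parser):

    import re

    # Matches "...appliedOpTime: { ts: Timestamp(S, I), t: T }, appliedWallTime:
    # new Date(MS), memberId: N..." fragments inside replSetUpdatePosition lines.
    OPTIME = re.compile(r"appliedOpTime: \{ ts: Timestamp\((\d+), \d+\)[^}]*\}, "
                        r"appliedWallTime: new Date\(\d+\), memberId: (\d+)")

    def apply_lag_seconds(line):
        applied = {int(member): int(secs) for secs, member in OPTIME.findall(line)}
        newest = max(applied.values())
        return {m: newest - s for m, s in applied.items()}  # e.g. {0: 5, 1: 0, 2: 5}
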
2019-09-04T06:31:30.283+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578690, 1), t: 1 }, durableWallTime: new Date(1567578690166), appliedOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, appliedWallTime: new Date(1567578690174), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:30.283+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 811 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:00.283+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578690, 1), t: 1 }, durableWallTime: new Date(1567578690166), appliedOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, appliedWallTime: new Date(1567578690174), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:30.283+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:00.258+0000
2019-09-04T06:31:30.283+0000 D2 ASIO [RS] Request 811 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) }
2019-09-04T06:31:30.283+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:30.283+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:30.283+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:00.258+0000
2019-09-04T06:31:30.285+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:30.285+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:30.285+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:31:30.285+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), appliedOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, appliedWallTime: new Date(1567578690174), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:30.286+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 812 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:00.286+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), appliedOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, appliedWallTime: new Date(1567578690174), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, durableWallTime: new Date(1567578685678), appliedOpTime: { ts: Timestamp(1567578685, 2), t: 1 }, appliedWallTime: new Date(1567578685678), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:30.286+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:00.258+0000
2019-09-04T06:31:30.286+0000 D2 ASIO [RS] Request 812 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) }
2019-09-04T06:31:30.286+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:30.286+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:30.286+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:00.258+0000
2019-09-04T06:31:30.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.349+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578690, 3)
2019-09-04T06:31:30.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.386+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:30.440+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.440+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.458+0000 D2 COMMAND [conn294] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578681, 1), signature: { hash: BinData(0, DBCE5483F7C812CC5318B2426FFE1910EB037402), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:31:30.458+0000 D1 REPL [conn294] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578690, 3), t: 1 }
2019-09-04T06:31:30.458+0000 D3 REPL [conn294] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.468+0000
2019-09-04T06:31:30.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.486+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:30.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
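The replSetUpdatePosition payloads above carry per-member durableOpTime/appliedOpTime values as this secondary reports progress upstream. From a client, the same replication progress can be observed through the documented replSetGetStatus command; a minimal pymongo (4.x) sketch, reusing this log's hostname and port:

    from pymongo import MongoClient

    # Direct connection to one config server; host/port taken from this log.
    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)

    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # optimeDate mirrors the appliedOpTime wall times reported in the
        # replSetUpdatePosition payloads above.
        print(member["name"], member["stateStr"], member.get("optimeDate"))
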
2019-09-04T06:31:30.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.586+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:30.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.686+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:30.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47218 #295 (84 connections now open)
2019-09-04T06:31:30.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:31:30.743+0000 D2 COMMAND [conn295] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:31:30.743+0000 I NETWORK [conn295] received client metadata from 10.108.2.52:47218 conn295: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:31:30.743+0000 I COMMAND [conn295] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:31:30.743+0000 D2 COMMAND [conn295] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578682, 1), signature: { hash: BinData(0, 1618860F770E3C611DC63E4BA94A19A92BD9155D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:31:30.743+0000 D1 REPL [conn295] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578690, 3), t: 1 }
2019-09-04T06:31:30.743+0000 D3 REPL [conn295] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.753+0000
2019-09-04T06:31:30.753+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50154 #296 (85 connections now open)
2019-09-04T06:31:30.753+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:31:30.753+0000 D2 COMMAND [conn296] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:31:30.753+0000 I NETWORK [conn296] received client metadata from 10.108.2.50:50154 conn296: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:31:30.753+0000 I COMMAND [conn296] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:31:30.753+0000 D2 COMMAND [conn296] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578689, 1), signature: { hash: BinData(0, D15269915E9E5FCA90C0F56D93441522EC0BAE20), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:31:30.753+0000 D1 REPL [conn296] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578690, 3), t: 1 }
2019-09-04T06:31:30.753+0000 D3 REPL [conn296] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.763+0000
2019-09-04T06:31:30.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:30.786+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:30.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:30.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
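The steady drumbeat of isMaster commands from conn5, conn6, conn13, and the rest is how 4.2-era drivers and mongos monitor each node; the timestamps above show each connection repeating it roughly every half second. The same command can be issued by hand; a minimal pymongo sketch against the host in this log:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)

    # isMaster is the pre-4.4 name of the topology-discovery command the
    # monitors above are running; reslen:907 in the log is the size of
    # this reply as serialized on the wire.
    reply = client.admin.command("isMaster")
    print(reply["ismaster"], reply.get("setName"), reply.get("primary"))
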
from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:30.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 813 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:40.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:30.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:58.839+0000 2019-09-04T06:31:30.838+0000 D2 ASIO [Replication] Request 813 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), opTime: { ts: Timestamp(1567578690, 3), t: 1 }, wallTime: new Date(1567578690174), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) } 2019-09-04T06:31:30.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), opTime: { ts: Timestamp(1567578690, 3), t: 1 }, wallTime: new Date(1567578690174), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:30.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:30.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 813) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), opTime: { ts: Timestamp(1567578690, 3), t: 1 }, wallTime: new Date(1567578690174), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: 
BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) } 2019-09-04T06:31:30.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:31:30.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:32.838Z 2019-09-04T06:31:30.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:31:58.839+0000 2019-09-04T06:31:30.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:30.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 814) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:30.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 814 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:40.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:30.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:31:58.839+0000 2019-09-04T06:31:30.839+0000 D2 ASIO [Replication] Request 814 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), opTime: { ts: Timestamp(1567578690, 3), t: 1 }, wallTime: new Date(1567578690174), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) } 2019-09-04T06:31:30.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), opTime: { ts: Timestamp(1567578690, 3), t: 1 }, wallTime: new Date(1567578690174), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:31:30.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:30.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 814) from cmodb802.togewa.com:27019, { ok: 1.0, 
electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), opTime: { ts: Timestamp(1567578690, 3), t: 1 }, wallTime: new Date(1567578690174), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) } 2019-09-04T06:31:30.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:31:30.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:31:41.641+0000 2019-09-04T06:31:30.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:31:40.872+0000 2019-09-04T06:31:30.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:31:30.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:32.839Z 2019-09-04T06:31:30.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:30.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:00.839+0000 2019-09-04T06:31:30.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:00.839+0000 2019-09-04T06:31:30.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:30.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:30.886+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:30.915+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52192 #297 (86 connections now open) 2019-09-04T06:31:30.915+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:30.915+0000 D2 COMMAND [conn297] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:30.915+0000 I NETWORK [conn297] received client metadata from 10.108.2.73:52192 conn297: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:30.915+0000 I COMMAND [conn297] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", 
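Each replSetHeartbeat exchange above is logged three times (the ASIO completion, the executor response, and the REPL_HB handler), all keyed by the same requestId. A small sketch, assuming the entry format shown here, pairs "Sending heartbeat (requestId: N)" with "Received response to heartbeat (requestId: N)" to estimate round trips; heartbeat_round_trips is a hypothetical helper operating on already-split entries:

    import re
    from datetime import datetime

    TS = "%Y-%m-%dT%H:%M:%S.%f%z"
    SEND = re.compile(r"^(\S+) .*Sending heartbeat \(requestId: (\d+)\) to (\S+),")
    RECV = re.compile(r"^(\S+) .*Received response to heartbeat \(requestId: (\d+)\)")

    def heartbeat_round_trips(entries):
        """Yield (request_id, target, seconds) for paired heartbeat entries."""
        sent = {}
        for entry in entries:
            if m := SEND.match(entry):
                sent[m.group(2)] = (m.group(3), datetime.strptime(m.group(1), TS))
            elif (m := RECV.match(entry)) and m.group(2) in sent:
                target, t0 = sent.pop(m.group(2))
                t1 = datetime.strptime(m.group(1), TS)
                yield m.group(2), target, (t1 - t0).total_seconds()

For requestId 813 and 814 above, both send and response land within the same millisecond, which is consistent with config servers on the same LAN.
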
"zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:30.915+0000 D2 COMMAND [conn297] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578682, 1), signature: { hash: BinData(0, 1618860F770E3C611DC63E4BA94A19A92BD9155D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:31:30.915+0000 D1 REPL [conn297] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578690, 3), t: 1 } 2019-09-04T06:31:30.915+0000 D3 REPL [conn297] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.925+0000 2019-09-04T06:31:30.940+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:30.940+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:30.976+0000 I NETWORK [listener] connection accepted from 10.108.2.46:41026 #298 (87 connections now open) 2019-09-04T06:31:30.976+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:30.976+0000 D2 COMMAND [conn298] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:30.976+0000 I NETWORK [conn298] received client metadata from 10.108.2.46:41026 conn298: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:30.976+0000 I COMMAND [conn298] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:30.976+0000 D2 COMMAND [conn298] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 1), signature: { hash: BinData(0, AAAAD9086BE97960C6C704E487600A0D6823C425), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:31:30.976+0000 D1 REPL [conn298] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578690, 3), t: 1 
} 2019-09-04T06:31:30.976+0000 D3 REPL [conn298] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.986+0000 2019-09-04T06:31:30.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:30.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:30.986+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:30.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:30.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:30.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:30.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:31.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:31.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, BDEC76843F239EE13762CFEF871357EA4C8B11BF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:31.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:31.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, BDEC76843F239EE13762CFEF871357EA4C8B11BF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:31.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, BDEC76843F239EE13762CFEF871357EA4C8B11BF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:31.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), opTime: { ts: Timestamp(1567578690, 3), t: 1 }, wallTime: new Date(1567578690174) } 2019-09-04T06:31:31.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, BDEC76843F239EE13762CFEF871357EA4C8B11BF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.086+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:31:31.139+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:31.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:31.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:31.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.187+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:31.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:31.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:31.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:31.256+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:31.256+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:31.256+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578690, 3) 2019-09-04T06:31:31.256+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11941 2019-09-04T06:31:31.256+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:31.256+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:31.256+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11941 2019-09-04T06:31:31.257+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11944 2019-09-04T06:31:31.257+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11944 2019-09-04T06:31:31.257+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578690, 3), t: 1 }({ ts: Timestamp(1567578690, 3), t: 1 }) 2019-09-04T06:31:31.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:31.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.287+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:31.300+0000 D2 COMMAND [conn280] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578682, 1), signature: { hash: BinData(0, 1618860F770E3C611DC63E4BA94A19A92BD9155D), 
keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:31.300+0000 D1 REPL [conn280] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578690, 3), t: 1 } 2019-09-04T06:31:31.300+0000 D3 REPL [conn280] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:01.310+0000 2019-09-04T06:31:31.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:31.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:31.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.387+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:31.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:31.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.487+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:31.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:31.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:31.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:31.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.587+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:31.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:31.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:31.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:31.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.687+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:31.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:31.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:31.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 
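The ReplBatcher entries above show the batcher opening WiredTiger snapshots against local.oplog.rs, whose CCE metadata identifies it as a 1 GB capped collection. The oplog is an ordinary queryable collection, so the data those snapshots cover can be inspected directly; a minimal pymongo sketch against the host in this log:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
    oplog = client.local["oplog.rs"]

    # local.oplog.rs is the capped collection the ReplBatcher entries read;
    # $natural order returns documents in insertion (i.e. optime) order, so
    # a reverse scan shows the newest operations first.
    for doc in oplog.find().sort("$natural", -1).limit(3):
        print(doc["ts"], doc["op"], doc.get("ns"))
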
2019-09-04T06:31:31.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:31.787+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:31.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:31.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:31.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:31.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:31.887+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:31.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:31.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:31.988+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:31.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:31.989+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:31.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:31.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:32.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.088+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:32.139+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.188+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:32.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, BDEC76843F239EE13762CFEF871357EA4C8B11BF), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:32.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:31:32.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, BDEC76843F239EE13762CFEF871357EA4C8B11BF), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:32.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, BDEC76843F239EE13762CFEF871357EA4C8B11BF), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:32.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), opTime: { ts: Timestamp(1567578690, 3), t: 1 }, wallTime: new Date(1567578690174) }
2019-09-04T06:31:32.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, BDEC76843F239EE13762CFEF871357EA4C8B11BF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:32.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:32.257+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:32.257+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:32.257+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578690, 3)
2019-09-04T06:31:32.257+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 11973
2019-09-04T06:31:32.257+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:32.257+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:32.257+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 11973
2019-09-04T06:31:32.258+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 11976
2019-09-04T06:31:32.258+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 11976
2019-09-04T06:31:32.258+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578690, 3), t: 1 }({ ts: Timestamp(1567578690, 3), t: 1 })
2019-09-04T06:31:32.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.288+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:32.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.388+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:32.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.488+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:32.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.588+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:32.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.688+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:32.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.789+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:32.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:32.838+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:31:31.063+0000
2019-09-04T06:31:32.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:32.838+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:31:32.232+0000
2019-09-04T06:31:32.838+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:31:31.063+0000
2019-09-04T06:31:32.838+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:31:41.063+0000
2019-09-04T06:31:32.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:02.838+0000
2019-09-04T06:31:32.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 815) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:32.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 815 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:42.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:32.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:02.838+0000
2019-09-04T06:31:32.838+0000 D2 ASIO [Replication] Request 815 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), opTime: { ts: Timestamp(1567578690, 3), t: 1 }, wallTime: new Date(1567578690174), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) }
2019-09-04T06:31:32.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), opTime: { ts: Timestamp(1567578690, 3), t: 1 }, wallTime: new Date(1567578690174), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:32.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:32.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 815) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), opTime: { ts: Timestamp(1567578690, 3), t: 1 }, wallTime: new Date(1567578690174), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) }
2019-09-04T06:31:32.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:31:32.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:34.838Z
2019-09-04T06:31:32.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:02.838+0000
2019-09-04T06:31:32.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:32.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 816) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:32.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 816 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:42.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:32.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:02.838+0000
2019-09-04T06:31:32.839+0000 D2 ASIO [Replication] Request 816 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), opTime: { ts: Timestamp(1567578690, 3), t: 1 }, wallTime: new Date(1567578690174), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) }
2019-09-04T06:31:32.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), opTime: { ts: Timestamp(1567578690, 3), t: 1 }, wallTime: new Date(1567578690174), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) } target: cmodb802.togewa.com:27019
2019-09-04T06:31:32.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:32.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 816) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), opTime: { ts: Timestamp(1567578690, 3), t: 1 }, wallTime: new Date(1567578690174), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578690, 3) }
2019-09-04T06:31:32.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:31:32.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:31:40.872+0000
2019-09-04T06:31:32.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:31:42.897+0000
2019-09-04T06:31:32.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:31:32.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:34.839Z
2019-09-04T06:31:32.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:32.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:02.839+0000
2019-09-04T06:31:32.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:02.839+0000
2019-09-04T06:31:32.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.889+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:32.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.989+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:32.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:32.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:32.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:33.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:33.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:33.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:33.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, BDEC76843F239EE13762CFEF871357EA4C8B11BF), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:33.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:31:33.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, BDEC76843F239EE13762CFEF871357EA4C8B11BF), keyId:
6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:33.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, BDEC76843F239EE13762CFEF871357EA4C8B11BF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:33.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), opTime: { ts: Timestamp(1567578690, 3), t: 1 }, wallTime: new Date(1567578690174) } 2019-09-04T06:31:33.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578690, 3), signature: { hash: BinData(0, BDEC76843F239EE13762CFEF871357EA4C8B11BF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:33.089+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:33.139+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:33.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:33.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:33.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:33.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:33.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:33.189+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:33.219+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:33.219+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:33.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:33.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:31:33.257+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:33.257+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:33.257+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578690, 3) 2019-09-04T06:31:33.257+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12003 2019-09-04T06:31:33.257+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:33.257+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:33.257+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12003 2019-09-04T06:31:33.258+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12006 2019-09-04T06:31:33.258+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12006 2019-09-04T06:31:33.258+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578690, 3), t: 1 }({ ts: Timestamp(1567578690, 3), t: 1 }) 2019-09-04T06:31:33.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:33.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:33.289+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:33.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:33.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:33.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:33.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:33.390+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:33.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:33.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:33.479+0000 D2 ASIO [RS] Request 810 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578693, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578693463), o: { $v: 1, $set: { ping: new Date(1567578693458) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new 
Date(1567578690174), lastOpApplied: { ts: Timestamp(1567578693, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578693, 1) } 2019-09-04T06:31:33.479+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578693, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578693463), o: { $v: 1, $set: { ping: new Date(1567578693458) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpApplied: { ts: Timestamp(1567578693, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578690, 3), $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578693, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:33.479+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:33.479+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578693, 1) and ending at ts: Timestamp(1567578693, 1) 2019-09-04T06:31:33.479+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:42.897+0000 2019-09-04T06:31:33.479+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:43.582+0000 2019-09-04T06:31:33.479+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:33.479+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578693, 1), t: 1 } 2019-09-04T06:31:33.479+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:33.480+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:33.480+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578690, 3) 2019-09-04T06:31:33.480+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12013 2019-09-04T06:31:33.480+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:33.480+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], 
prefix: -1 } 2019-09-04T06:31:33.480+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12013 2019-09-04T06:31:33.480+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:33.480+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:31:33.480+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:33.480+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578690, 3) 2019-09-04T06:31:33.480+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12016 2019-09-04T06:31:33.480+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578693, 1) } 2019-09-04T06:31:33.480+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:33.480+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:33.480+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12016 2019-09-04T06:31:33.479+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:02.839+0000 2019-09-04T06:31:33.480+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12007 2019-09-04T06:31:33.480+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12007 2019-09-04T06:31:33.480+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12019 2019-09-04T06:31:33.480+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12019 2019-09-04T06:31:33.480+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:33.480+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 12021 2019-09-04T06:31:33.480+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578693, 1) 2019-09-04T06:31:33.480+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578693, 1) 2019-09-04T06:31:33.480+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 12021 2019-09-04T06:31:33.480+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:33.480+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:31:33.480+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12020 2019-09-04T06:31:33.480+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12020 2019-09-04T06:31:33.480+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12023 2019-09-04T06:31:33.480+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12023 2019-09-04T06:31:33.480+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578693, 1), t: 1 }({ ts: Timestamp(1567578693, 1), t: 1 }) 2019-09-04T06:31:33.480+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578693, 1) 2019-09-04T06:31:33.480+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12024 
2019-09-04T06:31:33.480+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578693, 1) } } ] } sort: {} projection: {}
2019-09-04T06:31:33.480+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:33.480+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:31:33.480+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578693, 1) Sort: {} Proj: {} =============================
2019-09-04T06:31:33.480+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:33.480+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:33.480+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:33.480+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578693, 1) || First: notFirst: full path: ts
2019-09-04T06:31:33.480+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:33.480+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578693, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:33.480+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:33.480+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:31:33.480+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:33.480+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:33.480+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:33.480+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:33.481+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:33.481+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:33.481+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:33.481+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578693, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:33.481+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:33.481+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:33.481+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:33.481+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578693, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:33.481+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:33.481+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578693, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:33.481+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12024
2019-09-04T06:31:33.481+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:33.481+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:33.481+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578693, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578693463), o: { $v: 1, $set: { ping: new Date(1567578693458) } } }, oplog application mode: Secondary
2019-09-04T06:31:33.481+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578693, 1)
2019-09-04T06:31:33.481+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 12026
2019-09-04T06:31:33.481+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }
2019-09-04T06:31:33.481+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:31:33.481+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 12026
2019-09-04T06:31:33.481+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:33.481+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578693, 1), t: 1 }({ ts: Timestamp(1567578693, 1), t: 1 })
2019-09-04T06:31:33.481+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578693, 1)
2019-09-04T06:31:33.481+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12025
2019-09-04T06:31:33.481+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:31:33.481+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:33.481+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:31:33.481+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:33.481+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:33.481+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:33.481+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12025
2019-09-04T06:31:33.481+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578693, 1)
2019-09-04T06:31:33.481+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12029
2019-09-04T06:31:33.481+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12029
2019-09-04T06:31:33.481+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578693, 1), t: 1 }({ ts: Timestamp(1567578693, 1), t: 1 })
2019-09-04T06:31:33.481+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:33.481+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), appliedOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, appliedWallTime: new Date(1567578690174), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), appliedOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, appliedWallTime: new Date(1567578693463), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), appliedOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, appliedWallTime: new Date(1567578690174), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:33.481+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 817 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:03.481+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), appliedOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, appliedWallTime: new Date(1567578690174), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), appliedOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, appliedWallTime: new Date(1567578693463), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), appliedOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, appliedWallTime: new Date(1567578690174), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578690, 3), t: 1 }, lastCommittedWall: new Date(1567578690174), lastOpVisible: { ts: Timestamp(1567578690, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:33.481+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:03.481+0000
2019-09-04T06:31:33.482+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578693, 1), t: 1 }
2019-09-04T06:31:33.482+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 818 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:43.482+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578690, 3), t: 1 } }
2019-09-04T06:31:33.482+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:03.481+0000
2019-09-04T06:31:33.482+0000 D2 ASIO [RS] Request 817 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578693, 1), $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578693, 1) }
2019-09-04T06:31:33.482+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578693, 1), $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578693, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:33.482+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:33.482+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:03.481+0000
2019-09-04T06:31:33.482+0000 D2 ASIO [RS] Request 818 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpApplied: { ts: Timestamp(1567578693, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578693, 1), $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578693, 1) }
2019-09-04T06:31:33.482+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpApplied: { ts: Timestamp(1567578693, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578693, 1), $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578693, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:33.482+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:33.482+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:31:33.482+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.482+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.482+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578688, 1)
2019-09-04T06:31:33.482+0000 D3 REPL [conn294] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn294] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.468+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn298] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn298] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.986+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn297] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn297] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.925+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn279] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn279] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.709+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn270] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn270] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.239+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn281] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn281] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.661+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn286] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn286] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.224+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn291] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn291] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.662+0000
2019-09-04T06:31:33.482+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:43.582+0000
2019-09-04T06:31:33.482+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:44.735+0000
2019-09-04T06:31:33.482+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:33.482+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:02.839+0000
2019-09-04T06:31:33.482+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 819 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:43.482+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578693, 1), t: 1 } }
2019-09-04T06:31:33.482+0000 D3 REPL [conn287] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn287] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.248+0000
2019-09-04T06:31:33.482+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:03.481+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn274] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn274] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.071+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn283] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn283] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.238+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn272] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn272] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.222+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn264] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn264] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.083+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn280] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn280] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:01.310+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn295] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.482+0000 D3 REPL [conn295] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.753+0000
2019-09-04T06:31:33.483+0000 D3 REPL [conn296] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.483+0000 D3 REPL [conn296] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.763+0000
2019-09-04T06:31:33.483+0000 D3 REPL [conn273] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.483+0000 D3 REPL [conn273] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.644+0000
2019-09-04T06:31:33.483+0000 D3 REPL [conn271] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.483+0000 D3 REPL [conn271] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.752+0000
2019-09-04T06:31:33.483+0000 D3 REPL [conn260] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.483+0000 D3 REPL [conn260] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.329+0000
2019-09-04T06:31:33.483+0000 D3 REPL [conn292] Got notified of new snapshot: { ts: Timestamp(1567578693, 1), t: 1 }, 2019-09-04T06:31:33.463+0000
2019-09-04T06:31:33.483+0000 D3 REPL [conn292] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:52.054+0000
2019-09-04T06:31:33.488+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:31:33.488+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:33.488+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), appliedOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, appliedWallTime: new Date(1567578690174), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), appliedOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, appliedWallTime: new Date(1567578693463), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), appliedOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, appliedWallTime: new Date(1567578690174), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:33.488+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 820 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:03.488+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), appliedOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, appliedWallTime: new Date(1567578690174), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), appliedOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, appliedWallTime: new Date(1567578693463), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, durableWallTime: new Date(1567578690174), appliedOpTime: { ts: Timestamp(1567578690, 3), t: 1 }, appliedWallTime: new Date(1567578690174), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:33.488+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:03.481+0000
2019-09-04T06:31:33.488+0000 D2 ASIO [RS] Request 820 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578693, 1), $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578693, 1) }
2019-09-04T06:31:33.488+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578693, 1), $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578693, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:33.488+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:33.488+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:03.481+0000
2019-09-04T06:31:33.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:33.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:33.490+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:33.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:33.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:33.514+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:31:33.514+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:31:33.514+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:31:33.514+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:31:33.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:33.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:33.580+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578693, 1)
2019-09-04T06:31:33.590+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:33.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:33.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:33.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:33.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:33.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:33.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:33.690+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:33.719+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:33.719+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:33.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:33.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:33.790+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:33.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:33.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:33.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:33.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:33.890+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:33.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:33.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:33.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:33.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:33.990+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:33.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:33.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:34.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:34.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.091+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:34.139+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:34.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:34.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:34.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.191+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:34.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 1C184FC10CE3BDF72918997549EE962ABE197277), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:34.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:31:34.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 1C184FC10CE3BDF72918997549EE962ABE197277), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:34.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 1C184FC10CE3BDF72918997549EE962ABE197277), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:34.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), opTime: { ts: Timestamp(1567578693, 1), t: 1 }, wallTime: new Date(1567578693463) }
2019-09-04T06:31:34.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 1C184FC10CE3BDF72918997549EE962ABE197277), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:34.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:34.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:34.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.291+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:34.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:34.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:34.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.391+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:34.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:34.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.480+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:34.480+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:34.480+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578693, 1)
2019-09-04T06:31:34.480+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12059
2019-09-04T06:31:34.480+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:34.480+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:34.480+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12059
2019-09-04T06:31:34.481+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12062
2019-09-04T06:31:34.481+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12062
2019-09-04T06:31:34.481+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578693, 1), t: 1 }({ ts: Timestamp(1567578693, 1), t: 1 })
2019-09-04T06:31:34.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:34.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.491+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:34.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:34.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:34.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.591+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:34.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:34.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:34.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:34.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.692+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:34.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:34.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.792+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:34.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:34.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:34.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:34.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 821) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:34.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 821 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:44.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:34.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:02.839+0000
2019-09-04T06:31:34.838+0000 D2 ASIO [Replication] Request 821 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), opTime: { ts: Timestamp(1567578693, 1), t: 1 }, wallTime: new Date(1567578693463), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578693, 1), $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578693, 1) }
2019-09-04T06:31:34.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), opTime: { ts: Timestamp(1567578693, 1), t: 1 }, wallTime: new Date(1567578693463), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578693, 1), $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578693, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:34.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:34.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 821) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), opTime: { ts: Timestamp(1567578693, 1), t: 1 }, wallTime: new Date(1567578693463), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578693, 1), $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578693, 1) }
2019-09-04T06:31:34.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:31:34.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:36.838Z
2019-09-04T06:31:34.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:02.839+0000
2019-09-04T06:31:34.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:34.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 822) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:34.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 822 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:44.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:34.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:02.839+0000
2019-09-04T06:31:34.839+0000 D2 ASIO [Replication] Request 822 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), opTime: { ts: Timestamp(1567578693, 1), t: 1 }, wallTime: new Date(1567578693463), $replData: {
term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578693, 1), $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578693, 1) } 2019-09-04T06:31:34.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), opTime: { ts: Timestamp(1567578693, 1), t: 1 }, wallTime: new Date(1567578693463), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578693, 1), $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578693, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:31:34.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:34.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 822) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), opTime: { ts: Timestamp(1567578693, 1), t: 1 }, wallTime: new Date(1567578693463), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578693, 1), $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578693, 1) } 2019-09-04T06:31:34.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:31:34.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:31:44.735+0000 2019-09-04T06:31:34.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:31:45.334+0000 2019-09-04T06:31:34.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:31:34.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:36.839Z 2019-09-04T06:31:34.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:34.839+0000 D3 EXECUTOR [replexec-3] Not reaping because 
the earliest retirement date is 2019-09-04T06:32:04.839+0000 2019-09-04T06:31:34.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:04.839+0000 2019-09-04T06:31:34.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:34.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:34.892+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:34.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:34.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:34.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:34.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:34.992+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:34.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:34.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:35.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:35.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:35.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:35.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 1C184FC10CE3BDF72918997549EE962ABE197277), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:35.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:35.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 1C184FC10CE3BDF72918997549EE962ABE197277), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:35.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 1C184FC10CE3BDF72918997549EE962ABE197277), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:35.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), opTime: { ts: Timestamp(1567578693, 1), t: 1 }, wallTime: new Date(1567578693463) } 2019-09-04T06:31:35.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat 
{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578693, 1), signature: { hash: BinData(0, 1C184FC10CE3BDF72918997549EE962ABE197277), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:35.092+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:35.139+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:35.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:35.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:35.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:35.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:35.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:35.176+0000 D2 ASIO [RS] Request 819 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578695, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578695142), o: { $v: 1, $set: { ping: new Date(1567578695139), up: 2595 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpApplied: { ts: Timestamp(1567578695, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578693, 1), $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578695, 1) } 2019-09-04T06:31:35.176+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578695, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578695142), o: { $v: 1, $set: { ping: new Date(1567578695139), up: 2595 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpApplied: { ts: Timestamp(1567578695, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578693, 1), $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578695, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:35.176+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:35.176+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578695, 1) and ending at ts: Timestamp(1567578695, 1) 2019-09-04T06:31:35.176+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:45.334+0000 2019-09-04T06:31:35.176+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:46.452+0000 2019-09-04T06:31:35.176+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:35.176+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:04.839+0000 2019-09-04T06:31:35.176+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578695, 1), t: 1 } 2019-09-04T06:31:35.176+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:35.176+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:35.176+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578693, 1) 2019-09-04T06:31:35.176+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12083 2019-09-04T06:31:35.176+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:35.176+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:35.176+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12083 2019-09-04T06:31:35.176+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:35.176+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:31:35.176+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:35.176+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578693, 1) 2019-09-04T06:31:35.176+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578695, 1) } 2019-09-04T06:31:35.176+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12086 2019-09-04T06:31:35.176+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:35.176+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:35.176+0000 D3 STORAGE 
[ReplBatcher] WT rollback_transaction for snapshot id 12086 2019-09-04T06:31:35.176+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12063 2019-09-04T06:31:35.176+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12063 2019-09-04T06:31:35.176+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12089 2019-09-04T06:31:35.176+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12089 2019-09-04T06:31:35.176+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:35.176+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 12091 2019-09-04T06:31:35.176+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578695, 1) 2019-09-04T06:31:35.176+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578695, 1) 2019-09-04T06:31:35.176+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 12091 2019-09-04T06:31:35.176+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:35.176+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:31:35.176+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12090 2019-09-04T06:31:35.176+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12090 2019-09-04T06:31:35.176+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12093 2019-09-04T06:31:35.176+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12093 2019-09-04T06:31:35.176+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578695, 1), t: 1 }({ ts: Timestamp(1567578695, 1), t: 1 }) 2019-09-04T06:31:35.177+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578695, 1) 2019-09-04T06:31:35.177+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12094 2019-09-04T06:31:35.177+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578695, 1) } } ] } sort: {} projection: {} 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578695, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578695, 1) || First: notFirst: full path: ts 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578695, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578695, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578695, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
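The D5 QUERY trace above (concluding with the final $or collection scan just below) shows the subplanner handling the minvalid bookkeeping write's $or filter: each branch is planned separately, the only index on local.replset.minvalid is _id_, neither branch can use it, so every branch degenerates to a COLLSCAN. The same plan selection can be reproduced from a client with the explain command; a minimal sketch in Python with PyMongo, assuming auth-free direct access to this member (host name taken from this log, filter mirroring the trace):

    # Sketch only: ask the planner to explain the same $or filter seen in the trace.
    # Assumes PyMongo >= 3.12 (for the directConnection URI option) and that
    # authorization is disabled, as in this deployment's mongod options.
    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")

    explanation = client.local.command({
        "explain": {
            "find": "replset.minvalid",
            "filter": {"$or": [{"t": {"$lt": 1}},
                               {"t": 1, "ts": {"$lt": Timestamp(1567578695, 1)}}]},
        },
        "verbosity": "queryPlanner",
    })
    # With only the _id index present, the winning plan bottoms out in a
    # collection scan, matching the COLLSCAN lines in the log.
    print(explanation["queryPlanner"]["winningPlan"])

The COLLSCAN outcome is harmless here: local.replset.minvalid holds a single bookkeeping document, so the scan is trivially cheap.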
2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578695, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:35.177+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12094
2019-09-04T06:31:35.177+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:35.177+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:35.177+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578695, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578695142), o: { $v: 1, $set: { ping: new Date(1567578695139), up: 2595 } } }, oplog application mode: Secondary
2019-09-04T06:31:35.177+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578695, 1)
2019-09-04T06:31:35.177+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 12096
2019-09-04T06:31:35.177+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:31:35.177+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:31:35.177+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 12096
2019-09-04T06:31:35.177+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:35.177+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578695, 1), t: 1 }({ ts: Timestamp(1567578695, 1), t: 1 })
2019-09-04T06:31:35.177+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578695, 1)
2019-09-04T06:31:35.177+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12095
2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:35.177+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:35.177+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:35.177+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12095
2019-09-04T06:31:35.177+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578695, 1)
2019-09-04T06:31:35.177+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12100
2019-09-04T06:31:35.177+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12100
2019-09-04T06:31:35.177+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578695, 1), t: 1 }({ ts: Timestamp(1567578695, 1), t: 1 })
2019-09-04T06:31:35.178+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:35.178+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), appliedOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, appliedWallTime: new Date(1567578693463), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), appliedOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, appliedWallTime: new Date(1567578693463), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:35.178+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 823 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:05.178+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), appliedOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, appliedWallTime: new Date(1567578693463), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), appliedOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, appliedWallTime: new Date(1567578693463), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:35.178+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:05.177+0000
2019-09-04T06:31:35.178+0000 D2 ASIO [RS] Request 823 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578693, 1), $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578695, 1) }
2019-09-04T06:31:35.178+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578693, 1), $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578695, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:35.178+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578695, 1), t: 1 }
2019-09-04T06:31:35.178+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 824 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:45.178+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578693, 1), t: 1 } }
2019-09-04T06:31:35.178+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:35.178+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:05.178+0000
2019-09-04T06:31:35.178+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:05.178+0000
2019-09-04T06:31:35.192+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:31:35.192+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:35.192+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:35.193+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), appliedOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, appliedWallTime: new Date(1567578693463), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), appliedOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, appliedWallTime: new Date(1567578693463), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:35.193+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 825 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:05.193+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), appliedOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, appliedWallTime: new Date(1567578693463), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, durableWallTime: new Date(1567578693463), appliedOpTime: { ts: Timestamp(1567578693, 1), t: 1 }, appliedWallTime: new Date(1567578693463), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578693, 1), t: 1 }, lastCommittedWall: new Date(1567578693463), lastOpVisible: { ts: Timestamp(1567578693, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:35.193+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:05.178+0000
2019-09-04T06:31:35.193+0000 D2 ASIO [RS] Request 824 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpApplied: { ts: Timestamp(1567578695, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578695, 1), $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578695, 1) }
2019-09-04T06:31:35.193+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpApplied: { ts: Timestamp(1567578695, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578695, 1), $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578695, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:35.193+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:35.193+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:31:35.193+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000
2019-09-04T06:31:35.193+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000
2019-09-04T06:31:35.193+0000 D2 ASIO [RS] Request 825 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578695, 1), $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578695, 1) }
2019-09-04T06:31:35.193+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578695, 1), $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578695, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:35.193+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578690, 1)
2019-09-04T06:31:35.193+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:46.452+0000
2019-09-04T06:31:35.193+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:45.587+0000
2019-09-04T06:31:35.193+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:35.193+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 826 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:45.193+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578695, 1), t: 1 } }
2019-09-04T06:31:35.193+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:04.839+0000
2019-09-04T06:31:35.193+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:35.193+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:05.193+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn286] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn286] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.224+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn274] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn274] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.071+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn295] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn295] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.753+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn264] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn264] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.083+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn273] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn273] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.644+0000
2019-09-04T06:31:35.193+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:05.193+0000
2019-09-04T06:31:35.193+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578695, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a4702d1a496712d7223'), operName: "", parentOperId: "5d6f5a4702d1a496712d7220" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578695, 1), t: 1 } }, $db: "config" }
2019-09-04T06:31:35.193+0000 D3 REPL [conn260] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn260] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.329+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn294] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn294] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.468+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn297] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn297] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.925+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn298] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn298] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.986+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn281] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn281] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.661+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn291] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn291] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.662+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn287] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn287] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.248+0000
2019-09-04T06:31:35.193+0000 D3 REPL [conn270] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000
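The run of entries above is one complete secondary apply cycle: the oplog fetcher pulls the config.mongos ping update from the sync source, rsSync applies it at Timestamp(1567578695, 1), the reporter pushes the new position upstream via replSetUpdatePosition, the stable optime advances, and every connection parked in waitUntilOpTime (the readConcern-majority readers here and continuing below) is notified of the new committed snapshot. The per-member optimes carried by those position updates can be checked from any client with replSetGetStatus; a minimal sketch in Python with PyMongo, illustrative only (host name from this log, no auth assumed):

    # Sketch only: observe the optimes that the replSetUpdatePosition traffic
    # above propagates between the configrs members.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")

    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # "optime" corresponds to the appliedOpTime fields in the log entries.
        print(member["name"], member["stateStr"], member["optime"]["ts"])

A member whose ts trails the primary's is exactly the gap the durableOpTime/appliedOpTime pairs in the log quantify per member.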
2019-09-04T06:31:35.193+0000 D3 REPL [conn270] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.239+0000 2019-09-04T06:31:35.193+0000 D3 REPL [conn283] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000 2019-09-04T06:31:35.193+0000 D3 REPL [conn283] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.238+0000 2019-09-04T06:31:35.193+0000 D3 REPL [conn272] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000 2019-09-04T06:31:35.193+0000 D3 REPL [conn272] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.222+0000 2019-09-04T06:31:35.193+0000 D3 REPL [conn280] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000 2019-09-04T06:31:35.193+0000 D3 REPL [conn280] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:01.310+0000 2019-09-04T06:31:35.193+0000 D3 REPL [conn296] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000 2019-09-04T06:31:35.193+0000 D3 REPL [conn296] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.763+0000 2019-09-04T06:31:35.193+0000 D3 REPL [conn271] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000 2019-09-04T06:31:35.193+0000 D3 REPL [conn271] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.752+0000 2019-09-04T06:31:35.193+0000 D3 REPL [conn292] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000 2019-09-04T06:31:35.193+0000 D3 REPL [conn292] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:52.054+0000 2019-09-04T06:31:35.193+0000 D3 REPL [conn279] Got notified of new snapshot: { ts: Timestamp(1567578695, 1), t: 1 }, 2019-09-04T06:31:35.142+0000 2019-09-04T06:31:35.193+0000 D3 REPL [conn279] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.709+0000 2019-09-04T06:31:35.194+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f5a4702d1a496712d7220|5d6f5a4702d1a496712d7223 2019-09-04T06:31:35.194+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578695, 1), t: 1 } } } 2019-09-04T06:31:35.194+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:31:35.194+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578695, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a4702d1a496712d7223'), operName: "", parentOperId: "5d6f5a4702d1a496712d7220" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578695, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578695, 1) 2019-09-04T06:31:35.194+0000 D2 QUERY [conn21] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:31:35.194+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578695, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a4702d1a496712d7223'), operName: "", parentOperId: "5d6f5a4702d1a496712d7220" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578695, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:31:35.194+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578695, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a4702d1a496712d7224'), operName: "", parentOperId: "5d6f5a4702d1a496712d7220" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578695, 1), t: 1 } }, $db: "config" } 2019-09-04T06:31:35.194+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f5a4702d1a496712d7220|5d6f5a4702d1a496712d7224 2019-09-04T06:31:35.194+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578695, 1), t: 1 } } } 2019-09-04T06:31:35.194+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:31:35.194+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578695, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a4702d1a496712d7224'), operName: "", parentOperId: "5d6f5a4702d1a496712d7220" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578695, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578695, 1) 2019-09-04T06:31:35.194+0000 D2 QUERY [conn21] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:31:35.194+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578695, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a4702d1a496712d7224'), operName: "", parentOperId: "5d6f5a4702d1a496712d7220" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578695, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:31:35.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:35.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:35.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:35.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:35.276+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578695, 1) 2019-09-04T06:31:35.293+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:35.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:35.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:35.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:35.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:35.393+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:35.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:35.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:35.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:35.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:35.493+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:35.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:35.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:35.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:35.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:35.593+0000 D4 
2019-09-04T06:31:35.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:35.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:35.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:35.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:35.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:35.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:35.693+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:35.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:35.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:35.793+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:35.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:35.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:35.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:35.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:35.894+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:35.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:35.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:35.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:35.989+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:35.994+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:35.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:35.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:36.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.089+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.089+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.094+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:36.139+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.176+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:36.176+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:36.176+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578695, 1)
2019-09-04T06:31:36.176+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12128
2019-09-04T06:31:36.176+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:36.176+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:36.176+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12128
2019-09-04T06:31:36.178+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12131
2019-09-04T06:31:36.178+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12131
2019-09-04T06:31:36.178+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578695, 1), t: 1 }({ ts: Timestamp(1567578695, 1), t: 1 })
2019-09-04T06:31:36.194+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:36.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:36.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:31:36.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:36.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:36.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), opTime: { ts: Timestamp(1567578695, 1), t: 1 }, wallTime: new Date(1567578695142) }
2019-09-04T06:31:36.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:36.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:36.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.294+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:36.301+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.301+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.360+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.394+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:36.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.495+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:36.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.595+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:36.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.695+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:36.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.795+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:36.801+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.801+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:36.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 827) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:36.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 827 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:46.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:36.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:04.839+0000
2019-09-04T06:31:36.838+0000 D2 ASIO [Replication] Request 827 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), opTime: { ts: Timestamp(1567578695, 1), t: 1 }, wallTime: new Date(1567578695142), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578695, 1), $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578695, 1) }
2019-09-04T06:31:36.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), opTime: { ts: Timestamp(1567578695, 1), t: 1 }, wallTime: new Date(1567578695142), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578695, 1), $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578695, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:36.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:36.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 827) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), opTime: { ts: Timestamp(1567578695, 1), t: 1 }, wallTime: new Date(1567578695142), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578695, 1), $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578695, 1) }
2019-09-04T06:31:36.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:31:36.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:38.838Z
2019-09-04T06:31:36.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:04.839+0000
2019-09-04T06:31:36.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:36.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 828) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:36.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 828 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:46.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:36.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:04.839+0000
2019-09-04T06:31:36.839+0000 D2 ASIO [Replication] Request 828 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), opTime: { ts: Timestamp(1567578695, 1), t: 1 }, wallTime: new Date(1567578695142), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578695, 1), $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578695, 1) }
2019-09-04T06:31:36.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), opTime: { ts: Timestamp(1567578695, 1), t: 1 }, wallTime: new Date(1567578695142), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578695, 1), $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578695, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:31:36.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:36.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 828) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), opTime: { ts: Timestamp(1567578695, 1), t: 1 }, wallTime: new Date(1567578695142), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578695, 1), $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578695, 1) }
2019-09-04T06:31:36.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:31:36.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:31:45.587+0000
2019-09-04T06:31:36.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:31:47.227+0000
2019-09-04T06:31:36.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:31:36.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:38.839Z
2019-09-04T06:31:36.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:36.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:06.839+0000
2019-09-04T06:31:36.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:06.839+0000
2019-09-04T06:31:36.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.895+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:36.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:36.995+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:36.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:36.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:37.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:37.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:31:37.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:37.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:37.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), opTime: { ts: Timestamp(1567578695, 1), t: 1 }, wallTime: new Date(1567578695142) }
2019-09-04T06:31:37.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.096+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:37.138+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.177+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:37.177+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:37.177+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578695, 1)
2019-09-04T06:31:37.177+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12163
2019-09-04T06:31:37.177+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:37.177+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:37.177+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12163
2019-09-04T06:31:37.178+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12166
2019-09-04T06:31:37.178+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12166
2019-09-04T06:31:37.178+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578695, 1), t: 1 }({ ts: Timestamp(1567578695, 1), t: 1 })
2019-09-04T06:31:37.196+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:37.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:37.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:37.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.296+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:37.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.396+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:37.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.496+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:37.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.596+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:37.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.696+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:37.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.784+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.784+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.796+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:37.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.897+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:37.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:37.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:37.997+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:38.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:38.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.097+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:38.138+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.177+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:38.177+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:38.177+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578695, 1)
2019-09-04T06:31:38.177+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12195
2019-09-04T06:31:38.177+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:38.177+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:38.177+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12195
2019-09-04T06:31:38.178+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12198
2019-09-04T06:31:38.178+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12198
2019-09-04T06:31:38.178+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578695, 1), t: 1 }({ ts: Timestamp(1567578695, 1), t: 1 })
2019-09-04T06:31:38.197+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:38.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:38.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:31:38.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:38.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:38.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), opTime: { ts: Timestamp(1567578695, 1), t: 1 }, wallTime: new Date(1567578695142) }
2019-09-04T06:31:38.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:38.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:38.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.297+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:38.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.329+0000 D2 ASIO [RS] Request 826 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578698, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578698319), o: { $v: 1, $set: { ping: new Date(1567578698319) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpApplied: { ts: Timestamp(1567578698, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578695, 1), $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 1) }
2019-09-04T06:31:38.329+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578698, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578698319), o: { $v: 1, $set: { ping: new Date(1567578698319) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpApplied: { ts: Timestamp(1567578698, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578695, 1), $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:38.329+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:38.329+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578698, 1) and ending at ts: Timestamp(1567578698, 1)
2019-09-04T06:31:38.329+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:47.227+0000
2019-09-04T06:31:38.329+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:49.690+0000
2019-09-04T06:31:38.329+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:38.329+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:06.839+0000
2019-09-04T06:31:38.329+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578698, 1), t: 1 }
2019-09-04T06:31:38.329+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:38.329+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:38.329+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578695, 1)
2019-09-04T06:31:38.329+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12206
2019-09-04T06:31:38.329+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:38.329+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:38.329+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12206
2019-09-04T06:31:38.329+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:38.329+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:38.329+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578695, 1)
2019-09-04T06:31:38.329+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12209
2019-09-04T06:31:38.329+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:31:38.329+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:38.329+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:38.329+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578698, 1) }
2019-09-04T06:31:38.329+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12209
2019-09-04T06:31:38.329+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12199
2019-09-04T06:31:38.329+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12199
2019-09-04T06:31:38.329+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12212
2019-09-04T06:31:38.329+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12212
2019-09-04T06:31:38.329+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:38.329+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 12214
2019-09-04T06:31:38.329+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578698, 1)
2019-09-04T06:31:38.329+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578698, 1)
2019-09-04T06:31:38.329+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 12214
2019-09-04T06:31:38.329+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:38.329+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:31:38.329+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12213
2019-09-04T06:31:38.329+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12213
2019-09-04T06:31:38.329+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12216
2019-09-04T06:31:38.329+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12216
2019-09-04T06:31:38.329+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578698, 1), t: 1 }({ ts: Timestamp(1567578698, 1), t: 1 })
2019-09-04T06:31:38.329+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578698, 1)
2019-09-04T06:31:38.329+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12217
2019-09-04T06:31:38.329+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578698, 1) } } ] } sort: {} projection: {}
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578698, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Rated tree:
$and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578698, 1) || First: notFirst: full path: ts
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578698, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Rated tree:
t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578698, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Rated tree:
$or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578698, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578698, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:31:38.330+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12217
2019-09-04T06:31:38.330+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:38.330+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:38.330+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578698, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578698319), o: { $v: 1, $set: { ping: new Date(1567578698319) } } }, oplog application mode: Secondary
2019-09-04T06:31:38.330+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578698, 1)
2019-09-04T06:31:38.330+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 12219
2019-09-04T06:31:38.330+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "ConfigServer" }
2019-09-04T06:31:38.330+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:31:38.330+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 12219
2019-09-04T06:31:38.330+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:38.330+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578698, 1), t: 1 }({ ts: Timestamp(1567578698, 1), t: 1 })
2019-09-04T06:31:38.330+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578698, 1)
2019-09-04T06:31:38.330+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12218
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Rated tree:
$and
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:38.330+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:31:38.330+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:38.330+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12218
2019-09-04T06:31:38.330+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578698, 1)
2019-09-04T06:31:38.330+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12222
2019-09-04T06:31:38.330+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12222
2019-09-04T06:31:38.330+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578698, 1), t: 1 }({ ts: Timestamp(1567578698, 1), t: 1 })
2019-09-04T06:31:38.330+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:38.330+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578698, 1), t: 1 }, appliedWallTime: new Date(1567578698319), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:38.330+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 829 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:08.330+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578698, 1), t: 1 }, appliedWallTime: new Date(1567578698319), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:38.330+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.330+0000
2019-09-04T06:31:38.330+0000 D2 ASIO [RS] Request 829 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578695, 1), $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 1) }
2019-09-04T06:31:38.331+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578695, 1), $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:38.331+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:38.331+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.331+0000
2019-09-04T06:31:38.331+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578698, 1), t: 1 }
2019-09-04T06:31:38.331+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 830 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:48.331+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578695, 1), t: 1 } }
2019-09-04T06:31:38.331+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.331+0000
2019-09-04T06:31:38.332+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:31:38.332+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:38.332+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578698, 1), t: 1 }, durableWallTime: new Date(1567578698319), appliedOpTime: { ts: Timestamp(1567578698, 1), t: 1 }, appliedWallTime: new Date(1567578698319), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:38.332+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 831 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:08.332+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: 
Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578698, 1), t: 1 }, durableWallTime: new Date(1567578698319), appliedOpTime: { ts: Timestamp(1567578698, 1), t: 1 }, appliedWallTime: new Date(1567578698319), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:38.332+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.331+0000 2019-09-04T06:31:38.332+0000 D2 ASIO [RS] Request 831 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578695, 1), $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 1) } 2019-09-04T06:31:38.332+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578695, 1), t: 1 }, lastCommittedWall: new Date(1567578695142), lastOpVisible: { ts: Timestamp(1567578695, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578695, 1), $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:38.332+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:38.332+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.331+0000 2019-09-04T06:31:38.333+0000 D2 ASIO [RS] Request 830 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 1), t: 1 }, lastCommittedWall: new Date(1567578698319), lastOpVisible: { ts: Timestamp(1567578698, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578698, 1), t: 1 }, lastCommittedWall: new Date(1567578698319), lastOpApplied: { ts: Timestamp(1567578698, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578698, 1), $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 1) } 2019-09-04T06:31:38.333+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 1), t: 1 }, lastCommittedWall: new Date(1567578698319), lastOpVisible: { ts: Timestamp(1567578698, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578698, 1), t: 1 }, lastCommittedWall: new Date(1567578698319), lastOpApplied: { ts: Timestamp(1567578698, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 1), $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:38.333+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:38.333+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:31:38.333+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578693, 1) 2019-09-04T06:31:38.333+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:49.690+0000 2019-09-04T06:31:38.333+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:49.052+0000 2019-09-04T06:31:38.333+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:38.333+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:06.839+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn274] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn274] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.071+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn260] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn260] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.329+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn291] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn291] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.662+0000 2019-09-04T06:31:38.333+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 832 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:48.333+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: 
Timestamp(1567578698, 1), t: 1 } } 2019-09-04T06:31:38.333+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.331+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn287] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn287] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.248+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn272] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn272] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.222+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn296] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn296] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.763+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn292] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn292] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:52.054+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn286] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn286] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.224+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn295] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn295] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.753+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn264] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn264] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.083+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn273] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn273] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.644+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn298] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn298] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.986+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn294] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn294] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.468+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn297] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn297] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.925+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn281] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn281] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.661+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn270] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 
2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn270] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.239+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn283] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn283] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.238+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn280] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn280] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:01.310+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn271] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn271] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.752+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn279] Got notified of new snapshot: { ts: Timestamp(1567578698, 1), t: 1 }, 2019-09-04T06:31:38.319+0000 2019-09-04T06:31:38.333+0000 D3 REPL [conn279] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.709+0000 2019-09-04T06:31:38.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:38.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:38.397+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:38.429+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578698, 1) 2019-09-04T06:31:38.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:38.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:38.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:38.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:38.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:38.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:38.497+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:38.537+0000 D2 ASIO [RS] Request 832 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578698, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578698532), o: { $v: 1, $set: { ping: new Date(1567578698531) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 1), t: 1 }, lastCommittedWall: new Date(1567578698319), lastOpVisible: { ts: Timestamp(1567578698, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578698, 1), t: 1 }, lastCommittedWall: new Date(1567578698319), lastOpApplied: { ts: Timestamp(1567578698, 2), t: 1 }, rbid: 1, primaryIndex: 0, 
syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 1), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } 2019-09-04T06:31:38.537+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578698, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578698532), o: { $v: 1, $set: { ping: new Date(1567578698531) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 1), t: 1 }, lastCommittedWall: new Date(1567578698319), lastOpVisible: { ts: Timestamp(1567578698, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578698, 1), t: 1 }, lastCommittedWall: new Date(1567578698319), lastOpApplied: { ts: Timestamp(1567578698, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 1), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:38.537+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:38.537+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578698, 2) and ending at ts: Timestamp(1567578698, 2) 2019-09-04T06:31:38.537+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:49.052+0000 2019-09-04T06:31:38.537+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:49.665+0000 2019-09-04T06:31:38.537+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:38.537+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:06.839+0000 2019-09-04T06:31:38.537+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:38.537+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:38.538+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578698, 1) 2019-09-04T06:31:38.538+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12230 2019-09-04T06:31:38.538+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:38.538+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:38.538+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for 
snapshot id 12230 2019-09-04T06:31:38.538+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:38.538+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:31:38.538+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:38.538+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578698, 2) } 2019-09-04T06:31:38.538+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578698, 1) 2019-09-04T06:31:38.538+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12233 2019-09-04T06:31:38.538+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:38.538+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:38.538+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12233 2019-09-04T06:31:38.538+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12224 2019-09-04T06:31:38.537+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578698, 2), t: 1 } 2019-09-04T06:31:38.538+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12224 2019-09-04T06:31:38.538+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12236 2019-09-04T06:31:38.538+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12236 2019-09-04T06:31:38.538+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:38.538+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 12238 2019-09-04T06:31:38.538+0000 D4 STORAGE [repl-writer-worker-4] inserting record with timestamp Timestamp(1567578698, 2) 2019-09-04T06:31:38.538+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578698, 2) 2019-09-04T06:31:38.538+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 12238 2019-09-04T06:31:38.538+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:38.538+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:31:38.538+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12237 2019-09-04T06:31:38.538+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12237 2019-09-04T06:31:38.538+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12240 2019-09-04T06:31:38.538+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12240 2019-09-04T06:31:38.538+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578698, 2), t: 1 }({ ts: Timestamp(1567578698, 2), t: 1 }) 2019-09-04T06:31:38.538+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578698, 2) 2019-09-04T06:31:38.538+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12241 2019-09-04T06:31:38.538+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { 
$lt: Timestamp(1567578698, 2) } } ] } sort: {} projection: {} 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578698, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578698, 2) || First: notFirst: full path: ts 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578698, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578698, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578698, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:31:38.538+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578698, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:38.538+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12241
2019-09-04T06:31:38.538+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:38.538+0000 D3 STORAGE [repl-writer-worker-6] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:38.538+0000 D3 REPL [repl-writer-worker-6] applying op: { ts: Timestamp(1567578698, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578698532), o: { $v: 1, $set: { ping: new Date(1567578698531) } } }, oplog application mode: Secondary
2019-09-04T06:31:38.538+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578698, 2)
2019-09-04T06:31:38.538+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 12243
2019-09-04T06:31:38.538+0000 D2 QUERY [repl-writer-worker-6] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }
2019-09-04T06:31:38.538+0000 D4 WRITE [repl-writer-worker-6] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:31:38.538+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 12243
2019-09-04T06:31:38.538+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:38.538+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578698, 2), t: 1 }({ ts: Timestamp(1567578698, 2), t: 1 })
2019-09-04T06:31:38.538+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578698, 2)
2019-09-04T06:31:38.538+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12242
2019-09-04T06:31:38.539+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:31:38.539+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:38.539+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:31:38.539+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:38.539+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:38.539+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:38.539+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12242
2019-09-04T06:31:38.539+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578698, 2)
2019-09-04T06:31:38.539+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12246
2019-09-04T06:31:38.539+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12246
2019-09-04T06:31:38.539+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578698, 2), t: 1 }({ ts: Timestamp(1567578698, 2), t: 1 })
2019-09-04T06:31:38.539+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:38.539+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578698, 1), t: 1 }, durableWallTime: new Date(1567578698319), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 1), t: 1 }, lastCommittedWall: new Date(1567578698319), lastOpVisible: { ts: Timestamp(1567578698, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:38.539+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 833 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:08.539+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578698, 1), t: 1 }, durableWallTime: new Date(1567578698319), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 1), t: 1 }, lastCommittedWall: new Date(1567578698319), lastOpVisible: { ts: Timestamp(1567578698, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:38.539+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.539+0000
2019-09-04T06:31:38.539+0000 D2 ASIO [RS] Request 833 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 1), t: 1 }, lastCommittedWall: new Date(1567578698319), lastOpVisible: { ts: Timestamp(1567578698, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 1), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) }
2019-09-04T06:31:38.539+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 1), t: 1 }, lastCommittedWall: new Date(1567578698319), lastOpVisible: { ts: Timestamp(1567578698, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 1), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:38.539+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:38.539+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.539+0000
2019-09-04T06:31:38.540+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578698, 2), t: 1 }
2019-09-04T06:31:38.540+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 834 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:48.540+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578698, 1), t: 1 } }
2019-09-04T06:31:38.540+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.539+0000
2019-09-04T06:31:38.546+0000 D2 ASIO [RS] Request 834 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpApplied: { ts: Timestamp(1567578698, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) }
2019-09-04T06:31:38.546+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpApplied: { ts: Timestamp(1567578698, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:38.546+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:38.546+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:31:38.546+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.546+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.546+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578693, 2)
2019-09-04T06:31:38.546+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:49.665+0000
2019-09-04T06:31:38.546+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:48.972+0000
2019-09-04T06:31:38.546+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:38.546+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:06.839+0000
2019-09-04T06:31:38.546+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 835 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:48.546+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578698, 2), t: 1 } }
2019-09-04T06:31:38.546+0000 D3 REPL [conn287] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn287] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.248+0000
2019-09-04T06:31:38.546+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.539+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn296] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn296] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.763+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn286] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn286] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.224+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn295] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn295] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.753+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn292] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn292] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:52.054+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn264] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn264] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.083+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn297] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn297] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.925+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn270] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn270] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.239+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn280] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn280] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:01.310+0000
2019-09-04T06:31:38.546+0000 D3 REPL [conn279] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn279] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.709+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn298] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn298] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.986+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn271] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn271] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.752+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn283] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn283] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.238+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn281] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn281] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.661+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn294] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn294] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.468+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn273] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn273] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.644+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn291] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn291] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.662+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn272] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn272] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.222+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn260] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn260] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:42.329+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn274] Got notified of new snapshot: { ts: Timestamp(1567578698, 2), t: 1 }, 2019-09-04T06:31:38.532+0000
2019-09-04T06:31:38.547+0000 D3 REPL [conn274] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:41.071+0000
2019-09-04T06:31:38.547+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:31:38.547+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:38.548+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:38.548+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 836 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:08.548+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, durableWallTime: new Date(1567578695142), appliedOpTime: { ts: Timestamp(1567578695, 1), t: 1 }, appliedWallTime: new Date(1567578695142), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:38.548+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.539+0000
2019-09-04T06:31:38.548+0000 D2 ASIO [RS] Request 836 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) }
2019-09-04T06:31:38.548+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:38.548+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:38.548+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.539+0000
2019-09-04T06:31:38.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.597+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:38.637+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578698, 2)
2019-09-04T06:31:38.638+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.698+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:38.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.798+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:38.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:38.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 837) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:38.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 837 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:48.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:38.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:06.839+0000
2019-09-04T06:31:38.838+0000 D2 ASIO [Replication] Request 837 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) }
2019-09-04T06:31:38.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:38.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:38.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 837) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) }
2019-09-04T06:31:38.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:31:38.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:40.838Z
2019-09-04T06:31:38.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:06.839+0000
2019-09-04T06:31:38.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:38.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 838) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:38.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 838 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:48.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:38.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:06.839+0000
2019-09-04T06:31:38.839+0000 D2 ASIO [Replication] Request 838 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) }
2019-09-04T06:31:38.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:31:38.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:38.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 838) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) }
2019-09-04T06:31:38.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:31:38.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:31:48.972+0000
2019-09-04T06:31:38.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:31:49.006+0000
2019-09-04T06:31:38.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:31:38.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:40.839Z
2019-09-04T06:31:38.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:38.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.839+0000
2019-09-04T06:31:38.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.839+0000
2019-09-04T06:31:38.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.898+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:38.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:38.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:38.998+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:39.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:39.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:39.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:39.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:39.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:31:39.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:39.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:39.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532) }
2019-09-04T06:31:39.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:39.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:39.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:39.098+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:39.139+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:39.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:39.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:39.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:39.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:39.152+0000 I COMMAND
[conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.198+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:39.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:39.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:39.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.298+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:39.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.398+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:39.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.498+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:39.538+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578698, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578698, 2), t: 1 } }, $db: "config" } 2019-09-04T06:31:39.538+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578698, 2), t: 1 } } } 2019-09-04T06:31:39.538+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:31:39.538+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1567578698, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578698, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578698, 2) 2019-09-04T06:31:39.538+0000 D2 QUERY [conn61] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:31:39.538+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:39.538+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578698, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578698, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:31:39.538+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:39.538+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578698, 2) 2019-09-04T06:31:39.538+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12279 2019-09-04T06:31:39.538+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:39.538+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:39.538+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12279 2019-09-04T06:31:39.539+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12282 2019-09-04T06:31:39.539+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12282 2019-09-04T06:31:39.539+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578698, 2), t: 1 }({ ts: Timestamp(1567578698, 2), t: 1 }) 2019-09-04T06:31:39.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.599+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:39.639+0000 D2 COMMAND 
[conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.699+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:39.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.767+0000 D2 COMMAND [conn61] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578698, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578698, 2), t: 1 } }, $db: "config" } 2019-09-04T06:31:39.767+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578698, 2), t: 1 } } } 2019-09-04T06:31:39.767+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:31:39.767+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578698, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578698, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578698, 2) 2019-09-04T06:31:39.767+0000 D5 QUERY [conn61] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shards Tree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:31:39.767+0000 D5 QUERY [conn61] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:31:39.767+0000 D5 QUERY [conn61] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:31:39.767+0000 D5 QUERY [conn61] Rated tree: $and 2019-09-04T06:31:39.767+0000 D5 QUERY [conn61] Planner: outputted 0 indexed solutions.
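The conn61 traffic in this stretch is a mongos refreshing its cached sharding metadata: a majority read of config.settings for a custom chunk size (the EOF plan logged earlier means none is set, so the default applies), followed by a full scan of config.shards, for which the planner finds no usable indexed solution against an empty filter and falls back to a collection scan. A minimal mongo-shell sketch that issues the same two reads by hand, assuming a direct connection to one of the configrs members:

    // Look up a custom chunk size; an empty batch means the default is in effect.
    db.getSiblingDB("config").runCommand({
        find: "settings",
        filter: { _id: "chunksize" },
        limit: 1,
        readConcern: { level: "majority" }
    })

    // Enumerate the registered shards; with an empty filter neither host_1 nor _id_
    // can help, so a COLLSCAN plan like the one logged here is expected.
    db.getSiblingDB("config").runCommand({
        find: "shards",
        readConcern: { level: "majority" }
    })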
2019-09-04T06:31:39.767+0000 D5 QUERY [conn61] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:39.767+0000 D2 QUERY [conn61] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:39.767+0000 D3 STORAGE [conn61] WT begin_transaction for snapshot id 12291 2019-09-04T06:31:39.767+0000 D3 STORAGE [conn61] WT rollback_transaction for snapshot id 12291 2019-09-04T06:31:39.767+0000 I COMMAND [conn61] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578698, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578698, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:31:39.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.799+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:39.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.899+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:39.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:39.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:39.999+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:40.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:40.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:31:40.003+0000 D1 ACCESS [conn90] Returning user 
dba_root@admin from cache 2019-09-04T06:31:40.003+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:31:40.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:31:40.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:31:40.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:31:40.011+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:31:40.011+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:31:40.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:31:40.011+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:40.012+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:40.015+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:40.015+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:31:40.015+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:31:40.026+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:31:40.026+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:40.026+0000 D5 QUERY [conn90] Beginning planning... 
============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunks Tree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:31:40.026+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:31:40.026+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:31:40.026+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:31:40.026+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:31:40.026+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:31:40.026+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:31:40.026+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:40.026+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:40.026+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:40.026+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578698, 2) 2019-09-04T06:31:40.026+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12305 2019-09-04T06:31:40.026+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12305 2019-09-04T06:31:40.026+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:40.042+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.042+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.046+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:40.046+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:31:40.047+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:31:40.047+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
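conn90 behaves like a monitoring agent: it authenticates with SCRAM-SHA-1 as dba_root, then runs serverStatus, replSetGetStatus, and a count of jumbo chunks (another COLLSCAN, since no index covers { jumbo: true }). A mongo-shell sketch of the same probe sequence; the password is a placeholder, not taken from the log:

    var admin = db.getSiblingDB("admin");
    admin.auth("dba_root", "<password>");       // shell drives the saslStart/saslContinue exchange seen above
    admin.runCommand({ serverStatus: 1 });      // the reslen:35129 reply above
    admin.runCommand({ replSetGetStatus: 1 });  // same payload rs.status() prints
    // Jumbo-chunk check; expect a COLLSCAN, matching the D5 QUERY plan above.
    db.getSiblingDB("config").chunks.count({ jumbo: true });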
2019-09-04T06:31:40.047+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1 Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:31:40.047+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:31:40.047+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:31:40.047+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578698, 2) 2019-09-04T06:31:40.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12309 2019-09-04T06:31:40.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12309 2019-09-04T06:31:40.047+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:40.047+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:31:40.047+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:40.047+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1 Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:31:40.047+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:31:40.047+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:31:40.047+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578698, 2) 2019-09-04T06:31:40.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12311 2019-09-04T06:31:40.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12311 2019-09-04T06:31:40.047+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:40.047+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:31:40.047+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:31:40.047+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:31:40.047+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:40.047+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:31:40.047+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:31:40.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12314 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:31:40.048+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12314 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12315 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12315 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12316 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12316 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12317 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12317 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12318 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 
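The two $natural-sorted finds on local.oplog.rs above pull the oldest and the newest oplog entries with batchSize/limit 1; monitoring tools derive the replication window from the gap between their ts fields. A sketch of that computation in the mongo shell (Timestamp.t holds seconds since the epoch):

    var oplog = db.getSiblingDB("local").oplog.rs;
    var first = oplog.find({ ts: { $exists: true } }).sort({ $natural: 1 }).limit(1).next();
    var last = oplog.find({ ts: { $exists: true } }).sort({ $natural: -1 }).limit(1).next();
    // Both scans are forced COLLSCANs: $natural is an explicit table-order hint.
    print("oplog window (s): " + (last.ts.t - first.ts.t));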
2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12318 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12319 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12319 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12320 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12320 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12321 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12321 2019-09-04T06:31:40.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12323 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 
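Every "looking up metadata for" / "fetched CCE metadata" pair on conn90 in this block is server-side work behind the single listDatabases issued at 06:31:40.047: the catalog is walked collection by collection so each database can be sized. From the client it is one command, and the per-collection detail is available separately:

    // One call triggers the whole catalog walk logged above.
    db.adminCommand({ listDatabases: 1 })
    // Per-collection view of the same metadata, e.g. the config.locks indexes
    // (ts_1, state_1_process_1, _id_) dumped above.
    db.getSiblingDB("config").locks.getIndexes()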
2019-09-04T06:31:40.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12323 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12324 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12324 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12325 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] returning 
metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12325 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12326 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12326 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12327 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: 
UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12327 2019-09-04T06:31:40.048+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12328 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12328 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12329 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12329 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12330 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, 
multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12330 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12331 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12331 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12332 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12332 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12333 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12333 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12334 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12334 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12335 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: 
{ _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12335 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12336 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12336 2019-09-04T06:31:40.049+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:31:40.049+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12338 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12338 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12339 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12339 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12340 2019-09-04T06:31:40.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12340 2019-09-04T06:31:40.049+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 
} }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:40.049+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12342 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12342 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12343 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12343 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12344 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12344 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12345 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12345 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12346 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12346 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12347 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12347 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12348 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12348 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12349 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12349 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12350 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12350 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12351 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12351 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12352 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12352 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12353 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12353 2019-09-04T06:31:40.050+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:40.050+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12355 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12355 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12356 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12356 
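Each dbStats call in this stretch shows up as a run of paired "WT begin_transaction" / "WT rollback_transaction" lines: the statistics reads open point-in-time WiredTiger snapshots and, being read-only, end them with a rollback rather than a commit, so these rollbacks are routine bookkeeping and not an error signal. A sketch of the same per-database sweep (listDatabases followed by one dbStats per database, as in the admin, config, local sequence recorded here); hostname and connection options are again illustrative:

    from pymongo import MongoClient, ReadPreference

    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)

    # One dbStats per database returned by listDatabases, mirroring the
    # admin, config, local sequence in the log, with the same read preference
    # the logged client used (secondaryPreferred).
    for db in client.admin.command("listDatabases")["databases"]:
        stats = client[db["name"]].command(
            "dbStats", read_preference=ReadPreference.SECONDARY_PREFERRED)
        print(db["name"], stats["collections"], stats["dataSize"])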
2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12357 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12357 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12358 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12358 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12359 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12359 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12360 2019-09-04T06:31:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12360 2019-09-04T06:31:40.050+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:31:40.055+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.056+0000 I COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.099+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:40.139+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.187+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.199+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:40.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:31:40.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:40.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:31:40.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:40.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:40.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532) } 2019-09-04T06:31:40.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:40.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:31:40.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.299+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:40.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.399+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:40.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.499+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:40.538+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:40.538+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:40.538+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578698, 2) 2019-09-04T06:31:40.538+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12379 2019-09-04T06:31:40.538+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:40.538+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:40.538+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12379 2019-09-04T06:31:40.539+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12382 2019-09-04T06:31:40.539+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12382 2019-09-04T06:31:40.539+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578698, 2), t: 1 }({ ts: 
Timestamp(1567578698, 2), t: 1 }) 2019-09-04T06:31:40.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.600+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:40.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.700+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:40.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.721+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578695, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578695, 1), t: 1 } }, $db: "config" } 2019-09-04T06:31:40.721+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578695, 1), t: 1 } } } 2019-09-04T06:31:40.721+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:31:40.721+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578695, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: 
BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578695, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578698, 2) 2019-09-04T06:31:40.721+0000 D2 QUERY [conn72] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:31:40.721+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578695, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 842107D62A486EE76C3B7B6A78CCD60772CE6680), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578695, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:31:40.722+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578698, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578698, 2), t: 1 } }, $db: "config" } 2019-09-04T06:31:40.722+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578698, 2), t: 1 } } } 2019-09-04T06:31:40.722+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:31:40.722+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578698, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578698, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578698, 2) 2019-09-04T06:31:40.722+0000 D2 QUERY [conn72] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:31:40.722+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578698, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578698, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:31:40.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.800+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:40.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:40.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 839) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:40.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 839 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:50.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:40.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.839+0000 2019-09-04T06:31:40.838+0000 D2 ASIO [Replication] Request 839 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), 
keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } 2019-09-04T06:31:40.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:40.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:40.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 839) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } 2019-09-04T06:31:40.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:31:40.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:42.838Z 2019-09-04T06:31:40.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.839+0000 2019-09-04T06:31:40.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:40.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 840) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:40.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 840 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:50.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:40.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.839+0000 2019-09-04T06:31:40.839+0000 D2 ASIO [Replication] Request 840 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: 
"configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } 2019-09-04T06:31:40.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:31:40.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:40.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 840) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } 2019-09-04T06:31:40.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:31:40.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:31:49.006+0000 2019-09-04T06:31:40.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:31:51.730+0000 2019-09-04T06:31:40.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:31:40.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to 
cmodb802.togewa.com:27019 at 2019-09-04T06:31:42.839Z 2019-09-04T06:31:40.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:40.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:10.839+0000 2019-09-04T06:31:40.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:10.839+0000 2019-09-04T06:31:40.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.900+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:40.944+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.945+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.989+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:40.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:40.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:41.000+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:41.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.063+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:41.063+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:31:40.839+0000 2019-09-04T06:31:41.063+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:31:40.838+0000 2019-09-04T06:31:41.063+0000 D3 REPL [replexec-0] stalest member MemberId(2) date: 2019-09-04T06:31:40.838+0000 2019-09-04T06:31:41.063+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:31:50.838+0000 2019-09-04T06:31:41.063+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:10.839+0000 2019-09-04T06:31:41.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 
6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:41.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:41.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:41.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:41.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532) } 2019-09-04T06:31:41.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.073+0000 I COMMAND [conn274] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578662, 1), signature: { hash: BinData(0, 22544C1F58F00D2C7A5EB6A40A8F03FCCA24680D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:31:41.073+0000 D1 - [conn274] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:41.073+0000 W - [conn274] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:41.078+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35738 #299 (88 connections now open) 2019-09-04T06:31:41.078+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:41.078+0000 D2 COMMAND [conn299] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:41.078+0000 I NETWORK [conn299] received client metadata from 10.108.2.56:35738 conn299: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:41.078+0000 I COMMAND [conn299] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:41.086+0000 I COMMAND [conn264] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578661, 1), signature: { hash: BinData(0, 514286DB74F9F77B0D4219622FBBDB9CC9396AD9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:41.086+0000 D1 - [conn264] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:41.086+0000 W - [conn264] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:41.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.090+0000 I - [conn274] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19
ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : 
"45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] 
mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:41.090+0000 D1 COMMAND [conn274] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578662, 1), signature: { hash: BinData(0, 22544C1F58F00D2C7A5EB6A40A8F03FCCA24680D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:41.090+0000 D1 - [conn274] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:41.090+0000 W - [conn274] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:41.100+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:41.107+0000 I - [conn264] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_
sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", 
"elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:41.107+0000 D1 COMMAND [conn264] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578661, 1), signature: { hash: BinData(0, 514286DB74F9F77B0D4219622FBBDB9CC9396AD9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:41.107+0000 D1 - [conn264] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:41.107+0000 W - [conn264] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:41.127+0000 I - [conn274] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:41.127+0000 W COMMAND [conn274] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:41.127+0000 I COMMAND [conn274] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578662, 1), signature: { hash: BinData(0, 22544C1F58F00D2C7A5EB6A40A8F03FCCA24680D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:31:41.127+0000 D2 NETWORK [conn274] Session from 10.108.2.73:52170 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:41.127+0000 I NETWORK [conn274] end connection 10.108.2.73:52170 (87 connections now open) 2019-09-04T06:31:41.138+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.147+0000 I - [conn264] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23Servi
ceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : 
"7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) 
[0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:41.147+0000 W COMMAND [conn264] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:41.147+0000 I COMMAND [conn264] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578661, 1), signature: { hash: BinData(0, 514286DB74F9F77B0D4219622FBBDB9CC9396AD9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30033ms 2019-09-04T06:31:41.147+0000 D2 NETWORK [conn264] Session from 10.108.2.56:35710 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:41.147+0000 I NETWORK [conn264] end connection 10.108.2.56:35710 (86 connections now open) 2019-09-04T06:31:41.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.201+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:41.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" } 
2019-09-04T06:31:41.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:41.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:41.264+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.264+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.264+0000 D2 COMMAND [conn290] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578700, 1), signature: { hash: BinData(0, 95E089B7C5424E83E756C7B5503AC8F8547CCE40), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:41.264+0000 D1 REPL [conn290] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578698, 2), t: 1 } 2019-09-04T06:31:41.264+0000 D3 REPL [conn290] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.274+0000 2019-09-04T06:31:41.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.264+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.266+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36702 #300 (87 connections now open) 2019-09-04T06:31:41.266+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:41.266+0000 D2 COMMAND [conn300] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:41.266+0000 I NETWORK [conn300] received client metadata from 10.108.2.55:36702 conn300: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:41.266+0000 I COMMAND [conn300] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:41.266+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.266+0000 I COMMAND [conn58] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.267+0000 D2 COMMAND [conn300] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578691, 1), signature: { hash: BinData(0, E866C51399BFEA5AC401BAD9014663E2B59AFDDF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:41.267+0000 D1 REPL [conn300] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578698, 2), t: 1 } 2019-09-04T06:31:41.267+0000 D3 REPL [conn300] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000 2019-09-04T06:31:41.267+0000 D2 COMMAND [conn293] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 9E26572A5146BC8E6E3FE400B47A1E3317EB4F1E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:31:41.267+0000 D1 REPL [conn293] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578698, 2), t: 1 } 2019-09-04T06:31:41.267+0000 D3 REPL [conn293] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000 2019-09-04T06:31:41.270+0000 D2 COMMAND [conn289] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578692, 1), signature: { hash: BinData(0, 14C73FD082AC31B9D9EA409E8C477DA8DEE8FE15), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:41.270+0000 D1 REPL [conn289] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578698, 2), t: 1 } 2019-09-04T06:31:41.270+0000 D3 REPL [conn289] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.280+0000 2019-09-04T06:31:41.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.273+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.279+0000 D2 COMMAND [conn288] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { 
clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 530FD6D8E211D30BDB1163BF3162E191BBCC422A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:41.279+0000 D1 REPL [conn288] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578698, 2), t: 1 } 2019-09-04T06:31:41.279+0000 D3 REPL [conn288] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.289+0000 2019-09-04T06:31:41.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.301+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:41.314+0000 D2 COMMAND [conn277] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578692, 1), signature: { hash: BinData(0, 14C73FD082AC31B9D9EA409E8C477DA8DEE8FE15), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:41.314+0000 D1 REPL [conn277] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578698, 2), t: 1 } 2019-09-04T06:31:41.314+0000 D3 REPL [conn277] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.324+0000 2019-09-04T06:31:41.319+0000 D2 COMMAND [conn276] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578691, 1), signature: { hash: BinData(0, E866C51399BFEA5AC401BAD9014663E2B59AFDDF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:41.319+0000 D1 REPL [conn276] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578698, 2), t: 1 } 2019-09-04T06:31:41.319+0000 D3 REPL [conn276] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.329+0000 2019-09-04T06:31:41.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.401+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:41.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.501+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:41.538+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:41.538+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:41.538+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578698, 2) 2019-09-04T06:31:41.538+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12435 2019-09-04T06:31:41.538+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:41.538+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:41.538+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12435 2019-09-04T06:31:41.539+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12438 2019-09-04T06:31:41.539+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12438 2019-09-04T06:31:41.539+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578698, 2), t: 1 }({ ts: Timestamp(1567578698, 2), t: 1 }) 2019-09-04T06:31:41.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.601+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:41.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.701+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:41.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.763+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.764+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.764+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.766+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.766+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.795+0000 D2 COMMAND [conn81] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578685, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578685, 2), t: 1 } }, $db: "config" } 2019-09-04T06:31:41.795+0000 D1 COMMAND [conn81] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578685, 2), t: 1 } } } 2019-09-04T06:31:41.795+0000 D3 STORAGE [conn81] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:31:41.795+0000 D1 COMMAND [conn81] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578685, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578685, 2), t: 1 } }, 
$db: "config" } with readTs: Timestamp(1567578698, 2) 2019-09-04T06:31:41.795+0000 D5 QUERY [conn81] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:31:41.795+0000 D5 QUERY [conn81] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:31:41.795+0000 D5 QUERY [conn81] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:31:41.795+0000 D5 QUERY [conn81] Rated tree: $and 2019-09-04T06:31:41.795+0000 D5 QUERY [conn81] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:41.795+0000 D5 QUERY [conn81] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:41.795+0000 D2 QUERY [conn81] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:41.795+0000 D3 STORAGE [conn81] WT begin_transaction for snapshot id 12452 2019-09-04T06:31:41.795+0000 D3 STORAGE [conn81] WT rollback_transaction for snapshot id 12452 2019-09-04T06:31:41.796+0000 I COMMAND [conn81] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578685, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 2), signature: { hash: BinData(0, 2145D5032D79C959503A18C6037AD518B01EA1B4), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578685, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:31:41.801+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:41.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.901+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:41.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:41.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:41.990+0000 I COMMAND 
2019-09-04T06:31:41.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:41.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:41.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:41.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:41.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:41.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:42.002+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:42.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.102+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:42.139+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.202+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:42.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
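The single D3 STORAGE [ftdc] record at 06:31:42.000 is the full-time diagnostic data capture thread taking its once-per-second metrics sample (it reads without a timestamp, read source 1). The most recent FTDC sample can also be pulled over the wire with the getDiagnosticData command; this is a diagnostic aid rather than a stable public API, so treat the reply shape as version-dependent (same assumed connection as above):

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)
    sample = client.admin.command("getDiagnosticData")
    # The captured sections are nested under "data"; exact keys vary by
    # server version, so just list what this node returns.
    print(sorted(sample["data"].keys()))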
2019-09-04T06:31:42.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578699, 1), signature: { hash: BinData(0, 1F7CA335EEBE7B8D2778C103AF42FF8F7C913273), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:42.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:31:42.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578699, 1), signature: { hash: BinData(0, 1F7CA335EEBE7B8D2778C103AF42FF8F7C913273), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:42.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578699, 1), signature: { hash: BinData(0, 1F7CA335EEBE7B8D2778C103AF42FF8F7C913273), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:42.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532) }
2019-09-04T06:31:42.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578699, 1), signature: { hash: BinData(0, 1F7CA335EEBE7B8D2778C103AF42FF8F7C913273), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
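The replSetHeartbeat exchange above is the configrs set's internal liveness protocol: cmodb804.togewa.com:27019 (member id 2, term 1) pings this node, and the response reports state: 2 (SECONDARY), syncingTo cmodb804, with identical durable and applied opTimes at Timestamp(1567578698, 2). replSetHeartbeat itself is internal-only; an operator would read the same member state via replSetGetStatus, roughly like this (same assumed connection as earlier):

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)
    status = client.admin.command("replSetGetStatus")
    print(status["set"], status["term"])          # "configrs", 1
    for m in status["members"]:
        # name, PRIMARY/SECONDARY/..., and the member's applied opTime
        print(m["name"], m["stateStr"], m.get("optime", {}).get("ts"))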
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578662, 1), signature: { hash: BinData(0, 22544C1F58F00D2C7A5EB6A40A8F03FCCA24680D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:42.332+0000 D1 - [conn260] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:42.332+0000 W - [conn260] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:42.348+0000 I - [conn260] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:42.349+0000 D1 COMMAND [conn260] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578662, 1), signature: { hash: BinData(0, 22544C1F58F00D2C7A5EB6A40A8F03FCCA24680D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:42.349+0000 D1 - [conn260] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:31:42.349+0000 W - [conn260] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:42.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.368+0000 D2 COMMAND [conn143] run command admin.$cmd { ping: 1, $db: "admin" }
2019-09-04T06:31:42.368+0000 I COMMAND [conn143] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.368+0000 I - [conn260] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:42.369+0000 W COMMAND [conn260] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:42.369+0000 I COMMAND [conn260] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
2019-09-04T06:31:42.378+0000 D2 COMMAND [conn206] run command config.$cmd { ping: 1.0, lsid: { id: UUID("2fef7d2a-ea06-44d7-a315-b0e911b7f5bf") }, $clusterTime: { clusterTime: Timestamp(1567578641, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:31:42.378+0000 I COMMAND [conn206] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("2fef7d2a-ea06-44d7-a315-b0e911b7f5bf") }, $clusterTime: { clusterTime: Timestamp(1567578641, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.402+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:42.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.502+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:42.539+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:42.539+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:42.539+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578698, 2)
2019-09-04T06:31:42.539+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12481
2019-09-04T06:31:42.539+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:42.539+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:42.539+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12481
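Each ReplBatcher pass re-fetches the oplog's catalog metadata, visible above as the CCE record: local.oplog.rs is capped with size: 1073741824.0 (1 GiB), autoIndexId: false, and no secondary indexes. The same facts are observable from a client; a short sketch, again assuming the direct connection used earlier:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)
    oplog = client.local["oplog.rs"]
    print(oplog.options())   # expect {'capped': True, 'size': 1073741824, ...}
    # Newest oplog entry; its "ts" should match the opTimes in this log.
    last = next(oplog.find().sort("$natural", -1).limit(1))
    print(last["ts"], last.get("t"), last.get("op"))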
2019-09-04T06:31:42.539+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12484
2019-09-04T06:31:42.539+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12484
2019-09-04T06:31:42.539+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578698, 2), t: 1 }({ ts: Timestamp(1567578698, 2), t: 1 })
2019-09-04T06:31:42.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.602+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:42.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:42.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:42.683+0000 D2 COMMAND [conn278] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, 1171ABEBFE541B8950624F669563FA93C6C7DEA2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:31:42.683+0000 D1 REPL [conn278] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578698, 2), t: 1 }
2019-09-04T06:31:42.683+0000 D3 REPL [conn278] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.693+0000
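conn278 repeats the pattern against admin.system.keys, the collection of HMAC keys used to sign $clusterTime: its afterOpTime also carries term 92, so waitUntilOpTime parks the request until the 30-second deadline (06:32:12.693) while the newest snapshot stays at { ts: Timestamp(1567578698, 2), t: 1 }. The afterOpTime form is internal to the sharding protocol; ordinary clients express "read at least this far" with afterClusterTime via causally consistent sessions. A sketch of that client-visible equivalent (the Timestamp value is copied from conn278's request purely for illustration):

    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)
    with client.start_session(causal_consistency=True) as s:
        # Force the session's operationTime forward; subsequent reads in
        # the session wait (bounded by maxTimeMS) until the server's
        # majority snapshot reaches it -- the client-side analogue of the
        # waitUntilOpTime records above.
        s.advance_operation_time(Timestamp(1566459161, 3))
        cursor = client.admin["system.keys"].find({}, session=s,
                                                  max_time_ms=30000)
        print(list(cursor))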
"NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:42.698+0000 I NETWORK [conn301] received client metadata from 10.108.2.54:49232 conn301: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:42.698+0000 I COMMAND [conn301] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:42.702+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:42.713+0000 I COMMAND [conn279] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 1), signature: { hash: BinData(0, 6940E12D4AC1B4BCB13CFF3D9A7E2572F61E6255), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:42.713+0000 D1 - [conn279] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:42.713+0000 W - [conn279] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:42.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:42.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:42.730+0000 I - [conn279] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:42.730+0000 D1 COMMAND [conn279] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 1), signature: { hash: BinData(0, 6940E12D4AC1B4BCB13CFF3D9A7E2572F61E6255), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:42.730+0000 D1 - [conn279] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:42.730+0000 W - [conn279] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:42.750+0000 I - [conn279] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:42.750+0000 W COMMAND [conn279] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:42.750+0000 I COMMAND [conn279] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578668, 1), signature: { hash: BinData(0, 6940E12D4AC1B4BCB13CFF3D9A7E2572F61E6255), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:31:42.750+0000 D2 NETWORK [conn279] Session from 10.108.2.54:49214 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:42.750+0000 I NETWORK [conn279] end connection 10.108.2.54:49214 (86 connections now open) 2019-09-04T06:31:42.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:42.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:42.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:42.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:42.803+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:42.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:42.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:42.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:42.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 841) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:42.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 841 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:52.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:42.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:10.839+0000 2019-09-04T06:31:42.838+0000 D2 ASIO [Replication] Request 841 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } 
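The backtrace above records the failure path behind the "Unable to gather storage statistics" warning that follows it: CurOp::completeAndLogOperation takes a Lock::GlobalLock with a deadline so it can attach storage statistics to the slow-operation log line, LockerImpl::lock cannot acquire the lock before the operation's already-expired deadline, and the resulting uassert is thrown as error code 50 (MaxTimeMSExpired), traced, caught, and downgraded to the W COMMAND warning, while the slow find against admin.system.keys itself fails after 30031ms with maxTimeMS: 30000. What follows is a minimal, self-contained C++ sketch of that deadline-bounded pattern; every name in it is a hypothetical stand-in, not mongod's actual locking code.

// Hypothetical illustration (not mongod source): try to take a global lock
// with a deadline before gathering storage stats for a slow-op log line, and
// degrade to a warning when the deadline expires -- the shape of the path in
// the backtrace above (completeAndLogOperation -> GlobalLock -> lock -> uassert).
#include <chrono>
#include <iostream>
#include <mutex>
#include <optional>
#include <thread>

std::timed_mutex globalLock;  // stand-in for mongod's global lock resource

// Stand-in for the storage stats we would attach to the slow-op log line.
std::optional<int> gatherStorageStats(std::chrono::milliseconds deadline) {
    std::unique_lock<std::timed_mutex> lk(globalLock, std::defer_lock);
    if (!lk.try_lock_for(deadline)) {
        return std::nullopt;  // lock acquire timed out, as in the log above
    }
    return 42;  // pretend these are the stats
}

int main() {
    // Simulate another thread holding the global lock past our deadline.
    std::thread holder([] {
        std::lock_guard<std::timed_mutex> lk(globalLock);
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    });
    std::this_thread::sleep_for(std::chrono::milliseconds(50));

    auto stats = gatherStorageStats(std::chrono::milliseconds(10));
    if (!stats) {
        std::cout << "W COMMAND Unable to gather storage statistics for a "
                     "slow operation due to lock acquire timeout\n";
    }
    holder.join();
}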
2019-09-04T06:31:42.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:42.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:42.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 841) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } 2019-09-04T06:31:42.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:31:42.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:44.838Z 2019-09-04T06:31:42.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:10.839+0000 2019-09-04T06:31:42.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:42.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 842) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:42.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 842 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:52.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:42.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:10.839+0000 2019-09-04T06:31:42.839+0000 D2 ASIO [Replication] Request 842 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: 
Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } 2019-09-04T06:31:42.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:31:42.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:42.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 842) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } 2019-09-04T06:31:42.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:31:42.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:31:51.730+0000 2019-09-04T06:31:42.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:31:54.235+0000 2019-09-04T06:31:42.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:31:42.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:44.839Z 
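The replexec lines above show heartbeat-driven election suppression: each successful heartbeat from the primary (state: 1) cancels the pending election-timeout callback and schedules a fresh one a little over ten seconds out (for example 06:31:42.839 -> 06:31:54.235, consistent with a 10s base plus jitter). Below is a Raft-style sketch of that timer under those assumptions; it is illustrative only, not mongod's actual replication executor.

// Hypothetical sketch of the election-timer dance recorded above: every
// heartbeat from the current primary resets the election deadline, so an
// election only starts if heartbeats stop arriving.
#include <chrono>
#include <iostream>
#include <random>

using Clock = std::chrono::steady_clock;

struct ElectionTimer {
    Clock::time_point deadline;
    std::mt19937 rng{std::random_device{}()};

    // Base timeout plus jitter, mirroring the slightly irregular
    // "Scheduling election timeout callback at ..." times in the log.
    void reset() {
        std::uniform_int_distribution<int> jitterMs(0, 2000);
        deadline = Clock::now() + std::chrono::milliseconds(10000 + jitterMs(rng));
    }

    bool expired() const { return Clock::now() >= deadline; }
};

int main() {
    ElectionTimer timer;
    timer.reset();  // initial schedule
    // A heartbeat from the primary arrives:
    timer.reset();  // "Postponing election timeout due to heartbeat from primary"
    std::cout << (timer.expired() ? "start election\n" : "stay secondary\n");
}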
2019-09-04T06:31:42.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:42.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:12.839+0000 2019-09-04T06:31:42.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:12.839+0000 2019-09-04T06:31:42.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:42.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:42.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:42.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:42.880+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47226 #302 (87 connections now open) 2019-09-04T06:31:42.880+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:42.880+0000 D2 COMMAND [conn302] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:42.881+0000 I NETWORK [conn302] received client metadata from 10.108.2.52:47226 conn302: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:42.881+0000 I COMMAND [conn302] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:42.881+0000 D2 COMMAND [conn302] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, ECF4B5774F4DA596CBFAEF2B306A437976FBCFAD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:42.881+0000 D1 REPL [conn302] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578698, 2), t: 1 } 2019-09-04T06:31:42.881+0000 D3 REPL [conn302] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.891+0000 2019-09-04T06:31:42.903+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:42.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, 
$db: "admin" } 2019-09-04T06:31:42.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:42.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:42.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:42.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:42.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:43.003+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:43.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, DFD9D2CB89631603AE6277D7A8BB0554DD59995C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:43.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:43.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, DFD9D2CB89631603AE6277D7A8BB0554DD59995C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:43.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, DFD9D2CB89631603AE6277D7A8BB0554DD59995C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:43.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532) } 2019-09-04T06:31:43.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, DFD9D2CB89631603AE6277D7A8BB0554DD59995C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:31:43.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.103+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:43.139+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.203+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:43.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:43.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:31:43.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.303+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:43.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.403+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:43.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.489+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.503+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:43.539+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:43.539+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:43.539+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578698, 2) 2019-09-04T06:31:43.539+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12526 2019-09-04T06:31:43.539+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:43.539+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:43.539+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12526 2019-09-04T06:31:43.539+0000 D3 
STORAGE [rsSync-0] WT begin_transaction for snapshot id 12529 2019-09-04T06:31:43.539+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12529 2019-09-04T06:31:43.539+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578698, 2), t: 1 }({ ts: Timestamp(1567578698, 2), t: 1 }) 2019-09-04T06:31:43.546+0000 D2 ASIO [RS] Request 835 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpApplied: { ts: Timestamp(1567578698, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } 2019-09-04T06:31:43.546+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpApplied: { ts: Timestamp(1567578698, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:43.546+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:43.547+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:31:43.547+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:54.235+0000 2019-09-04T06:31:43.547+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:54.761+0000 2019-09-04T06:31:43.547+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:43.547+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:12.839+0000 2019-09-04T06:31:43.547+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 843 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:53.547+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578698, 2), t: 1 } } 2019-09-04T06:31:43.547+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.539+0000 
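Between batches, the oplog fetcher above reads zero operations and immediately schedules another getMore on the tailable cursor (id 2779728788818727477) with maxTimeMS: 5000, letting the sync source hold the request server-side until new oplog entries arrive instead of busy-polling. A toy C++ rendering of that poll loop follows, with stand-in types rather than the real OplogFetcher.

// Hypothetical sketch of the awaitData-style fetch loop behind the
// "oplog fetcher read 0 operations" / "Scheduling ... getMore" pair above.
#include <chrono>
#include <iostream>
#include <vector>

struct Batch { std::vector<int> ops; };  // stand-in for oplog entries

// Stand-in for the remote getMore; the real call blocks server-side for up
// to maxTimeMS waiting for new entries before returning an empty batch.
Batch getMore(long long cursorId, std::chrono::milliseconds maxTimeMS) {
    (void)cursorId; (void)maxTimeMS;
    return {};  // nothing new: mirrors "read 0 operations from remote oplog"
}

int main() {
    const long long cursorId = 2779728788818727477LL;  // from the log above
    for (int round = 0; round < 3; ++round) {
        Batch b = getMore(cursorId, std::chrono::milliseconds(5000));
        std::cout << "oplog fetcher read " << b.ops.size()
                  << " operations from remote oplog\n";
        if (!b.ops.empty()) { /* hand off to the ReplBatcher / appliers */ }
    }
}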
2019-09-04T06:31:43.548+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:43.548+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:43.548+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 844 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:13.548+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:43.548+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.539+0000 2019-09-04T06:31:43.548+0000 D2 ASIO [RS] Request 844 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } 2019-09-04T06:31:43.548+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 
}, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:43.548+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:43.548+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:08.539+0000 2019-09-04T06:31:43.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.585+0000 I NETWORK [listener] connection accepted from 10.108.2.62:53484 #303 (88 connections now open) 2019-09-04T06:31:43.585+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:43.585+0000 D2 COMMAND [conn303] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:43.585+0000 I NETWORK [conn303] received client metadata from 10.108.2.62:53484 conn303: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:43.585+0000 I COMMAND [conn303] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:43.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.590+0000 D2 COMMAND [conn303] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 
ECF4B5774F4DA596CBFAEF2B306A437976FBCFAD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:43.590+0000 D1 REPL [conn303] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578698, 2), t: 1 } 2019-09-04T06:31:43.590+0000 D3 REPL [conn303] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:13.600+0000 2019-09-04T06:31:43.604+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:43.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.704+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:43.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.804+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:43.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.904+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:43.976+0000 D2 COMMAND [conn29] run command admin.$cmd 
{ isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.989+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:43.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:43.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:44.004+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:44.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.104+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:44.139+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.204+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:44.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 8A44A988C58411A600A7EFDCACE96350C4266CFC), keyId: 
6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:44.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:31:44.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 8A44A988C58411A600A7EFDCACE96350C4266CFC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:44.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 8A44A988C58411A600A7EFDCACE96350C4266CFC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:44.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532) } 2019-09-04T06:31:44.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 8A44A988C58411A600A7EFDCACE96350C4266CFC), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:44.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:31:44.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.305+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:44.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.405+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:44.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.490+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.505+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:44.539+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:44.539+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:44.539+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578698, 2) 2019-09-04T06:31:44.539+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12569 2019-09-04T06:31:44.539+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:44.539+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:44.539+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12569 2019-09-04T06:31:44.539+0000 D3 
STORAGE [rsSync-0] WT begin_transaction for snapshot id 12572 2019-09-04T06:31:44.539+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12572 2019-09-04T06:31:44.539+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578698, 2), t: 1 }({ ts: Timestamp(1567578698, 2), t: 1 }) 2019-09-04T06:31:44.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.605+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:44.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.705+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:44.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.805+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:44.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.838+0000 D3 EXECUTOR [replexec-3] Executing 
a task on behalf of pool replexec 2019-09-04T06:31:44.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 845) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:44.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 845 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:54.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:44.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:12.839+0000 2019-09-04T06:31:44.838+0000 D2 ASIO [Replication] Request 845 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } 2019-09-04T06:31:44.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:44.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:44.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 845) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } 2019-09-04T06:31:44.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:31:44.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:46.838Z 2019-09-04T06:31:44.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:12.839+0000 2019-09-04T06:31:44.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:44.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 846) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:44.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 846 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:54.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:44.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:12.839+0000 2019-09-04T06:31:44.839+0000 D2 ASIO [Replication] Request 846 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } 2019-09-04T06:31:44.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:31:44.839+0000 D3 EXECUTOR 
[replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:44.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 846) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578698, 2) } 2019-09-04T06:31:44.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:31:44.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:31:54.761+0000 2019-09-04T06:31:44.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:31:56.200+0000 2019-09-04T06:31:44.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:31:44.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:46.839Z 2019-09-04T06:31:44.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:44.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:14.839+0000 2019-09-04T06:31:44.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:14.839+0000 2019-09-04T06:31:44.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.905+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:44.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:44.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:44.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:45.000+0000 D3 STORAGE [ftdc] setting timestamp read 
source: 1, provided timestamp: none 2019-09-04T06:31:45.006+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:45.048+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:45.048+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:45.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 8A44A988C58411A600A7EFDCACE96350C4266CFC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:45.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:45.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 8A44A988C58411A600A7EFDCACE96350C4266CFC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:45.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 8A44A988C58411A600A7EFDCACE96350C4266CFC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:45.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), opTime: { ts: Timestamp(1567578698, 2), t: 1 }, wallTime: new Date(1567578698532) } 2019-09-04T06:31:45.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, 8A44A988C58411A600A7EFDCACE96350C4266CFC), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:45.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:45.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:45.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:45.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:45.106+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:45.139+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:45.139+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:45.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:45.152+0000 I COMMAND [conn23] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:45.152+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:45.152+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:45.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:45.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:45.206+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:45.216+0000 D2 ASIO [RS] Request 843 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578705, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578705196), o: { $v: 1, $set: { ping: new Date(1567578705193), up: 2605 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpApplied: { ts: Timestamp(1567578705, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578705, 1) } 2019-09-04T06:31:45.216+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578705, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578705196), o: { $v: 1, $set: { ping: new Date(1567578705193), up: 2605 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpApplied: { ts: Timestamp(1567578705, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578705, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:45.216+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:45.217+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578705, 1) and ending 
at ts: Timestamp(1567578705, 1) 2019-09-04T06:31:45.217+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:56.200+0000 2019-09-04T06:31:45.217+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:56.191+0000 2019-09-04T06:31:45.217+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:45.217+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:14.839+0000 2019-09-04T06:31:45.217+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578705, 1), t: 1 } 2019-09-04T06:31:45.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:45.217+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:45.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:45.217+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:45.217+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578698, 2) 2019-09-04T06:31:45.217+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12601 2019-09-04T06:31:45.217+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:45.217+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:45.217+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12601 2019-09-04T06:31:45.217+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:45.217+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:45.217+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578698, 2) 2019-09-04T06:31:45.217+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12604 2019-09-04T06:31:45.217+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:45.217+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:45.217+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12604 2019-09-04T06:31:45.217+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:31:45.217+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578705, 1) } 2019-09-04T06:31:45.217+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12573 2019-09-04T06:31:45.217+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12573 2019-09-04T06:31:45.217+0000 D3 STORAGE 
[rsSync-0] WT begin_transaction for snapshot id 12607 2019-09-04T06:31:45.217+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12607 2019-09-04T06:31:45.217+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:45.217+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 12609 2019-09-04T06:31:45.217+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578705, 1) 2019-09-04T06:31:45.217+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578705, 1) 2019-09-04T06:31:45.217+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 12609 2019-09-04T06:31:45.217+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:45.217+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:31:45.217+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12608 2019-09-04T06:31:45.217+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12608 2019-09-04T06:31:45.217+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12611 2019-09-04T06:31:45.217+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12611 2019-09-04T06:31:45.217+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578705, 1), t: 1 }({ ts: Timestamp(1567578705, 1), t: 1 }) 2019-09-04T06:31:45.217+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578705, 1) 2019-09-04T06:31:45.217+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12612 2019-09-04T06:31:45.217+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578705, 1) } } ] } sort: {} projection: {} 2019-09-04T06:31:45.217+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:45.217+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:31:45.217+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578705, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:31:45.217+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:45.217+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:45.217+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:45.217+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578705, 1) || First: notFirst: full path: ts 2019-09-04T06:31:45.217+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:45.217+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578705, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:45.217+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:45.217+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Beginning planning... 
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578705, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578705, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
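The D5 QUERY trace above, together with the collscan entry that follows, shows why every branch of the $or over local.replset.minvalid is answered without an index: the collection carries only the default _id index, so the planner rates the t and ts predicates, outputs zero indexed solutions, and falls back to a collection scan. A minimal shell sketch of reproducing that verdict with explain(); the filter mirrors the logged query, and reading this internal collection is assumed to happen on a disposable test node:

    // Reproduce the planner's verdict for the logged $or query with explain().
    // local.replset.minvalid is an internal collection; poke at it only on a
    // throwaway test node.
    var minvalid = db.getSiblingDB("local").getCollection("replset.minvalid");
    var plan = minvalid.find({
        $or: [
            { t: { $lt: 1 } },
            { t: 1, ts: { $lt: Timestamp(1567578705, 1) } }
        ]
    }).explain("queryPlanner");
    // With only the _id index available there are no indexed solutions, so
    // winningPlan.stage comes back "COLLSCAN", matching the trace above.
    printjson(plan.queryPlanner.winningPlan);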
2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578705, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:45.218+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12612 2019-09-04T06:31:45.218+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:45.218+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:45.218+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578705, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578705196), o: { $v: 1, $set: { ping: new Date(1567578705193), up: 2605 } } }, oplog application mode: Secondary 2019-09-04T06:31:45.218+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578705, 1) 2019-09-04T06:31:45.218+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 12614 2019-09-04T06:31:45.218+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:31:45.218+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:31:45.218+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 12614 2019-09-04T06:31:45.218+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:45.218+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578705, 1), t: 1 }({ ts: Timestamp(1567578705, 1), t: 1 }) 2019-09-04T06:31:45.218+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578705, 1) 2019-09-04T06:31:45.218+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12613 2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:45.218+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:45.218+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:45.218+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12613 2019-09-04T06:31:45.218+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578705, 1) 2019-09-04T06:31:45.218+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:45.218+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12617 2019-09-04T06:31:45.218+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12617 2019-09-04T06:31:45.218+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:45.218+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578705, 1), t: 1 }({ ts: Timestamp(1567578705, 1), t: 1 }) 2019-09-04T06:31:45.218+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 847 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:15.218+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:45.218+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:15.218+0000 2019-09-04T06:31:45.218+0000 D2 ASIO [RS] Request 847 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578705, 1) } 2019-09-04T06:31:45.218+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578705, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:45.218+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:45.218+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:15.218+0000 2019-09-04T06:31:45.219+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578705, 1), t: 1 } 2019-09-04T06:31:45.219+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 848 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:55.219+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578698, 2), t: 1 } } 2019-09-04T06:31:45.219+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:15.218+0000 2019-09-04T06:31:45.226+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:31:45.226+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:45.226+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:45.226+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 849 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:15.226+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: 
Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, durableWallTime: new Date(1567578698532), appliedOpTime: { ts: Timestamp(1567578698, 2), t: 1 }, appliedWallTime: new Date(1567578698532), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:45.226+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:15.218+0000 2019-09-04T06:31:45.227+0000 D2 ASIO [RS] Request 849 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578705, 1) } 2019-09-04T06:31:45.227+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578698, 2), t: 1 }, lastCommittedWall: new Date(1567578698532), lastOpVisible: { ts: Timestamp(1567578698, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578698, 2), $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578705, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:45.227+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:45.227+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:15.218+0000 2019-09-04T06:31:45.227+0000 D2 ASIO [RS] Request 848 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpVisible: { ts: Timestamp(1567578705, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpApplied: { ts: Timestamp(1567578705, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578705, 1), $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578705, 1) } 2019-09-04T06:31:45.227+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpVisible: { ts: Timestamp(1567578705, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpApplied: { ts: Timestamp(1567578705, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578705, 1), $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578705, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:45.227+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:45.227+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:31:45.227+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.227+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.227+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578700, 1) 2019-09-04T06:31:45.227+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:56.191+0000 2019-09-04T06:31:45.227+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:55.659+0000 2019-09-04T06:31:45.227+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:45.227+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 850 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:55.227+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578705, 1), t: 1 } } 2019-09-04T06:31:45.227+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:14.839+0000 2019-09-04T06:31:45.227+0000 D3 REPL [conn286] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.227+0000 D3 REPL [conn286] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.224+0000 2019-09-04T06:31:45.227+0000 D3 REPL [conn295] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn295] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.753+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn270] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn270] waitUntilOpTime: waiting for a new snapshot 
until 2019-09-04T06:31:46.239+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn297] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn297] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.925+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn290] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn290] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.274+0000 2019-09-04T06:31:45.227+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:15.218+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn300] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn300] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn283] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn283] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.238+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn303] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn303] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:13.600+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn291] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn291] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.662+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn293] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn293] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn276] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn276] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.329+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn278] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn278] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.693+0000 2019-09-04T06:31:45.228+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578705, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a5102d1a496712d7228'), operName: "", parentOperId: "5d6f5a5102d1a496712d7225" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 341C75EA8323E71A94A85277266F2314DFFC7D76), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578705, 1), t: 1 } }, $db: "config" } 2019-09-04T06:31:45.228+0000 D3 REPL [conn302] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn302] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.891+0000 
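The conn21 command embedded above looks like a mongos refresh of balancer settings: a find on config.settings pinned with readConcern { level: "majority", afterOpTime: ... } so the result can never be staler than the config optime the router last saw. The entries that follow show the server holding the read until a committed snapshot at that optime is available. A hedged shell approximation (afterOpTime is reserved for internal clients, so this sketch pins causality with the client-facing afterClusterTime instead, and the timestamp is an assumed clusterTime previously returned by the server):

    // Approximate the mongos settings refresh from a plain shell session.
    db.getSiblingDB("config").runCommand({
        find: "settings",
        filter: { _id: "chunksize" },
        limit: 1,
        readConcern: { level: "majority", afterClusterTime: Timestamp(1567578705, 1) },
        maxTimeMS: 30000
    });
    // Here config.settings does not exist yet, so the server answers the
    // majority read with an EOF plan and returns no documents, as logged below.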
2019-09-04T06:31:45.228+0000 D3 REPL [conn287] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn287] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.248+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn296] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn296] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.763+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn292] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn292] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:52.054+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn271] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn271] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.752+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn281] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn281] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.661+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn273] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn273] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.644+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn272] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn272] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:46.222+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn280] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn280] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:01.310+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn298] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn298] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.986+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn289] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn289] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.280+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn288] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn288] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.289+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn277] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn277] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.324+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn294] Got notified of new snapshot: { ts: Timestamp(1567578705, 1), t: 1 }, 2019-09-04T06:31:45.196+0000 2019-09-04T06:31:45.228+0000 D3 REPL [conn294] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.468+0000 
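Almost everything in this stretch is D2-D5 tracing: per-connection snapshot notifications, WT begin/commit pairs, heartbeat scheduling, journal-flush ticks. That volume is a consequence of running the member at elevated component verbosity, and it can be dialed back at runtime; a sketch, assuming an open shell on the member with permission to change server parameters:

    // Return the chatty components to default (informational) logging
    // without restarting the member.
    db.setLogLevel(0, "replication");   // REPL / REPL_HB / ELECTION tracing
    db.setLogLevel(0, "storage");       // WT transaction and journal-flush lines
    db.setLogLevel(0, "executor");      // thread-pool "Executing a task" lines
    // Equivalent one-shot form through the server parameter:
    db.adminCommand({
        setParameter: 1,
        logComponentVerbosity: {
            verbosity: 0,
            replication: { verbosity: 0 },
            storage: { verbosity: 0 },
            executor: { verbosity: 0 }
        }
    });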
2019-09-04T06:31:45.228+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f5a5102d1a496712d7225|5d6f5a5102d1a496712d7228 2019-09-04T06:31:45.228+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578705, 1), t: 1 } } } 2019-09-04T06:31:45.228+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:31:45.228+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578705, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a5102d1a496712d7228'), operName: "", parentOperId: "5d6f5a5102d1a496712d7225" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 341C75EA8323E71A94A85277266F2314DFFC7D76), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578705, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578705, 1) 2019-09-04T06:31:45.228+0000 D2 QUERY [conn21] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:31:45.228+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578705, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a5102d1a496712d7228'), operName: "", parentOperId: "5d6f5a5102d1a496712d7225" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 341C75EA8323E71A94A85277266F2314DFFC7D76), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578705, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:31:45.229+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578705, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a5102d1a496712d7229'), operName: "", parentOperId: "5d6f5a5102d1a496712d7225" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 341C75EA8323E71A94A85277266F2314DFFC7D76), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578705, 1), t: 1 } }, $db: "config" } 2019-09-04T06:31:45.229+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f5a5102d1a496712d7225|5d6f5a5102d1a496712d7229 2019-09-04T06:31:45.229+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578705, 1), t: 1 } } } 2019-09-04T06:31:45.229+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:31:45.229+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { 
level: "majority", afterOpTime: { ts: Timestamp(1567578705, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a5102d1a496712d7229'), operName: "", parentOperId: "5d6f5a5102d1a496712d7225" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 341C75EA8323E71A94A85277266F2314DFFC7D76), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578705, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578705, 1) 2019-09-04T06:31:45.229+0000 D2 QUERY [conn21] Collection config.settings does not exist. Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:31:45.229+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578705, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a5102d1a496712d7229'), operName: "", parentOperId: "5d6f5a5102d1a496712d7225" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 341C75EA8323E71A94A85277266F2314DFFC7D76), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578705, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:31:45.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:45.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000
2019-09-04T06:31:45.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.306+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:45.317+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578705, 1)
2019-09-04T06:31:45.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.406+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:45.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.489+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.506+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:45.548+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.548+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.606+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:45.639+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.639+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.652+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.652+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.707+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:45.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.807+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:45.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.907+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:45.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:45.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:45.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:46.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:46.007+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:46.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:46.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:46.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:46.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:46.107+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:46.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:46.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:46.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:46.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:46.207+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:46.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:46.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:46.217+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:46.217+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:46.217+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578705, 1)
2019-09-04T06:31:46.217+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12653
2019-09-04T06:31:46.217+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:46.217+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:46.217+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12653
2019-09-04T06:31:46.218+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12656
2019-09-04T06:31:46.218+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12656
2019-09-04T06:31:46.218+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578705, 1), t: 1 }({ ts: Timestamp(1567578705, 1), t: 1 })
2019-09-04T06:31:46.223+0000 I COMMAND [conn272] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578669, 1), signature: { hash: BinData(0, E8F838832282F30383549BB7AA2B977F74AA6897), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:31:46.223+0000 D1 - [conn272] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:31:46.223+0000 W - [conn272] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:46.226+0000 I COMMAND [conn286] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 571EA4AAB3958D7DD4D98129725B60C4FB7E61F0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:31:46.226+0000 D1 - [conn286] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:31:46.226+0000 W - [conn286] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:46.230+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46668 #304 (89 connections now open)
2019-09-04T06:31:46.230+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:31:46.230+0000 D2 COMMAND [conn304] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:31:46.230+0000 I NETWORK [conn304] received client metadata from 10.108.2.64:46668 conn304: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:31:46.230+0000 I COMMAND [conn304] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:31:46.231+0000 I NETWORK [listener] connection accepted from 10.108.2.53:50750 #305 (90 connections now open)
2019-09-04T06:31:46.231+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:31:46.231+0000 D2 COMMAND [conn305] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:31:46.231+0000 I NETWORK [conn305] received client metadata from 10.108.2.53:50750 conn305: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:31:46.231+0000 I COMMAND [conn305] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:31:46.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 341C75EA8323E71A94A85277266F2314DFFC7D76), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:46.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:31:46.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 341C75EA8323E71A94A85277266F2314DFFC7D76), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:46.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 341C75EA8323E71A94A85277266F2314DFFC7D76), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:46.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), opTime: { ts: Timestamp(1567578705, 1), t: 1 }, wallTime: new Date(1567578705196) }
2019-09-04T06:31:46.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 341C75EA8323E71A94A85277266F2314DFFC7D76), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:46.234+0000 I NETWORK [listener] connection accepted from 10.108.2.49:53420 #306 (91 connections now open)
2019-09-04T06:31:46.234+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:31:46.234+0000 D2 COMMAND [conn306] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:31:46.234+0000 I NETWORK [conn306] received client metadata from 10.108.2.49:53420 conn306: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:31:46.234+0000 I COMMAND [conn306] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:31:46.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:46.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:46.239+0000 I COMMAND [conn283] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 571EA4AAB3958D7DD4D98129725B60C4FB7E61F0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:31:46.239+0000 D1 - [conn283] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:31:46.239+0000 W - [conn283] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:46.240+0000 I COMMAND [conn270] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578670, 1), signature: { hash: BinData(0, 8177E92797B9354FFC128D337C838CA7406FA18D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:31:46.240+0000 D1 - [conn270] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:31:46.240+0000 W - [conn270] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:46.248+0000 I COMMAND [conn287] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 2D7FBB04A997F5BB61222D22B1E0478572811EE3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:31:46.249+0000 D1 - [conn287] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:31:46.249+0000 W - [conn287] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:46.259+0000 I - [conn283] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:46.259+0000 D1 COMMAND [conn283] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 571EA4AAB3958D7DD4D98129725B60C4FB7E61F0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:46.259+0000 D1 - [conn283] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:46.259+0000 W - [conn283] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:46.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.276+0000 I - [conn270] 0x56174b707c81 
2019-09-04T06:31:46.276+0000 I - [conn270] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
[frames and shared-library map identical to the conn283 backtrace above]
----- END BACKTRACE -----
2019-09-04T06:31:46.276+0000 D1 COMMAND [conn270] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578670, 1), signature: { hash: BinData(0, 8177E92797B9354FFC128D337C838CA7406FA18D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:46.276+0000 D1 - [conn270] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:31:46.276+0000 W - [conn270] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:46.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:46.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:46.307+0000 I - [conn287] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
[frames and shared-library map identical to the conn283 backtrace above]
----- END BACKTRACE -----
2019-09-04T06:31:46.307+0000 D1 COMMAND [conn287] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 2D7FBB04A997F5BB61222D22B1E0478572811EE3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:46.307+0000 D1 - [conn287] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:31:46.307+0000 W - [conn287] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:46.307+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:46.318+0000 I - [conn272] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
[frames and shared-library map identical to the conn283 backtrace above]
----- END BACKTRACE -----
2019-09-04T06:31:46.318+0000 D1 COMMAND [conn272] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578669, 1), signature: { hash: BinData(0, E8F838832282F30383549BB7AA2B977F74AA6897), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
$replData: 1, $clusterTime: { clusterTime: Timestamp(1567578669, 1), signature: { hash: BinData(0, E8F838832282F30383549BB7AA2B977F74AA6897), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:46.318+0000 D1 - [conn272] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:31:46.318+0000 W - [conn272] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:31:46.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:46.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:46.331+0000 I - [conn286] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
[conn286 backtrace body omitted: identical to the conn272 waitForReadConcern backtrace above, same frames and same processInfo/somap]
----- END BACKTRACE -----
2019-09-04T06:31:46.332+0000 D1 COMMAND
[conn286] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 571EA4AAB3958D7DD4D98129725B60C4FB7E61F0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:46.332+0000 D1 - [conn286] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:46.332+0000 W - [conn286] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:46.351+0000 I - [conn272] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_sch
eduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : 
"799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:46.351+0000 W COMMAND [conn272] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:46.351+0000 I COMMAND [conn272] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578669, 1), signature: { hash: BinData(0, E8F838832282F30383549BB7AA2B977F74AA6897), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30105ms 2019-09-04T06:31:46.352+0000 D2 NETWORK [conn272] Session from 10.108.2.50:50134 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:46.352+0000 I NETWORK [conn272] end connection 10.108.2.50:50134 (90 connections now open) 2019-09-04T06:31:46.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.361+0000 I - [conn283] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:46.361+0000 W COMMAND [conn283] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:46.361+0000 I COMMAND [conn283] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
2019-09-04T06:31:46.361+0000 I COMMAND [conn283] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 571EA4AAB3958D7DD4D98129725B60C4FB7E61F0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms
2019-09-04T06:31:46.362+0000 D2 NETWORK [conn283] Session from 10.108.2.64:46654 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:46.362+0000 I NETWORK [conn283] end connection 10.108.2.64:46654 (89 connections now open)
2019-09-04T06:31:46.381+0000 I - [conn287] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
[conn287 backtrace body omitted: identical to the conn272 lock-acquisition backtrace above]
----- END BACKTRACE -----
2019-09-04T06:31:46.382+0000 W COMMAND [conn287] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:31:46.382+0000 I COMMAND [conn287] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 2D7FBB04A997F5BB61222D22B1E0478572811EE3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30069ms
2019-09-04T06:31:46.382+0000 D2 NETWORK [conn287] Session from 10.108.2.53:50732 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:46.382+0000 I NETWORK [conn287] end connection 10.108.2.53:50732 (88 connections now open)
2019-09-04T06:31:46.402+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34296 #307 (89 connections now open)
2019-09-04T06:31:46.402+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:31:46.402+0000 D2 COMMAND [conn307] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:31:46.403+0000 I NETWORK [conn307] received client metadata from 10.108.2.57:34296 conn307: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:31:46.403+0000 I COMMAND [conn307] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
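The repeated slow-operation records above all describe the same request: a find on admin.system.keys (the HMAC keys used to sign $clusterTime) with readConcern { level: "majority", afterOpTime: ... } and maxTimeMS: 30000, each failing with MaxTimeMSExpired after roughly 30 seconds. A minimal pymongo sketch of that command shape follows; the host, port, and error handling are illustrative, and the internal $replData/$configServerState fields, which cluster nodes attach themselves, are omitted:

```python
from bson.timestamp import Timestamp
from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout

client = MongoClient("cmodb803.togewa.com", 27019)  # illustrative target
try:
    client.admin.command({
        "find": "system.keys",
        "filter": {"purpose": "HMAC",
                   "expiresAt": {"$gt": Timestamp(1579858365, 0)}},
        "sort": {"expiresAt": 1},
        # The log also pins afterOpTime inside readConcern; that field is
        # normally supplied by mongos/shard nodes, not by ordinary clients.
        "readConcern": {"level": "majority"},
        "maxTimeMS": 30000,
    })
except ExecutionTimeout:
    # Surfaces in the server log as errName:MaxTimeMSExpired errCode:50.
    print("operation exceeded time limit")
```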
locks:{} protocol:op_query 0ms 2019-09-04T06:31:46.403+0000 I - [conn270] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, 
"somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : 
"F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) 
[0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:46.403+0000 W COMMAND [conn270] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:31:46.403+0000 I COMMAND [conn270] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578670, 1), signature: { hash: BinData(0, 8177E92797B9354FFC128D337C838CA7406FA18D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30046ms 2019-09-04T06:31:46.403+0000 D2 NETWORK [conn270] Session from 10.108.2.49:53394 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:46.403+0000 I NETWORK [conn270] end connection 10.108.2.49:53394 (88 connections now open) 2019-09-04T06:31:46.407+0000 D2 COMMAND [conn307] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 530FD6D8E211D30BDB1163BF3162E191BBCC422A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:46.407+0000 D1 REPL [conn307] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578705, 1), t: 1 } 2019-09-04T06:31:46.407+0000 D3 REPL [conn307] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.417+0000 2019-09-04T06:31:46.408+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:46.412+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52188 #308 (89 connections now open) 2019-09-04T06:31:46.412+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:46.412+0000 D2 COMMAND [conn308] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:46.413+0000 I NETWORK [conn308] received client metadata from 10.108.2.58:52188 conn308: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:46.413+0000 I COMMAND [conn308] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ],
internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:46.413+0000 D2 COMMAND [conn308] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578703, 1), signature: { hash: BinData(0, FE5CD33D4FD4AD3E642E53447F4EC0589DAAF02E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:46.413+0000 D1 REPL [conn308] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578705, 1), t: 1 } 2019-09-04T06:31:46.413+0000 D3 REPL [conn308] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000 2019-09-04T06:31:46.413+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38726 #309 (90 connections now open) 2019-09-04T06:31:46.413+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:46.413+0000 D2 COMMAND [conn309] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:46.413+0000 I NETWORK [conn309] received client metadata from 10.108.2.44:38726 conn309: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:46.413+0000 I COMMAND [conn309] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:46.413+0000 D2 COMMAND [conn309] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 530FD6D8E211D30BDB1163BF3162E191BBCC422A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:46.413+0000 D1 REPL [conn309] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578705, 1), t: 1 } 2019-09-04T06:31:46.413+0000 D3 REPL [conn309] waitUntilOpTime: waiting 
for a new snapshot until 2019-09-04T06:32:16.423+0000 2019-09-04T06:31:46.414+0000 D2 COMMAND [conn282] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578699, 1), signature: { hash: BinData(0, FDAB2A63E94489DE9A0BA601ABF193CC5AED761D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:46.414+0000 D1 REPL [conn282] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578705, 1), t: 1 } 2019-09-04T06:31:46.414+0000 D3 REPL [conn282] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.424+0000 2019-09-04T06:31:46.426+0000 I - [conn286] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D",
"s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, 
"buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:46.427+0000 W COMMAND [conn286] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:31:46.427+0000 I COMMAND [conn286] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 571EA4AAB3958D7DD4D98129725B60C4FB7E61F0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30117ms 2019-09-04T06:31:46.427+0000 D2 NETWORK [conn286] Session from 10.108.2.46:41012 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:46.427+0000 I NETWORK [conn286] end connection 10.108.2.46:41012 (89 connections now open) 2019-09-04T06:31:46.430+0000 D2 COMMAND [conn284] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, D5CCCC4E77C8D415872AB251DCD35BEAE0CDE47F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:31:46.430+0000 D1 REPL [conn284] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578705, 1), t: 1 } 2019-09-04T06:31:46.430+0000 D3 REPL [conn284] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.440+0000 2019-09-04T06:31:46.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1,
$db: "admin" } 2019-09-04T06:31:46.489+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.508+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:46.553+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.553+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.608+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:46.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.708+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:46.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.808+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:46.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:46.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 851) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:46.838+0000 D3 EXECUTOR 
[replexec-0] Scheduling remote command request: RemoteCommand 851 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:56.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:46.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:14.839+0000 2019-09-04T06:31:46.838+0000 D2 ASIO [Replication] Request 851 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), opTime: { ts: Timestamp(1567578705, 1), t: 1 }, wallTime: new Date(1567578705196), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpVisible: { ts: Timestamp(1567578705, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578705, 1), $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578705, 1) } 2019-09-04T06:31:46.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), opTime: { ts: Timestamp(1567578705, 1), t: 1 }, wallTime: new Date(1567578705196), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpVisible: { ts: Timestamp(1567578705, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578705, 1), $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578705, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:46.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:46.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 851) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), opTime: { ts: Timestamp(1567578705, 1), t: 1 }, wallTime: new Date(1567578705196), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpVisible: { ts: Timestamp(1567578705, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578705, 1), $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578705, 
1) } 2019-09-04T06:31:46.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:31:46.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:48.838Z 2019-09-04T06:31:46.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:14.839+0000 2019-09-04T06:31:46.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:46.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 852) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:46.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 852 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:56.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:46.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:14.839+0000 2019-09-04T06:31:46.839+0000 D2 ASIO [Replication] Request 852 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), opTime: { ts: Timestamp(1567578705, 1), t: 1 }, wallTime: new Date(1567578705196), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpVisible: { ts: Timestamp(1567578705, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578705, 1), $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578705, 1) } 2019-09-04T06:31:46.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), opTime: { ts: Timestamp(1567578705, 1), t: 1 }, wallTime: new Date(1567578705196), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpVisible: { ts: Timestamp(1567578705, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578705, 1), $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578705, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:31:46.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:46.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 852) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, 
durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), opTime: { ts: Timestamp(1567578705, 1), t: 1 }, wallTime: new Date(1567578705196), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpVisible: { ts: Timestamp(1567578705, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578705, 1), $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578705, 1) } 2019-09-04T06:31:46.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:31:46.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:31:55.659+0000 2019-09-04T06:31:46.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:31:56.888+0000 2019-09-04T06:31:46.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:31:46.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:48.839Z 2019-09-04T06:31:46.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:46.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:16.839+0000 2019-09-04T06:31:46.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:16.839+0000 2019-09-04T06:31:46.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.908+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:46.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.989+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:46.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:46.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:47.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:47.008+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:47.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:47.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { 
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:47.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 341C75EA8323E71A94A85277266F2314DFFC7D76), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:47.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:47.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 341C75EA8323E71A94A85277266F2314DFFC7D76), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:47.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 341C75EA8323E71A94A85277266F2314DFFC7D76), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:47.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), opTime: { ts: Timestamp(1567578705, 1), t: 1 }, wallTime: new Date(1567578705196) } 2019-09-04T06:31:47.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 341C75EA8323E71A94A85277266F2314DFFC7D76), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:47.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:47.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:47.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:47.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:47.108+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:47.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:47.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:47.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:47.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:47.208+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:47.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:47.217+0000 I 
COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:47.217+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:47.217+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:47.217+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578705, 1) 2019-09-04T06:31:47.217+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12701 2019-09-04T06:31:47.217+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:47.217+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:47.217+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12701 2019-09-04T06:31:47.218+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12704 2019-09-04T06:31:47.218+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12704 2019-09-04T06:31:47.218+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578705, 1), t: 1 }({ ts: Timestamp(1567578705, 1), t: 1 }) 2019-09-04T06:31:47.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:47.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:47.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:47.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:47.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:47.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:47.309+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:47.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:47.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:47.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:47.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:47.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:47.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:47.409+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:47.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:47.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:31:47.489+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:47.490+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:47.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:47.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:47.509+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:47.530+0000 D2 ASIO [RS] Request 850 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578707, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578707520), o: { $v: 1, $set: { ping: new Date(1567578707519) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpVisible: { ts: Timestamp(1567578705, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpApplied: { ts: Timestamp(1567578707, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578705, 1), $clusterTime: { clusterTime: Timestamp(1567578707, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 1) } 2019-09-04T06:31:47.530+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578707, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578707520), o: { $v: 1, $set: { ping: new Date(1567578707519) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpVisible: { ts: Timestamp(1567578705, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpApplied: { ts: Timestamp(1567578707, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578705, 1), $clusterTime: { clusterTime: Timestamp(1567578707, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:47.530+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:47.530+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578707, 1) and ending at ts: Timestamp(1567578707, 1) 
2019-09-04T06:31:47.530+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:56.888+0000 2019-09-04T06:31:47.530+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:58.259+0000 2019-09-04T06:31:47.530+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:47.530+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:16.839+0000 2019-09-04T06:31:47.530+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578707, 1), t: 1 } 2019-09-04T06:31:47.530+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:47.530+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:47.530+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578705, 1) 2019-09-04T06:31:47.530+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12717 2019-09-04T06:31:47.530+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:47.530+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:47.530+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12717 2019-09-04T06:31:47.530+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:47.530+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:31:47.530+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:47.530+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578705, 1) 2019-09-04T06:31:47.530+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578707, 1) } 2019-09-04T06:31:47.530+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12720 2019-09-04T06:31:47.530+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:47.530+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:47.530+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12705 2019-09-04T06:31:47.530+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12720 2019-09-04T06:31:47.530+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12705 2019-09-04T06:31:47.530+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12723 2019-09-04T06:31:47.530+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12723 2019-09-04T06:31:47.530+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:31:47.530+0000 D3 
STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 12725 2019-09-04T06:31:47.530+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578707, 1) 2019-09-04T06:31:47.530+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578707, 1) 2019-09-04T06:31:47.530+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 12725 2019-09-04T06:31:47.530+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:31:47.530+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:31:47.530+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12724 2019-09-04T06:31:47.530+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12724 2019-09-04T06:31:47.530+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12727 2019-09-04T06:31:47.530+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12727 2019-09-04T06:31:47.530+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578707, 1), t: 1 }({ ts: Timestamp(1567578707, 1), t: 1 }) 2019-09-04T06:31:47.530+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578707, 1) 2019-09-04T06:31:47.530+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12728 2019-09-04T06:31:47.530+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578707, 1) } } ] } sort: {} projection: {} 2019-09-04T06:31:47.530+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and t $eq 1 ts $lt Timestamp(1567578707, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578707, 1) || First: notFirst: full path: ts 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578707, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Beginning planning...
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $or $and t $eq 1 ts $lt Timestamp(1567578707, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578707, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578707, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:31:47.531+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12728
2019-09-04T06:31:47.531+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:47.531+0000 D3 STORAGE [repl-writer-worker-1] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:47.531+0000 D3 REPL [repl-writer-worker-1] applying op: { ts: Timestamp(1567578707, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578707520), o: { $v: 1, $set: { ping: new Date(1567578707519) } } }, oplog application mode: Secondary
2019-09-04T06:31:47.531+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578707, 1)
2019-09-04T06:31:47.531+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 12730
2019-09-04T06:31:47.531+0000 D2 QUERY [repl-writer-worker-1] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }
2019-09-04T06:31:47.531+0000 D4 WRITE [repl-writer-worker-1] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:31:47.531+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 12730
2019-09-04T06:31:47.531+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:47.531+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578707, 1), t: 1 }({ ts: Timestamp(1567578707, 1), t: 1 })
2019-09-04T06:31:47.531+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578707, 1)
2019-09-04T06:31:47.531+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12729
2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:47.531+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:31:47.531+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:47.531+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12729
2019-09-04T06:31:47.531+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578707, 1)
2019-09-04T06:31:47.531+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:47.531+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12733
2019-09-04T06:31:47.531+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12733
2019-09-04T06:31:47.531+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578707, 1), t: 1 }({ ts: Timestamp(1567578707, 1), t: 1 })
2019-09-04T06:31:47.531+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578707, 1), t: 1 }, appliedWallTime: new Date(1567578707520), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpVisible: { ts: Timestamp(1567578705, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:47.531+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 853 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:17.531+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578707, 1), t: 1 }, appliedWallTime: new Date(1567578707520), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpVisible: { ts: Timestamp(1567578705, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:47.531+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:17.531+0000
2019-09-04T06:31:47.532+0000 D2 ASIO [RS] Request 853 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpVisible: { ts: Timestamp(1567578705, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578705, 1), $clusterTime: { clusterTime: Timestamp(1567578707, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 1) }
2019-09-04T06:31:47.532+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578705, 1), t: 1 }, lastCommittedWall: new Date(1567578705196), lastOpVisible: { ts: Timestamp(1567578705, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578705, 1), $clusterTime: { clusterTime: Timestamp(1567578707, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:47.532+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:47.532+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:17.532+0000
2019-09-04T06:31:47.532+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578707, 1), t: 1 }
2019-09-04T06:31:47.532+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 854 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:57.532+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578705, 1), t: 1 } }
2019-09-04T06:31:47.532+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:17.532+0000
2019-09-04T06:31:47.543+0000 D2 ASIO [RS] Request 854 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 1), t: 1 }, lastCommittedWall: new Date(1567578707520), lastOpVisible: { ts: Timestamp(1567578707, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578707, 1), t: 1 }, lastCommittedWall: new Date(1567578707520), lastOpApplied: { ts: Timestamp(1567578707, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 1), $clusterTime: { clusterTime: Timestamp(1567578707, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 1) }
2019-09-04T06:31:47.544+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 1), t: 1 }, lastCommittedWall: new Date(1567578707520), lastOpVisible: { ts: Timestamp(1567578707, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578707, 1), t: 1 }, lastCommittedWall: new Date(1567578707520), lastOpApplied: { ts: Timestamp(1567578707, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 1), $clusterTime: { clusterTime: Timestamp(1567578707, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:47.544+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:47.544+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:31:47.544+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578702, 1)
2019-09-04T06:31:47.544+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:58.259+0000
2019-09-04T06:31:47.544+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:57.627+0000
2019-09-04T06:31:47.544+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 855 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:57.544+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578707, 1), t: 1 } }
2019-09-04T06:31:47.544+0000 D3 REPL [conn289] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:17.532+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn289] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.280+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn293] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn293] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn278] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:47.544+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:16.839+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn278] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.693+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn302] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn302] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.891+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn309] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn309] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn292] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn292] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:52.054+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn281] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn281] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.661+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn298] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn298] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.986+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn288] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn288] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.289+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn294] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn294] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.468+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn284] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn284] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.440+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn295] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn295] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.753+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn308] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn308] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn307] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn307] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.417+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn277] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn277] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.324+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn280] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn280] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:01.310+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn291] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn291] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.662+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn297] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn297] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.925+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn290] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn290] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.274+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn300] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn300] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn303] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn303] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:13.600+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn276] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn276] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.329+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn282] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn282] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.424+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn296] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn296] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.763+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn271] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn271] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.752+0000
2019-09-04T06:31:47.544+0000 D3 REPL [conn273] Got notified of new snapshot: { ts: Timestamp(1567578707, 1), t: 1 }, 2019-09-04T06:31:47.520+0000
2019-09-04T06:31:47.545+0000 D3 REPL [conn273] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.644+0000
2019-09-04T06:31:47.547+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:31:47.547+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:47.547+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578707, 1), t: 1 }, durableWallTime: new Date(1567578707520), appliedOpTime: { ts: Timestamp(1567578707, 1), t: 1 }, appliedWallTime: new Date(1567578707520), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 1), t: 1 }, lastCommittedWall: new Date(1567578707520), lastOpVisible: { ts: Timestamp(1567578707, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:47.547+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 856 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:17.547+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578707, 1), t: 1 }, durableWallTime: new Date(1567578707520), appliedOpTime: { ts: Timestamp(1567578707, 1), t: 1 }, appliedWallTime: new Date(1567578707520), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 1), t: 1 }, lastCommittedWall: new Date(1567578707520), lastOpVisible: { ts: Timestamp(1567578707, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:47.547+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:17.532+0000
2019-09-04T06:31:47.547+0000 D2 ASIO [RS] Request 856 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 1), t: 1 }, lastCommittedWall: new Date(1567578707520), lastOpVisible: { ts: Timestamp(1567578707, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 1), $clusterTime: { clusterTime: Timestamp(1567578707, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 1) }
2019-09-04T06:31:47.547+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 1), t: 1 }, lastCommittedWall: new Date(1567578707520), lastOpVisible: { ts: Timestamp(1567578707, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 1), $clusterTime: { clusterTime: Timestamp(1567578707, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:47.547+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:47.547+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:17.532+0000
2019-09-04T06:31:47.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:47.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:47.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:47.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:47.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:47.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:47.609+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:47.630+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578707, 1)
2019-09-04T06:31:47.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:47.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:47.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:47.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:47.709+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:47.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:47.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:47.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:47.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:47.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:47.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:47.803+0000 D2 ASIO [RS] Request 855 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578707, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578707799), o: { $v: 1, $set: { ping: new Date(1567578707793) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 1), t: 1 }, lastCommittedWall: new Date(1567578707520), lastOpVisible: { ts: Timestamp(1567578707, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578707, 1), t: 1 }, lastCommittedWall: new Date(1567578707520), lastOpApplied: { ts: Timestamp(1567578707, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 1), $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 2) }
2019-09-04T06:31:47.803+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578707, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578707799), o: { $v: 1, $set: { ping: new Date(1567578707793) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 1), t: 1 }, lastCommittedWall: new Date(1567578707520), lastOpVisible: { ts: Timestamp(1567578707, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578707, 1), t: 1 }, lastCommittedWall: new Date(1567578707520), lastOpApplied: { ts: Timestamp(1567578707, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 1), $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:47.803+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:47.803+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578707, 2) and ending at ts: Timestamp(1567578707, 2)
2019-09-04T06:31:47.803+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:57.627+0000
2019-09-04T06:31:47.803+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:58.262+0000
2019-09-04T06:31:47.803+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:47.803+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:16.839+0000
2019-09-04T06:31:47.803+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578707, 2), t: 1 }
2019-09-04T06:31:47.803+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:47.803+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:47.803+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578707, 1)
2019-09-04T06:31:47.803+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12745
2019-09-04T06:31:47.803+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:47.803+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:47.803+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12745
2019-09-04T06:31:47.803+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:47.803+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:31:47.803+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:47.803+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578707, 2) }
2019-09-04T06:31:47.803+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578707, 1)
2019-09-04T06:31:47.803+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12748
2019-09-04T06:31:47.803+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:47.803+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:47.803+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12748
2019-09-04T06:31:47.803+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12734
2019-09-04T06:31:47.803+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12734
2019-09-04T06:31:47.803+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12751
2019-09-04T06:31:47.803+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12751
2019-09-04T06:31:47.803+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:47.803+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 12753
2019-09-04T06:31:47.803+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578707, 2)
2019-09-04T06:31:47.803+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578707, 2)
2019-09-04T06:31:47.803+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 12753
2019-09-04T06:31:47.803+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:47.803+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:31:47.803+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12752
2019-09-04T06:31:47.803+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12752
2019-09-04T06:31:47.803+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12755
2019-09-04T06:31:47.803+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12755
2019-09-04T06:31:47.803+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578707, 2), t: 1 }({ ts: Timestamp(1567578707, 2), t: 1 })
2019-09-04T06:31:47.803+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578707, 2)
2019-09-04T06:31:47.803+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12756
2019-09-04T06:31:47.803+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578707, 2) } } ] } sort: {} projection: {}
2019-09-04T06:31:47.803+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:47.803+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:31:47.803+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578707, 2)
Sort: {}
Proj: {}
=============================
2019-09-04T06:31:47.803+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:47.803+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:47.803+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:47.803+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578707, 2) || First: notFirst: full path: ts
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578707, 2)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578707, 2)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578707, 2) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578707, 2)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:31:47.804+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12756
2019-09-04T06:31:47.804+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:47.804+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:47.804+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578707, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578707799), o: { $v: 1, $set: { ping: new Date(1567578707793) } } }, oplog application mode: Secondary
2019-09-04T06:31:47.804+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578707, 2)
2019-09-04T06:31:47.804+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 12758
2019-09-04T06:31:47.804+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }
2019-09-04T06:31:47.804+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:31:47.804+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 12758
2019-09-04T06:31:47.804+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:47.804+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578707, 2), t: 1 }({ ts: Timestamp(1567578707, 2), t: 1 })
2019-09-04T06:31:47.804+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578707, 2)
2019-09-04T06:31:47.804+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12757
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:47.804+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:31:47.804+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:47.804+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12757
2019-09-04T06:31:47.804+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578707, 2)
2019-09-04T06:31:47.804+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12761
2019-09-04T06:31:47.804+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12761
2019-09-04T06:31:47.804+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578707, 2), t: 1 }({ ts: Timestamp(1567578707, 2), t: 1 })
2019-09-04T06:31:47.804+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:47.804+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578707, 1), t: 1 }, durableWallTime: new Date(1567578707520), appliedOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, appliedWallTime: new Date(1567578707799), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 1), t: 1 }, lastCommittedWall: new Date(1567578707520), lastOpVisible: { ts: Timestamp(1567578707, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:47.804+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 857 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:17.804+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578707, 1), t: 1 }, durableWallTime: new Date(1567578707520), appliedOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, appliedWallTime: new Date(1567578707799), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 1), t: 1 }, lastCommittedWall: new Date(1567578707520), lastOpVisible: { ts: Timestamp(1567578707, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:47.804+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:17.804+0000
2019-09-04T06:31:47.805+0000 D2 ASIO [RS] Request 857 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 1), t: 1 }, lastCommittedWall: new Date(1567578707520), lastOpVisible: { ts: Timestamp(1567578707, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 1), $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 2) }
2019-09-04T06:31:47.805+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 1), t: 1 }, lastCommittedWall: new Date(1567578707520), lastOpVisible: { ts: Timestamp(1567578707, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 1), $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:47.805+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:47.805+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:17.805+0000
2019-09-04T06:31:47.805+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578707, 2), t: 1 }
2019-09-04T06:31:47.805+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 858 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:57.805+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578707, 1), t: 1 } }
2019-09-04T06:31:47.805+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:17.805+0000
2019-09-04T06:31:47.807+0000 D2 ASIO [RS] Request 858 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpVisible: { ts: Timestamp(1567578707, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpApplied: { ts: Timestamp(1567578707, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 2), $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 2) }
2019-09-04T06:31:47.807+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpVisible: { ts: Timestamp(1567578707, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpApplied: { ts: Timestamp(1567578707, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 2), $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:47.807+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:47.807+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:31:47.807+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578702, 2)
2019-09-04T06:31:47.807+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:58.262+0000
2019-09-04T06:31:47.807+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:58.252+0000
2019-09-04T06:31:47.807+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 859 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:57.807+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578707, 2), t: 1 } }
2019-09-04T06:31:47.807+0000 D3 REPL [conn292] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:17.805+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn292] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:52.054+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn303] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn303] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:13.600+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn294] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:47.807+0000 D3 REPL [conn294] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.468+0000
2019-09-04T06:31:47.807+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:16.839+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn295] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn295] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.753+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn277] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn277] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.324+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn280] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn280] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:01.310+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn291] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn291] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.662+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn297] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn297] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.925+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn307] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn307] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.417+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn276] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn276] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.329+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn273] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn273] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.644+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn300] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn300] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn296] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn296] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.763+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn309] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn309] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn289] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn289] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.280+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn293] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn293] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn288] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn288] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.289+0000
2019-09-04T06:31:47.807+0000 D3 REPL [conn308] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.808+0000 D3 REPL [conn308] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000
2019-09-04T06:31:47.808+0000 D3 REPL [conn302] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.808+0000 D3 REPL [conn302] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.891+0000
2019-09-04T06:31:47.808+0000 D3 REPL [conn271] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.808+0000 D3 REPL [conn271] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.752+0000
2019-09-04T06:31:47.808+0000 D3 REPL [conn281] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.808+0000 D3 REPL [conn281] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.661+0000
2019-09-04T06:31:47.808+0000 D3 REPL [conn278] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.808+0000 D3 REPL [conn278] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.693+0000
2019-09-04T06:31:47.808+0000 D3 REPL [conn298] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.808+0000 D3 REPL [conn298] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.986+0000
2019-09-04T06:31:47.808+0000 D3 REPL [conn290] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.808+0000 D3 REPL [conn290] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.274+0000
2019-09-04T06:31:47.808+0000 D3 REPL [conn282] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.808+0000 D3 REPL [conn282] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.424+0000
2019-09-04T06:31:47.808+0000 D3 REPL [conn284] Got notified of new snapshot: { ts: Timestamp(1567578707, 2), t: 1 }, 2019-09-04T06:31:47.799+0000
2019-09-04T06:31:47.808+0000 D3 REPL [conn284] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.440+0000
2019-09-04T06:31:47.813+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:31:47.813+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:47.813+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:47.813+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), appliedOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, appliedWallTime: new Date(1567578707799), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpVisible: { ts: Timestamp(1567578707, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:47.813+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 860 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:17.813+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), appliedOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, appliedWallTime: new Date(1567578707799), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, durableWallTime: new Date(1567578705196), appliedOpTime: { ts: Timestamp(1567578705, 1), t: 1 }, appliedWallTime: new Date(1567578705196), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpVisible: { ts: Timestamp(1567578707, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:47.813+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:17.805+0000
2019-09-04T06:31:47.813+0000 D2 ASIO [RS] Request 860 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpVisible: { ts: Timestamp(1567578707, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 2), $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 2) }
2019-09-04T06:31:47.813+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpVisible: { ts: Timestamp(1567578707, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 2), $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:47.813+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:47.813+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:17.805+0000
2019-09-04T06:31:47.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:47.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:47.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:47.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:47.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:47.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:47.903+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578707, 2)
2019-09-04T06:31:47.913+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:47.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:47.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:47.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:47.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:48.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:48.013+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:48.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:48.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:48.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:48.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:48.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:48.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:48.113+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:48.130+0000 D2 COMMAND [conn20] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:48.130+0000 I COMMAND [conn20] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:48.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:48.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:48.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:48.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:48.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:48.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:48.213+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:48.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:48.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:48.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat:
"configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, C65889D0BDE93AEF11A9A7B2E513340DB472A659), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:48.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:31:48.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, C65889D0BDE93AEF11A9A7B2E513340DB472A659), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:48.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, C65889D0BDE93AEF11A9A7B2E513340DB472A659), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:48.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), opTime: { ts: Timestamp(1567578707, 2), t: 1 }, wallTime: new Date(1567578707799) } 2019-09-04T06:31:48.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, C65889D0BDE93AEF11A9A7B2E513340DB472A659), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:48.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:31:48.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.313+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:48.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.414+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:48.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.514+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:48.515+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:31:48.515+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:31:48.515+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:31:48.515+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:31:48.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:31:48.614+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:48.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.714+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:48.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.803+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:48.803+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:48.803+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578707, 2) 2019-09-04T06:31:48.803+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12798 2019-09-04T06:31:48.803+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:48.803+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:48.803+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12798 2019-09-04T06:31:48.804+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12801 2019-09-04T06:31:48.804+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12801 2019-09-04T06:31:48.804+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578707, 2), t: 1 }({ ts: Timestamp(1567578707, 2), t: 1 }) 2019-09-04T06:31:48.814+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:48.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:31:48.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:48.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 861) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:48.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 861 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:31:58.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:48.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:16.839+0000 2019-09-04T06:31:48.838+0000 D2 ASIO [Replication] Request 861 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), opTime: { ts: Timestamp(1567578707, 2), t: 1 }, wallTime: new Date(1567578707799), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpVisible: { ts: Timestamp(1567578707, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 2), $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 2) } 2019-09-04T06:31:48.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), opTime: { ts: Timestamp(1567578707, 2), t: 1 }, wallTime: new Date(1567578707799), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpVisible: { ts: Timestamp(1567578707, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 2), $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:48.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:48.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 861) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), opTime: { ts: Timestamp(1567578707, 2), t: 1 }, wallTime: new Date(1567578707799), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpVisible: { ts: Timestamp(1567578707, 2), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 2), $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 2) } 2019-09-04T06:31:48.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:31:48.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:50.838Z 2019-09-04T06:31:48.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:16.839+0000 2019-09-04T06:31:48.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:48.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 862) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:48.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 862 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:31:58.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:48.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:16.839+0000 2019-09-04T06:31:48.839+0000 D2 ASIO [Replication] Request 862 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), opTime: { ts: Timestamp(1567578707, 2), t: 1 }, wallTime: new Date(1567578707799), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpVisible: { ts: Timestamp(1567578707, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578707, 2), $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 2) } 2019-09-04T06:31:48.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), opTime: { ts: Timestamp(1567578707, 2), t: 1 }, wallTime: new Date(1567578707799), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpVisible: { ts: Timestamp(1567578707, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578707, 2), $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: 
Timestamp(1567578707, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:31:48.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:48.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 862) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), opTime: { ts: Timestamp(1567578707, 2), t: 1 }, wallTime: new Date(1567578707799), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpVisible: { ts: Timestamp(1567578707, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578707, 2), $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 2) } 2019-09-04T06:31:48.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:31:48.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:31:58.252+0000 2019-09-04T06:31:48.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:31:59.461+0000 2019-09-04T06:31:48.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:31:48.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:50.839Z 2019-09-04T06:31:48.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:48.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:18.839+0000 2019-09-04T06:31:48.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:18.839+0000 2019-09-04T06:31:48.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.914+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:48.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:48.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:48.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:49.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:49.014+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:49.052+0000 D2 COMMAND [conn58] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:49.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:49.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, C65889D0BDE93AEF11A9A7B2E513340DB472A659), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:49.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:49.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, C65889D0BDE93AEF11A9A7B2E513340DB472A659), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:49.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, C65889D0BDE93AEF11A9A7B2E513340DB472A659), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:49.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), opTime: { ts: Timestamp(1567578707, 2), t: 1 }, wallTime: new Date(1567578707799) } 2019-09-04T06:31:49.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578707, 2), signature: { hash: BinData(0, C65889D0BDE93AEF11A9A7B2E513340DB472A659), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:49.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:49.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:49.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:49.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:49.114+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:49.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:49.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:49.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:49.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:49.186+0000 D2 COMMAND [conn31] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:31:49.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:49.215+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:49.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:49.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:49.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:49.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:49.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:49.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:49.258+0000 D2 ASIO [RS] Request 859 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578709, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578709257), o: { $v: 1, $set: { ping: new Date(1567578709254) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpVisible: { ts: Timestamp(1567578707, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpApplied: { ts: Timestamp(1567578709, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 2), $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) } 2019-09-04T06:31:49.258+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578709, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578709257), o: { $v: 1, $set: { ping: new Date(1567578709254) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpVisible: { ts: Timestamp(1567578707, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpApplied: { ts: Timestamp(1567578709, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 2), $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:49.258+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:49.259+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578709, 1) and ending at ts: Timestamp(1567578709, 1) 2019-09-04T06:31:49.259+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:31:59.461+0000 2019-09-04T06:31:49.259+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:31:59.630+0000 2019-09-04T06:31:49.259+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:49.259+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:18.839+0000 2019-09-04T06:31:49.259+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578709, 1), t: 1 } 2019-09-04T06:31:49.259+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:49.259+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:49.259+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578707, 2) 2019-09-04T06:31:49.259+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12820 2019-09-04T06:31:49.259+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:49.259+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:49.259+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12820 2019-09-04T06:31:49.259+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:49.259+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:49.259+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:31:49.259+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578707, 2) 2019-09-04T06:31:49.259+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12823 2019-09-04T06:31:49.259+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:49.259+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578709, 1) } 2019-09-04T06:31:49.259+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:49.259+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12823 2019-09-04T06:31:49.259+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12802 
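The exchange above is one full pass of this secondary's oplog fetcher: a getMore against local.oplog.rs on the sync source (request 859) returns a one-operation nextBatch, the ReplBatcher stages it, and rsSync-0 starts applying it. The same stream can be watched from any client by tailing the oplog. A minimal sketch with pymongo; the namespace local.oplog.rs and the document shape shown in nextBatch come from the log, everything else (host choice, connection options) is illustrative:

    from pymongo import CursorType, MongoClient

    # Connect to a single member, as the fetcher does with its sync source.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019",
                         directConnection=True)
    oplog = client.local["oplog.rs"]

    # Resume after the newest entry, e.g. { ts: Timestamp(1567578709, 1), ... }.
    last_ts = oplog.find_one(sort=[("$natural", -1)])["ts"]

    # A tailable, awaitable cursor mirrors the fetcher's repeated
    # getMore ... maxTimeMS polling loop seen in the log.
    cursor = oplog.find({"ts": {"$gt": last_ts}},
                        cursor_type=CursorType.TAILABLE_AWAIT)
    for op in cursor:
        # Same fields as the nextBatch entry above: op "u" on
        # config.lockpings, o2 holding the _id, o holding { $set: { ping: ... } }.
        print(op["ts"], op["op"], op["ns"])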
2019-09-04T06:31:49.259+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12802
2019-09-04T06:31:49.259+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12826
2019-09-04T06:31:49.259+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12826
2019-09-04T06:31:49.259+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:49.259+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 12828
2019-09-04T06:31:49.259+0000 D4 STORAGE [repl-writer-worker-7] inserting record with timestamp Timestamp(1567578709, 1)
2019-09-04T06:31:49.259+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578709, 1)
2019-09-04T06:31:49.259+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 12828
2019-09-04T06:31:49.259+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:49.259+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:31:49.259+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12827
2019-09-04T06:31:49.259+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12827
2019-09-04T06:31:49.259+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12830
2019-09-04T06:31:49.259+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12830
2019-09-04T06:31:49.259+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578709, 1), t: 1 }({ ts: Timestamp(1567578709, 1), t: 1 })
2019-09-04T06:31:49.259+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578709, 1)
2019-09-04T06:31:49.259+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12831
2019-09-04T06:31:49.259+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578709, 1) } } ] } sort: {} projection: {}
2019-09-04T06:31:49.259+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:49.259+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:31:49.259+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578709, 1) Sort: {} Proj: {} =============================
2019-09-04T06:31:49.259+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:49.259+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:49.259+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:49.259+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578709, 1) || First: notFirst: full path: ts
2019-09-04T06:31:49.259+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:49.259+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578709, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:49.259+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:49.259+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:31:49.259+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:49.259+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:49.259+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:49.259+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:49.259+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:49.259+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:49.260+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:49.260+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578709, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:49.260+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:49.260+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:49.260+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:49.260+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578709, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:49.260+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
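The D5 QUERY trace (which continues just below for the full $or) shows the subplanner walking each branch of { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578709, 1) } } ] } against local.replset.minvalid, finding only the { _id: 1 } index, and settling for a collection scan every time. The same conclusion can be reproduced from a client with explain(); a sketch assuming pymongo and direct access to a member, with the filter and namespace taken from the log and the rest illustrative:

    from bson.timestamp import Timestamp
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019",
                         directConnection=True)
    minvalid = client.local["replset.minvalid"]

    # The predicate rsSync-0 plans above; with only { _id: 1 } available,
    # no indexed solution exists and the plan bottoms out in a COLLSCAN.
    plan = minvalid.find({"$or": [
        {"t": {"$lt": 1}},
        {"t": 1, "ts": {"$lt": Timestamp(1567578709, 1)}},
    ]}).explain()
    print(plan["queryPlanner"]["winningPlan"])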
2019-09-04T06:31:49.260+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578709, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:49.260+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12831
2019-09-04T06:31:49.260+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:49.260+0000 D3 STORAGE [repl-writer-worker-11] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:49.260+0000 D3 REPL [repl-writer-worker-11] applying op: { ts: Timestamp(1567578709, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578709257), o: { $v: 1, $set: { ping: new Date(1567578709254) } } }, oplog application mode: Secondary
2019-09-04T06:31:49.260+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578709, 1)
2019-09-04T06:31:49.260+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 12833
2019-09-04T06:31:49.260+0000 D2 QUERY [repl-writer-worker-11] Using idhack: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }
2019-09-04T06:31:49.260+0000 D4 WRITE [repl-writer-worker-11] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:31:49.260+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 12833
2019-09-04T06:31:49.260+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:49.260+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578709, 1), t: 1 }({ ts: Timestamp(1567578709, 1), t: 1 })
2019-09-04T06:31:49.260+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578709, 1)
2019-09-04T06:31:49.260+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12832
2019-09-04T06:31:49.260+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:31:49.260+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:49.260+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:31:49.260+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:49.260+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:49.260+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:49.260+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 12832
2019-09-04T06:31:49.260+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578709, 1)
2019-09-04T06:31:49.260+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12837
2019-09-04T06:31:49.260+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12837
2019-09-04T06:31:49.260+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578709, 1), t: 1 }({ ts: Timestamp(1567578709, 1), t: 1 })
2019-09-04T06:31:49.260+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:49.260+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), appliedOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, appliedWallTime: new Date(1567578707799), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), appliedOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, appliedWallTime: new Date(1567578707799), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpVisible: { ts: Timestamp(1567578707, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:49.260+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 863 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:19.260+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), appliedOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, appliedWallTime: new Date(1567578707799), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), appliedOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, appliedWallTime: new Date(1567578707799), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578707, 2), t: 1 }, lastCommittedWall: new Date(1567578707799), lastOpVisible: { ts: Timestamp(1567578707, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:49.260+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:19.260+0000
2019-09-04T06:31:49.261+0000 D2 ASIO [RS] Request 863 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) }
2019-09-04T06:31:49.261+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:49.261+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:49.261+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:19.261+0000
2019-09-04T06:31:49.261+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578709, 1), t: 1 }
2019-09-04T06:31:49.261+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 864 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:59.261+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578707, 2), t: 1 } }
2019-09-04T06:31:49.261+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:19.261+0000
2019-09-04T06:31:49.261+0000 D2 ASIO [RS] Request 864 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpApplied: { ts: Timestamp(1567578709, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) }
2019-09-04T06:31:49.261+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpApplied: { ts: Timestamp(1567578709, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:49.261+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:49.261+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:31:49.261+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.261+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.261+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578704, 1)
2019-09-04T06:31:49.261+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:31:59.630+0000
2019-09-04T06:31:49.261+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:31:59.476+0000
2019-09-04T06:31:49.261+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 865 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:31:59.261+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578709, 1), t: 1 } }
2019-09-04T06:31:49.261+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:19.261+0000
2019-09-04T06:31:49.261+0000 D3 REPL [conn293] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.261+0000 D3 REPL [conn293] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000
2019-09-04T06:31:49.261+0000 D3 REPL [conn302] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.261+0000 D3 REPL [conn302] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.891+0000
2019-09-04T06:31:49.261+0000 D3 REPL [conn289] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.261+0000 D3 REPL [conn289] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.280+0000
2019-09-04T06:31:49.261+0000 D3 REPL [conn281] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.261+0000 D3 REPL [conn281] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.661+0000
2019-09-04T06:31:49.261+0000 D3 REPL [conn280] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.261+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:31:49.261+0000 D3 REPL [conn280] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:01.310+0000
2019-09-04T06:31:49.261+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:49.261+0000 D3 REPL [conn297] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.261+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:18.839+0000
2019-09-04T06:31:49.261+0000 D3 REPL [conn297] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.925+0000
2019-09-04T06:31:49.261+0000 D3 REPL [conn276] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.261+0000 D3 REPL [conn276] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.329+0000
2019-09-04T06:31:49.261+0000 D3 REPL [conn300] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn300] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn308] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn308] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn271] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn271] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.752+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn278] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn278] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.693+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn290] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn290] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.274+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn291] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn291] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.662+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn284] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn284] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.440+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn294] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn294] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.468+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn277] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn277] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.324+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn298] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn298] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.986+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn273] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn273] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:51.644+0000
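The interleaved "Got notified of new snapshot" / "waitUntilOpTime" pairs are client operations parked on this secondary: each asked to read at an optime (or cluster time) the node had not yet committed, and every newly committed snapshot wakes them to re-check against their individual deadlines. One common way to land in waitUntilOpTime is a causally consistent session whose read targets a lagging secondary. A sketch with pymongo; the session mechanics are standard driver behavior, while the hosts, replica set name, and the config.lockpings collection are simply the ones visible in this log:

    from pymongo import MongoClient, ReadPreference

    client = MongoClient("mongodb://cmodb802.togewa.com:27019,"
                         "cmodb803.togewa.com:27019,"
                         "cmodb804.togewa.com:27019/?replicaSet=configrs")
    config = client.get_database(
        "config", read_preference=ReadPreference.SECONDARY_PREFERRED)

    # The second read carries the first operation's cluster time, so a
    # secondary that has not yet committed that optime blocks in
    # waitUntilOpTime (up to the per-operation deadlines shown above)
    # before it answers.
    with client.start_session(causal_consistency=True) as session:
        first = config.lockpings.find_one({}, session=session)
        second = config.lockpings.find_one({}, session=session)
        print(first, second)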
2019-09-04T06:31:49.262+0000 D3 REPL [conn307] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn307] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.417+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn296] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn296] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.763+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn288] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn288] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.289+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn295] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn295] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.753+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn292] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn292] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:31:52.054+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn303] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn303] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:13.600+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn282] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn282] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.424+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn309] Got notified of new snapshot: { ts: Timestamp(1567578709, 1), t: 1 }, 2019-09-04T06:31:49.257+0000
2019-09-04T06:31:49.262+0000 D3 REPL [conn309] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000
2019-09-04T06:31:49.262+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:49.262+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), appliedOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, appliedWallTime: new Date(1567578707799), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), appliedOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, appliedWallTime: new Date(1567578707799), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:49.262+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 866 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:19.262+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), appliedOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, appliedWallTime: new Date(1567578707799), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, durableWallTime: new Date(1567578707799), appliedOpTime: { ts: Timestamp(1567578707, 2), t: 1 }, appliedWallTime: new Date(1567578707799), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:49.262+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:19.261+0000
2019-09-04T06:31:49.262+0000 D2 ASIO [RS] Request 866 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) }
2019-09-04T06:31:49.262+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:49.262+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:49.262+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:19.261+0000
2019-09-04T06:31:49.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.315+0000 D4 STORAGE [WTJournalFlusher] flushed journal
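[Note] The `replSetUpdatePosition` payload above is this secondary reporting replication progress to its sync source cmodb804:27019: member 1 (this node) is durable at Timestamp(1567578709, 1), while members 0 and 2 are still at Timestamp(1567578707, 2), about 1.5 s behind by wall clock. A small sketch computing that lag directly from the `durableWallTime` millisecond values copied out of the entry above:

```python
# Replication lag per member, from the durableWallTime values (ms)
# in the replSetUpdatePosition entry above; memberId -> wall time.
optimes_wall_ms = {0: 1567578707799, 1: 1567578709257, 2: 1567578707799}

newest = max(optimes_wall_ms.values())
for member, wall_ms in sorted(optimes_wall_ms.items()):
    print(f"member {member}: {(newest - wall_ms) / 1000:.3f}s behind")
# member 0: 1.458s behind, member 1: 0.000s behind, member 2: 1.458s behind
```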
2019-09-04T06:31:49.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions.
2019-09-04T06:31:49.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.359+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578709, 1)
2019-09-04T06:31:49.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry
2019-09-04T06:31:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:31:49.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" }
2019-09-04T06:31:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:49.370+0000 D5 QUERY [shard-registry-reload] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:31:49.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:31:49.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:31:49.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and
2019-09-04T06:31:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:49.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:49.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578709, 1)
2019-09-04T06:31:49.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 12846
2019-09-04T06:31:49.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 12846
2019-09-04T06:31:49.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:646 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:31:49.370+0000 D1 SHARDING [shard-registry-reload] found 3 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578709, 1), t: 1 }
2019-09-04T06:31:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018
2019-09-04T06:31:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018
2019-09-04T06:31:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018
2019-09-04T06:31:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018
2019-09-04T06:31:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018
2019-09-04T06:31:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018
2019-09-04T06:31:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS
2019-09-04T06:31:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000
2019-09-04T06:31:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 867 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:31:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:31:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 868 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:31:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:31:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001
2019-09-04T06:31:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 869 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:31:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:31:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 870 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:31:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:31:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002
2019-09-04T06:31:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 871 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:31:54.385+0000 cmd:{ isMaster: 1 }
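[Note] The `shard-registry-reload` block above is the periodic refresh of the shard registry: a three-document COLLSCAN of `config.shards`, after which a targeter is started for each shard replica set and the ReplicaSetMonitor schedules an `isMaster` probe to every member. The logged find is equivalent to this sketch, assuming pymongo and a placeholder config-server URI:

```python
# Sketch only: the config.shards read that shard-registry-reload logs,
# issued manually with readPreference "nearest" as in the log entry.
from pymongo import MongoClient, ReadPreference

client = MongoClient("mongodb://configsvr.example.net:27019")  # hypothetical
shards = client.get_database(
    "config", read_preference=ReadPreference.NEAREST
).shards
for doc in shards.find({}):
    print(doc["_id"], doc["host"])  # e.g. shard0000 shard0000/<host1>,<host2>
```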
2019-09-04T06:31:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 872 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:31:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:31:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 870 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578707, 1), t: 1 }, lastWriteDate: new Date(1567578707000), majorityOpTime: { ts: Timestamp(1567578707, 1), t: 1 }, majorityWriteDate: new Date(1567578707000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578709386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 1), $configServerState: { opTime: { ts: Timestamp(1567578690, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578707, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 1) }
2019-09-04T06:31:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578707, 1), t: 1 }, lastWriteDate: new Date(1567578707000), majorityOpTime: { ts: Timestamp(1567578707, 1), t: 1 }, majorityWriteDate: new Date(1567578707000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578709386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578707, 1), $configServerState: { opTime: { ts: Timestamp(1567578690, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578707, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 1) } target: cmodb809.togewa.com:27018
2019-09-04T06:31:49.386+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 867 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578703, 1), t: 1 }, lastWriteDate: new Date(1567578703000), majorityOpTime: { ts: Timestamp(1567578703, 1), t: 1 }, majorityWriteDate: new Date(1567578703000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578709385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578703, 1), $configServerState: { opTime: { ts: Timestamp(1567578705, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578703, 1) }
2019-09-04T06:31:49.386+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578703, 1), t: 1 }, lastWriteDate: new Date(1567578703000), majorityOpTime: { ts: Timestamp(1567578703, 1), t: 1 }, majorityWriteDate: new Date(1567578703000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578709385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578703, 1), $configServerState: { opTime: { ts: Timestamp(1567578705, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578703, 1) } target: cmodb806.togewa.com:27018
2019-09-04T06:31:49.386+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 869 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578707, 1), t: 1 }, lastWriteDate: new Date(1567578707000), majorityOpTime: { ts: Timestamp(1567578707, 1), t: 1 }, majorityWriteDate: new Date(1567578707000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578709386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578707, 1), $configServerState: { opTime: { ts: Timestamp(1567578705, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578707, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 1) }
2019-09-04T06:31:49.386+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578707, 1), t: 1 }, lastWriteDate: new Date(1567578707000), majorityOpTime: { ts: Timestamp(1567578707, 1), t: 1 }, majorityWriteDate: new Date(1567578707000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578709386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578707, 1), $configServerState: { opTime: { ts: Timestamp(1567578705, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578707, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578707, 1) } target: cmodb808.togewa.com:27018
2019-09-04T06:31:49.386+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms
2019-09-04T06:31:49.386+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 868 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578703, 1), t: 1 }, lastWriteDate: new Date(1567578703000), majorityOpTime: { ts: Timestamp(1567578703, 1), t: 1 }, majorityWriteDate: new Date(1567578703000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578709386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578703, 1), $configServerState: { opTime: { ts: Timestamp(1567578698, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578703, 1) }
2019-09-04T06:31:49.386+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578703, 1), t: 1 }, lastWriteDate: new Date(1567578703000), majorityOpTime: { ts: Timestamp(1567578703, 1), t: 1 }, majorityWriteDate: new Date(1567578703000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578709386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578703, 1), $configServerState: { opTime: { ts: Timestamp(1567578698, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578703, 1) } target: cmodb807.togewa.com:27018
2019-09-04T06:31:49.386+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 1ms
2019-09-04T06:31:49.386+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 871 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578699, 1), t: 1 }, lastWriteDate: new Date(1567578699000), majorityOpTime: { ts: Timestamp(1567578699, 1), t: 1 }, majorityWriteDate: new Date(1567578699000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578709386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578699, 1), $configServerState: { opTime: { ts: Timestamp(1567578705, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578699, 1) }
2019-09-04T06:31:49.386+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578699, 1), t: 1 }, lastWriteDate: new Date(1567578699000), majorityOpTime: { ts: Timestamp(1567578699, 1), t: 1 }, majorityWriteDate: new Date(1567578699000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578709386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578699, 1), $configServerState: { opTime: { ts: Timestamp(1567578705, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578699, 1) } target: cmodb810.togewa.com:27018
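[Note] Each "Request NNN finished with response" / "Received remote response" pair above is one `isMaster` probe coming back; from the replies the monitor derives each set's primary, secondary, arbiter, and last write times (here all three shards report healthy primaries and refreshes complete in 0-5 ms). The probe itself is just the `isMaster` command, e.g. this sketch against a placeholder member address:

```python
# Sketch only: the isMaster probe the ReplicaSetMonitor sends to each
# member. Host is hypothetical; directConnection pins the client to it.
from pymongo import MongoClient

member = MongoClient("mongodb://shard-member.example.net:27018",
                     directConnection=True)  # hypothetical member
reply = member.admin.command("isMaster")
print(reply["setName"], "ismaster:", reply["ismaster"],
      "primary:", reply.get("primary"))
```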
2019-09-04T06:31:49.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 872 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578699, 1), t: 1 }, lastWriteDate: new Date(1567578699000), majorityOpTime: { ts: Timestamp(1567578699, 1), t: 1 }, majorityWriteDate: new Date(1567578699000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578709386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578699, 1), $configServerState: { opTime: { ts: Timestamp(1567578698, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578699, 1) }
2019-09-04T06:31:49.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578699, 1), t: 1 }, lastWriteDate: new Date(1567578699000), majorityOpTime: { ts: Timestamp(1567578699, 1), t: 1 }, majorityWriteDate: new Date(1567578699000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578709386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578699, 1), $configServerState: { opTime: { ts: Timestamp(1567578698, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578699, 1) } target: cmodb811.togewa.com:27018
2019-09-04T06:31:49.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms
2019-09-04T06:31:49.415+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:49.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.515+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:49.522+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578709522) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }
2019-09-04T06:31:49.522+0000 D4 - [replSetDistLockPinger] Taking ticket. Available: 1000000000
2019-09-04T06:31:49.522+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178
2019-09-04T06:31:49.522+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:31:49.541+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62]
mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a]
mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
mongod(+0xADB043) [0x561749a63043]
mongod(+0x13B2606) [0x56174a33a606]
mongod(+0x13B3A55) [0x56174a33ba55]
mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894]
mongod(+0x10FA899) [0x56174a082899]
mongod(+0x10FBF53) [0x56174a083f53]
mongod(+0x10FCE0E) [0x56174a084e0e]
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(+0x1FBD2EE) [0x56174af452ee]
mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa]
mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2]
mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b]
mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e]
mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc]
mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1]
mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a]
mongod(+0x28A5BBF) [0x56174b82dbbf]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:31:49.541+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. OpTime: { ts: Timestamp(1567578709, 1), t: 1 }, write concern: { w: "majority", wtimeout: 15000 }
2019-09-04T06:31:49.541+0000 D4 STORAGE [replSetDistLockPinger] flushed journal
2019-09-04T06:31:49.541+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578709522) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:31:49.541+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578709522) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 18ms
2019-09-04T06:31:49.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.615+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:49.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.715+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:49.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.815+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:49.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.916+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:49.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:49.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:49.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:50.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:31:50.003+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:31:50.003+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms
2019-09-04T06:31:50.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:31:50.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:31:50.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:31:50.011+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:31:50.011+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456
2019-09-04T06:31:50.012+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:31:50.012+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
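[Note] conn90 is a monitoring client: it authenticates as `dba_root` with SCRAM-SHA-1 (the saslStart/saslContinue exchange above), then runs serverStatus, replSetGetStatus, a jumbo-chunk count, shardConnPoolStats, and oplog head/tail reads. The same authenticated connection in driver terms, as a hedged sketch (host and password are placeholders; the user name is from the log):

```python
# Sketch only: connect the way conn90 does, SCRAM-SHA-1 against the
# admin database, then run the same status commands it issues.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://configsvr.example.net:27019",  # hypothetical host
    username="dba_root",
    password="<secret>",                       # placeholder credential
    authSource="admin",
    authMechanism="SCRAM-SHA-1",
)
print(client.admin.command("serverStatus")["uptime"])
print(client.admin.command("replSetGetStatus")["myState"])
print(client.config.chunks.count_documents({"jumbo": True}))
```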
2019-09-04T06:31:50.012+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:31:50.014+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:31:50.014+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:31:50.014+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:31:50.015+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:31:50.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:50.015+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} =============================
2019-09-04T06:31:50.015+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:31:50.015+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:31:50.015+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:31:50.015+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:31:50.015+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:31:50.015+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:31:50.015+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:50.015+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:50.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:50.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578709, 1)
2019-09-04T06:31:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12871
2019-09-04T06:31:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12871
2019-09-04T06:31:50.015+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:31:50.015+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:31:50.015+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:31:50.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:31:50.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:50.015+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} =============================
2019-09-04T06:31:50.015+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:31:50.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:31:50.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578709, 1)
2019-09-04T06:31:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12874
2019-09-04T06:31:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12874
2019-09-04T06:31:50.015+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:31:50.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:31:50.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:50.015+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} =============================
2019-09-04T06:31:50.015+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:31:50.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578709, 1)
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12876
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12876
2019-09-04T06:31:50.016+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:31:50.016+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:31:50.016+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
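[Note] The two `local.oplog.rs` finds above (sort `$natural: 1` then `$natural: -1`, limit 1) fetch the oldest and newest oplog entries, i.e. the oplog window; the follow-up on `local.oplog.$main` is a legacy master/slave fallback and gets an EOF plan because that collection does not exist on a replica set node. The window check in driver form, as a sketch with a placeholder URI:

```python
# Sketch only: the oplog-window reads conn90 issues, in pymongo.
from pymongo import MongoClient

client = MongoClient("mongodb://configsvr.example.net:27019")  # hypothetical
oplog = client.local["oplog.rs"]
first = next(oplog.find({"ts": {"$exists": True}}).sort("$natural", 1).limit(1))
last = next(oplog.find({"ts": {"$exists": True}}).sort("$natural", -1).limit(1))
# bson.Timestamp exposes .time (seconds); the difference is the window.
print("oplog window (s):", last["ts"].time - first["ts"].time)
```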
2019-09-04T06:31:50.016+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2019-09-04T06:31:50.016+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:50.016+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:31:50.016+0000 D2 COMMAND [conn90] command: listDatabases
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12879
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12879
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12880
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12880
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12881
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12881
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12882
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" }
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12882
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12883
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" }
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12883
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:31:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12884
2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true,
runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12884 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12885 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12885 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12886 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, 
multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12886 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12887 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12887 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12888 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] 
WT rollback_transaction for snapshot id 12888 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12889 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up 
metadata for: config.chunks @ RecordId(21) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12889 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12890 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12890 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12891 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] 
looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12891 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12892 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12892 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12893 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: 
"config/collection/89--6194257481163143499" } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12893 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12894 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12894 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12895 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: 
"local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12895 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:31:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12896 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12896 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12897 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:31:50.018+0000 D3 STORAGE 
[conn90] WT rollback_transaction for snapshot id 12897 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12898 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12898 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12899 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12899 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12900 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:31:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12900
2019-09-04T06:31:50.018+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:31:50.041+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:31:50.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12902
2019-09-04T06:31:50.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12902
2019-09-04T06:31:50.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12903
2019-09-04T06:31:50.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12903
2019-09-04T06:31:50.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12904
2019-09-04T06:31:50.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12904
2019-09-04T06:31:50.041+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:31:50.042+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:31:50.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12906
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12906
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12907
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12907
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12908
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12908
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12909
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12909
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12910
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12910
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12911
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12911
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12912
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12912
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12913
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12913
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12914
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12914
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12915
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12915
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12916
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12916
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12917
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12917
2019-09-04T06:31:50.043+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:31:50.043+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12919
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12919
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12920
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12920
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12921
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12921
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12922
2019-09-04T06:31:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12922
2019-09-04T06:31:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12923
2019-09-04T06:31:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12923
2019-09-04T06:31:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 12924
2019-09-04T06:31:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 12924
2019-09-04T06:31:50.044+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:31:50.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
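The conn90 activity above is a monitoring client walking the node: one listDatabases against admin, then a dbStats per database (admin, config, local), all tagged { mode: "secondaryPreferred" }. A minimal, hypothetical PyMongo sketch that reproduces the same command sequence (hostname and port taken from the startup options; no credentials, since authorization is disabled in this deployment) might look like:

    # Hypothetical monitoring poll mirroring conn90: listDatabases, then
    # dbStats per database, all with readPreference=secondaryPreferred.
    # Assumes PyMongo >= 3.11 and a node reachable without authentication.
    from pymongo import MongoClient

    client = MongoClient(
        "cmodb803.togewa.com",
        27019,
        directConnection=True,                # address this member only
        readPreference="secondaryPreferred",  # matches $readPreference in the log
    )

    for db_info in client.admin.command("listDatabases")["databases"]:
        stats = client[db_info["name"]].command("dbStats")
        print(db_info["name"], stats["collections"], stats["dataSize"])

Each dbStats above shows Collection lock acquireCounts proportional to the number of collections in that database (4 for admin, 13 for config, 7 for local), which is consistent with the per-collection metadata walk logged earlier.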
2019-09-04T06:31:50.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.116+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:50.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.216+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:50.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, F6290D48707DB86D19CC3803FBFC2DFF0F26CF00), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:50.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:31:50.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, F6290D48707DB86D19CC3803FBFC2DFF0F26CF00), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:50.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, F6290D48707DB86D19CC3803FBFC2DFF0F26CF00), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:50.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257) }
2019-09-04T06:31:50.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, F6290D48707DB86D19CC3803FBFC2DFF0F26CF00), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:50.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999999 Now: 1000000000
2019-09-04T06:31:50.259+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:50.259+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:50.259+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578709, 1)
2019-09-04T06:31:50.259+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12935
2019-09-04T06:31:50.259+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:50.259+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:50.259+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12935
2019-09-04T06:31:50.260+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12938
2019-09-04T06:31:50.260+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12938
2019-09-04T06:31:50.260+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578709, 1), t: 1 }({ ts: Timestamp(1567578709, 1), t: 1 })
2019-09-04T06:31:50.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.316+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:50.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.416+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:50.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.516+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:50.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
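The isMaster round trips repeating roughly every 500 ms on conn58, conn52, conn45, conn59, conn31, conn51, conn60, conn33, conn42, conn22, conn46 and conn29 throughout this stretch are server-monitoring probes from connected drivers and mongos routers; reslen:907 is the replica-set-aware reply each of them reads back. A hypothetical client-side sketch of the same handshake (same host, port and no-auth assumptions as above; the printed fields are standard isMaster reply fields):

    # Hypothetical view of one isMaster probe, as a driver's monitor sends it.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
    reply = client.admin.command("isMaster")  # spelled "hello" on MongoDB 4.4+

    print(reply["setName"])    # "configrs"
    print(reply["ismaster"])   # False: this node reports state 2 (SECONDARY)
    print(reply["secondary"])  # True
    print(reply["primary"])    # cmodb802.togewa.com:27019, per the heartbeats below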
2019-09-04T06:31:50.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.617+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:50.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.717+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:50.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.817+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:50.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:50.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:50.838+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:31:49.063+0000
2019-09-04T06:31:50.838+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:31:50.232+0000
2019-09-04T06:31:50.838+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:31:49.063+0000
2019-09-04T06:31:50.838+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:31:59.063+0000
2019-09-04T06:31:50.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:20.838+0000
2019-09-04T06:31:50.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 873) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
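Requests 873 and 874 (sent here and answered below) are this node's own outbound replSetHeartbeat probes to the other two members, rescheduled every 2 seconds as the "Scheduling heartbeat ... at 06:31:52" lines show. The aggregated result of this traffic, member states, optimes and last-heartbeat times, is what replSetGetStatus reports. A hypothetical operator-side sketch, under the same connectivity assumptions as the earlier snippets:

    # Hypothetical read of the aggregated heartbeat state logged below.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
    status = client.admin.command("replSetGetStatus")

    for m in status["members"]:
        # optimeDate should track the Timestamp(1567578709, 1) values exchanged
        # in the heartbeats; lastHeartbeat is absent for the local member.
        print(m["name"], m["stateStr"], m.get("optimeDate"), m.get("lastHeartbeat"))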
2019-09-04T06:31:50.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 873 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:00.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:50.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:20.838+0000
2019-09-04T06:31:50.838+0000 D2 ASIO [Replication] Request 873 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) }
2019-09-04T06:31:50.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:50.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:50.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 873) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) }
2019-09-04T06:31:50.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:31:50.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:52.838Z
2019-09-04T06:31:50.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:20.838+0000
2019-09-04T06:31:50.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:50.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 874) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:50.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 874 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:00.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:50.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:20.838+0000
2019-09-04T06:31:50.839+0000 D2 ASIO [Replication] Request 874 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) }
2019-09-04T06:31:50.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:31:50.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:50.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 874) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) }
2019-09-04T06:31:50.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:31:50.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:31:59.476+0000
2019-09-04T06:31:50.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:32:01.557+0000
2019-09-04T06:31:50.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:31:50.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:52.839Z
2019-09-04T06:31:50.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:50.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:20.839+0000
2019-09-04T06:31:50.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:20.839+0000
2019-09-04T06:31:50.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:50.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:50.917+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:51.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:51.017+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:51.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:51.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:51.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, F6290D48707DB86D19CC3803FBFC2DFF0F26CF00), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:51.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:31:51.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime:
{ clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, F6290D48707DB86D19CC3803FBFC2DFF0F26CF00), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:51.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, F6290D48707DB86D19CC3803FBFC2DFF0F26CF00), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:51.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257) } 2019-09-04T06:31:51.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, F6290D48707DB86D19CC3803FBFC2DFF0F26CF00), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.117+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:51.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.217+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:51.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. 
Num: 0 2019-09-04T06:31:51.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:51.259+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:51.259+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:51.259+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578709, 1) 2019-09-04T06:31:51.259+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 12967 2019-09-04T06:31:51.259+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:51.259+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:51.259+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 12967 2019-09-04T06:31:51.260+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 12970 2019-09-04T06:31:51.261+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 12970 2019-09-04T06:31:51.261+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578709, 1), t: 1 }({ ts: Timestamp(1567578709, 1), t: 1 }) 2019-09-04T06:31:51.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.317+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:51.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.418+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:51.518+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:51.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.588+0000 I COMMAND [conn45] 
command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.618+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:51.634+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.634+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.646+0000 I COMMAND [conn273] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578678, 1), signature: { hash: BinData(0, BB6464574AD1B916A93154DCD4D10DFFEF24752B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:51.646+0000 D1 - [conn273] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:51.646+0000 W - [conn273] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:51.650+0000 D2 COMMAND [conn301] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578708, 1), signature: { hash: BinData(0, B5AEC13859CF2AE093A83653359419B1C5F526D1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:51.650+0000 D1 REPL [conn301] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578709, 1), t: 1 } 2019-09-04T06:31:51.650+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:31:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42150 #310 (90 connections now open) 2019-09-04T06:31:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:51.650+0000 D2 COMMAND [conn310] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:51.650+0000 I NETWORK [conn310] received client metadata from 10.108.2.48:42150 conn310: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:51.650+0000 I COMMAND [conn310] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ 
"snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:51.650+0000 D2 COMMAND [conn310] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, A3FF2599D574559E1DAE48C1FCBADF51073EB20B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:51.650+0000 D1 REPL [conn310] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578709, 1), t: 1 } 2019-09-04T06:31:51.650+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:31:51.651+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45788 #311 (91 connections now open) 2019-09-04T06:31:51.651+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:51.651+0000 D2 COMMAND [conn311] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:51.651+0000 I NETWORK [conn311] received client metadata from 10.108.2.72:45788 conn311: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:51.651+0000 I COMMAND [conn311] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:51.651+0000 D2 COMMAND [conn311] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, ECF4B5774F4DA596CBFAEF2B306A437976FBCFAD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:51.651+0000 D1 REPL [conn311] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578709, 1), t: 1 } 2019-09-04T06:31:51.651+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000 2019-09-04T06:31:51.660+0000 D2 COMMAND 
[conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.663+0000 I - [conn273] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" 
: "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : 
"/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:51.663+0000 D1 COMMAND [conn273] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578678, 1), signature: { hash: BinData(0, BB6464574AD1B916A93154DCD4D10DFFEF24752B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 
2019-09-04T06:31:51.663+0000 D1 - [conn273] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:51.663+0000 W - [conn273] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:51.664+0000 I COMMAND [conn281] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578673, 1), signature: { hash: BinData(0, 9F3CDA4E8B339EB390337989064218FBF78EF14F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:51.664+0000 D1 - [conn281] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:51.664+0000 W - [conn281] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:51.664+0000 I COMMAND [conn291] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 2D7FBB04A997F5BB61222D22B1E0478572811EE3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:31:51.664+0000 D1 - [conn291] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:51.664+0000 W - [conn291] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:51.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.683+0000 I - [conn273] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:51.684+0000 W COMMAND [conn273] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:51.684+0000 I COMMAND [conn273] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578678, 1), signature: { hash: BinData(0, BB6464574AD1B916A93154DCD4D10DFFEF24752B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:31:51.684+0000 D2 NETWORK [conn273] Session from 10.108.2.44:38698 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:51.684+0000 I NETWORK [conn273] end connection 10.108.2.44:38698 (90 connections now open) 2019-09-04T06:31:51.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.700+0000 I - [conn281] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"56
1748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" 
: "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:51.700+0000 D1 COMMAND [conn281] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578673, 1), signature: { hash: BinData(0, 9F3CDA4E8B339EB390337989064218FBF78EF14F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:51.700+0000 D1 - [conn281] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:51.700+0000 W - [conn281] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:51.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.718+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:51.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.724+0000 I - [conn281] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:51.724+0000 W COMMAND [conn281] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:51.724+0000 I COMMAND [conn281] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
2019-09-04T06:31:51.724+0000 D2 NETWORK [conn281] Session from 10.108.2.58:52170 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:51.724+0000 I NETWORK [conn281] end connection 10.108.2.58:52170 (89 connections now open)
2019-09-04T06:31:51.737+0000 I - [conn291] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [],
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:51.737+0000 D1 COMMAND [conn291] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { 
level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 2D7FBB04A997F5BB61222D22B1E0478572811EE3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:51.737+0000 D1 - [conn291] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:51.737+0000 W - [conn291] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:51.754+0000 I COMMAND [conn271] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 2D7FBB04A997F5BB61222D22B1E0478572811EE3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:51.754+0000 D1 - [conn271] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:51.754+0000 W - [conn271] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:51.756+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48400 #312 (90 connections now open) 2019-09-04T06:31:51.756+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:51.756+0000 D2 COMMAND [conn312] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:51.756+0000 I NETWORK [conn312] received client metadata from 10.108.2.59:48400 conn312: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:51.756+0000 I COMMAND [conn312] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:51.757+0000 D2 COMMAND [conn312] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, 
2019-09-04T06:31:51.757+0000 D1 REPL [conn312] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578709, 1), t: 1 }
2019-09-04T06:31:51.757+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000
2019-09-04T06:31:51.757+0000 I - [conn291] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
[duplicate backtrace omitted: byte-identical to conn281's GlobalLock trace at 06:31:51.724 above]
"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : 
"/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] 
----- END BACKTRACE -----
2019-09-04T06:31:51.757+0000 W COMMAND [conn291] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:31:51.757+0000 I COMMAND [conn291] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 2D7FBB04A997F5BB61222D22B1E0478572811EE3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30084ms
2019-09-04T06:31:51.757+0000 D2 NETWORK [conn291] Session from 10.108.2.73:52186 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:51.757+0000 I NETWORK [conn291] end connection 10.108.2.73:52186 (89 connections now open)
2019-09-04T06:31:51.774+0000 I - [conn271] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
[duplicate backtrace omitted: byte-identical to conn291's waitForReadConcern trace at 06:31:51.737 above]
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:51.774+0000 D1 COMMAND [conn271] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 2D7FBB04A997F5BB61222D22B1E0478572811EE3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:51.774+0000 D1 - [conn271] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:51.774+0000 W - [conn271] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:51.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:51.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:51.794+0000 I - [conn271] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 
----- BEGIN BACKTRACE -----
[duplicate backtrace omitted: byte-identical to conn281's GlobalLock trace at 06:31:51.724 above]
"buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : 
"B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 
2019-09-04T06:31:51.794+0000 W COMMAND [conn271] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:31:51.794+0000 I COMMAND [conn271] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578672, 1), signature: { hash: BinData(0, 2D7FBB04A997F5BB61222D22B1E0478572811EE3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms
2019-09-04T06:31:51.794+0000 D2 NETWORK [conn271] Session from 10.108.2.52:47200 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:31:51.794+0000 I NETWORK [conn271] end connection 10.108.2.52:47200 (88 connections now open)
2019-09-04T06:31:51.818+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:51.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:51.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:51.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:51.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:51.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:51.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:51.918+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:52.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:52.018+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:52.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:52.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
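The pattern above repeats every ~30 s: remote nodes ask this config server for config.settings { _id: "balancer" } with readConcern { level: "majority", afterOpTime: { ..., t: 92 } }, while the waitUntilOpTime records at 06:31:51.757 show the node's current majority snapshot at term 1. Optimes compare by term before timestamp, so a term-92 optime can never be covered by a term-1 snapshot, and each such find appears to simply wait out its full maxTimeMS. A minimal shell sketch of the failing read follows; the command document is copied from the records above, and afterOpTime is normally injected by internal clients (mongos/shards) rather than typed by hand:

// Sketch only: reproduce the expiring read in the mongo shell against this
// config server. Timestamp() and NumberLong() are shell built-ins; the
// values come from the conn281 record above.
db.getSiblingDB("config").runCommand({
    find: "settings",
    filter: { _id: "balancer" },
    limit: 1,
    maxTimeMS: 30000,
    readConcern: {
        level: "majority",
        // Requested optime is from term 92; the snapshot logged above is at
        // term 1. Term compares first, so no future snapshot on this node can
        // satisfy the wait; the command can only end in MaxTimeMSExpired.
        afterOpTime: { ts: Timestamp(1566459168, 1), t: NumberLong(92) }
    }
});
// Expected after ~30 s, matching the summaries above:
// { ok: 0, code: 50, codeName: "MaxTimeMSExpired", errmsg: "operation exceeded time limit" }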
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 488931F6E677F94ED5295475628B5C49496549EB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:52.057+0000 D1 - [conn292] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:31:52.057+0000 W - [conn292] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:52.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.073+0000 I - [conn292] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b"
:"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : 
"/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] 
mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:52.073+0000 D1 COMMAND [conn292] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 488931F6E677F94ED5295475628B5C49496549EB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:52.073+0000 D1 - [conn292] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:31:52.073+0000 W - [conn292] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:31:52.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.093+0000 I - [conn292] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:31:52.093+0000 W COMMAND [conn292] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:31:52.093+0000 I COMMAND [conn292] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 488931F6E677F94ED5295475628B5C49496549EB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:31:52.093+0000 D2 NETWORK [conn292] Session from 10.108.2.50:50150 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:31:52.093+0000 I NETWORK [conn292] end connection 10.108.2.50:50150 (87 connections now open) 2019-09-04T06:31:52.119+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:52.134+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.134+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.219+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:52.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, F6290D48707DB86D19CC3803FBFC2DFF0F26CF00), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:52.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:31:52.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, F6290D48707DB86D19CC3803FBFC2DFF0F26CF00), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:52.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, 
term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, F6290D48707DB86D19CC3803FBFC2DFF0F26CF00), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:52.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257) } 2019-09-04T06:31:52.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578709, 1), signature: { hash: BinData(0, F6290D48707DB86D19CC3803FBFC2DFF0F26CF00), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:52.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:31:52.260+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:52.260+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:52.260+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578709, 1) 2019-09-04T06:31:52.260+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13009 2019-09-04T06:31:52.260+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:52.260+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:52.260+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13009 2019-09-04T06:31:52.261+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13012 2019-09-04T06:31:52.261+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13012 2019-09-04T06:31:52.261+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578709, 1), t: 1 }({ ts: Timestamp(1567578709, 1), t: 1 }) 2019-09-04T06:31:52.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.319+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:52.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:31:52.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.419+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:52.519+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:52.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.584+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51836 #313 (88 connections now open) 2019-09-04T06:31:52.584+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:31:52.585+0000 D2 COMMAND [conn313] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:52.585+0000 I NETWORK [conn313] received client metadata from 10.108.2.74:51836 conn313: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:52.585+0000 I COMMAND [conn313] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:52.585+0000 D2 COMMAND [conn313] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, D5CCCC4E77C8D415872AB251DCD35BEAE0CDE47F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:31:52.585+0000 D1 REPL [conn313] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578709, 1), t: 1 } 2019-09-04T06:31:52.585+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000 2019-09-04T06:31:52.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.588+0000 I COMMAND [conn45] 
command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.619+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:52.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.719+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:52.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.819+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:52.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:52.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 875) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:52.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 875 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:02.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:52.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:20.839+0000 2019-09-04T06:31:52.838+0000 D2 ASIO [Replication] Request 875 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, 
configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) } 2019-09-04T06:31:52.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:52.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:52.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 875) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) } 2019-09-04T06:31:52.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:31:52.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:54.838Z 2019-09-04T06:31:52.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:20.839+0000 2019-09-04T06:31:52.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:52.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 876) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:52.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 876 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:02.839+0000 cmd:{ replSetHeartbeat: "configrs", 
configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:31:52.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:20.839+0000 2019-09-04T06:31:52.839+0000 D2 ASIO [Replication] Request 876 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) } 2019-09-04T06:31:52.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:31:52.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:52.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 876) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) } 2019-09-04T06:31:52.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:31:52.839+0000 D4 REPL [replexec-3] 
Canceling election timeout callback at 2019-09-04T06:32:01.557+0000 2019-09-04T06:31:52.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:32:02.933+0000 2019-09-04T06:31:52.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:31:52.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:54.839Z 2019-09-04T06:31:52.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:52.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:22.839+0000 2019-09-04T06:31:52.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:22.839+0000 2019-09-04T06:31:52.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:52.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:52.920+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:53.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:53.020+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:53.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:53.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:53.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, C8E7772ECF1D7D4E1016D0A7E07B11FD4136DFDE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:53.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:53.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, C8E7772ECF1D7D4E1016D0A7E07B11FD4136DFDE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:53.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, C8E7772ECF1D7D4E1016D0A7E07B11FD4136DFDE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:53.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257) } 
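Every one of the timed-out finds in this section follows the same pattern: a mongos asks this config server for the balancer document with readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } } (or ts Timestamp(1566459161, 3) on conn313/conn314), waitUntilOpTime blocks because the newest committed snapshot here is { ts: Timestamp(1567578709, 1), t: 1 } — term 1, not term 92 — and after the full maxTimeMS of 30000ms the wait fails with MaxTimeMSExpired (errCode 50). A term that moved backwards from 92 to 1 suggests the config replica set was re-initiated while the mongos kept a stale $configServerState opTime that this set can now never satisfy. Below is a minimal pymongo sketch that replays the logged command shape against this node; the hostname is taken from the log, and afterOpTime is the internal sharding-protocol field shown verbatim here (ordinary clients would use a causally consistent session with afterClusterTime instead):

```python
#!/usr/bin/env python3
"""Replay the balancer-settings read that keeps timing out above.

Minimal sketch, assuming pymongo is installed and this config server
(cmodb803.togewa.com:27019, per the log) is reachable. The readConcern
document is copied verbatim from the logged command.
"""
from bson import Int64, Timestamp
from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout

client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                     serverSelectionTimeoutMS=5000)
try:
    # Same shape as the logged find: a majority read concern pinned
    # behind a term-92 opTime, capped at maxTimeMS 30000.
    reply = client["config"].command({
        "find": "settings",
        "filter": {"_id": "balancer"},
        "limit": 1,
        "maxTimeMS": 30000,
        "readConcern": {
            "level": "majority",
            "afterOpTime": {"ts": Timestamp(1566459168, 1), "t": Int64(92)},
        },
    })
    print(reply["cursor"]["firstBatch"])
except ExecutionTimeout:
    # What this log shows: the node never reaches a majority snapshot
    # at term 92 (its snapshots are term 1), so the wait runs the full
    # 30s and fails with MaxTimeMSExpired (errCode 50).
    print("MaxTimeMSExpired after 30000ms, as in the log")
```

If this reproduces the 30-second stall, the problem is the server-side read-concern wait itself rather than the network path, which is consistent with the HostUnreachable "Connection closed by peer" lines above: the mongos gives up and drops each connection right after its timeout.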
2019-09-04T06:31:53.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, C8E7772ECF1D7D4E1016D0A7E07B11FD4136DFDE), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:53.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:53.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:53.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:53.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:53.120+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:53.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:53.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:53.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:53.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:53.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:53.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:53.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:53.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:53.220+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:53.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:53.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:53.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:53.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:31:53.260+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:53.260+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:53.260+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578709, 1) 2019-09-04T06:31:53.260+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13043 2019-09-04T06:31:53.260+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:53.260+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:53.260+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13043 2019-09-04T06:31:53.261+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13046 2019-09-04T06:31:53.261+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13046 2019-09-04T06:31:53.261+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578709, 1), t: 1 }({ ts: Timestamp(1567578709, 1), t: 1 }) 2019-09-04T06:31:53.272+0000 D2 COMMAND [conn157] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:31:53.272+0000 I COMMAND [conn157] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:53.279+0000 D2 COMMAND [conn101] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:53.279+0000 I COMMAND [conn101] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:53.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:53.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:53.284+0000 D2 COMMAND [conn218] run command admin.$cmd { ping: 1.0, lsid: { id: UUID("ac8e303f-4e60-4a79-b9a4-f7cba7354076") }, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } 2019-09-04T06:31:53.284+0000 I COMMAND [conn218] command admin.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("ac8e303f-4e60-4a79-b9a4-f7cba7354076") }, $clusterTime: { clusterTime: Timestamp(1567578650, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:53.320+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:53.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:53.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:53.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:53.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:53.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:53.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:53.420+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:53.520+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:53.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:53.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:53.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:53.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:53.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:53.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:53.620+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:53.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:53.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:53.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:53.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:53.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:53.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:53.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:53.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:53.721+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:53.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:53.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:53.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:53.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:53.821+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:53.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:53.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:53.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:53.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:53.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:53.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:53.921+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:54.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:54.021+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:54.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.055+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:31:54.056+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:31:54.056+0000 D2 COMMAND [conn90] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:31:54.056+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms
2019-09-04T06:31:54.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.121+0000 D4 STORAGE [WTJournalFlusher] flushed journal
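The isMaster traffic above (and throughout this capture) is routine: every connected driver, mongos, and replica-set peer re-issues the command on a fixed interval to track topology, which is why the same connection IDs recur roughly once per second with reslen:907 each time. A minimal client-side sketch of the same check, using pymongo against the host this log belongs to (hostname and port taken from the log; any reachable member would do, and directConnection assumes a reasonably recent pymongo):

    # Sketch: reproduce the isMaster checks recorded above from a client.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)   # talk to this member only
    reply = client.admin.command("isMaster")      # same command the log records
    print(reply["ismaster"], reply.get("secondary"), reply.get("setName"))

On a secondary such as this node the reply carries ismaster: false, secondary: true, which is exactly what the pollers are watching for.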
2019-09-04T06:31:54.142+0000 I NETWORK [listener] connection accepted from 10.108.2.46:41040 #314 (89 connections now open)
2019-09-04T06:31:54.142+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:31:54.142+0000 D2 COMMAND [conn314] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:31:54.142+0000 I NETWORK [conn314] received client metadata from 10.108.2.46:41040 conn314: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:31:54.142+0000 I COMMAND [conn314] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:31:54.142+0000 D2 COMMAND [conn314] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, D5CCCC4E77C8D415872AB251DCD35BEAE0CDE47F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:31:54.142+0000 D1 REPL [conn314] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578709, 1), t: 1 }
2019-09-04T06:31:54.142+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000
2019-09-04T06:31:54.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.221+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:54.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, C8E7772ECF1D7D4E1016D0A7E07B11FD4136DFDE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:54.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:31:54.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, C8E7772ECF1D7D4E1016D0A7E07B11FD4136DFDE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:54.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, C8E7772ECF1D7D4E1016D0A7E07B11FD4136DFDE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:54.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257) }
2019-09-04T06:31:54.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, C8E7772ECF1D7D4E1016D0A7E07B11FD4136DFDE), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:54.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:54.260+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:54.260+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:54.260+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578709, 1)
2019-09-04T06:31:54.260+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13082
2019-09-04T06:31:54.260+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:54.260+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:54.260+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13082
2019-09-04T06:31:54.261+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13085
2019-09-04T06:31:54.261+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13085
2019-09-04T06:31:54.261+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578709, 1), t: 1 }({ ts: Timestamp(1567578709, 1), t: 1 })
2019-09-04T06:31:54.261+0000 D2 ASIO [RS] Request 865 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpApplied: { ts: Timestamp(1567578709, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) }
2019-09-04T06:31:54.261+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpApplied: { ts: Timestamp(1567578709, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:54.261+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:54.261+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:31:54.261+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:02.933+0000
2019-09-04T06:31:54.261+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:04.666+0000
2019-09-04T06:31:54.261+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:54.261+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:22.839+0000
2019-09-04T06:31:54.261+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 877 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:04.261+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578709, 1), t: 1 } }
2019-09-04T06:31:54.261+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:19.261+0000
2019-09-04T06:31:54.262+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:54.262+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:54.262+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 878 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:24.262+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:54.262+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:19.261+0000
2019-09-04T06:31:54.262+0000 D2 ASIO [RS] Request 878 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) }
2019-09-04T06:31:54.262+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:54.262+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:54.262+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:19.261+0000
2019-09-04T06:31:54.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.321+0000 D4 STORAGE [WTJournalFlusher] flushed journal
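The replSetHeartbeat exchange and the replSetUpdatePosition report above carry the per-member durable/applied optimes from which the primary advances the commit point. The same numbers can be read back on demand from any member; a small sketch under the same host assumption as earlier:

    # Sketch: inspect the member states and optimes the heartbeats above report.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        print(m["name"], m["stateStr"], m.get("optime"))

In this capture all three members sit at { ts: Timestamp(1567578709, 1), t: 1 }, which is why the fetcher keeps reading 0 operations.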
2019-09-04T06:31:54.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.422+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:54.522+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:54.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.622+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:54.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.722+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:54.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.822+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:54.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:54.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:54.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:54.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 879) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:54.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 879 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:04.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:54.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:22.839+0000
2019-09-04T06:31:54.838+0000 D2 ASIO [Replication] Request 879 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) }
2019-09-04T06:31:54.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:54.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:54.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 879) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) }
2019-09-04T06:31:54.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:31:54.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:56.838Z
2019-09-04T06:31:54.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:22.839+0000
2019-09-04T06:31:54.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:54.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 880) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:54.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 880 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:04.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:54.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:22.839+0000
2019-09-04T06:31:54.839+0000 D2 ASIO [Replication] Request 880 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) }
2019-09-04T06:31:54.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:31:54.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:54.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 880) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578709, 1) }
2019-09-04T06:31:54.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:31:54.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:32:04.666+0000
2019-09-04T06:31:54.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:32:04.955+0000
2019-09-04T06:31:54.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:31:54.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:56.839Z
2019-09-04T06:31:54.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:54.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:24.839+0000
2019-09-04T06:31:54.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:24.839+0000
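Each heartbeat from the primary (cmodb802, state: 1 above) pushes the election timeout forward, which is why the callback keeps being cancelled and rescheduled rather than firing. The timeout itself is a replica-set setting; a sketch of reading it back, with field names as in the standard replSetGetConfig reply:

    # Sketch: read the election timeout the cancel/reschedule entries above obey.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)
    conf = client.admin.command("replSetGetConfig")["config"]
    print(conf["settings"]["electionTimeoutMillis"])  # default is 10000 ms

If heartbeats from the primary stop arriving for longer than this window, the scheduled callback fires and this member calls an election.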
(Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:31:55.049+0000 I NETWORK [conn315] received client metadata from 10.108.2.55:36712 conn315: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:31:55.049+0000 I COMMAND [conn315] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:31:55.050+0000 D2 COMMAND [conn315] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578711, 1), signature: { hash: BinData(0, 6FC68FAD4499724D64BA3144ABE5F4A5DAEAA379), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:31:55.050+0000 D1 REPL [conn315] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578709, 1), t: 1 } 2019-09-04T06:31:55.050+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000 2019-09-04T06:31:55.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, C8E7772ECF1D7D4E1016D0A7E07B11FD4136DFDE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:55.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:31:55.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, C8E7772ECF1D7D4E1016D0A7E07B11FD4136DFDE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:55.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, C8E7772ECF1D7D4E1016D0A7E07B11FD4136DFDE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:31:55.063+0000 
D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), opTime: { ts: Timestamp(1567578709, 1), t: 1 }, wallTime: new Date(1567578709257) } 2019-09-04T06:31:55.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, C8E7772ECF1D7D4E1016D0A7E07B11FD4136DFDE), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.067+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.123+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:55.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.223+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:55.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:31:55.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
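conn315's balancer lookup above asks for readConcern level "majority" with an afterOpTime, so the server parks the request in waitUntilOpTime until a committed snapshot at or past that optime exists; that wait is what resolves at the very end of this capture. A sketch of an equivalent client-side read, with the collection and filter taken from the logged command (the secondary-tolerant read preference is an addition, needed when pointing directly at a secondary like this node):

    # Sketch: a majority-read of the balancer settings doc, mirroring conn315.
    from pymongo import MongoClient, ReadPreference
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)
    settings = client["config"].get_collection(
        "settings",
        read_concern=ReadConcern("majority"),
        read_preference=ReadPreference.SECONDARY_PREFERRED)
    print(settings.find_one({"_id": "balancer"}))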
2019-09-04T06:31:55.237+0000 D2 ASIO [RS] Request 877 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578715, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578715231), o: { $v: 1, $set: { ping: new Date(1567578715228), up: 2615 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpApplied: { ts: Timestamp(1567578715, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578715, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 1) }
2019-09-04T06:31:55.237+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578715, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578715231), o: { $v: 1, $set: { ping: new Date(1567578715228), up: 2615 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpApplied: { ts: Timestamp(1567578715, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578715, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:55.237+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:55.237+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578715, 1) and ending at ts: Timestamp(1567578715, 1)
2019-09-04T06:31:55.237+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:04.955+0000
2019-09-04T06:31:55.237+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:05.950+0000
2019-09-04T06:31:55.237+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:55.237+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:24.839+0000
2019-09-04T06:31:55.237+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578715, 1), t: 1 }
2019-09-04T06:31:55.237+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:55.237+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:55.237+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578709, 1)
2019-09-04T06:31:55.237+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13117
2019-09-04T06:31:55.237+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:55.237+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:55.237+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13117
2019-09-04T06:31:55.237+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:55.237+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:31:55.237+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:55.237+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578709, 1)
2019-09-04T06:31:55.237+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13120
2019-09-04T06:31:55.237+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578715, 1) }
2019-09-04T06:31:55.237+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:55.237+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:55.237+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13120
2019-09-04T06:31:55.237+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13086
2019-09-04T06:31:55.237+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13086
2019-09-04T06:31:55.237+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13123
2019-09-04T06:31:55.237+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13123
2019-09-04T06:31:55.237+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:55.237+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 13125
2019-09-04T06:31:55.237+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578715, 1)
2019-09-04T06:31:55.237+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578715, 1)
2019-09-04T06:31:55.237+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 13125
2019-09-04T06:31:55.238+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:55.238+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:31:55.238+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13124
2019-09-04T06:31:55.238+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13124
2019-09-04T06:31:55.238+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13127
2019-09-04T06:31:55.238+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13127
2019-09-04T06:31:55.238+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578715, 1), t: 1 }({ ts: Timestamp(1567578715, 1), t: 1 })
2019-09-04T06:31:55.238+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578715, 1)
2019-09-04T06:31:55.238+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13128
2019-09-04T06:31:55.238+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578715, 1) } } ] } sort: {} projection: {}
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578715, 1) Sort: {} Proj: {} =============================
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578715, 1) || First: notFirst: full path: ts
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578715, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
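The single fetched operation above is an idempotent update (op: "u") to config.mongos, produced by a mongos ping upstream. The oplog is an ordinary capped collection in the local database and its newest entry can be inspected directly; a sketch, with natural-order sort being the usual way the oplog is read and the secondary-tolerant read preference an addition for a direct connection to this node:

    # Sketch: look at the newest oplog entry, like the config.mongos update above.
    from pymongo import MongoClient, ReadPreference

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)
    oplog = client["local"].get_collection(
        "oplog.rs", read_preference=ReadPreference.SECONDARY_PREFERRED)
    last = oplog.find_one(sort=[("$natural", -1)])  # newest entry first
    print(last["ts"], last["op"], last["ns"])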
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578715, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578715, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578715, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:55.238+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13128
2019-09-04T06:31:55.238+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:55.238+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:55.238+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578715, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578715231), o: { $v: 1, $set: { ping: new Date(1567578715228), up: 2615 } } }, oplog application mode: Secondary
2019-09-04T06:31:55.238+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578715, 1)
2019-09-04T06:31:55.238+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 13130
2019-09-04T06:31:55.238+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:31:55.238+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:31:55.238+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 13130
2019-09-04T06:31:55.238+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:55.238+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578715, 1), t: 1 }({ ts: Timestamp(1567578715, 1), t: 1 })
2019-09-04T06:31:55.238+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578715, 1)
2019-09-04T06:31:55.238+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13129
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:55.238+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:55.238+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:55.238+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13129
2019-09-04T06:31:55.238+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578715, 1)
2019-09-04T06:31:55.238+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13134
2019-09-04T06:31:55.238+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13134
2019-09-04T06:31:55.238+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578715, 1), t: 1 }({ ts: Timestamp(1567578715, 1), t: 1 })
2019-09-04T06:31:55.238+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:55.238+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578715, 1), t: 1 }, appliedWallTime: new Date(1567578715231), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
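Applying the op on this secondary reduces to an _id-point update ("Using idhack" above) followed by bumping appliedThrough and the oplogReadTimestamp. The write whose oplog entry was just replayed is equivalent to a plain update_one; a sketch of what the originating mongos ping does on the primary (cmodb802 assumed primary per the heartbeat state: 1 seen earlier; illustrative only, since mongos maintains this document itself):

    # Sketch: the upstream write behind the replayed oplog entry -- a ping
    # update on config.mongos keyed by the mongos host:port _id.
    import datetime
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb802.togewa.com:27019/")
    client["config"]["mongos"].update_one(
        {"_id": "cmodb801.togewa.com:27017"},
        {"$set": {"ping": datetime.datetime.utcnow(), "up": 2615}})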
2019-09-04T06:31:55.238+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 881 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:25.238+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578715, 1), t: 1 }, appliedWallTime: new Date(1567578715231), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:55.238+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:25.238+0000
2019-09-04T06:31:55.239+0000 D2 ASIO [RS] Request 881 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578715, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 1) }
2019-09-04T06:31:55.239+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578715, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:55.239+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:55.239+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:25.239+0000
2019-09-04T06:31:55.239+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578715, 1), t: 1 }
2019-09-04T06:31:55.239+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 882 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:05.239+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578709, 1), t: 1 } }
2019-09-04T06:31:55.239+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:25.239+0000
2019-09-04T06:31:55.253+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:31:55.253+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:55.253+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578715, 1), t: 1 }, durableWallTime: new Date(1567578715231), appliedOpTime: { ts: Timestamp(1567578715, 1), t: 1 }, appliedWallTime: new Date(1567578715231), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:55.253+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 883 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:25.253+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578715, 1), t: 1 }, durableWallTime: new Date(1567578715231), appliedOpTime: { ts: Timestamp(1567578715, 1), t: 1 }, appliedWallTime: new Date(1567578715231), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:31:55.253+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:25.239+0000
2019-09-04T06:31:55.253+0000 D2 ASIO [RS] Request 883 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578715, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 1) }
2019-09-04T06:31:55.253+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578709, 1), t: 1 }, lastCommittedWall: new Date(1567578709257), lastOpVisible: { ts: Timestamp(1567578709, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578709, 1), $clusterTime: { clusterTime: Timestamp(1567578715, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:55.253+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:31:55.253+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:25.239+0000
2019-09-04T06:31:55.253+0000 D2 ASIO [RS] Request 882 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 1), t: 1 }, lastCommittedWall: new Date(1567578715231), lastOpVisible: { ts: Timestamp(1567578715, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578715, 1), t: 1 }, lastCommittedWall: new Date(1567578715231), lastOpApplied: { ts: Timestamp(1567578715, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 1), $clusterTime: { clusterTime: Timestamp(1567578715, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 1) }
2019-09-04T06:31:55.253+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 1), t: 1 }, lastCommittedWall: new Date(1567578715231), lastOpVisible: { ts: Timestamp(1567578715, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578715, 1), t: 1 }, lastCommittedWall: new Date(1567578715231), lastOpApplied: { ts: Timestamp(1567578715, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 1), $clusterTime: { clusterTime: Timestamp(1567578715, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:55.253+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:31:55.253+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:31:55.253+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000
2019-09-04T06:31:55.253+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000
2019-09-04T06:31:55.253+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578710, 1)
2019-09-04T06:31:55.253+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:05.950+0000
2019-09-04T06:31:55.253+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:06.525+0000
2019-09-04T06:31:55.253+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 884 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:05.253+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578715, 1), t: 1 } }
2019-09-04T06:31:55.253+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000
2019-09-04T06:31:55.254+0000 D3 REPL [conn315] waitUntilOpTime:
waiting for a new snapshot until 2019-09-04T06:32:25.060+0000 2019-09-04T06:31:55.254+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:25.239+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn300] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn300] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn289] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn289] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.280+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn276] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn276] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.329+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn277] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn277] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.324+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn296] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn296] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.763+0000 2019-09-04T06:31:55.254+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:31:55.254+0000 D3 REPL [conn295] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn295] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.753+0000 2019-09-04T06:31:55.254+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:24.839+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn303] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn303] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:13.600+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn309] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 
2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn309] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn282] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn282] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.424+0000 2019-09-04T06:31:55.254+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578715, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a5b02d1a496712d7233'), operName: "", parentOperId: "5d6f5a5b02d1a496712d7230" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578715, 1), signature: { hash: BinData(0, 7F24EBE9226A904172D61E8007D74BEDB704BEDC), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578715, 1), t: 1 } }, $db: "config" } 2019-09-04T06:31:55.254+0000 D3 REPL [conn288] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn288] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.289+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn307] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn307] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.417+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn293] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn293] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn280] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn280] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:01.310+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn297] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn297] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.925+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn302] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn302] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.891+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn290] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn290] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.274+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn284] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn284] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.440+0000 
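The conn21 find on config.settings above arrives with readConcern { level: "majority", afterOpTime: ... }, which is why it must wait for a 'committed' snapshot before running; the surrounding waitUntilOpTime lines are other readers parked on the same condition. afterOpTime is reserved for intra-cluster callers (here, a mongos or shard talking to this config server); an application expresses the equivalent ordering with a majority read inside a causally consistent session, which sends afterClusterTime instead. A minimal sketch with a pymongo 3.x client, assuming direct access to the config-server member named in this log:

    # Sketch: the application-level analogue of the internal majority read
    # above. afterOpTime is cluster-internal; drivers express the same
    # ordering with afterClusterTime via a causally consistent session.
    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    # Host taken from this log; any data-bearing member would do.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    settings = client["config"].get_collection(
        "settings", read_concern=ReadConcern("majority"))

    with client.start_session(causal_consistency=True) as session:
        # The server parks the read until a majority-committed snapshot
        # satisfying the session's cluster time exists, which is what the
        # "Waiting for 'committed' snapshot" line below shows internally.
        doc = settings.find_one({"_id": "chunksize"}, session=session)
        print(doc)  # None here: the log shows an EOF plan (no such collection)
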
2019-09-04T06:31:55.254+0000 D3 REPL [conn308] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn308] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn278] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn278] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.693+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn294] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn294] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.468+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn298] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn298] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.986+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578715, 1), t: 1 }, 2019-09-04T06:31:55.231+0000 2019-09-04T06:31:55.254+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000 2019-09-04T06:31:55.254+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f5a5b02d1a496712d7230|5d6f5a5b02d1a496712d7233 2019-09-04T06:31:55.254+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578715, 1), t: 1 } } } 2019-09-04T06:31:55.254+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:31:55.254+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578715, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a5b02d1a496712d7233'), operName: "", parentOperId: "5d6f5a5b02d1a496712d7230" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578715, 1), signature: { hash: BinData(0, 7F24EBE9226A904172D61E8007D74BEDB704BEDC), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578715, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578715, 1) 2019-09-04T06:31:55.254+0000 D2 QUERY [conn21] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:31:55.254+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578715, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a5b02d1a496712d7233'), operName: "", parentOperId: "5d6f5a5b02d1a496712d7230" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578715, 1), signature: { hash: BinData(0, 7F24EBE9226A904172D61E8007D74BEDB704BEDC), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578715, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:31:55.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.323+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:55.323+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.323+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.337+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578715, 1) 2019-09-04T06:31:55.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.423+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:55.523+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:55.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.623+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:55.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.661+0000 I COMMAND [conn59] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.706+0000 D2 ASIO [RS] Request 884 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578715, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578715704), o: { $v: 1, $set: { ping: new Date(1567578715703) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 1), t: 1 }, lastCommittedWall: new Date(1567578715231), lastOpVisible: { ts: Timestamp(1567578715, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578715, 1), t: 1 }, lastCommittedWall: new Date(1567578715231), lastOpApplied: { ts: Timestamp(1567578715, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 1), $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) } 2019-09-04T06:31:55.706+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578715, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578715704), o: { $v: 1, $set: { ping: new Date(1567578715703) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 1), t: 1 }, lastCommittedWall: new Date(1567578715231), lastOpVisible: { ts: Timestamp(1567578715, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578715, 1), t: 1 }, lastCommittedWall: new Date(1567578715231), lastOpApplied: { ts: Timestamp(1567578715, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 1), $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:55.706+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:55.706+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578715, 2) and ending at ts: 
Timestamp(1567578715, 2) 2019-09-04T06:31:55.706+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:06.525+0000 2019-09-04T06:31:55.706+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:05.741+0000 2019-09-04T06:31:55.706+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:31:55.706+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:24.839+0000 2019-09-04T06:31:55.706+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578715, 2), t: 1 } 2019-09-04T06:31:55.706+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:55.706+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:55.706+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578715, 1) 2019-09-04T06:31:55.706+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13150 2019-09-04T06:31:55.706+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:55.706+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:55.706+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13150 2019-09-04T06:31:55.706+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:31:55.706+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:31:55.706+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:31:55.706+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578715, 2) } 2019-09-04T06:31:55.706+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578715, 1) 2019-09-04T06:31:55.706+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13153 2019-09-04T06:31:55.706+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:31:55.706+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:31:55.706+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13153 2019-09-04T06:31:55.706+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13135 2019-09-04T06:31:55.706+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13135 2019-09-04T06:31:55.706+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13156 2019-09-04T06:31:55.706+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13156 2019-09-04T06:31:55.706+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 
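At this point the oplog fetcher has read a single config.lockpings update from the sync source ("read 1 operations ... starting at ts: Timestamp(1567578715, 2)") and the batcher has queued it ("replication batch size is 1"); a writer thread persists the raw entry into the local oplog before it is applied below. The getMore requests visible above are ordinary tailable-cursor reads on local.oplog.rs, so the same stream can be inspected from a client. A read-only sketch in Python with pymongo (host and start timestamp taken from this log; it omits the internal term and lastKnownCommittedOpTime fields the server's fetcher adds):

    # Sketch: read the same oplog stream the fetcher above is tailing.
    from pymongo import MongoClient, CursorType
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb804.togewa.com:27019/")  # sync source
    oplog = client["local"]["oplog.rs"]

    cursor = oplog.find(
        {"ts": {"$gte": Timestamp(1567578715, 2)}},  # optime of this batch
        cursor_type=CursorType.TAILABLE_AWAIT,
    )
    for entry in cursor:
        # First entry is the config.lockpings update applied below:
        # op: "u", o2: { _id: "cmodb807..." }, o: { $set: { ping: ... } }
        print(entry["ts"], entry["op"], entry["ns"])
        break
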
2019-09-04T06:31:55.706+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 13158
2019-09-04T06:31:55.706+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578715, 2)
2019-09-04T06:31:55.706+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578715, 2)
2019-09-04T06:31:55.706+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 13158
2019-09-04T06:31:55.706+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:55.706+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:31:55.706+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13157
2019-09-04T06:31:55.706+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13157
2019-09-04T06:31:55.706+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13160
2019-09-04T06:31:55.706+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13160
2019-09-04T06:31:55.707+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578715, 2), t: 1 }({ ts: Timestamp(1567578715, 2), t: 1 })
2019-09-04T06:31:55.707+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578715, 2)
2019-09-04T06:31:55.707+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13161
2019-09-04T06:31:55.707+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578715, 2) } } ] } sort: {} projection: {}
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578715, 2)
Sort: {}
Proj: {}
=============================
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Rated tree:
$and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578715, 2) || First: notFirst: full path: ts
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578715, 2)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Rated tree:
t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578715, 2)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Rated tree:
$or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578715, 2) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578715, 2)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:31:55.707+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13161
2019-09-04T06:31:55.707+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:31:55.707+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:55.707+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578715, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578715704), o: { $v: 1, $set: { ping: new Date(1567578715703) } } }, oplog application mode: Secondary
2019-09-04T06:31:55.707+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578715, 2)
2019-09-04T06:31:55.707+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 13163
2019-09-04T06:31:55.707+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }
2019-09-04T06:31:55.707+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:31:55.707+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 13163
2019-09-04T06:31:55.707+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:31:55.707+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578715, 2), t: 1 }({ ts: Timestamp(1567578715, 2), t: 1 })
2019-09-04T06:31:55.707+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578715, 2)
2019-09-04T06:31:55.707+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13162
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Rated tree:
$and
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:55.707+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:31:55.707+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:31:55.707+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13162 2019-09-04T06:31:55.707+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578715, 2) 2019-09-04T06:31:55.707+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13166 2019-09-04T06:31:55.707+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13166 2019-09-04T06:31:55.707+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578715, 2), t: 1 }({ ts: Timestamp(1567578715, 2), t: 1 }) 2019-09-04T06:31:55.707+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:55.707+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578715, 1), t: 1 }, durableWallTime: new Date(1567578715231), appliedOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, appliedWallTime: new Date(1567578715704), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 1), t: 1 }, lastCommittedWall: new Date(1567578715231), lastOpVisible: { ts: Timestamp(1567578715, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:55.707+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 885 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:25.707+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578715, 1), t: 1 }, durableWallTime: new Date(1567578715231), appliedOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, appliedWallTime: new Date(1567578715704), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 1), t: 1 }, lastCommittedWall: new Date(1567578715231), lastOpVisible: { ts: Timestamp(1567578715, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:55.707+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:25.707+0000 2019-09-04T06:31:55.708+0000 D2 ASIO [RS] Request 885 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 1), t: 1 }, lastCommittedWall: new Date(1567578715231), lastOpVisible: { ts: Timestamp(1567578715, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 1), $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) } 2019-09-04T06:31:55.708+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 1), t: 1 }, lastCommittedWall: new Date(1567578715231), lastOpVisible: { ts: Timestamp(1567578715, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 1), $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:55.708+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:55.708+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:25.708+0000 2019-09-04T06:31:55.708+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578715, 2), t: 1 } 2019-09-04T06:31:55.708+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 886 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:05.708+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578715, 1), t: 1 } } 2019-09-04T06:31:55.708+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:25.708+0000 2019-09-04T06:31:55.710+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:31:55.710+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:55.710+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), appliedOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, appliedWallTime: new Date(1567578715704), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 1), t: 1 }, lastCommittedWall: new Date(1567578715231), lastOpVisible: { ts: Timestamp(1567578715, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:55.710+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 887 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:25.710+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: 
Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), appliedOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, appliedWallTime: new Date(1567578715704), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, durableWallTime: new Date(1567578709257), appliedOpTime: { ts: Timestamp(1567578709, 1), t: 1 }, appliedWallTime: new Date(1567578709257), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 1), t: 1 }, lastCommittedWall: new Date(1567578715231), lastOpVisible: { ts: Timestamp(1567578715, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:31:55.710+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:25.708+0000 2019-09-04T06:31:55.710+0000 D2 ASIO [RS] Request 887 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 1), t: 1 }, lastCommittedWall: new Date(1567578715231), lastOpVisible: { ts: Timestamp(1567578715, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 1), $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) } 2019-09-04T06:31:55.710+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 1), t: 1 }, lastCommittedWall: new Date(1567578715231), lastOpVisible: { ts: Timestamp(1567578715, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 1), $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:55.710+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:31:55.710+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:25.708+0000 2019-09-04T06:31:55.710+0000 D2 ASIO [RS] Request 886 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpApplied: { ts: Timestamp(1567578715, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) } 2019-09-04T06:31:55.710+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpApplied: { ts: Timestamp(1567578715, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:31:55.710+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:31:55.710+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:31:55.710+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.710+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578710, 2) 2019-09-04T06:31:55.711+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn280] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn280] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:01.310+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn309] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn309] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000 
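Above, replication-1 advances the commit point and the stable optime to { ts: Timestamp(1567578715, 2), t: 1 } and then moves WiredTiger's oldest_timestamp up behind it; the snapshot notifications that follow are how the parked majority readers learn a new committed snapshot exists. The nearest externally visible view of these internal timestamps is replSetGetStatus. A sketch in Python with pymongo (field names are those reported by a 4.2 mongod; lastStableRecoveryTimestamp may be named differently or absent on other versions):

    # Sketch: the externally visible counterparts of the optimes being
    # advanced in the log lines above.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    status = client.admin.command("replSetGetStatus")

    print(status["optimes"]["lastCommittedOpTime"])   # commit point
    print(status["optimes"]["appliedOpTime"])         # last applied op
    print(status.get("lastStableRecoveryTimestamp"))  # stable timestamp basis
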
2019-09-04T06:31:55.711+0000 D3 REPL [conn288] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn288] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.289+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn282] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn282] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.424+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn307] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn307] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.417+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn290] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn290] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.274+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn297] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn297] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.925+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn293] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn293] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn294] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn294] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.468+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn308] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn308] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000 2019-09-04T06:31:55.711+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:05.741+0000 2019-09-04T06:31:55.711+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:06.273+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn289] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 888 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:05.711+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578715, 2), t: 1 } } 2019-09-04T06:31:55.711+0000 D3 REPL [conn289] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.280+0000 2019-09-04T06:31:55.711+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:25.708+0000 2019-09-04T06:31:55.711+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf 
of pool replexec 2019-09-04T06:31:55.711+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:24.839+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn277] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn277] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.324+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn300] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn300] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn295] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn295] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.753+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn284] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn284] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.440+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn296] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn296] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.763+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn303] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn303] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:13.600+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn298] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn298] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.986+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn278] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn278] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.693+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn302] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn302] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.891+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn276] Got notified of new snapshot: { ts: Timestamp(1567578715, 2), t: 1 }, 2019-09-04T06:31:55.704+0000 2019-09-04T06:31:55.711+0000 D3 REPL [conn276] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.329+0000 2019-09-04T06:31:55.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.717+0000 I 
COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.723+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:55.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.806+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578715, 2) 2019-09-04T06:31:55.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.823+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:55.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:55.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:55.924+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:56.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:31:56.024+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:56.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:56.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:56.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:56.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:56.088+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:56.088+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:56.124+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:31:56.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:56.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:56.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:31:56.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:31:56.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 
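The isMaster commands above recur on each connection at a fixed interval (conn33, for example, answers at .283, .783, and again at .283 the next second, i.e. every 500 ms), which marks them as server-discovery and monitoring traffic rather than application load; the 907-byte replies carry the host list, set name, and primary that clients track. Drivers expose this cadence as heartbeatFrequencyMS, whose floor is 500 ms. A sketch of the client side in Python with pymongo (the option name is standard; the 10-second value is just an example):

    # Sketch: the monitoring command behind the isMaster lines above.
    # heartbeatFrequencyMS is the standard driver knob for this cadence;
    # 500 ms is its floor, 10 s the usual default.
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://cmodb803.togewa.com:27019/",
        heartbeatFrequencyMS=10000,  # example value; slows the monitor to 10 s
    )
    reply = client.admin.command("isMaster")
    print(reply["ismaster"], reply.get("setName"), reply.get("hosts"))
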
2019-09-04T06:31:56.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.224+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:56.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 7F24EBE9226A904172D61E8007D74BEDB704BEDC), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:56.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:31:56.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 7F24EBE9226A904172D61E8007D74BEDB704BEDC), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:56.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 7F24EBE9226A904172D61E8007D74BEDB704BEDC), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:56.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), opTime: { ts: Timestamp(1567578715, 2), t: 1 }, wallTime: new Date(1567578715704) }
2019-09-04T06:31:56.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 7F24EBE9226A904172D61E8007D74BEDB704BEDC), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:56.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:56.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.324+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:56.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.424+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:56.524+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:56.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.588+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.588+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.624+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:56.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.695+0000 D2 COMMAND [conn49] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578715, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 7F24EBE9226A904172D61E8007D74BEDB704BEDC), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578715, 2), t: 1 } }, $db: "config" }
2019-09-04T06:31:56.695+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578715, 2), t: 1 } } }
2019-09-04T06:31:56.695+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:31:56.695+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578715, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 7F24EBE9226A904172D61E8007D74BEDB704BEDC), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578715, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578715, 2)
2019-09-04T06:31:56.695+0000 D5 QUERY [conn49] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:31:56.695+0000 D5 QUERY [conn49] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:31:56.695+0000 D5 QUERY [conn49] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:31:56.695+0000 D5 QUERY [conn49] Rated tree: $and
2019-09-04T06:31:56.695+0000 D5 QUERY [conn49] Planner: outputted 0 indexed solutions.
2019-09-04T06:31:56.695+0000 D5 QUERY [conn49] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:31:56.695+0000 D2 QUERY [conn49] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:31:56.695+0000 D3 STORAGE [conn49] WT begin_transaction for snapshot id 13196
2019-09-04T06:31:56.695+0000 D3 STORAGE [conn49] WT rollback_transaction for snapshot id 13196
2019-09-04T06:31:56.695+0000 I COMMAND [conn49] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578715, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 7F24EBE9226A904172D61E8007D74BEDB704BEDC), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578715, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:31:56.706+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:56.706+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:56.706+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578715, 2)
2019-09-04T06:31:56.706+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13198
2019-09-04T06:31:56.706+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:56.706+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:56.706+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13198
2019-09-04T06:31:56.707+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13201
2019-09-04T06:31:56.707+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13201
2019-09-04T06:31:56.707+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578715, 2), t: 1 }({ ts: Timestamp(1567578715, 2), t: 1 })
2019-09-04T06:31:56.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.725+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:56.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.823+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.823+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.825+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:56.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:56.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 889) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:56.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 889 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:06.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:56.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:24.839+0000
2019-09-04T06:31:56.838+0000 D2 ASIO [Replication] Request 889 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), opTime: { ts: Timestamp(1567578715, 2), t: 1 }, wallTime: new Date(1567578715704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) }
2019-09-04T06:31:56.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), opTime: { ts: Timestamp(1567578715, 2), t: 1 }, wallTime: new Date(1567578715704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:56.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:56.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 889) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), opTime: { ts: Timestamp(1567578715, 2), t: 1 }, wallTime: new Date(1567578715704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) }
2019-09-04T06:31:56.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:31:56.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:31:58.838Z
2019-09-04T06:31:56.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:24.839+0000
2019-09-04T06:31:56.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:56.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 890) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:56.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 890 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:06.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:56.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:24.839+0000
2019-09-04T06:31:56.839+0000 D2 ASIO [Replication] Request 890 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), opTime: { ts: Timestamp(1567578715, 2), t: 1 }, wallTime: new Date(1567578715704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) }
2019-09-04T06:31:56.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), opTime: { ts: Timestamp(1567578715, 2), t: 1 }, wallTime: new Date(1567578715704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:31:56.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:56.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 890) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), opTime: { ts: Timestamp(1567578715, 2), t: 1 }, wallTime: new Date(1567578715704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) }
2019-09-04T06:31:56.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:31:56.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:32:06.273+0000
2019-09-04T06:31:56.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:32:07.598+0000
2019-09-04T06:31:56.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:31:56.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:31:58.839Z
2019-09-04T06:31:56.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:56.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:26.839+0000
2019-09-04T06:31:56.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:26.839+0000
2019-09-04T06:31:56.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:56.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:56.925+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:57.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:57.025+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:57.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 7F24EBE9226A904172D61E8007D74BEDB704BEDC), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:57.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:31:57.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 7F24EBE9226A904172D61E8007D74BEDB704BEDC), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:57.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 7F24EBE9226A904172D61E8007D74BEDB704BEDC), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:57.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), opTime: { ts: Timestamp(1567578715, 2), t: 1 }, wallTime: new Date(1567578715704) }
2019-09-04T06:31:57.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 7F24EBE9226A904172D61E8007D74BEDB704BEDC), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.125+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:57.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.225+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:57.234+0000 D2 COMMAND [conn160] run command admin.$cmd { ping: 1, $db: "admin" }
2019-09-04T06:31:57.234+0000 I COMMAND [conn160] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:57.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:57.244+0000 D2 COMMAND [conn220] run command admin.$cmd { ping: 1.0, lsid: { id: UUID("23af97f8-66f0-4a27-b5f1-59167651ca5f") }, $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" }
2019-09-04T06:31:57.244+0000 I COMMAND [conn220] command admin.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("23af97f8-66f0-4a27-b5f1-59167651ca5f") }, $clusterTime: { clusterTime: Timestamp(1567578655, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.325+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:57.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.425+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:57.526+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:57.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.626+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:57.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.707+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:57.707+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:57.707+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578715, 2)
2019-09-04T06:31:57.707+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13230
2019-09-04T06:31:57.707+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:57.707+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:57.707+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13230
2019-09-04T06:31:57.708+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13233
2019-09-04T06:31:57.708+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13233
2019-09-04T06:31:57.708+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578715, 2), t: 1 }({ ts: Timestamp(1567578715, 2), t: 1 })
2019-09-04T06:31:57.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.726+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:57.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.826+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:57.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:57.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:57.926+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:58.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:58.026+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:58.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:58.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:58.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.126+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:58.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:58.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:58.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:58.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:58.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:58.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.226+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:58.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 321BAEE5AA37228F1CDE1F430DF7B9EBA049FF07), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:58.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:31:58.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 321BAEE5AA37228F1CDE1F430DF7B9EBA049FF07), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:58.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 321BAEE5AA37228F1CDE1F430DF7B9EBA049FF07), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:58.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), opTime: { ts: Timestamp(1567578715, 2), t: 1 }, wallTime: new Date(1567578715704) }
2019-09-04T06:31:58.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 321BAEE5AA37228F1CDE1F430DF7B9EBA049FF07), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:58.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:58.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:58.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.327+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:58.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:58.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.427+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:58.527+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:58.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:58.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:58.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.627+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:58.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:58.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:58.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:58.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.707+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:58.707+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:58.707+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578715, 2)
2019-09-04T06:31:58.707+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13256
2019-09-04T06:31:58.707+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:58.707+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:58.707+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13256
2019-09-04T06:31:58.708+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13259
2019-09-04T06:31:58.708+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13259
2019-09-04T06:31:58.708+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578715, 2), t: 1 }({ ts: Timestamp(1567578715, 2), t: 1 })
2019-09-04T06:31:58.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:58.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:58.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.727+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:58.827+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:58.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:58.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 891) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:58.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 891 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:08.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:58.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:26.839+0000
2019-09-04T06:31:58.838+0000 D2 ASIO [Replication] Request 891 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), opTime: { ts: Timestamp(1567578715, 2), t: 1 }, wallTime: new Date(1567578715704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) }
2019-09-04T06:31:58.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), opTime: { ts: Timestamp(1567578715, 2), t: 1 }, wallTime: new Date(1567578715704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:31:58.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:58.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 891) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), opTime: { ts: Timestamp(1567578715, 2), t: 1 }, wallTime: new Date(1567578715704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) }
2019-09-04T06:31:58.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:31:58.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:00.838Z
2019-09-04T06:31:58.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:26.839+0000
2019-09-04T06:31:58.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:31:58.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 892) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:58.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 892 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:08.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:31:58.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:26.839+0000
2019-09-04T06:31:58.839+0000 D2 ASIO [Replication] Request 892 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), opTime: { ts: Timestamp(1567578715, 2), t: 1 }, wallTime: new Date(1567578715704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) }
2019-09-04T06:31:58.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), opTime: { ts: Timestamp(1567578715, 2), t: 1 }, wallTime: new Date(1567578715704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:31:58.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:58.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 892) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), opTime: { ts: Timestamp(1567578715, 2), t: 1 }, wallTime: new Date(1567578715704), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578715, 2) }
2019-09-04T06:31:58.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:31:58.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:32:07.598+0000
2019-09-04T06:31:58.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:32:09.899+0000
2019-09-04T06:31:58.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:31:58.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:00.839Z
2019-09-04T06:31:58.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:58.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:28.839+0000
2019-09-04T06:31:58.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:28.839+0000
2019-09-04T06:31:58.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:58.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:58.928+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:59.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:31:59.028+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:59.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:59.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:59.063+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:31:59.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:31:58.839+0000
2019-09-04T06:31:59.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:31:58.838+0000
2019-09-04T06:31:59.063+0000 D3 REPL [replexec-3] stalest member MemberId(2) date: 2019-09-04T06:31:58.838+0000
2019-09-04T06:31:59.063+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:32:08.838+0000
2019-09-04T06:31:59.063+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:28.839+0000
2019-09-04T06:31:59.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 321BAEE5AA37228F1CDE1F430DF7B9EBA049FF07), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:59.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:31:59.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 321BAEE5AA37228F1CDE1F430DF7B9EBA049FF07), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:59.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 321BAEE5AA37228F1CDE1F430DF7B9EBA049FF07), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:31:59.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), opTime: { ts: Timestamp(1567578715, 2), t: 1 }, wallTime: new Date(1567578715704) }
2019-09-04T06:31:59.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 321BAEE5AA37228F1CDE1F430DF7B9EBA049FF07), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:59.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:59.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:59.128+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:59.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:59.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:59.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:59.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:59.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:59.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:59.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:59.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:59.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:59.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:59.228+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:59.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:31:59.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:31:59.328+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:59.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:59.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:59.428+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:59.528+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:59.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:59.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:59.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:59.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:59.629+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:59.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:59.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:59.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:59.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:59.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:59.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:59.707+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:31:59.707+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:31:59.707+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578715, 2)
2019-09-04T06:31:59.707+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13282
2019-09-04T06:31:59.707+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:31:59.707+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:31:59.707+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13282
2019-09-04T06:31:59.708+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13285
2019-09-04T06:31:59.708+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13285
2019-09-04T06:31:59.708+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578715, 2), t: 1 }({ ts: Timestamp(1567578715, 2), t: 1 })
2019-09-04T06:31:59.717+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:59.717+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:59.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:59.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:59.729+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:59.829+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:31:59.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:31:59.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:31:59.929+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:00.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:00.000+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:32:00.000+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:32:00.005+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 4ms
2019-09-04T06:32:00.017+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:32:00.018+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:32:00.020+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:32:00.020+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:32:00.020+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456
2019-09-04T06:32:00.020+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:32:00.032+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:00.032+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:32:00.033+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:00.035+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:32:00.035+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:32:00.035+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{}
protocol:op_query 0ms
2019-09-04T06:32:00.045+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:32:00.045+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:00.045+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:00.045+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:32:00.045+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:32:00.045+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:32:00.045+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:32:00.045+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:32:00.045+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:32:00.045+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:00.045+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:00.045+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:00.045+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578715, 2)
2019-09-04T06:32:00.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13296
2019-09-04T06:32:00.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13296
2019-09-04T06:32:00.045+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:00.046+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:32:00.046+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:32:00.046+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:32:00.046+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:00.046+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:32:00.046+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:32:00.046+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:32:00.046+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578715, 2)
2019-09-04T06:32:00.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13299
2019-09-04T06:32:00.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13299
2019-09-04T06:32:00.046+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:00.046+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:32:00.046+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:00.046+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:32:00.046+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:32:00.046+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:32:00.046+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578715, 2)
2019-09-04T06:32:00.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13301
2019-09-04T06:32:00.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13301
2019-09-04T06:32:00.046+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:00.046+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:32:00.047+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist.
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:32:00.047+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:32:00.047+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:00.047+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13304 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13304 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13305 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13305 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13306 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13306 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13307 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13307 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13308 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13308 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13309 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
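
The half-second isMaster cadence on conn22/31/51/52/58/59/60/75 above is routine topology polling from connected clients' connection pools. The more interesting traffic is conn90 (client 10.108.2.33): it authenticates as dba_root via SCRAM-SHA-1 (the saslStart/saslContinue exchange), then sweeps serverStatus, replSetGetStatus, a jumbo-chunk count, shardConnPoolStats, and oplog probes, a sequence consistent with a monitoring agent. A minimal pymongo sketch of that sweep, assuming placeholder credentials (only the dba_root user, the configrs port 27019, and a member hostname are taken from the log):

    # Sketch only, not tooling from this deployment: replays the probe
    # sequence visible on conn90. cmodb804.togewa.com is another configrs
    # member seen in the log; the password is a placeholder.
    from pymongo import MongoClient

    client = MongoClient(
        host="cmodb804.togewa.com",
        port=27019,
        username="dba_root",
        password="CHANGEME",            # placeholder, not in the log
        authSource="admin",
        authMechanism="SCRAM-SHA-1",    # driver issues saslStart/saslContinue
        readPreference="secondaryPreferred",
    )

    admin = client.admin
    admin.command("isMaster")                          # same topology check the pooled conns repeat
    status = admin.command("serverStatus", recordStats=0)
    rs_status = admin.command("replSetGetStatus")
    jumbo = client.config.command("count", "chunks", query={"jumbo": True})
    print(status["host"], rs_status["set"], jumbo["n"])

The count ran above as a COLLSCAN with docsExamined:1: with a single chunk document in config.chunks that is harmless, but it is why the planner reported "outputted 0 indexed solutions".
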
2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13309 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13310 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13310 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13311 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13311 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13312 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13312 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13313 2019-09-04T06:32:00.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13313 
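
The burst of D3 STORAGE "looking up metadata" / "fetched CCE metadata" pairs running through this stretch is one listDatabases command, not many: at this verbosity, sizing each database means visiting every collection entry in the catalog, so admin, config, and local are walked collection by collection. Client-side it is a single call; a sketch, with the placeholder credentials used above:

    from pymongo import MongoClient

    # Placeholder credentials as in the previous sketch.
    client = MongoClient("mongodb://dba_root:CHANGEME@cmodb804.togewa.com:27019/admin")
    for d in client.admin.command("listDatabases")["databases"]:
        print(d["name"], d["sizeOnDisk"], d["empty"])

The I COMMAND summary near the end of this section confirms the whole walk was one command: listDatabases with reslen:459 and a Collection lock acquireCount of r: 21, completing in 1ms.
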
2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13314 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
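
The config.chunks catalog entry just above lists the collection's four indexes (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1, _id_); none of them includes the jumbo field, which is exactly why the earlier count fell back to a collection scan. The same check from a client, as a sketch with the placeholder credentials used above:

    from pymongo import MongoClient

    # Placeholder credentials as in the previous sketches.
    client = MongoClient("mongodb://dba_root:CHANGEME@cmodb804.togewa.com:27019/admin")
    for ix in client.config.chunks.list_indexes():
        print(ix["name"], dict(ix["key"]))
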
2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13314 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13315 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13315 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13316 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13316 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13317 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13317 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13318 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
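
Just below, the walk finishes config.tags and crosses into the local database: replset.election, system.rollback.id, startup_log, system.replset, oplog.rs, replset.minvalid, and replset.oplogTruncateAfterPoint. The oplog.rs entry is worth pausing on: capped, size 1073741824 (1 GiB), and indexes: []. That empty index list is why the first/last oplog probes earlier logged "Forcing a table scan due to hinted $natural". Those paired $natural finds are the usual way to bracket the oplog window; a sketch with the placeholder credentials used above:

    from pymongo import MongoClient

    # Placeholder credentials as in the previous sketches.
    client = MongoClient("mongodb://dba_root:CHANGEME@cmodb804.togewa.com:27019/admin")
    oplog = client.local["oplog.rs"]
    first = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
    last = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])
    # bson.Timestamp stores seconds in .time; the difference is the window.
    print("oplog window:", last["ts"].time - first["ts"].time, "seconds")

The [RS] batch at the end of this section is the other side of the same collection: the sync source streaming config.lockpings updates into this node's oplog fetcher.
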
2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13318 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13319 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13319 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13320 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13320 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13321 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13321 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13322 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 13322 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13323 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13323 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13324 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13324 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13325 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:32:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13325 2019-09-04T06:32:00.048+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:32:00.049+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13327 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13327 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13328 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13328 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13329 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13329 2019-09-04T06:32:00.049+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:00.049+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13331 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13331 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13332 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13332 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13333 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13333 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13334 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13334 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13335 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13335 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13336 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13336 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13337 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13337 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 13338 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13338 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13339 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13339 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13340 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13340 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13341 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13341 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13342 2019-09-04T06:32:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13342 2019-09-04T06:32:00.049+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:00.049+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:32:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13344 2019-09-04T06:32:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13344 2019-09-04T06:32:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13345 2019-09-04T06:32:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13345 2019-09-04T06:32:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13346 2019-09-04T06:32:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13346 2019-09-04T06:32:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13347 2019-09-04T06:32:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13347 2019-09-04T06:32:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13348 2019-09-04T06:32:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13348 2019-09-04T06:32:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13349 2019-09-04T06:32:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13349 2019-09-04T06:32:00.050+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:00.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:00.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.067+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:00.068+0000 I 
COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.133+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:00.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:00.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:00.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.186+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:00.186+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.217+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:00.217+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:00.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 321BAEE5AA37228F1CDE1F430DF7B9EBA049FF07), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:00.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:00.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 321BAEE5AA37228F1CDE1F430DF7B9EBA049FF07), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:00.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 321BAEE5AA37228F1CDE1F430DF7B9EBA049FF07), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:00.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), opTime: { ts: Timestamp(1567578715, 2), t: 1 }, wallTime: new Date(1567578715704) } 2019-09-04T06:32:00.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578717, 1), signature: { hash: BinData(0, 
321BAEE5AA37228F1CDE1F430DF7B9EBA049FF07), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.233+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:00.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:00.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:32:00.288+0000 D2 ASIO [RS] Request 888 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578720, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578720278), o: { $v: 1, $set: { ping: new Date(1567578720278) } } }, { ts: Timestamp(1567578720, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578720278), o: { $v: 1, $set: { ping: new Date(1567578720278) } } }, { ts: Timestamp(1567578720, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578720278), o: { $v: 1, $set: { ping: new Date(1567578720277) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpApplied: { ts: Timestamp(1567578720, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } 2019-09-04T06:32:00.288+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578720, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578720278), o: { $v: 1, $set: { ping: new Date(1567578720278) } } }, { ts: Timestamp(1567578720, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578720278), o: { $v: 1, $set: { ping: new Date(1567578720278) } } }, { ts: Timestamp(1567578720, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578720278), o: { $v: 1, $set: { ping: new Date(1567578720277) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 
2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpApplied: { ts: Timestamp(1567578720, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:00.288+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:00.288+0000 D2 REPL [replication-0] oplog fetcher read 3 operations from remote oplog starting at ts: Timestamp(1567578720, 1) and ending at ts: Timestamp(1567578720, 3) 2019-09-04T06:32:00.288+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:09.899+0000 2019-09-04T06:32:00.288+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:11.001+0000 2019-09-04T06:32:00.288+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:00.288+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:28.839+0000 2019-09-04T06:32:00.288+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:00.288+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:00.288+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578715, 2) 2019-09-04T06:32:00.288+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13361 2019-09-04T06:32:00.288+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:00.288+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:00.288+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13361 2019-09-04T06:32:00.288+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:00.288+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:00.288+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578715, 2) 2019-09-04T06:32:00.288+0000 D2 REPL [rsSync-0] replication batch size is 3 2019-09-04T06:32:00.288+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13364 2019-09-04T06:32:00.288+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:00.288+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), 
capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:00.288+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578720, 1) } 2019-09-04T06:32:00.288+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13364 2019-09-04T06:32:00.288+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578720, 3), t: 1 } 2019-09-04T06:32:00.288+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13286 2019-09-04T06:32:00.288+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13286 2019-09-04T06:32:00.288+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13367 2019-09-04T06:32:00.288+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13367 2019-09-04T06:32:00.288+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:00.288+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 13369 2019-09-04T06:32:00.288+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578720, 1) 2019-09-04T06:32:00.288+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578720, 1) 2019-09-04T06:32:00.288+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578720, 2) 2019-09-04T06:32:00.288+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578720, 2) 2019-09-04T06:32:00.288+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578720, 3) 2019-09-04T06:32:00.288+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578720, 3) 2019-09-04T06:32:00.288+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 13369 2019-09-04T06:32:00.288+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:00.289+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:32:00.289+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13368 2019-09-04T06:32:00.289+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13368 2019-09-04T06:32:00.289+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13371 2019-09-04T06:32:00.289+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13371 2019-09-04T06:32:00.289+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578720, 3), t: 1 }({ ts: Timestamp(1567578720, 3), t: 1 }) 2019-09-04T06:32:00.289+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578720, 3) 2019-09-04T06:32:00.289+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13372 2019-09-04T06:32:00.289+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578720, 3) } } ] } sort: {} projection: {} 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578720, 3) Sort: {} Proj: {} ============================= 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578720, 3) || First: notFirst: full path: ts 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578720, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578720, 3) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578720, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
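The planner's result above is expected: local.replset.minvalid carries only its default _id index (listed as "Index 0" in the planner output), so neither the predicate on t nor the one on ts can use an index, and each subquery falls back to the collection scan logged in the next entry. As a minimal sketch, assuming a mongo shell connected directly to this member, the same plan choice could be reproduced with explain(); the filter is copied from the canonical query shown above, and the session itself is hypothetical:

    // Hypothetical shell session; the winning plan should be a COLLSCAN because
    // local.replset.minvalid has no index covering the fields t or ts.
    var localDb = db.getSiblingDB("local");
    localDb.getCollection("replset.minvalid").find(
      { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578720, 3) } } ] }
    ).explain("queryPlanner");

Since replset.minvalid normally holds a single document, the scan is trivially cheap, which is why these D5 QUERY entries can appear on every applied batch without any slow-operation warning.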
2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578720, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:00.289+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13372 2019-09-04T06:32:00.289+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:00.289+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:00.289+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:00.289+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:00.289+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578720, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578720278), o: { $v: 1, $set: { ping: new Date(1567578720278) } } }, oplog application mode: Secondary 2019-09-04T06:32:00.289+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578720, 2) 2019-09-04T06:32:00.289+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:00.289+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 13374 2019-09-04T06:32:00.289+0000 D3 STORAGE [repl-writer-worker-6] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:00.289+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" } 2019-09-04T06:32:00.289+0000 D3 REPL [repl-writer-worker-6] applying op: { ts: Timestamp(1567578720, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578720278), o: { $v: 1, $set: { ping: new Date(1567578720278) } } }, oplog application mode: Secondary 2019-09-04T06:32:00.289+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578720, 1) 2019-09-04T06:32:00.289+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 13376 2019-09-04T06:32:00.289+0000 D2 QUERY [repl-writer-worker-6] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" } 2019-09-04T06:32:00.289+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:32:00.289+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 13374 2019-09-04T06:32:00.289+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:00.289+0000 D4 WRITE [repl-writer-worker-6] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:32:00.289+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 13376 2019-09-04T06:32:00.289+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:00.289+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578720, 3), t: 1, 
h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578720278), o: { $v: 1, $set: { ping: new Date(1567578720277) } } }, oplog application mode: Secondary 2019-09-04T06:32:00.289+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578720, 3) 2019-09-04T06:32:00.289+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 13375 2019-09-04T06:32:00.289+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" } 2019-09-04T06:32:00.289+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:32:00.289+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 13375 2019-09-04T06:32:00.289+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:00.289+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578720, 3), t: 1 }({ ts: Timestamp(1567578720, 3), t: 1 }) 2019-09-04T06:32:00.289+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578720, 3) 2019-09-04T06:32:00.289+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13373 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:00.289+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:00.289+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:00.289+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13373 2019-09-04T06:32:00.289+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578720, 3) 2019-09-04T06:32:00.289+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13381 2019-09-04T06:32:00.289+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13381 2019-09-04T06:32:00.289+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578720, 3), t: 1 }({ ts: Timestamp(1567578720, 3), t: 1 }) 2019-09-04T06:32:00.289+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:00.289+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), appliedOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, appliedWallTime: new Date(1567578715704), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), appliedOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, appliedWallTime: new Date(1567578720278), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), appliedOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, appliedWallTime: new Date(1567578715704), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:00.290+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 893 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:30.289+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), appliedOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, appliedWallTime: new Date(1567578715704), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), appliedOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, appliedWallTime: new Date(1567578720278), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), appliedOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, appliedWallTime: new Date(1567578715704), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:00.290+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:30.289+0000 2019-09-04T06:32:00.290+0000 D2 ASIO [RS] Request 893 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } 2019-09-04T06:32:00.290+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578715, 2), t: 1 }, lastCommittedWall: new Date(1567578715704), lastOpVisible: { ts: Timestamp(1567578715, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578715, 2), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:00.290+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:00.290+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:30.290+0000 2019-09-04T06:32:00.290+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578720, 3), t: 1 } 2019-09-04T06:32:00.290+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 894 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:10.290+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578715, 2), t: 1 } } 2019-09-04T06:32:00.290+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:30.290+0000 2019-09-04T06:32:00.291+0000 D2 ASIO [RS] Request 894 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpApplied: { ts: Timestamp(1567578720, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } 2019-09-04T06:32:00.291+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new 
Date(1567578720278), lastOpApplied: { ts: Timestamp(1567578720, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:00.291+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:00.291+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:00.291+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578715, 3) 2019-09-04T06:32:00.291+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:11.001+0000 2019-09-04T06:32:00.291+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:11.667+0000 2019-09-04T06:32:00.291+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 895 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:10.291+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578720, 3), t: 1 } } 2019-09-04T06:32:00.291+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000 2019-09-04T06:32:00.291+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:30.290+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn309] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn309] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn297] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn297] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.925+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn300] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn300] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn314] 
Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000 2019-09-04T06:32:00.291+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:00.291+0000 D3 REPL [conn289] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn289] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.280+0000 2019-09-04T06:32:00.291+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:28.839+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn277] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn277] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.324+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn295] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn295] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.753+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn296] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn296] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.763+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn298] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn298] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.986+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn302] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn302] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.891+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn280] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn280] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:01.310+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn288] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn288] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.289+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn307] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn307] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.417+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn282] Got notified of new snapshot: { ts: Timestamp(1567578720, 
3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn282] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.424+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn290] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn290] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.274+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn293] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn293] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn308] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.291+0000 D3 REPL [conn308] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000 2019-09-04T06:32:00.292+0000 D3 REPL [conn294] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.292+0000 D3 REPL [conn294] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:00.468+0000 2019-09-04T06:32:00.292+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.292+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000 2019-09-04T06:32:00.292+0000 D3 REPL [conn284] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.292+0000 D3 REPL [conn284] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.440+0000 2019-09-04T06:32:00.292+0000 D3 REPL [conn303] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.292+0000 D3 REPL [conn303] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:13.600+0000 2019-09-04T06:32:00.292+0000 D3 REPL [conn278] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.292+0000 D3 REPL [conn278] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.693+0000 2019-09-04T06:32:00.292+0000 D3 REPL [conn276] Got notified of new snapshot: { ts: Timestamp(1567578720, 3), t: 1 }, 2019-09-04T06:32:00.278+0000 2019-09-04T06:32:00.292+0000 D3 REPL [conn276] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.329+0000 2019-09-04T06:32:00.292+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:00.292+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:00.292+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), appliedOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, appliedWallTime: new Date(1567578715704), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), appliedOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, appliedWallTime: new Date(1567578720278), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), appliedOpTime: { ts: 
Timestamp(1567578715, 2), t: 1 }, appliedWallTime: new Date(1567578715704), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:00.292+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 896 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:30.292+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), appliedOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, appliedWallTime: new Date(1567578715704), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), appliedOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, appliedWallTime: new Date(1567578720278), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, durableWallTime: new Date(1567578715704), appliedOpTime: { ts: Timestamp(1567578715, 2), t: 1 }, appliedWallTime: new Date(1567578715704), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:00.293+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:30.290+0000 2019-09-04T06:32:00.293+0000 D2 ASIO [RS] Request 896 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } 2019-09-04T06:32:00.293+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:00.293+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:00.293+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:30.290+0000 2019-09-04T06:32:00.333+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:00.353+0000 D2 COMMAND [conn22] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:00.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.388+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578720, 3) 2019-09-04T06:32:00.433+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:00.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:00.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.468+0000 I COMMAND [conn294] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578681, 1), signature: { hash: BinData(0, DBCE5483F7C812CC5318B2426FFE1910EB037402), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:00.468+0000 D1 - [conn294] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:00.468+0000 W - [conn294] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:00.485+0000 I - [conn294] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:00.485+0000 D1 COMMAND [conn294] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578681, 1), signature: { hash: BinData(0, DBCE5483F7C812CC5318B2426FFE1910EB037402), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:00.485+0000 D1 - [conn294] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:00.485+0000 W - [conn294] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:00.505+0000 I - [conn294] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 
0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { 
"b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : 
"/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:00.505+0000 W COMMAND [conn294] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:00.505+0000 I COMMAND [conn294] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578681, 1), signature: { hash: BinData(0, DBCE5483F7C812CC5318B2426FFE1910EB037402), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:32:00.505+0000 D2 NETWORK [conn294] Session from 10.108.2.55:36692 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:00.505+0000 I NETWORK [conn294] end connection 10.108.2.55:36692 (89 connections now open) 2019-09-04T06:32:00.533+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:00.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:00.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.567+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:00.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.633+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:00.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:00.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:00.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.686+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:00.686+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:00.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.733+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:00.753+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50170 #316 (90 connections now open) 2019-09-04T06:32:00.753+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:00.753+0000 D2 COMMAND [conn316] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:00.753+0000 I NETWORK [conn316] received client metadata from 10.108.2.50:50170 conn316: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 
3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:00.753+0000 I COMMAND [conn295] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578682, 1), signature: { hash: BinData(0, 1618860F770E3C611DC63E4BA94A19A92BD9155D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:00.754+0000 I COMMAND [conn316] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:00.754+0000 D1 - [conn295] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:00.754+0000 W - [conn295] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:00.764+0000 I COMMAND [conn296] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578689, 1), signature: { hash: BinData(0, D15269915E9E5FCA90C0F56D93441522EC0BAE20), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:00.764+0000 D1 - [conn296] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:00.764+0000 W - [conn296] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:00.770+0000 I - [conn295] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:00.770+0000 D1 COMMAND [conn295] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578682, 1), signature: { hash: BinData(0, 1618860F770E3C611DC63E4BA94A19A92BD9155D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:00.770+0000 D1 - [conn295] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:00.770+0000 W - [conn295] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:00.787+0000 I - [conn296] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:00.787+0000 D1 COMMAND [conn296] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578689, 1), signature: { hash: BinData(0, D15269915E9E5FCA90C0F56D93441522EC0BAE20), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:00.787+0000 D1 - [conn296] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:00.787+0000 W - [conn296] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:00.807+0000 I - [conn295] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 
2019-09-04T06:32:00.807+0000 I - [conn295] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
[ duplicate backtrace omitted: JSON frames and shared-library map identical to the lock wait stack shown above for conn294 ]
"b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : 
"/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:00.807+0000 W COMMAND [conn295] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:00.807+0000 I COMMAND [conn295] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578682, 1), signature: { hash: BinData(0, 1618860F770E3C611DC63E4BA94A19A92BD9155D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:32:00.807+0000 D2 NETWORK [conn295] Session from 10.108.2.52:47218 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:00.807+0000 I NETWORK [conn295] end connection 10.108.2.52:47218 (89 connections now open) 2019-09-04T06:32:00.827+0000 I - [conn296] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6Stat
usE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, 
"buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:00.827+0000 W COMMAND [conn296] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:00.827+0000 I COMMAND [conn296] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578689, 1), signature: { hash: BinData(0, D15269915E9E5FCA90C0F56D93441522EC0BAE20), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30034ms 2019-09-04T06:32:00.828+0000 D2 NETWORK [conn296] Session from 10.108.2.50:50154 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:00.828+0000 I NETWORK [conn296] end connection 10.108.2.50:50154 (88 connections now open) 2019-09-04T06:32:00.833+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:00.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:00.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 897) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:00.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 897 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:10.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:00.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:28.839+0000 2019-09-04T06:32:00.838+0000 D2 ASIO [Replication] Request 897 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), opTime: { ts: Timestamp(1567578720, 3), t: 1 }, wallTime: new Date(1567578720278), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: 
Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } 2019-09-04T06:32:00.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), opTime: { ts: Timestamp(1567578720, 3), t: 1 }, wallTime: new Date(1567578720278), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:00.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:00.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 897) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), opTime: { ts: Timestamp(1567578720, 3), t: 1 }, wallTime: new Date(1567578720278), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } 2019-09-04T06:32:00.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:00.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:02.838Z 2019-09-04T06:32:00.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:28.839+0000 2019-09-04T06:32:00.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:00.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 898) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:00.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 898 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:10.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:00.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:28.839+0000 
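The conn296 record above is the recurring failure on this node: a find on config.shards with readConcern { level: "majority", afterOpTime: ... } and maxTimeMS: 30000 spends its whole 30 s budget waiting for the read concern to be satisfied and fails with errName:MaxTimeMSExpired errCode:50 (the same pattern repeats below for conn297 and conn298). A minimal pymongo sketch of the user-visible part of that command follows; the hostname and port are taken from the log, while the connection options and the bare majority read concern are assumptions -- the afterOpTime, $replData and $configServerState fields in the logged command are internal fields sent by other cluster members, not something a normal client supplies.

    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout

    # Direct connection to the member that wrote this log (hostname/port from
    # the log; the URI options are assumptions for illustration).
    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")

    try:
        # Same user-visible shape as the logged command: a majority read on
        # config.shards with a 30 s server-side time budget.
        reply = client.config.command({
            "find": "shards",
            "readConcern": {"level": "majority"},
            "maxTimeMS": 30000,
        })
        print(reply["cursor"]["firstBatch"])
    except ExecutionTimeout:
        # pymongo raises ExecutionTimeout for server error code 50, matching
        # the ok:0 errName:MaxTimeMSExpired errCode:50 records in this log.
        print("operation exceeded time limit")
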
2019-09-04T06:32:00.839+0000 D2 ASIO [Replication] Request 898 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), opTime: { ts: Timestamp(1567578720, 3), t: 1 }, wallTime: new Date(1567578720278), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } 2019-09-04T06:32:00.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), opTime: { ts: Timestamp(1567578720, 3), t: 1 }, wallTime: new Date(1567578720278), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:00.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:00.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 898) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), opTime: { ts: Timestamp(1567578720, 3), t: 1 }, wallTime: new Date(1567578720278), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } 2019-09-04T06:32:00.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:00.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:32:11.667+0000 2019-09-04T06:32:00.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:32:11.101+0000 
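The ELECTION records that close the line above show the normal heartbeat-driven suppression of elections: the response to request 898 from cmodb802.togewa.com:27019 carries state: 1 (PRIMARY), so this secondary cancels its pending election timeout callback (06:32:11.667) and schedules a fresh one at 06:32:11.101. That is 10.262 s after the heartbeat at 06:32:00.839, consistent with the default electionTimeoutMillis of 10000 ms plus a small randomized offset (the offset bound, 15% of the timeout by default, is an assumption from the server's electionTimeoutOffsetLimitFraction parameter, not something this log states). A sketch for reading the configured value, under the same connection assumptions as above:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")

    # replSetGetConfig returns { config: {...}, ok: 1 }; the election timeout
    # lives under settings (10000 ms by default).
    conf = client.admin.command("replSetGetConfig")["config"]
    print(conf["settings"]["electionTimeoutMillis"])

As long as heartbeats from the primary keep arriving inside that window, the callback keeps being pushed forward and no election is called.
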
2019-09-04T06:32:00.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:00.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:02.839Z 2019-09-04T06:32:00.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:00.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:30.839+0000 2019-09-04T06:32:00.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:30.839+0000 2019-09-04T06:32:00.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:00.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.915+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52208 #317 (89 connections now open) 2019-09-04T06:32:00.915+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:00.915+0000 D2 COMMAND [conn317] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:00.915+0000 I NETWORK [conn317] received client metadata from 10.108.2.73:52208 conn317: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:00.915+0000 I COMMAND [conn317] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:00.926+0000 I COMMAND [conn297] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578682, 1), signature: { hash: BinData(0, 1618860F770E3C611DC63E4BA94A19A92BD9155D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:32:00.926+0000 D1 - [conn297] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:00.926+0000 W - [conn297] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:00.934+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:00.943+0000 I - [conn297] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", 
"gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { 
"b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:00.943+0000 D1 COMMAND [conn297] assertion while executing command 'find' on database 
'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578682, 1), signature: { hash: BinData(0, 1618860F770E3C611DC63E4BA94A19A92BD9155D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:00.943+0000 D1 - [conn297] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:00.943+0000 W - [conn297] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:00.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:00.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:00.964+0000 I - [conn297] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_
23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : 
"7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) 
[0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:00.964+0000 W COMMAND [conn297] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:00.964+0000 I COMMAND [conn297] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578682, 1), signature: { hash: BinData(0, 1618860F770E3C611DC63E4BA94A19A92BD9155D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:32:00.964+0000 D2 NETWORK [conn297] Session from 10.108.2.73:52192 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:00.964+0000 I NETWORK [conn297] end connection 10.108.2.73:52192 (88 connections now open) 2019-09-04T06:32:00.987+0000 I COMMAND [conn298] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 1), signature: { hash: BinData(0, AAAAD9086BE97960C6C704E487600A0D6823C425), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:32:00.987+0000 D1 - [conn298] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:00.987+0000 W - [conn298] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:01.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:01.005+0000 I - [conn298] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:01.005+0000 D1 COMMAND [conn298] 
assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 1), signature: { hash: BinData(0, AAAAD9086BE97960C6C704E487600A0D6823C425), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:01.005+0000 D1 - [conn298] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:01.005+0000 W - [conn298] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:01.025+0000 I - [conn298] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskName
ENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : 
"DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) 
[0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:01.025+0000 W COMMAND [conn298] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:32:01.025+0000 I COMMAND [conn298] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578685, 1), signature: { hash: BinData(0, AAAAD9086BE97960C6C704E487600A0D6823C425), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:32:01.025+0000 D2 NETWORK [conn298] Session from 10.108.2.46:41026 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:01.025+0000 I NETWORK [conn298] end connection 10.108.2.46:41026 (87 connections now open) 2019-09-04T06:32:01.034+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:01.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:01.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:01.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, FCA69D01442EEC3D61215D0948FAD0338AD6FE0C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:01.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:01.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, FCA69D01442EEC3D61215D0948FAD0338AD6FE0C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:01.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, FCA69D01442EEC3D61215D0948FAD0338AD6FE0C), 
keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:01.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), opTime: { ts: Timestamp(1567578720, 3), t: 1 }, wallTime: new Date(1567578720278) } 2019-09-04T06:32:01.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, FCA69D01442EEC3D61215D0948FAD0338AD6FE0C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:01.134+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:01.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:01.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:01.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:01.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:01.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:01.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:01.234+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:01.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:01.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:01.288+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:01.288+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:01.288+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578720, 3) 2019-09-04T06:32:01.289+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13403 2019-09-04T06:32:01.289+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:01.289+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:01.289+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13403 2019-09-04T06:32:01.290+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13406 2019-09-04T06:32:01.290+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13406 2019-09-04T06:32:01.290+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578720, 3), t: 1 }({ ts: Timestamp(1567578720, 3), t: 1 }) 2019-09-04T06:32:01.311+0000 I COMMAND [conn280] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578682, 1), signature: { hash: BinData(0, 1618860F770E3C611DC63E4BA94A19A92BD9155D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:01.312+0000 D1 - [conn280] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:01.312+0000 W - [conn280] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:01.329+0000 I - [conn280] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:01.329+0000 D1 COMMAND [conn280] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578682, 1), signature: { hash: BinData(0, 1618860F770E3C611DC63E4BA94A19A92BD9155D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:01.329+0000 D1 - [conn280] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:01.329+0000 W - [conn280] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:01.334+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:01.350+0000 I - [conn280] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 
0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", 
"elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : 
"2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:01.350+0000 W COMMAND [conn280] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:01.350+0000 I COMMAND [conn280] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578682, 1), signature: { hash: BinData(0, 1618860F770E3C611DC63E4BA94A19A92BD9155D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:32:01.350+0000 D2 NETWORK [conn280] Session from 10.108.2.62:53468 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:01.350+0000 I NETWORK [conn280] end connection 10.108.2.62:53468 (86 connections now open) 2019-09-04T06:32:01.353+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:01.353+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:01.434+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:01.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:01.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:01.534+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:01.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:01.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:01.634+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:01.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:01.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:01.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:01.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:01.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:01.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:01.735+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:01.835+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:01.853+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:01.853+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:01.935+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:01.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:01.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:02.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:02.035+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:32:02.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:02.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:02.135+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:02.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:02.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:02.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:02.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:02.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:02.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:02.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, FCA69D01442EEC3D61215D0948FAD0338AD6FE0C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:02.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:02.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, FCA69D01442EEC3D61215D0948FAD0338AD6FE0C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:02.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, FCA69D01442EEC3D61215D0948FAD0338AD6FE0C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:02.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), opTime: { ts: Timestamp(1567578720, 3), t: 1 }, wallTime: new Date(1567578720278) } 2019-09-04T06:32:02.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, FCA69D01442EEC3D61215D0948FAD0338AD6FE0C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:02.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:02.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:02.235+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:02.289+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:02.289+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:02.289+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578720, 3) 2019-09-04T06:32:02.289+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13423 2019-09-04T06:32:02.289+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:02.289+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:02.289+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13423 2019-09-04T06:32:02.290+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13426 2019-09-04T06:32:02.290+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13426 2019-09-04T06:32:02.290+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578720, 3), t: 1 }({ ts: Timestamp(1567578720, 3), t: 1 }) 2019-09-04T06:32:02.335+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:02.436+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:02.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:02.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:02.536+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:02.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:02.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:02.636+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:02.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:02.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:02.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:02.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:02.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:02.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:02.736+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:02.836+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:02.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:02.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 899) to 
cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:02.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 899 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:12.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:02.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:30.839+0000 2019-09-04T06:32:02.838+0000 D2 ASIO [Replication] Request 899 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), opTime: { ts: Timestamp(1567578720, 3), t: 1 }, wallTime: new Date(1567578720278), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } 2019-09-04T06:32:02.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), opTime: { ts: Timestamp(1567578720, 3), t: 1 }, wallTime: new Date(1567578720278), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:02.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:02.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 899) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), opTime: { ts: Timestamp(1567578720, 3), t: 1 }, wallTime: new Date(1567578720278), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578720, 
3), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } 2019-09-04T06:32:02.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:02.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:04.838Z 2019-09-04T06:32:02.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:30.839+0000 2019-09-04T06:32:02.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:02.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 900) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:02.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 900 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:12.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:02.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:30.839+0000 2019-09-04T06:32:02.839+0000 D2 ASIO [Replication] Request 900 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), opTime: { ts: Timestamp(1567578720, 3), t: 1 }, wallTime: new Date(1567578720278), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } 2019-09-04T06:32:02.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), opTime: { ts: Timestamp(1567578720, 3), t: 1 }, wallTime: new Date(1567578720278), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:02.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:02.839+0000 D2 REPL_HB [replexec-0] Received response to 
heartbeat (requestId: 900) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), opTime: { ts: Timestamp(1567578720, 3), t: 1 }, wallTime: new Date(1567578720278), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578720, 3) } 2019-09-04T06:32:02.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:02.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:32:11.101+0000 2019-09-04T06:32:02.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:32:14.073+0000 2019-09-04T06:32:02.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:02.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:04.839Z 2019-09-04T06:32:02.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:02.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:32.839+0000 2019-09-04T06:32:02.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:32.839+0000 2019-09-04T06:32:02.936+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:02.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:02.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:03.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:03.036+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:03.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:03.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:03.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, FCA69D01442EEC3D61215D0948FAD0338AD6FE0C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:03.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:03.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, FCA69D01442EEC3D61215D0948FAD0338AD6FE0C), keyId: 
6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:03.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, FCA69D01442EEC3D61215D0948FAD0338AD6FE0C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:03.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), opTime: { ts: Timestamp(1567578720, 3), t: 1 }, wallTime: new Date(1567578720278) } 2019-09-04T06:32:03.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578720, 3), signature: { hash: BinData(0, FCA69D01442EEC3D61215D0948FAD0338AD6FE0C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:03.137+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:03.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:03.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:03.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:03.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:03.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:03.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:03.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:03.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:03.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:03.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:03.237+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:03.289+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:03.289+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:03.289+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578720, 3) 2019-09-04T06:32:03.289+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13442 2019-09-04T06:32:03.289+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:03.289+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:03.289+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13442 2019-09-04T06:32:03.290+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13445 2019-09-04T06:32:03.290+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13445 2019-09-04T06:32:03.290+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578720, 3), t: 1 }({ ts: Timestamp(1567578720, 3), t: 1 }) 2019-09-04T06:32:03.337+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:03.437+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:03.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:03.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:03.491+0000 D2 ASIO [RS] Request 895 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578723, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578723487), o: { $v: 1, $set: { ping: new Date(1567578723482) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpApplied: { ts: Timestamp(1567578723, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578723, 1) } 2019-09-04T06:32:03.491+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578723, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: 
"cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578723487), o: { $v: 1, $set: { ping: new Date(1567578723482) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpApplied: { ts: Timestamp(1567578723, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578723, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:03.491+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:03.491+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578723, 1) and ending at ts: Timestamp(1567578723, 1) 2019-09-04T06:32:03.491+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:14.073+0000 2019-09-04T06:32:03.491+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:14.578+0000 2019-09-04T06:32:03.491+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:03.491+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:32.839+0000 2019-09-04T06:32:03.491+0000 D2 REPL [replication-0] oplog buffer has 0 bytes 2019-09-04T06:32:03.491+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578723, 1), t: 1 } 2019-09-04T06:32:03.491+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:03.491+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:03.491+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578720, 3) 2019-09-04T06:32:03.491+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13450 2019-09-04T06:32:03.491+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:03.491+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:03.491+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13450 2019-09-04T06:32:03.491+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:03.491+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:03.491+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:32:03.491+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot 
Timestamp(1567578720, 3) 2019-09-04T06:32:03.491+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13453 2019-09-04T06:32:03.491+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578723, 1) } 2019-09-04T06:32:03.491+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:03.491+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:03.491+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13453 2019-09-04T06:32:03.491+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13446 2019-09-04T06:32:03.491+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13446 2019-09-04T06:32:03.491+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13456 2019-09-04T06:32:03.491+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13456 2019-09-04T06:32:03.492+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:03.492+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 13458 2019-09-04T06:32:03.492+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578723, 1) 2019-09-04T06:32:03.492+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578723, 1) 2019-09-04T06:32:03.492+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 13458 2019-09-04T06:32:03.492+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:03.492+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:32:03.492+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13457 2019-09-04T06:32:03.492+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13457 2019-09-04T06:32:03.492+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13460 2019-09-04T06:32:03.492+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13460 2019-09-04T06:32:03.492+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578723, 1), t: 1 }({ ts: Timestamp(1567578723, 1), t: 1 }) 2019-09-04T06:32:03.492+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578723, 1) 2019-09-04T06:32:03.492+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13461 2019-09-04T06:32:03.492+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578723, 1) } } ] } sort: {} projection: {} 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578723, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578723, 1) || First: notFirst: full path: ts 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578723, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578723, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578723, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
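
The D5 QUERY block above is the planner handling rsSync-0's bookkeeping query against local.replset.minvalid: the $or is split into sub-queries ("Subplanner: planning child 0 of 2" / "child 1 of 2"), each branch is rated against the only index present (_id_), neither branch can use it, and every pass therefore ends in "Planner: outputting a collscan". (The run "ns=local.replset.minvalidTree:" is line-wrapping in this capture; "ns=local.replset.minvalid" and "Tree: ..." are separate lines in the planner's original output.) The same decision can be reproduced with the explain command; the following is a sketch only, and the host, database, and minvalid_demo collection are assumptions, not part of this deployment:

    # Sketch: reproduce the subplanner's fallback to a collection scan for
    # an $or filter that no secondary index covers (only _id exists).
    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://localhost:27017")  # hypothetical mongod
    db = client.test

    plan = db.command({
        "explain": {
            "find": "minvalid_demo",  # hypothetical collection, _id index only
            "filter": {"$or": [
                {"t": {"$lt": 1}},
                {"t": 1, "ts": {"$lt": Timestamp(1567578723, 1)}},
            ]},
        },
        "verbosity": "queryPlanner",
    })
    # With no index on t or ts, the winning plan is a collection scan with
    # the whole $or as its filter, matching the COLLSCAN lines above.
    print(plan["queryPlanner"]["winningPlan"])
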
2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578723, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:03.492+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13461 2019-09-04T06:32:03.492+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:03.492+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:03.492+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578723, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578723487), o: { $v: 1, $set: { ping: new Date(1567578723482) } } }, oplog application mode: Secondary 2019-09-04T06:32:03.492+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578723, 1) 2019-09-04T06:32:03.492+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 13463 2019-09-04T06:32:03.492+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" } 2019-09-04T06:32:03.492+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:32:03.492+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 13463 2019-09-04T06:32:03.492+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:03.492+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578723, 1), t: 1 }({ ts: Timestamp(1567578723, 1), t: 1 }) 2019-09-04T06:32:03.492+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578723, 1) 2019-09-04T06:32:03.492+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13462 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:03.492+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:03.492+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:03.492+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13462 2019-09-04T06:32:03.492+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578723, 1) 2019-09-04T06:32:03.492+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13466 2019-09-04T06:32:03.492+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13466 2019-09-04T06:32:03.492+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578723, 1), t: 1 }({ ts: Timestamp(1567578723, 1), t: 1 }) 2019-09-04T06:32:03.492+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:03.492+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), appliedOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, appliedWallTime: new Date(1567578720278), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), appliedOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, appliedWallTime: new Date(1567578723487), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), appliedOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, appliedWallTime: new Date(1567578720278), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:03.492+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 901 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:33.492+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), appliedOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, appliedWallTime: new Date(1567578720278), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), appliedOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, appliedWallTime: new Date(1567578723487), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), appliedOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, appliedWallTime: new Date(1567578720278), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:03.492+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:33.492+0000 2019-09-04T06:32:03.493+0000 D2 ASIO [RS] Request 901 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578723, 1) } 2019-09-04T06:32:03.493+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578723, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:03.493+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:03.493+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:33.493+0000 2019-09-04T06:32:03.493+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578723, 1), t: 1 } 2019-09-04T06:32:03.493+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 902 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:13.493+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578720, 3), t: 1 } } 2019-09-04T06:32:03.493+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:33.493+0000 2019-09-04T06:32:03.494+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:03.494+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:03.494+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), appliedOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, appliedWallTime: new Date(1567578720278), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), appliedOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, appliedWallTime: new Date(1567578723487), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), appliedOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, appliedWallTime: new Date(1567578720278), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:03.494+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 903 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:33.494+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: 
Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), appliedOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, appliedWallTime: new Date(1567578720278), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), appliedOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, appliedWallTime: new Date(1567578723487), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, durableWallTime: new Date(1567578720278), appliedOpTime: { ts: Timestamp(1567578720, 3), t: 1 }, appliedWallTime: new Date(1567578720278), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:03.494+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:33.493+0000 2019-09-04T06:32:03.494+0000 D2 ASIO [RS] Request 903 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578723, 1) } 2019-09-04T06:32:03.494+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578720, 3), t: 1 }, lastCommittedWall: new Date(1567578720278), lastOpVisible: { ts: Timestamp(1567578720, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578720, 3), $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578723, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:03.494+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:03.495+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:33.493+0000 2019-09-04T06:32:03.495+0000 D2 ASIO [RS] Request 902 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpVisible: { ts: Timestamp(1567578723, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpApplied: { ts: Timestamp(1567578723, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578723, 1), $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578723, 1) } 2019-09-04T06:32:03.495+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpVisible: { ts: Timestamp(1567578723, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpApplied: { ts: Timestamp(1567578723, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578723, 1), $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578723, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:03.495+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:03.495+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:03.495+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578718, 1) 2019-09-04T06:32:03.495+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:14.578+0000 2019-09-04T06:32:03.495+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:14.634+0000 2019-09-04T06:32:03.495+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:03.495+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:32.839+0000 2019-09-04T06:32:03.495+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 904 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:13.495+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578723, 1), t: 1 } } 2019-09-04T06:32:03.495+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000 2019-09-04T06:32:03.495+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:33.493+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn309] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn309] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn300] Got notified of new snapshot: { ts: 
Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn300] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn290] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn290] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.274+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn284] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn284] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.440+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn276] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn276] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.329+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn302] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn302] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.891+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn307] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn307] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.417+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn289] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn289] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.280+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn277] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn277] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.324+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: 
Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.495+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:32:03.496+0000 D3 REPL [conn308] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.496+0000 D3 REPL [conn308] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000 2019-09-04T06:32:03.496+0000 D3 REPL [conn288] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.496+0000 D3 REPL [conn288] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.289+0000 2019-09-04T06:32:03.496+0000 D3 REPL [conn282] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.496+0000 D3 REPL [conn282] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.424+0000 2019-09-04T06:32:03.496+0000 D3 REPL [conn293] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.496+0000 D3 REPL [conn293] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000 2019-09-04T06:32:03.496+0000 D3 REPL [conn303] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.496+0000 D3 REPL [conn303] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:13.600+0000 2019-09-04T06:32:03.496+0000 D3 REPL [conn278] Got notified of new snapshot: { ts: Timestamp(1567578723, 1), t: 1 }, 2019-09-04T06:32:03.487+0000 2019-09-04T06:32:03.496+0000 D3 REPL [conn278] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.693+0000 2019-09-04T06:32:03.515+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:32:03.515+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:32:03.515+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:03.515+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:32:03.537+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:03.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:03.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:03.591+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578723, 1) 2019-09-04T06:32:03.637+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:03.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:03.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:03.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:03.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
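
The burst of "Got notified of new snapshot" / "waitUntilOpTime" pairs above (conn276 through conn315) is the replication coordinator waking every operation parked in waitUntilOpTime whenever the committed snapshot advances; each connection then re-checks whether its target optime has been reached before its own deadline. On a config server these waiters are typically mongos and shard components reading with readConcern afterClusterTime. One client-side way to produce such a waiter is a causally consistent read directed at a secondary; the sketch below is illustrative only, and the seed address is an assumption:

    # Sketch: a causally consistent read on a secondary waits in
    # waitUntilOpTime until the node's snapshot reaches the session's
    # afterClusterTime -- the wake-ups logged above.
    from pymongo import MongoClient, ReadPreference

    client = MongoClient("mongodb://cmodb802.togewa.com:27019",  # assumed seed
                         replicaSet="configrs")

    with client.start_session(causal_consistency=True) as session:
        config_db = client.get_database("config")
        config_db.lockpings.find_one(session=session)  # sets the session's operationTime
        secondary = client.get_database(
            "config", read_preference=ReadPreference.SECONDARY)
        # Carries afterClusterTime from the session; the secondary parks the
        # read in waitUntilOpTime until replication catches up.
        secondary.lockpings.find_one(session=session)
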
2019-09-04T06:32:03.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:03.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:03.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:03.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:03.737+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:03.838+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:03.938+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:03.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:03.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:04.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:04.038+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:04.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:04.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:04.138+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:04.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:04.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:04.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:04.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:04.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:04.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:04.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:04.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:04.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, A92A21DBB6DDF990E4358BE8633D9791E2F138EA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:04.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:04.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, A92A21DBB6DDF990E4358BE8633D9791E2F138EA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:04.232+0000 D2 REPL_HB [conn28] Processing 
heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, A92A21DBB6DDF990E4358BE8633D9791E2F138EA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:04.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), opTime: { ts: Timestamp(1567578723, 1), t: 1 }, wallTime: new Date(1567578723487) } 2019-09-04T06:32:04.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, A92A21DBB6DDF990E4358BE8633D9791E2F138EA), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:04.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:04.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:32:04.238+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:04.338+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:04.438+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:04.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:04.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:04.492+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:04.492+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:04.492+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578723, 1) 2019-09-04T06:32:04.492+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13487 2019-09-04T06:32:04.492+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:04.492+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:04.492+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13487 2019-09-04T06:32:04.492+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13490 2019-09-04T06:32:04.493+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13490 2019-09-04T06:32:04.493+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578723, 1), t: 1 }({ ts: Timestamp(1567578723, 1), t: 1 }) 2019-09-04T06:32:04.539+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:04.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:04.552+0000 I COMMAND 
[conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:04.639+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:04.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:04.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:04.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:04.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:04.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:04.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:04.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:04.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:04.739+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:04.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:04.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 905) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:04.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 905 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:14.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:04.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:32.839+0000 2019-09-04T06:32:04.838+0000 D2 ASIO [Replication] Request 905 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), opTime: { ts: Timestamp(1567578723, 1), t: 1 }, wallTime: new Date(1567578723487), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpVisible: { ts: Timestamp(1567578723, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578723, 1), $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578723, 1) } 2019-09-04T06:32:04.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), opTime: { ts: Timestamp(1567578723, 1), t: 1 }, wallTime: new Date(1567578723487), $replData: { term: 1, lastOpCommitted: { ts: 
Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpVisible: { ts: Timestamp(1567578723, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578723, 1), $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578723, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:04.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:04.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 905) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), opTime: { ts: Timestamp(1567578723, 1), t: 1 }, wallTime: new Date(1567578723487), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpVisible: { ts: Timestamp(1567578723, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578723, 1), $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578723, 1) } 2019-09-04T06:32:04.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:04.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:06.838Z 2019-09-04T06:32:04.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:32.839+0000 2019-09-04T06:32:04.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:04.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 906) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:04.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 906 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:14.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:04.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:32.839+0000 2019-09-04T06:32:04.839+0000 D2 ASIO [Replication] Request 906 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), opTime: { ts: Timestamp(1567578723, 1), t: 1 }, wallTime: new Date(1567578723487), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpVisible: { ts: Timestamp(1567578723, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, 
$gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578723, 1), $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578723, 1) } 2019-09-04T06:32:04.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), opTime: { ts: Timestamp(1567578723, 1), t: 1 }, wallTime: new Date(1567578723487), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpVisible: { ts: Timestamp(1567578723, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578723, 1), $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578723, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:04.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:04.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 906) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), opTime: { ts: Timestamp(1567578723, 1), t: 1 }, wallTime: new Date(1567578723487), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpVisible: { ts: Timestamp(1567578723, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578723, 1), $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578723, 1) } 2019-09-04T06:32:04.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:04.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:32:14.634+0000 2019-09-04T06:32:04.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:32:15.155+0000 2019-09-04T06:32:04.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:04.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:06.839Z 2019-09-04T06:32:04.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:04.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:34.839+0000 2019-09-04T06:32:04.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:34.839+0000 2019-09-04T06:32:04.839+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
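
Two intervals are visible in this heartbeat exchange: the 2-second heartbeat cadence (each response schedules the next heartbeat at +2s, e.g. "Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:06.839Z"), and the election timer, which is cancelled and re-armed with fresh jitter on every heartbeat from the primary (here moved from 06:32:14.634 to 06:32:15.155, consistent with the 10-second electionTimeoutMillis default plus a randomized offset). Both values live in the replica-set configuration and can be read back as below; the command is standard, the connection string an assumption:

    # Sketch: read the intervals behind the REPL_HB and ELECTION lines.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")  # assumed host
    cfg = client.admin.command("replSetGetConfig")["config"]

    # 4.2 defaults are 2000 and 10000; the randomized offset added on top
    # of electionTimeoutMillis is why each rescheduled callback in the log
    # lands at a slightly different distance from "now".
    print(cfg["settings"]["heartbeatIntervalMillis"])
    print(cfg["settings"]["electionTimeoutMillis"])
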
2019-09-04T06:32:04.939+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:04.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:04.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:05.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:05.039+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:05.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:05.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:05.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, A92A21DBB6DDF990E4358BE8633D9791E2F138EA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:05.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:32:05.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, A92A21DBB6DDF990E4358BE8633D9791E2F138EA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:05.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, A92A21DBB6DDF990E4358BE8633D9791E2F138EA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:05.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), opTime: { ts: Timestamp(1567578723, 1), t: 1 }, wallTime: new Date(1567578723487) }
2019-09-04T06:32:05.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, A92A21DBB6DDF990E4358BE8633D9791E2F138EA), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:05.139+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:05.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:05.160+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:05.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:05.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:05.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:05.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:05.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:05.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:05.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:05.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:32:05.240+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:05.259+0000 D2 ASIO [RS] Request 904 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578725, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578725256), o: { $v: 1, $set: { ping: new Date(1567578725253), up: 2625 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpVisible: { ts: Timestamp(1567578723, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpApplied: { ts: Timestamp(1567578725, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578723, 1), $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578725, 1) }
2019-09-04T06:32:05.259+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578725, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578725256), o: { $v: 1, $set: { ping: new Date(1567578725253), up: 2625 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpVisible: { ts: Timestamp(1567578723, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpApplied: { ts: Timestamp(1567578725, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578723, 1), $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578725, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:05.259+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:05.259+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578725, 1) and ending at ts: Timestamp(1567578725, 1)
2019-09-04T06:32:05.259+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:15.155+0000
2019-09-04T06:32:05.259+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:15.887+0000
2019-09-04T06:32:05.259+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:05.259+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578725, 1), t: 1 }
2019-09-04T06:32:05.259+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:34.839+0000
2019-09-04T06:32:05.259+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:05.259+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:05.259+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578723, 1)
2019-09-04T06:32:05.259+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13507
2019-09-04T06:32:05.259+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:05.259+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:05.259+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13507
2019-09-04T06:32:05.259+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:05.259+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:05.259+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578723, 1)
2019-09-04T06:32:05.259+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:32:05.259+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13510
2019-09-04T06:32:05.259+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:05.259+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:05.259+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578725, 1) }
2019-09-04T06:32:05.259+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13510
2019-09-04T06:32:05.259+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13491
2019-09-04T06:32:05.259+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13491
2019-09-04T06:32:05.259+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13513
2019-09-04T06:32:05.259+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13513
2019-09-04T06:32:05.259+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:05.259+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 13515
2019-09-04T06:32:05.259+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578725, 1)
2019-09-04T06:32:05.259+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578725, 1)
2019-09-04T06:32:05.260+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 13515
2019-09-04T06:32:05.260+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:05.260+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:05.260+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13514
2019-09-04T06:32:05.260+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13514
2019-09-04T06:32:05.260+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13517
2019-09-04T06:32:05.260+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13517
2019-09-04T06:32:05.260+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578725, 1), t: 1 }({ ts: Timestamp(1567578725, 1), t: 1 })
2019-09-04T06:32:05.260+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578725, 1)
2019-09-04T06:32:05.260+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13518
2019-09-04T06:32:05.260+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578725, 1) } } ] } sort: {} projection: {}
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578725, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578725, 1) || First: notFirst: full path: ts
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578725, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578725, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578725, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578725, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:05.260+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13518
2019-09-04T06:32:05.260+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:05.260+0000 D3 STORAGE [repl-writer-worker-1] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:05.260+0000 D3 REPL [repl-writer-worker-1] applying op: { ts: Timestamp(1567578725, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578725256), o: { $v: 1, $set: { ping: new Date(1567578725253), up: 2625 } } }, oplog application mode: Secondary
2019-09-04T06:32:05.260+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578725, 1)
2019-09-04T06:32:05.260+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 13520
2019-09-04T06:32:05.260+0000 D2 QUERY [repl-writer-worker-1] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:32:05.260+0000 D4 WRITE [repl-writer-worker-1] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:05.260+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 13520
2019-09-04T06:32:05.260+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:05.260+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578725, 1), t: 1 }({ ts: Timestamp(1567578725, 1), t: 1 })
2019-09-04T06:32:05.260+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578725, 1)
2019-09-04T06:32:05.260+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13519
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:05.260+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:05.260+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:05.260+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13519
2019-09-04T06:32:05.260+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578725, 1)
2019-09-04T06:32:05.260+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13524
2019-09-04T06:32:05.260+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13524
2019-09-04T06:32:05.260+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578725, 1), t: 1 }({ ts: Timestamp(1567578725, 1), t: 1 })
2019-09-04T06:32:05.260+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:05.261+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), appliedOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, appliedWallTime: new Date(1567578723487), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), appliedOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, appliedWallTime: new Date(1567578723487), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpVisible: { ts: Timestamp(1567578723, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:05.261+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 907 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:35.261+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), appliedOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, appliedWallTime: new Date(1567578723487), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), appliedOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, appliedWallTime: new Date(1567578723487), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpVisible: { ts: Timestamp(1567578723, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:05.261+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:35.260+0000
2019-09-04T06:32:05.261+0000 D2 ASIO [RS] Request 907 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpVisible: { ts: Timestamp(1567578723, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578723, 1), $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578725, 1) }
2019-09-04T06:32:05.261+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578723, 1), t: 1 }, lastCommittedWall: new Date(1567578723487), lastOpVisible: { ts: Timestamp(1567578723, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578723, 1), $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578725, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:05.261+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:05.261+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:35.261+0000
2019-09-04T06:32:05.261+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578725, 1), t: 1 }
2019-09-04T06:32:05.261+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 908 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:15.261+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578723, 1), t: 1 } }
2019-09-04T06:32:05.261+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:35.261+0000
2019-09-04T06:32:05.261+0000 D2 ASIO [RS] Request 908 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpApplied: { ts: Timestamp(1567578725, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578725, 1), $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578725, 1) }
2019-09-04T06:32:05.261+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpApplied: { ts: Timestamp(1567578725, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578725, 1), $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578725, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:05.261+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:05.261+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:32:05.261+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.261+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578720, 1)
2019-09-04T06:32:05.262+0000 D3 REPL [conn282] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn282] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.424+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn288] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn288] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.289+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn290] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn290] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.274+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn289] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn289] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.280+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn276] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn276] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.329+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn307] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn307] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.417+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn277] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn277] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.324+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn308] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn308] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000
2019-09-04T06:32:05.262+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:15.887+0000
2019-09-04T06:32:05.262+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:16.015+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 909 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:15.262+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578725, 1), t: 1 } }
2019-09-04T06:32:05.262+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:05.262+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:35.261+0000
2019-09-04T06:32:05.262+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:34.839+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn300] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn300] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn278] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn278] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.693+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn303] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn303] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:13.600+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn309] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn309] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn284] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn284] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.440+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn293] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn293] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn302] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn302] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.891+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578725, 1), t: 1 }, 2019-09-04T06:32:05.256+0000
2019-09-04T06:32:05.262+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000
2019-09-04T06:32:05.325+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:32:05.325+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:05.325+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), appliedOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, appliedWallTime: new Date(1567578723487), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), appliedOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, appliedWallTime: new Date(1567578723487), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:05.325+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 910 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:35.325+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), appliedOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, appliedWallTime: new Date(1567578723487), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, durableWallTime: new Date(1567578723487), appliedOpTime: { ts: Timestamp(1567578723, 1), t: 1 }, appliedWallTime: new Date(1567578723487), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:05.325+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:35.261+0000
2019-09-04T06:32:05.326+0000 D2 ASIO [RS] Request 910 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578725, 1), $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578725, 1) }
2019-09-04T06:32:05.326+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578725, 1), $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578725, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:05.326+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:05.326+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:35.261+0000
2019-09-04T06:32:05.340+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:05.359+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578725, 1)
2019-09-04T06:32:05.440+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:05.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:05.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:05.540+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:05.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:05.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:05.640+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:05.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:05.660+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:05.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:05.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:05.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:05.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:05.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:05.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:05.740+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:05.840+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:05.940+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:05.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:05.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:06.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:06.041+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:06.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:06.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:06.141+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:06.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:06.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:06.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:06.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:06.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:06.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:06.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:06.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:06.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, F24F3BB57563136762ED5C48A696D579076107A7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:06.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:32:06.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, F24F3BB57563136762ED5C48A696D579076107A7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:06.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, F24F3BB57563136762ED5C48A696D579076107A7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:06.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), opTime: { ts: Timestamp(1567578725, 1), t: 1 }, wallTime: new Date(1567578725256) }
2019-09-04T06:32:06.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, F24F3BB57563136762ED5C48A696D579076107A7), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:06.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:06.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:32:06.241+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:06.259+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:06.259+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:06.259+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578725, 1)
2019-09-04T06:32:06.259+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13541
2019-09-04T06:32:06.259+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:06.259+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:06.259+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13541
2019-09-04T06:32:06.261+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13544
2019-09-04T06:32:06.261+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13544
2019-09-04T06:32:06.261+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578725, 1), t: 1 }({ ts: Timestamp(1567578725, 1), t: 1 })
2019-09-04T06:32:06.307+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:06.307+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:06.341+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:06.441+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:06.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:06.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:06.541+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:06.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:06.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:06.641+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:06.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:06.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:06.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:06.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:06.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:06.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:06.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:06.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:06.741+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:06.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:06.806+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:06.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:06.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 911) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:06.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 911 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:16.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:06.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:34.839+0000 2019-09-04T06:32:06.838+0000 D2 ASIO [Replication] Request 911 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), opTime: { ts: Timestamp(1567578725, 1), t: 1 }, wallTime: new Date(1567578725256), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578725, 1), $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578725, 1) } 2019-09-04T06:32:06.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, 
set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), opTime: { ts: Timestamp(1567578725, 1), t: 1 }, wallTime: new Date(1567578725256), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578725, 1), $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578725, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:06.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:06.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 911) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), opTime: { ts: Timestamp(1567578725, 1), t: 1 }, wallTime: new Date(1567578725256), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578725, 1), $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578725, 1) } 2019-09-04T06:32:06.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:06.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:08.838Z 2019-09-04T06:32:06.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:34.839+0000 2019-09-04T06:32:06.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:06.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 912) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:06.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 912 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:16.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:06.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:34.839+0000 2019-09-04T06:32:06.839+0000 D2 ASIO [Replication] Request 912 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), opTime: { ts: Timestamp(1567578725, 1), t: 1 }, wallTime: new 
Date(1567578725256), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578725, 1), $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578725, 1) } 2019-09-04T06:32:06.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), opTime: { ts: Timestamp(1567578725, 1), t: 1 }, wallTime: new Date(1567578725256), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578725, 1), $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578725, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:06.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:06.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 912) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), opTime: { ts: Timestamp(1567578725, 1), t: 1 }, wallTime: new Date(1567578725256), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578725, 1), $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578725, 1) } 2019-09-04T06:32:06.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:06.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:32:16.015+0000 2019-09-04T06:32:06.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:32:18.179+0000 2019-09-04T06:32:06.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:06.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:08.839Z 2019-09-04T06:32:06.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:06.839+0000 D3 EXECUTOR 
[replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:36.839+0000 2019-09-04T06:32:06.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:36.839+0000 2019-09-04T06:32:06.842+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:06.942+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:06.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:06.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:07.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:07.042+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:07.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:07.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:07.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, F24F3BB57563136762ED5C48A696D579076107A7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:07.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:07.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, F24F3BB57563136762ED5C48A696D579076107A7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:07.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, F24F3BB57563136762ED5C48A696D579076107A7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:07.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), opTime: { ts: Timestamp(1567578725, 1), t: 1 }, wallTime: new Date(1567578725256) } 2019-09-04T06:32:07.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, F24F3BB57563136762ED5C48A696D579076107A7), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:07.142+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:07.160+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:07.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 
0ms 2019-09-04T06:32:07.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:07.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:07.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:07.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:07.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:07.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:07.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:07.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:32:07.242+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:07.260+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:07.260+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:07.260+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578725, 1) 2019-09-04T06:32:07.260+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13563 2019-09-04T06:32:07.260+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:07.260+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:07.260+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13563 2019-09-04T06:32:07.261+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13566 2019-09-04T06:32:07.261+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13566 2019-09-04T06:32:07.261+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578725, 1), t: 1 }({ ts: Timestamp(1567578725, 1), t: 1 }) 2019-09-04T06:32:07.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:07.306+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:07.342+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:07.356+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:07.356+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:07.442+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:07.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:07.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:07.542+0000 D4 STORAGE [WTJournalFlusher] flushed 
2019-09-04T06:32:07.643+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:07.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:07.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:07.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:07.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:07.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:07.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:07.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:07.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:07.743+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:07.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:07.806+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:07.843+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:07.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:07.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:07.943+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:07.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:07.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:08.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:08.043+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:08.143+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:08.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:08.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:08.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:08.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:08.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:08.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:08.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:08.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:08.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, F24F3BB57563136762ED5C48A696D579076107A7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:08.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:32:08.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, F24F3BB57563136762ED5C48A696D579076107A7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:08.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, F24F3BB57563136762ED5C48A696D579076107A7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:08.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), opTime: { ts: Timestamp(1567578725, 1), t: 1 }, wallTime: new Date(1567578725256) }
2019-09-04T06:32:08.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578725, 1), signature: { hash: BinData(0, F24F3BB57563136762ED5C48A696D579076107A7), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:08.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:08.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:32:08.243+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:08.260+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:08.260+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:08.260+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578725, 1)
2019-09-04T06:32:08.260+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13586
2019-09-04T06:32:08.260+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:08.260+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:08.260+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13586
2019-09-04T06:32:08.261+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13589
2019-09-04T06:32:08.261+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13589
2019-09-04T06:32:08.261+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578725, 1), t: 1 }({ ts: Timestamp(1567578725, 1), t: 1 })
2019-09-04T06:32:08.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:08.307+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:08.336+0000 D2 ASIO [RS] Request 909 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578728, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578728333), o: { $v: 1, $set: { ping: new Date(1567578728333) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpApplied: { ts: Timestamp(1567578728, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578725, 1), $clusterTime: { clusterTime: Timestamp(1567578728, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 1) }
2019-09-04T06:32:08.336+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578728, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578728333), o: { $v: 1, $set: { ping: new Date(1567578728333) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpApplied: { ts: Timestamp(1567578728, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578725, 1), $clusterTime: { clusterTime: Timestamp(1567578728, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:08.336+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:08.336+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578728, 1) and ending at ts: Timestamp(1567578728, 1)
2019-09-04T06:32:08.336+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:18.179+0000
2019-09-04T06:32:08.336+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:19.251+0000
2019-09-04T06:32:08.336+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:08.336+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:36.839+0000
2019-09-04T06:32:08.336+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578728, 1), t: 1 }
2019-09-04T06:32:08.336+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:08.336+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:08.336+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578725, 1)
2019-09-04T06:32:08.336+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13593
2019-09-04T06:32:08.336+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:08.336+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:08.336+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13593
2019-09-04T06:32:08.336+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:08.336+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:32:08.336+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:08.336+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578725, 1)
2019-09-04T06:32:08.336+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13596
2019-09-04T06:32:08.336+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578728, 1) }
2019-09-04T06:32:08.336+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:08.336+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:08.336+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13596
2019-09-04T06:32:08.336+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13590
2019-09-04T06:32:08.336+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13590
2019-09-04T06:32:08.336+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13599
2019-09-04T06:32:08.336+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13599
2019-09-04T06:32:08.336+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:08.336+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 13601
2019-09-04T06:32:08.336+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578728, 1)
2019-09-04T06:32:08.336+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578728, 1)
2019-09-04T06:32:08.336+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 13601
2019-09-04T06:32:08.336+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:08.336+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:08.337+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13600
2019-09-04T06:32:08.337+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13600
2019-09-04T06:32:08.337+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13603
2019-09-04T06:32:08.337+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13603
2019-09-04T06:32:08.337+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578728, 1), t: 1 }({ ts: Timestamp(1567578728, 1), t: 1 })
2019-09-04T06:32:08.337+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578728, 1)
2019-09-04T06:32:08.337+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13604
2019-09-04T06:32:08.337+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578728, 1) } } ] } sort: {} projection: {}
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578728, 1) Sort: {} Proj: {} =============================
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578728, 1) || First: notFirst: full path: ts
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578728, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578728, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578728, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578728, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:08.337+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13604
2019-09-04T06:32:08.337+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:08.337+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:08.337+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578728, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578728333), o: { $v: 1, $set: { ping: new Date(1567578728333) } } }, oplog application mode: Secondary
2019-09-04T06:32:08.337+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578728, 1)
2019-09-04T06:32:08.337+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 13606
2019-09-04T06:32:08.337+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "ConfigServer" }
2019-09-04T06:32:08.337+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:08.337+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 13606
2019-09-04T06:32:08.337+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:08.337+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578728, 1), t: 1 }({ ts: Timestamp(1567578728, 1), t: 1 })
2019-09-04T06:32:08.337+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578728, 1)
2019-09-04T06:32:08.337+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13605
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:08.337+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:08.337+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:08.337+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13605
2019-09-04T06:32:08.337+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578728, 1)
2019-09-04T06:32:08.337+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13610
2019-09-04T06:32:08.337+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13610
2019-09-04T06:32:08.337+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578728, 1), t: 1 }({ ts: Timestamp(1567578728, 1), t: 1 })
2019-09-04T06:32:08.337+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:08.337+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578728, 1), t: 1 }, appliedWallTime: new Date(1567578728333), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:08.337+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 913 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:38.337+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578728, 1), t: 1 }, appliedWallTime: new Date(1567578728333), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:08.337+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.337+0000
2019-09-04T06:32:08.338+0000 D2 ASIO [RS] Request 913 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578725, 1), $clusterTime: { clusterTime: Timestamp(1567578728, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 1) }
2019-09-04T06:32:08.338+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578725, 1), t: 1 }, lastCommittedWall: new Date(1567578725256), lastOpVisible: { ts: Timestamp(1567578725, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578725, 1), $clusterTime: { clusterTime: Timestamp(1567578728, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:08.338+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:08.338+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.338+0000
2019-09-04T06:32:08.338+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578728, 1), t: 1 }
2019-09-04T06:32:08.338+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 914 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:18.338+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578725, 1), t: 1 } }
2019-09-04T06:32:08.338+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.338+0000
2019-09-04T06:32:08.349+0000 D2 ASIO [RS] Request 914 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpVisible: { ts: Timestamp(1567578728, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpApplied: { ts: Timestamp(1567578728, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 1), $clusterTime: { clusterTime: Timestamp(1567578728, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 1) }
2019-09-04T06:32:08.349+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpVisible: { ts: Timestamp(1567578728, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpApplied: { ts: Timestamp(1567578728, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 1), $clusterTime: { clusterTime: Timestamp(1567578728, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:08.349+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:08.349+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:32:08.349+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578723, 1)
2019-09-04T06:32:08.349+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:19.251+0000
2019-09-04T06:32:08.349+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:18.816+0000
2019-09-04T06:32:08.349+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:08.349+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 915 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:18.349+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578728, 1), t: 1 } }
2019-09-04T06:32:08.349+0000 D3 REPL [conn278] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn278] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.693+0000
2019-09-04T06:32:08.349+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.338+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn290] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn290] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.274+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn289] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn289] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.280+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:36.839+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn308] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn308] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn303] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn303] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:13.600+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn284] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn284] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.440+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn302] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn302] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.891+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn293] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn293] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn309] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn309] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn282] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn282] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.424+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn288] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn288] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.289+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn276] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn276] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.329+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn307] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.349+0000 D3 REPL [conn307] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.417+0000
2019-09-04T06:32:08.350+0000 D3 REPL [conn277] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.350+0000 D3 REPL [conn277] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.324+0000
2019-09-04T06:32:08.350+0000 D3 REPL [conn300] Got notified of new snapshot: { ts: Timestamp(1567578728, 1), t: 1 }, 2019-09-04T06:32:08.333+0000
2019-09-04T06:32:08.350+0000 D3 REPL [conn300] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000
2019-09-04T06:32:08.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:08.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:08.372+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:32:08.372+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:08.372+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:08.372+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578728, 1), t: 1 }, durableWallTime: new Date(1567578728333), appliedOpTime: { ts: Timestamp(1567578728, 1), t: 1 }, appliedWallTime: new Date(1567578728333), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpVisible: { ts: Timestamp(1567578728, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:08.372+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 916 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:38.372+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578728, 1), t: 1 }, durableWallTime: new Date(1567578728333), appliedOpTime: { ts: Timestamp(1567578728, 1), t: 1 }, appliedWallTime: new Date(1567578728333), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpVisible: { ts: Timestamp(1567578728, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:08.372+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.338+0000
2019-09-04T06:32:08.372+0000 D2 ASIO [RS] Request 916 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpVisible: { ts: Timestamp(1567578728, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 1), $clusterTime: { clusterTime: Timestamp(1567578728, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 1) }
2019-09-04T06:32:08.372+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpVisible: { ts: Timestamp(1567578728, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 1), $clusterTime: { clusterTime: Timestamp(1567578728, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:08.372+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:08.372+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.338+0000
2019-09-04T06:32:08.436+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578728, 1)
2019-09-04T06:32:08.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:08.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:08.472+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:08.555+0000 D2 ASIO [RS] Request 915 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578728, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578728547), o: { $v: 1, $set: { ping: new Date(1567578728547) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpVisible: { ts: Timestamp(1567578728, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpApplied: { ts: Timestamp(1567578728, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 1), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) }
2019-09-04T06:32:08.555+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578728, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578728547), o: { $v: 1, $set: { ping: new Date(1567578728547) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpVisible: { ts: Timestamp(1567578728, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpApplied: { ts: Timestamp(1567578728, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 1), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:08.555+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:08.555+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578728, 2) and ending at ts: Timestamp(1567578728, 2)
2019-09-04T06:32:08.555+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:18.816+0000
2019-09-04T06:32:08.555+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:18.680+0000
2019-09-04T06:32:08.555+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:08.555+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:36.839+0000
2019-09-04T06:32:08.555+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578728, 2), t: 1 }
2019-09-04T06:32:08.555+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:08.555+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:08.555+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578728, 1)
2019-09-04T06:32:08.555+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13615
2019-09-04T06:32:08.555+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:08.555+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:08.555+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13615
2019-09-04T06:32:08.555+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:08.555+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:32:08.555+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:08.555+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578728, 2) }
2019-09-04T06:32:08.555+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578728, 1)
2019-09-04T06:32:08.555+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13618
2019-09-04T06:32:08.555+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:08.555+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:08.555+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13618
2019-09-04T06:32:08.555+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13611
2019-09-04T06:32:08.555+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13611
2019-09-04T06:32:08.555+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13621
2019-09-04T06:32:08.555+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13621
2019-09-04T06:32:08.555+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:08.555+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 13623
2019-09-04T06:32:08.555+0000 D4 STORAGE [repl-writer-worker-7] inserting record with timestamp Timestamp(1567578728, 2)
2019-09-04T06:32:08.556+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578728, 2)
2019-09-04T06:32:08.556+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 13623
2019-09-04T06:32:08.556+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:08.556+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:08.556+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13622
2019-09-04T06:32:08.556+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13622
2019-09-04T06:32:08.556+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13625
2019-09-04T06:32:08.556+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13625
2019-09-04T06:32:08.556+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578728, 2), t: 1 }({ ts: Timestamp(1567578728, 2), t: 1 })
2019-09-04T06:32:08.556+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578728, 2)
2019-09-04T06:32:08.556+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13626
2019-09-04T06:32:08.556+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578728, 2) } } ] } sort: {} projection: {}
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578728, 2) Sort: {} Proj: {} =============================
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578728, 2) || First: notFirst: full path: ts
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578728, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578728, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578728, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578728, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:08.556+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13626
2019-09-04T06:32:08.556+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:08.556+0000 D3 STORAGE [repl-writer-worker-11] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:08.556+0000 D3 REPL [repl-writer-worker-11] applying op: { ts: Timestamp(1567578728, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578728547), o: { $v: 1, $set: { ping: new Date(1567578728547) } } }, oplog application mode: Secondary
2019-09-04T06:32:08.556+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578728, 2)
2019-09-04T06:32:08.556+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 13628
2019-09-04T06:32:08.556+0000 D2 QUERY [repl-writer-worker-11] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }
2019-09-04T06:32:08.556+0000 D4 WRITE [repl-writer-worker-11] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:08.556+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 13628
2019-09-04T06:32:08.556+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:08.556+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578728, 2), t: 1 }({ ts: Timestamp(1567578728, 2), t: 1 })
2019-09-04T06:32:08.556+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578728, 2)
2019-09-04T06:32:08.556+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13627
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:08.556+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:08.556+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:08.556+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13627
2019-09-04T06:32:08.556+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578728, 2)
2019-09-04T06:32:08.556+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13631
2019-09-04T06:32:08.556+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13631
2019-09-04T06:32:08.556+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578728, 2), t: 1 }({ ts: Timestamp(1567578728, 2), t: 1 })
2019-09-04T06:32:08.556+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:08.556+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578728, 1), t: 1 }, durableWallTime: new Date(1567578728333), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpVisible: { ts: Timestamp(1567578728, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:08.556+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 917 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:38.556+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578728, 1), t: 1 }, durableWallTime: new Date(1567578728333), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpVisible: { ts: Timestamp(1567578728, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:08.556+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.556+0000
2019-09-04T06:32:08.557+0000 D2 ASIO [RS] Request 917 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpVisible: { ts: Timestamp(1567578728, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: {
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 1), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } 2019-09-04T06:32:08.557+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpVisible: { ts: Timestamp(1567578728, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 1), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:08.557+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:08.557+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.557+0000 2019-09-04T06:32:08.557+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578728, 2), t: 1 } 2019-09-04T06:32:08.557+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 918 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:18.557+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578728, 1), t: 1 } } 2019-09-04T06:32:08.557+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.557+0000 2019-09-04T06:32:08.560+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:08.560+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:08.560+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpVisible: { ts: Timestamp(1567578728, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:08.560+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 919 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:38.560+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: 
Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, durableWallTime: new Date(1567578725256), appliedOpTime: { ts: Timestamp(1567578725, 1), t: 1 }, appliedWallTime: new Date(1567578725256), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpVisible: { ts: Timestamp(1567578728, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:08.560+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.557+0000 2019-09-04T06:32:08.560+0000 D2 ASIO [RS] Request 919 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpVisible: { ts: Timestamp(1567578728, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 1), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } 2019-09-04T06:32:08.560+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 1), t: 1 }, lastCommittedWall: new Date(1567578728333), lastOpVisible: { ts: Timestamp(1567578728, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 1), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:08.560+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:08.560+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.557+0000 2019-09-04T06:32:08.561+0000 D2 ASIO [RS] Request 918 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpApplied: { ts: Timestamp(1567578728, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } 2019-09-04T06:32:08.561+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpApplied: { ts: Timestamp(1567578728, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:08.561+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:08.561+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:08.561+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578723, 2) 2019-09-04T06:32:08.561+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:18.680+0000 2019-09-04T06:32:08.561+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:19.694+0000 2019-09-04T06:32:08.561+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:08.561+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 920 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:18.561+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578728, 2), t: 1 } } 2019-09-04T06:32:08.561+0000 D3 REPL [conn290] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn290] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.274+0000 2019-09-04T06:32:08.561+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.557+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn303] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn303] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:13.600+0000 2019-09-04T06:32:08.561+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:36.839+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn289] Got notified of new snapshot: { ts: 
Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn289] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.280+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn308] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn308] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn307] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn307] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.417+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn276] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn276] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.329+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn277] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn277] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.324+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn282] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn282] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.424+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn278] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn278] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.693+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn300] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn300] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn284] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn284] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.440+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn293] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn293] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.277+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn288] Got notified of new snapshot: { ts: 
Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn288] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:11.289+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn309] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn309] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn302] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn302] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:12.891+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578728, 2), t: 1 }, 2019-09-04T06:32:08.547+0000 2019-09-04T06:32:08.561+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:32:08.572+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:08.655+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578728, 2) 2019-09-04T06:32:08.660+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:08.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:08.672+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:08.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:08.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:08.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:08.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:08.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:08.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:08.772+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:08.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:08.807+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:08.838+0000 D3 
EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:08.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:08.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 921) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:08.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 921 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:18.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:08.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.838+0000 2019-09-04T06:32:08.838+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:32:07.063+0000 2019-09-04T06:32:08.838+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:32:08.232+0000 2019-09-04T06:32:08.838+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:32:07.063+0000 2019-09-04T06:32:08.838+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:32:17.063+0000 2019-09-04T06:32:08.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.838+0000 2019-09-04T06:32:08.838+0000 D2 ASIO [Replication] Request 921 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } 2019-09-04T06:32:08.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:08.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:08.838+0000 
D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 921) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } 2019-09-04T06:32:08.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:08.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:10.838Z 2019-09-04T06:32:08.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.838+0000 2019-09-04T06:32:08.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:08.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 922) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:08.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 922 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:18.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:08.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.838+0000 2019-09-04T06:32:08.839+0000 D2 ASIO [Replication] Request 922 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } 2019-09-04T06:32:08.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:08.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:08.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 922) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } 2019-09-04T06:32:08.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:08.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:32:19.694+0000 2019-09-04T06:32:08.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:32:20.131+0000 2019-09-04T06:32:08.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:08.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:10.839Z 2019-09-04T06:32:08.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:08.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.839+0000 2019-09-04T06:32:08.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.839+0000 2019-09-04T06:32:08.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:08.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:08.873+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:08.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:08.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:08.973+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:09.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 
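
[Note] The heartbeat exchange above (requests 921 and 922) is what keeps this secondary from standing for election: the reply from the primary (cmodb802, state: 1) cancels the pending election timeout callback and schedules a new one further out ("Postponing election timeout due to heartbeat from primary"), while the reply from the other secondary (cmodb804, state: 2) only refreshes its member data. A sketch for observing the same member states from a client via the standard replSetGetStatus command; the host is taken from this log, and credentials/connectivity are assumed:

    # Sketch: report each member's state and applied optime, mirroring the
    # state/opTime fields seen in the heartbeat responses above.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # stateStr is PRIMARY/SECONDARY/...; optime mirrors opTime in heartbeats
        print(member["name"], member["stateStr"], member["optime"]["ts"])
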
2019-09-04T06:32:09.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:09.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:09.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:09.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:09.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547) } 2019-09-04T06:32:09.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:09.073+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:09.173+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:09.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:09.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:09.223+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:09.223+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:09.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:09.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:09.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:09.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:09.273+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:09.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:09.307+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:09.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:09.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:09.373+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:09.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:09.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:09.473+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:09.555+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:09.556+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:09.556+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578728, 2) 2019-09-04T06:32:09.556+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13651 2019-09-04T06:32:09.556+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:09.556+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:09.556+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13651 2019-09-04T06:32:09.556+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13654 2019-09-04T06:32:09.557+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13654 2019-09-04T06:32:09.557+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578728, 2), t: 1 }({ ts: Timestamp(1567578728, 2), t: 1 }) 2019-09-04T06:32:09.573+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:09.673+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:09.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:09.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:09.723+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:09.723+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:09.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:09.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:09.767+0000 D2 COMMAND [conn61] run command config.$cmd { find: 
"shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578728, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578728, 2), t: 1 } }, $db: "config" } 2019-09-04T06:32:09.767+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578728, 2), t: 1 } } } 2019-09-04T06:32:09.767+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:32:09.767+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578728, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578728, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578728, 2) 2019-09-04T06:32:09.767+0000 D5 QUERY [conn61] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:32:09.767+0000 D5 QUERY [conn61] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:32:09.767+0000 D5 QUERY [conn61] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:32:09.767+0000 D5 QUERY [conn61] Rated tree: $and 2019-09-04T06:32:09.767+0000 D5 QUERY [conn61] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:09.767+0000 D5 QUERY [conn61] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:09.767+0000 D2 QUERY [conn61] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:09.768+0000 D3 STORAGE [conn61] WT begin_transaction for snapshot id 13659 2019-09-04T06:32:09.768+0000 D3 STORAGE [conn61] WT rollback_transaction for snapshot id 13659 2019-09-04T06:32:09.768+0000 I COMMAND [conn61] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578728, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578728, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:32:09.774+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:09.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:09.807+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:09.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:09.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:09.874+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:09.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:09.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:09.974+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:10.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:10.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:32:10.003+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:32:10.003+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:32:10.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:32:10.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:32:10.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:32:10.011+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:32:10.011+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:32:10.011+0000 I COMMAND [conn90] command admin.$cmd 
command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:32:10.011+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:32:10.012+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:10.012+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:32:10.012+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:32:10.013+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:32:10.013+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:32:10.013+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:10.013+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:10.013+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:32:10.013+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:32:10.013+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:32:10.013+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:32:10.013+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:32:10.013+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:32:10.013+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:10.013+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:10.013+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:10.013+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578728, 2)
2019-09-04T06:32:10.013+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13670
2019-09-04T06:32:10.013+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13670
2019-09-04T06:32:10.013+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:10.013+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:32:10.013+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:32:10.013+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:32:10.013+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:10.013+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:32:10.013+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:32:10.013+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:32:10.013+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578728, 2)
2019-09-04T06:32:10.013+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13673
2019-09-04T06:32:10.013+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13673
2019-09-04T06:32:10.013+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:10.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:10.014+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:32:10.014+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:32:10.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578728, 2)
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13675
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13675
2019-09-04T06:32:10.014+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:10.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:32:10.014+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist.
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:32:10.014+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2019-09-04T06:32:10.014+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:32:10.014+0000 D2 COMMAND [conn90] command: listDatabases
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13678
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13678
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13679
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13679
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13680
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13680
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13681
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" }
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13681
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13682
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" }
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13682
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13683
2019-09-04T06:32:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
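[Annotation] The records above show a client on conn90 driving a listDatabases command with a secondaryPreferred read preference; the server answers it by walking every collection entry in the durable catalog under short-lived WiredTiger snapshots (the begin_transaction/rollback_transaction pairs). A minimal PyMongo sketch of issuing the same command is below. The host names, port, and replica set name are taken from members visible in this trace (configrs on port 27019), and the absence of credentials assumes authorization is disabled as in this deployment; adjust both for any other cluster.

```python
# Sketch: reproduce the listDatabases call seen on conn90.
# Assumed: reachable configrs members on port 27019 and no auth,
# as in this log; both are deployment-specific.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://cmodb802.togewa.com:27019,cmodb804.togewa.com:27019",
    replicaSet="configrs",
    readPreference="secondaryPreferred",  # matches $readPreference in the log
)

# Same command document the log records: { listDatabases: 1, ... }
reply = client.admin.command("listDatabases")
for d in reply["databases"]:
    print(d["name"], d["sizeOnDisk"])
```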
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13683
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13684
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13684
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13685
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13685
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13686
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13686
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13687
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13687
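[Annotation] Every record in this trace follows the pre-4.4 plaintext layout: ISO-8601 timestamp, severity (I, W, E, F, or a debug level D1-D5), component (STORAGE, COMMAND, REPL_HB, or a bare "-"), bracketed context, then the free-form message. A small sketch for splitting a captured trace back into structured records is below; the regular expression is inferred from the lines shown here, not an official grammar, so treat it as an assumption.

```python
# Sketch: parse 4.2-style plaintext mongod log records into fields.
# The pattern is inferred from the records in this trace (timestamp,
# severity, component, [context], message); it is not a complete grammar.
import re

LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}\+\d{4}) "
    r"(?P<severity>[IWEF]|D\d?) "
    r"(?P<component>[A-Z_-]+) +"
    r"\[(?P<context>[^\]]+)\] "
    r"(?P<message>.*)$"
)

def parse(line: str):
    m = LINE.match(line)
    return m.groupdict() if m else None

rec = parse(
    "2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] "
    "WT begin_transaction for snapshot id 13684"
)
print(rec["component"], rec["context"], rec["message"])
```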
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13688
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
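[Annotation] The catalog entry for config.chunks records three unique secondary indexes (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1) alongside the implicit _id_ index; these are what the config server uses to look chunks up by namespace, owning shard, and version. A sketch for inspecting those specs from a client is below; the host name is an assumption, so point it at any member of this set. On a config server these indexes are built by the server itself, so this reads them rather than creating them.

```python
# Sketch: inspect the config.chunks index specs that the catalog entry
# above records (three unique secondary indexes plus _id_). The host
# name is an assumption; use any member of this config server set.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb802.togewa.com:27019", replicaSet="configrs")

for spec in client["config"]["chunks"].list_indexes():
    # Expected, per the metadata above: ns_1_min_1, ns_1_shard_1_min_1,
    # ns_1_lastmod_1 (all unique: true) and the implicit _id_ index.
    print(spec["name"], dict(spec["key"]), spec.get("unique", False))
```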
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13688
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13689
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13689
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13690
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13690
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13691
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13691
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13692
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
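[Annotation] config.shards, whose catalog entry above carries a unique host_1 index (one document per shard host string), holds the shard registry this config server serves to the rest of the cluster. Later requests in this same trace read config collections with readConcern level "majority", so a hedged sketch of that read pattern applied to config.shards follows; the host name is an assumption and the collection contents depend on the deployment.

```python
# Sketch: read the shard registry with a majority read concern, the same
# readConcern level the routed finds later in this trace use. The host
# name is an assumption; use any member of this config server set.
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern

client = MongoClient("mongodb://cmodb802.togewa.com:27019", replicaSet="configrs")
shards = client["config"].get_collection(
    "shards", read_concern=ReadConcern("majority")
)

# host_1 is unique per the catalog entry above, so at most one document
# exists per host string.
for doc in shards.find({}, {"_id": 1, "host": 1}):
    print(doc["_id"], doc["host"])
```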
2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13692 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13693 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13693 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13694 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13694 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13695 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13695 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13696 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 13696 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13697 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13697 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13698 2019-09-04T06:32:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13698 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13699 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13699 2019-09-04T06:32:10.016+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:32:10.016+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13701 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13701 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13702 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13702 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13703 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13703 2019-09-04T06:32:10.016+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:10.016+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13705 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13705 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13706 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13706 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13707 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13707 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13708 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13708 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13709 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13709 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13710 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13710 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13711 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13711 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 13712 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13712 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13713 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13713 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13714 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13714 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13715 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13715 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13716 2019-09-04T06:32:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13716 2019-09-04T06:32:10.017+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:10.017+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:32:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13718 2019-09-04T06:32:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13718 2019-09-04T06:32:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13719 2019-09-04T06:32:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13719 2019-09-04T06:32:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13720 2019-09-04T06:32:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13720 2019-09-04T06:32:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13721 2019-09-04T06:32:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13721 2019-09-04T06:32:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13722 2019-09-04T06:32:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13722 2019-09-04T06:32:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 13723 2019-09-04T06:32:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 13723 2019-09-04T06:32:10.017+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:10.038+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:10.038+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:10.055+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:10.056+0000 I 
COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:10.074+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:10.174+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:10.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:10.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:10.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:10.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:10.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:10.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:10.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:10.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:10.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547) } 2019-09-04T06:32:10.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:10.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:10.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:10.274+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:10.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:10.307+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:10.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:10.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:10.374+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:10.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:10.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:10.464+0000 D2 COMMAND [conn71] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:10.464+0000 I COMMAND [conn71] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:10.475+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:10.556+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:10.556+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:10.556+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578728, 2) 2019-09-04T06:32:10.556+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13735 2019-09-04T06:32:10.556+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:10.556+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:10.556+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13735 2019-09-04T06:32:10.557+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13738 2019-09-04T06:32:10.557+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13738 2019-09-04T06:32:10.557+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578728, 2), t: 1 }({ ts: Timestamp(1567578728, 2), t: 1 }) 2019-09-04T06:32:10.575+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:10.675+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:10.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:10.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:10.721+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578728, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), 
signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578728, 2), t: 1 } }, $db: "config" } 2019-09-04T06:32:10.721+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578728, 2), t: 1 } } } 2019-09-04T06:32:10.721+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:32:10.721+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578728, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578728, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578728, 2) 2019-09-04T06:32:10.721+0000 D2 QUERY [conn72] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:32:10.722+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578728, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578728, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:32:10.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:10.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:10.775+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:10.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:10.807+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:10.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:10.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 923) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:10.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 923 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:20.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:10.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.839+0000 2019-09-04T06:32:10.838+0000 
D2 ASIO [Replication] Request 923 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } 2019-09-04T06:32:10.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:10.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:10.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 923) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } 2019-09-04T06:32:10.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:10.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:12.838Z 2019-09-04T06:32:10.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.839+0000 2019-09-04T06:32:10.839+0000 D3 EXECUTOR 
[replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:10.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 924) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:10.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 924 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:20.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:10.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.839+0000 2019-09-04T06:32:10.839+0000 D2 ASIO [Replication] Request 924 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } 2019-09-04T06:32:10.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:10.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:10.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 924) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), 
primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } 2019-09-04T06:32:10.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:10.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:32:20.131+0000 2019-09-04T06:32:10.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:32:22.263+0000 2019-09-04T06:32:10.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:10.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:12.839Z 2019-09-04T06:32:10.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:10.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:40.839+0000 2019-09-04T06:32:10.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:40.839+0000 2019-09-04T06:32:10.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:10.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:10.875+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:10.908+0000 D2 COMMAND [conn275] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:10.908+0000 I COMMAND [conn275] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:10.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:10.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:10.944+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:10.944+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:10.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:10.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:10.975+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:11.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:11.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:11.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:11.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:11.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:11.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547) } 2019-09-04T06:32:11.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:11.075+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:11.176+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:11.177+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:11.177+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:11.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:11.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:11.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:11.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:11.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:11.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:32:11.261+0000 D2 COMMAND [conn113] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:11.262+0000 I COMMAND [conn113] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:11.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:11.273+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:11.276+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:11.276+0000 I COMMAND [conn290] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578700, 1), signature: { hash: BinData(0, 95E089B7C5424E83E756C7B5503AC8F8547CCE40), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:11.276+0000 D1 - [conn290] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:11.276+0000 W - [conn290] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.280+0000 I COMMAND [conn293] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 9E26572A5146BC8E6E3FE400B47A1E3317EB4F1E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:32:11.280+0000 D1 - [conn293] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:11.280+0000 W - [conn293] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.280+0000 I COMMAND [conn300] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578691, 1), signature: { hash: BinData(0, E866C51399BFEA5AC401BAD9014663E2B59AFDDF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:11.280+0000 D1 - [conn300] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:11.280+0000 W - [conn300] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.282+0000 I COMMAND [conn289] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578692, 1), signature: { hash: BinData(0, 14C73FD082AC31B9D9EA409E8C477DA8DEE8FE15), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:11.282+0000 D1 - [conn289] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:11.282+0000 W - [conn289] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.291+0000 I COMMAND [conn288] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 530FD6D8E211D30BDB1163BF3162E191BBCC422A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:11.291+0000 D1 - [conn288] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:11.291+0000 W - [conn288] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.292+0000 I - [conn290] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:11.293+0000 D1 COMMAND [conn290] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578700, 1), signature: { hash: BinData(0, 95E089B7C5424E83E756C7B5503AC8F8547CCE40), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.293+0000 D1 - [conn290] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:11.293+0000 W - [conn290] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:11.306+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:11.316+0000 I NETWORK [listener] connection 
accepted from 10.108.2.63:36356 #318 (87 connections now open) 2019-09-04T06:32:11.316+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:11.317+0000 D2 COMMAND [conn318] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:11.317+0000 I NETWORK [conn318] received client metadata from 10.108.2.63:36356 conn318: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:11.317+0000 I COMMAND [conn318] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:11.319+0000 I - [conn290] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:11.320+0000 W COMMAND [conn290] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:11.320+0000 I COMMAND [conn290] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578700, 1), signature: { hash: BinData(0, 95E089B7C5424E83E756C7B5503AC8F8547CCE40), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:32:11.320+0000 D2 NETWORK [conn290] Session from 10.108.2.48:42132 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:11.320+0000 I NETWORK [conn290] end connection 10.108.2.48:42132 (86 connections now open) 2019-09-04T06:32:11.324+0000 I NETWORK [listener] connection accepted from 10.108.2.61:37986 #319 (87 connections now open) 2019-09-04T06:32:11.324+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:11.324+0000 D2 COMMAND [conn319] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:11.324+0000 I NETWORK [conn319] received client metadata from 10.108.2.61:37986 conn319: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:11.324+0000 I COMMAND [conn319] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:11.326+0000 I COMMAND [conn277] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578692, 1), signature: { hash: BinData(0, 14C73FD082AC31B9D9EA409E8C477DA8DEE8FE15), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:11.326+0000 D1 - [conn277] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:11.327+0000 W - [conn277] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.330+0000 I COMMAND [conn276] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578691, 1), signature: { hash: BinData(0, E866C51399BFEA5AC401BAD9014663E2B59AFDDF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:11.331+0000 D1 - [conn276] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:11.331+0000 W - [conn276] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.337+0000 I - [conn289] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:11.337+0000 D1 COMMAND [conn289] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578692, 1), signature: { hash: BinData(0, 14C73FD082AC31B9D9EA409E8C477DA8DEE8FE15), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.337+0000 D1 - [conn289] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:11.337+0000 W - [conn289] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.348+0000 I - [conn288] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2
511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" 
: "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] 
----- END BACKTRACE ----- 2019-09-04T06:32:11.348+0000 D1 COMMAND [conn288] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 530FD6D8E211D30BDB1163BF3162E191BBCC422A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.348+0000 D1 - [conn288] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:11.348+0000 W - [conn288] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:11.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:11.365+0000 I - [conn277] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine
15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:11.365+0000 D1 COMMAND [conn277] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578692, 1), signature: { hash: BinData(0, 14C73FD082AC31B9D9EA409E8C477DA8DEE8FE15), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.365+0000 D1 - [conn277] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:11.365+0000 W - [conn277] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.376+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:11.385+0000 I - [conn289] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:11.385+0000 W COMMAND [conn289] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:11.385+0000 I COMMAND [conn289] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
2019-09-04T06:32:11.385+0000 I COMMAND [conn289] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578692, 1), signature: { hash: BinData(0, 14C73FD082AC31B9D9EA409E8C477DA8DEE8FE15), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30067ms
2019-09-04T06:32:11.385+0000 D2 NETWORK [conn289] Session from 10.108.2.72:45770 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:32:11.385+0000 I NETWORK [conn289] end connection 10.108.2.72:45770 (86 connections now open)
2019-09-04T06:32:11.402+0000 I - [conn300] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
[duplicate backtrace omitted: frame-for-frame identical to the conn277 waitForReadConcern backtrace above, including the same processInfo and shared-library map]
----- END BACKTRACE -----
2019-09-04T06:32:11.402+0000 D1 COMMAND [conn300] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578691, 1), signature: { hash: BinData(0, E866C51399BFEA5AC401BAD9014663E2B59AFDDF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:11.402+0000 D1 - [conn300] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:32:11.402+0000 W - [conn300] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:11.422+0000 I - [conn288] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
[duplicate backtrace omitted: frame-for-frame identical to the conn289 GlobalLock/completeAndLogOperation backtrace above, including the same processInfo and shared-library map]
----- END BACKTRACE -----
2019-09-04T06:32:11.422+0000 W COMMAND [conn288] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:32:11.422+0000 I COMMAND [conn288] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 530FD6D8E211D30BDB1163BF3162E191BBCC422A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30069ms
2019-09-04T06:32:11.422+0000 D2 NETWORK [conn288] Session from 10.108.2.57:34278 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:32:11.422+0000 I NETWORK [conn288] end connection 10.108.2.57:34278 (85 connections now open)
2019-09-04T06:32:11.442+0000 I - [conn277] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
[duplicate backtrace omitted: frame-for-frame identical to the conn289 GlobalLock/completeAndLogOperation backtrace above, including the same processInfo and shared-library map]
----- END BACKTRACE -----
2019-09-04T06:32:11.442+0000 W COMMAND [conn277] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:32:11.442+0000 I COMMAND [conn277] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578692, 1), signature: { hash: BinData(0, 14C73FD082AC31B9D9EA409E8C477DA8DEE8FE15), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30050ms
2019-09-04T06:32:11.442+0000 D2 NETWORK [conn277] Session from 10.108.2.63:36324 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:32:11.442+0000 I NETWORK [conn277] end connection 10.108.2.63:36324 (84 connections now open)
2019-09-04T06:32:11.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:11.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:11.461+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:11.461+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:11.461+0000 D2 COMMAND [conn317] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578722, 1), signature: { hash: BinData(0, A7DB2B2BD110626557891F82D1C539FE660A4A4A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:32:11.461+0000 D1 REPL [conn317] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578728, 2), t: 1 }
2019-09-04T06:32:11.461+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000
2019-09-04T06:32:11.463+0000 I - [conn300] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:11.463+0000 W COMMAND [conn300] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:11.463+0000 I COMMAND [conn300] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578691, 1), signature: { hash: BinData(0, E866C51399BFEA5AC401BAD9014663E2B59AFDDF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30135ms 2019-09-04T06:32:11.463+0000 D2 NETWORK [conn300] Session from 10.108.2.55:36702 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:11.463+0000 I NETWORK [conn300] end connection 10.108.2.55:36702 (83 connections now open) 2019-09-04T06:32:11.466+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:11.466+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:11.473+0000 D2 COMMAND [conn299] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578721, 1), signature: { hash: BinData(0, F2F103EFE4A77E8DC9C0F29C8D39A027024FF7E0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:11.473+0000 D1 REPL [conn299] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578728, 2), t: 1 } 2019-09-04T06:32:11.473+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000 2019-09-04T06:32:11.476+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:11.479+0000 I - [conn276] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:11.479+0000 D1 COMMAND [conn276] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578691, 1), signature: { hash: BinData(0, E866C51399BFEA5AC401BAD9014663E2B59AFDDF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.479+0000 D1 - [conn276] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:11.479+0000 W - [conn276] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.495+0000 I - [conn293] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 
0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : 
"/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, 
"buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:11.495+0000 D1 COMMAND [conn293] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 9E26572A5146BC8E6E3FE400B47A1E3317EB4F1E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.495+0000 D1 - [conn293] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:11.495+0000 W - [conn293] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:11.498+0000 D2 COMMAND [conn275] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { 
ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, B284AA1DA8C9D75820A3CFCF85E9C4196C90E245), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:11.498+0000 D1 REPL [conn275] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578728, 2), t: 1 } 2019-09-04T06:32:11.498+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000 2019-09-04T06:32:11.514+0000 I - [conn276] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"
10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : 
"BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:11.514+0000 W COMMAND [conn276] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:11.515+0000 I COMMAND [conn276] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578691, 1), signature: { hash: BinData(0, E866C51399BFEA5AC401BAD9014663E2B59AFDDF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30160ms 2019-09-04T06:32:11.515+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:11.515+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:11.515+0000 D2 NETWORK [conn276] Session from 10.108.2.61:37954 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:11.515+0000 I NETWORK [conn276] end connection 10.108.2.61:37954 (82 connections now open) 2019-09-04T06:32:11.519+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:11.519+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:11.525+0000 D2 COMMAND [conn319] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578721, 1), signature: { hash: BinData(0, F2F103EFE4A77E8DC9C0F29C8D39A027024FF7E0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:11.525+0000 D1 REPL [conn319] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578728, 2), t: 1 } 2019-09-04T06:32:11.525+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000 2019-09-04T06:32:11.535+0000 I - [conn293] 
0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : 
"E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : 
"/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) 
[0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:32:11.535+0000 W COMMAND [conn293] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:32:11.535+0000 I COMMAND [conn293] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578695, 1), signature: { hash: BinData(0, 9E26572A5146BC8E6E3FE400B47A1E3317EB4F1E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30227ms
2019-09-04T06:32:11.535+0000 D2 NETWORK [conn293] Session from 10.108.2.74:51818 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:32:11.535+0000 I NETWORK [conn293] end connection 10.108.2.74:51818 (81 connections now open)
2019-09-04T06:32:11.556+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:11.556+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:11.556+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578728, 2)
2019-09-04T06:32:11.556+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13771
2019-09-04T06:32:11.556+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:11.556+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:11.556+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13771
2019-09-04T06:32:11.557+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13774
2019-09-04T06:32:11.557+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13774
2019-09-04T06:32:11.557+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578728, 2), t: 1 }({ ts: Timestamp(1567578728, 2), t: 1 })
2019-09-04T06:32:11.576+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:11.676+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:11.677+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:11.677+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:11.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:11.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:11.776+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:11.795+0000 D2 COMMAND [conn81] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578715, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578715, 2), t: 1 } }, $db: "config" }
2019-09-04T06:32:11.795+0000 D1 COMMAND [conn81] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578715, 2), t: 1 } } }
2019-09-04T06:32:11.795+0000 D3 STORAGE [conn81] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:32:11.795+0000 D1 COMMAND [conn81] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578715, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578715, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578728, 2)
2019-09-04T06:32:11.796+0000 D5 QUERY [conn81] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:11.796+0000 D5 QUERY [conn81] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:32:11.796+0000 D5 QUERY [conn81] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:32:11.796+0000 D5 QUERY [conn81] Rated tree: $and
2019-09-04T06:32:11.796+0000 D5 QUERY [conn81] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:11.796+0000 D5 QUERY [conn81] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:11.796+0000 D2 QUERY [conn81] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:11.796+0000 D3 STORAGE [conn81] WT begin_transaction for snapshot id 13778
2019-09-04T06:32:11.796+0000 D3 STORAGE [conn81] WT rollback_transaction for snapshot id 13778
2019-09-04T06:32:11.796+0000 I COMMAND [conn81] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578715, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 2), signature: { hash: BinData(0, C6584B377E4F7874DD9DB6DB72617E1CB4A639AF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578715, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:32:11.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:11.806+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:11.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:11.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:11.876+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:11.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:11.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:11.960+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:11.961+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:11.966+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:11.966+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:11.977+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:12.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:12.014+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:12.014+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:12.018+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:12.018+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:12.077+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:12.177+0000 D4 STORAGE [WTJournalFlusher] flushed journal
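The conn81 find above carries no filter, so the planner rates zero indexed solutions against the host_1 and _id_ indexes and settles on a collection scan over the three shard documents. A minimal shell sketch to reproduce the plan choice (an illustrative session against this config server, not taken from the log):

    // An empty-filter find cannot use host_1 or _id_, so the winning plan
    // in the queryPlanner output should be a COLLSCAN, as logged above.
    db.getSiblingDB("config").shards.find({}).explain("queryPlanner")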
2019-09-04T06:32:12.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:12.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:12.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578730, 1), signature: { hash: BinData(0, F255CDF09A97EDC2D0717DBACFBFC09551B9999A), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:12.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:32:12.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578730, 1), signature: { hash: BinData(0, F255CDF09A97EDC2D0717DBACFBFC09551B9999A), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:12.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578730, 1), signature: { hash: BinData(0, F255CDF09A97EDC2D0717DBACFBFC09551B9999A), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:12.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547) }
2019-09-04T06:32:12.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578730, 1), signature: { hash: BinData(0, F255CDF09A97EDC2D0717DBACFBFC09551B9999A), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:12.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:12.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
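The FlowControlRefresher ticker above is idle: with no majority-commit lag to throttle, the once-a-second refresh leaves the ticket pool at its 1000000000 ceiling. A hedged sketch for inspecting the same mechanism from a shell (the flowControl section of serverStatus and the enableFlowControl server parameter are 4.2 features):

    // Reports enabled, isLagged, targetRateLimit, etc.
    db.adminCommand({ serverStatus: 1 }).flowControl
    // The feature can be inspected or toggled via this server parameter.
    db.adminCommand({ getParameter: 1, enableFlowControl: 1 })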
2019-09-04T06:32:12.277+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:12.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:12.307+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:12.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:12.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:12.377+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:12.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:12.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:12.477+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:12.556+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:12.556+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:12.556+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578728, 2)
2019-09-04T06:32:12.556+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13794
2019-09-04T06:32:12.556+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:12.556+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:12.556+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13794
2019-09-04T06:32:12.557+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13797
2019-09-04T06:32:12.557+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13797
2019-09-04T06:32:12.557+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578728, 2), t: 1 }({ ts: Timestamp(1567578728, 2), t: 1 })
2019-09-04T06:32:12.577+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:12.678+0000 D4 STORAGE [WTJournalFlusher] flushed journal
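Each ReplBatcher pass above re-reads the catalog entry for local.oplog.rs: a capped, index-less collection of 1073741824 bytes (a 1 GiB oplog). A short illustrative check of the same options from a shell:

    // Expected to show { uuid: ..., capped: true, size: 1073741824,
    // autoIndexId: false }, matching the CCE metadata fetched above.
    db.getSiblingDB("local").getCollectionInfos({ name: "oplog.rs" })[0].options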
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, 1171ABEBFE541B8950624F669563FA93C6C7DEA2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:32:12.697+0000 D1 - [conn278] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:12.697+0000 W - [conn278] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:12.714+0000 I - [conn278] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:12.714+0000 D1 COMMAND [conn278] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, 1171ABEBFE541B8950624F669563FA93C6C7DEA2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:12.714+0000 D1 - [conn278] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:32:12.714+0000 W - [conn278] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:12.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:12.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:12.734+0000 I - [conn278] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{
"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : 
"E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:32:12.734+0000 W COMMAND [conn278] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:32:12.734+0000 I COMMAND [conn278] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, 1171ABEBFE541B8950624F669563FA93C6C7DEA2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms
2019-09-04T06:32:12.734+0000 D2 NETWORK [conn278] Session from 10.108.2.59:48372 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:32:12.734+0000 I NETWORK [conn278] end connection 10.108.2.59:48372 (80 connections now open)
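conn293 earlier and conn278 here fail identically: an internal find on admin.system.keys (the HMAC keys used to sign $clusterTime) requests readConcern majority after an opTime from term 92, and with the set now in term 1 that majority snapshot never arrives, so each attempt waits out its full 30000 ms maxTimeMS and returns MaxTimeMSExpired. A hedged shell sketch of the equivalent command shape (afterOpTime is an internal readConcern field, so it is omitted here):

    db.getSiblingDB("admin").runCommand({
      find: "system.keys",
      filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } },
      sort: { expiresAt: 1 },
      readConcern: { level: "majority" },
      maxTimeMS: 30000
    })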
2019-09-04T06:32:12.778+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:12.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:12.806+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:12.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:12.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 925) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:12.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 925 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:22.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:12.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:40.839+0000
2019-09-04T06:32:12.838+0000 D2 ASIO [Replication] Request 925 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578730, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) }
2019-09-04T06:32:12.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578730, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:12.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:12.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 925) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578730, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) }
2019-09-04T06:32:12.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:32:12.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:14.838Z
2019-09-04T06:32:12.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:40.839+0000
2019-09-04T06:32:12.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:12.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 926) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:12.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 926 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:22.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:12.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:40.839+0000
2019-09-04T06:32:12.839+0000 D2 ASIO [Replication] Request 926 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578730, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) }
2019-09-04T06:32:12.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578730, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:32:12.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:12.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 926) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578730, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) }
2019-09-04T06:32:12.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:32:12.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:32:22.263+0000
2019-09-04T06:32:12.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:32:23.945+0000
2019-09-04T06:32:12.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:32:12.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:14.839Z
2019-09-04T06:32:12.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:12.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:42.839+0000
2019-09-04T06:32:12.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:42.839+0000
2019-09-04T06:32:12.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:12.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:12.878+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:12.880+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47240 #320 (81 connections now open)
2019-09-04T06:32:12.880+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:32:12.880+0000 D2 COMMAND [conn320] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:32:12.880+0000 I NETWORK [conn320] received client metadata from 10.108.2.52:47240 conn320: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:32:12.880+0000 I COMMAND [conn320] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
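In the exchange above, each heartbeat reply from the primary (cmodb802, state: 1) postpones this secondary's election timeout, pushing the callback out by roughly 11 s, consistent with the 10000 ms default electionTimeoutMillis plus a randomized offset. An illustrative way to read that setting from a shell:

    // settings.electionTimeoutMillis defaults to 10000
    rs.conf().settings.electionTimeoutMillis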
2019-09-04T06:32:12.883+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:12.884+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:12.896+0000 I COMMAND [conn302] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, ECF4B5774F4DA596CBFAEF2B306A437976FBCFAD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:32:12.896+0000 D1 - [conn302] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:32:12.896+0000 W - [conn302] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:12.913+0000 I - [conn302] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C
"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, 
"buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] 
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:12.913+0000 D1 COMMAND [conn302] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, ECF4B5774F4DA596CBFAEF2B306A437976FBCFAD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:12.913+0000 D1 - [conn302] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:12.913+0000 W - [conn302] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:12.933+0000 I - [conn302] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvv
EENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : 
"084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:12.933+0000 W COMMAND [conn302] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:32:12.933+0000 I COMMAND [conn302] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, ECF4B5774F4DA596CBFAEF2B306A437976FBCFAD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms 2019-09-04T06:32:12.934+0000 D2 NETWORK [conn302] Session from 10.108.2.52:47226 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:12.934+0000 I NETWORK [conn302] end connection 10.108.2.52:47226 (80 connections now open) 2019-09-04T06:32:12.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:12.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:12.978+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:13.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:13.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578731, 1), signature: { hash: BinData(0, 14E10638C2FEBDBA38D4C6E68B44E16C8FF35A2A), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:13.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:13.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: 
Timestamp(1567578731, 1), signature: { hash: BinData(0, 14E10638C2FEBDBA38D4C6E68B44E16C8FF35A2A), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:13.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578731, 1), signature: { hash: BinData(0, 14E10638C2FEBDBA38D4C6E68B44E16C8FF35A2A), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:13.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547) } 2019-09-04T06:32:13.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578731, 1), signature: { hash: BinData(0, 14E10638C2FEBDBA38D4C6E68B44E16C8FF35A2A), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:13.078+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:13.099+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49250 #321 (81 connections now open) 2019-09-04T06:32:13.099+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:13.099+0000 D2 COMMAND [conn321] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:13.099+0000 I NETWORK [conn321] received client metadata from 10.108.2.54:49250 conn321: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:13.099+0000 I COMMAND [conn321] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:13.099+0000 D2 COMMAND [conn321] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 1), signature: { hash: BinData(0, D40B466E4FDFBEAA54D0D3B68CF775A1034DDCB7), keyId: 6690867815131381761 } }, 
$configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:13.099+0000 D1 REPL [conn321] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578728, 2), t: 1 } 2019-09-04T06:32:13.099+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000 2019-09-04T06:32:13.178+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:13.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:13.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:13.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:13.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:32:13.279+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:13.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:13.307+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:13.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:13.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:13.379+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:13.383+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:13.383+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:13.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:13.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:13.479+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:13.557+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:13.557+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:13.557+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578728, 2) 2019-09-04T06:32:13.557+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13815 2019-09-04T06:32:13.557+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:13.557+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:13.557+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13815 2019-09-04T06:32:13.557+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13818 2019-09-04T06:32:13.557+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13818 
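[Annotation] The waitUntilOpTime trace just above appears to be the root of the repeated MaxTimeMSExpired assertions in this log: the incoming find on admin.system.keys carries readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, while the node's newest majority snapshot is { ts: Timestamp(1567578728, 2), t: 1 }. Optimes compare by term before timestamp, so an afterOpTime in term 92 cannot be satisfied while the set remains in term 1, and each attempt waits out its full 30000ms maxTimeMS before failing with code 50. The snippet below is a minimal client-side sketch of the same query shape, assuming only that pymongo is installed and that this node (cmodb803.togewa.com:27019, per the startup banner) is reachable; the internal-only fields in the logged command ($replData, readConcern.afterOpTime, $configServerState) are not exposed by drivers and are omitted, so this reproduces the query and its timeout handling, not the internal optime wait itself.

    # Hedged sketch: re-issue the logged find on admin.system.keys with a
    # majority read concern and the same 30s time limit. Host/port and the
    # filter values come from this log; everything else is illustrative.
    from bson.timestamp import Timestamp
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient("cmodb803.togewa.com", 27019)
    keys = client.admin.get_collection("system.keys",
                                       read_concern=ReadConcern("majority"))
    filt = {"purpose": "HMAC", "expiresAt": {"$gt": Timestamp(1579858365, 0)}}
    try:
        cursor = keys.find(filt).sort("expiresAt", 1).max_time_ms(30000)
        print(list(cursor))
    except ExecutionTimeout:
        # Server-side MaxTimeMSExpired (errCode:50 in the log) surfaces in
        # pymongo as ExecutionTimeout.
        print("operation exceeded time limit")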
2019-09-04T06:32:13.557+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578728, 2), t: 1 }({ ts: Timestamp(1567578728, 2), t: 1 }) 2019-09-04T06:32:13.560+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:13.560+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:13.560+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 927 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:43.560+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:13.560+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.557+0000 2019-09-04T06:32:13.560+0000 D2 ASIO [RS] Request 927 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } 2019-09-04T06:32:13.560+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, 
lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:13.560+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:13.560+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.557+0000 2019-09-04T06:32:13.560+0000 D2 ASIO [RS] Request 920 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpApplied: { ts: Timestamp(1567578728, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } 2019-09-04T06:32:13.560+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpApplied: { ts: Timestamp(1567578728, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:13.560+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:13.560+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:13.560+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:23.945+0000 2019-09-04T06:32:13.560+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:25.050+0000 2019-09-04T06:32:13.560+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:13.560+0000 D3 EXECUTOR [replexec-3] Not reaping because the 
earliest retirement date is 2019-09-04T06:32:42.839+0000 2019-09-04T06:32:13.560+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 928 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:23.560+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578728, 2), t: 1 } } 2019-09-04T06:32:13.560+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:38.557+0000 2019-09-04T06:32:13.579+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:13.583+0000 I NETWORK [listener] connection accepted from 10.108.2.62:53500 #322 (82 connections now open) 2019-09-04T06:32:13.583+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:13.583+0000 D2 COMMAND [conn322] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:13.583+0000 I NETWORK [conn322] received client metadata from 10.108.2.62:53500 conn322: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:13.583+0000 I COMMAND [conn322] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:13.605+0000 I COMMAND [conn303] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, ECF4B5774F4DA596CBFAEF2B306A437976FBCFAD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:13.605+0000 D1 - [conn303] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:13.605+0000 W - [conn303] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:13.622+0000 I - [conn303] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:13.622+0000 D1 COMMAND [conn303] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, ECF4B5774F4DA596CBFAEF2B306A437976FBCFAD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:13.622+0000 D1 - [conn303] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:13.622+0000 W - [conn303] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:13.642+0000 I - [conn303] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextW
ithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" 
}, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:13.642+0000 W COMMAND [conn303] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:32:13.642+0000 I COMMAND [conn303] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, ECF4B5774F4DA596CBFAEF2B306A437976FBCFAD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:32:13.642+0000 D2 NETWORK [conn303] Session from 10.108.2.62:53484 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:13.642+0000 I NETWORK [conn303] end connection 10.108.2.62:53484 (81 connections now open) 2019-09-04T06:32:13.679+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:13.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:13.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:13.779+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:13.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:13.807+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:13.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:13.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:13.879+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:13.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:13.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:13.979+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
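[Annotation] Both backtrace formats above describe the same frames: the JSON "backtrace" array gives, per frame, the base address "b" of the containing module, the offset "o" into it, and the mangled symbol "s" when one was resolved, while the plain-text mongod(...) [0x...] lines print the absolute address, which is simply b + o. Bases move between runs under ASLR, but offsets are stable for this exact binary (buildId E8D75D13E92279CB6AF8104353A95729FD262FAB in the somap), so offsets such as the unsymbolized 0xC90D34 are what you would resolve with a tool like addr2line against a matching mongod with debug symbols. Note also that the second backtrace on each connection (thrown from CurOp::completeAndLogOperation via Lock::GlobalLock at lock_state.cpp 884) is a follow-on failure: the slow-op logger cannot take the global lock within the already-expired deadline, hence the "Unable to gather storage statistics" warning. A small sanity check of the base-plus-offset relationship, using the first frame of the backtrace above:

    # Sketch: verify that a plain-text frame address equals the JSON frame's
    # module base plus its offset. Values copied verbatim from the log above.
    frame = {"b": "561748F88000", "o": "277FC81",
             "s": "_ZN5mongo15printStackTraceERSo"}
    addr = int(frame["b"], 16) + int(frame["o"], 16)
    # Matches mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
    assert addr == 0x56174B707C81
    print(hex(addr), frame["s"])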
2019-09-04T06:32:14.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:14.080+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:14.180+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:14.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:14.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:14.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 830ED4A8A67663B9BD82EDCDB43D7E853C74F525), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:14.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:14.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 830ED4A8A67663B9BD82EDCDB43D7E853C74F525), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:14.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 830ED4A8A67663B9BD82EDCDB43D7E853C74F525), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:14.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547) } 2019-09-04T06:32:14.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 830ED4A8A67663B9BD82EDCDB43D7E853C74F525), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:14.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:14.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:14.280+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:14.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:14.307+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:14.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:14.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:14.380+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:14.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:14.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:14.480+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:14.557+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:14.557+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:14.557+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578728, 2) 2019-09-04T06:32:14.557+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13833 2019-09-04T06:32:14.557+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:14.557+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:14.557+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13833 2019-09-04T06:32:14.557+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13836 2019-09-04T06:32:14.558+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13836 2019-09-04T06:32:14.558+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578728, 2), t: 1 }({ ts: Timestamp(1567578728, 2), t: 1 }) 2019-09-04T06:32:14.580+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:14.680+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:14.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:14.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:14.781+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:14.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:14.807+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:14.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:14.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 929) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:14.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 929 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:24.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:14.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:42.839+0000 2019-09-04T06:32:14.838+0000 D2 ASIO [Replication] Request 929 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } 2019-09-04T06:32:14.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:14.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:14.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 929) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } 2019-09-04T06:32:14.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:14.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:16.838Z 2019-09-04T06:32:14.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:42.839+0000 2019-09-04T06:32:14.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:14.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 930) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:14.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 930 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:24.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:14.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:42.839+0000 2019-09-04T06:32:14.839+0000 D2 ASIO [Replication] Request 930 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } 2019-09-04T06:32:14.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:14.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:14.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 930) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new 
Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578728, 2) } 2019-09-04T06:32:14.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:14.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:32:25.050+0000 2019-09-04T06:32:14.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:32:25.660+0000 2019-09-04T06:32:14.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:14.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:16.839Z 2019-09-04T06:32:14.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:14.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:44.839+0000 2019-09-04T06:32:14.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:44.839+0000 2019-09-04T06:32:14.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:14.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:14.881+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:14.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:14.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:14.981+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:15.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:15.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:15.054+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:15.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 830ED4A8A67663B9BD82EDCDB43D7E853C74F525), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:15.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:15.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", 
fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 830ED4A8A67663B9BD82EDCDB43D7E853C74F525), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:15.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 830ED4A8A67663B9BD82EDCDB43D7E853C74F525), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:15.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), opTime: { ts: Timestamp(1567578728, 2), t: 1 }, wallTime: new Date(1567578728547) } 2019-09-04T06:32:15.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 830ED4A8A67663B9BD82EDCDB43D7E853C74F525), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:15.081+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:15.143+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:15.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:15.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:15.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:15.181+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:15.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:15.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:15.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:15.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:15.281+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:15.285+0000 D2 ASIO [RS] Request 928 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578735, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578735264), o: { $v: 1, $set: { ping: new Date(1567578735260), up: 2635 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpApplied: { ts: Timestamp(1567578735, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578735, 1) } 2019-09-04T06:32:15.285+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578735, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578735264), o: { $v: 1, $set: { ping: new Date(1567578735260), up: 2635 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpApplied: { ts: Timestamp(1567578735, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578735, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:15.285+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:15.285+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578735, 1) and ending at ts: Timestamp(1567578735, 1) 2019-09-04T06:32:15.285+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:25.660+0000 2019-09-04T06:32:15.285+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:25.483+0000 2019-09-04T06:32:15.285+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:15.285+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:44.839+0000 2019-09-04T06:32:15.285+0000 D3 REPL [replication-1] batch resetting 
_lastOpTimeFetched: { ts: Timestamp(1567578735, 1), t: 1 } 2019-09-04T06:32:15.285+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:15.285+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:15.285+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578728, 2) 2019-09-04T06:32:15.285+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13850 2019-09-04T06:32:15.285+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:15.285+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:15.285+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13850 2019-09-04T06:32:15.285+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:15.285+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:32:15.285+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:15.285+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578728, 2) 2019-09-04T06:32:15.285+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13853 2019-09-04T06:32:15.285+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:15.285+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:15.285+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578735, 1) } 2019-09-04T06:32:15.285+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13853 2019-09-04T06:32:15.285+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13837 2019-09-04T06:32:15.285+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13837 2019-09-04T06:32:15.285+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13856 2019-09-04T06:32:15.285+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13856 2019-09-04T06:32:15.285+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:15.285+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 13858 2019-09-04T06:32:15.285+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578735, 1) 2019-09-04T06:32:15.285+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578735, 1) 2019-09-04T06:32:15.285+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 13858 2019-09-04T06:32:15.285+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); 
the minimum number of threads is 16 2019-09-04T06:32:15.285+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:32:15.286+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13857 2019-09-04T06:32:15.286+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13857 2019-09-04T06:32:15.286+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13860 2019-09-04T06:32:15.286+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13860 2019-09-04T06:32:15.286+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578735, 1), t: 1 }({ ts: Timestamp(1567578735, 1), t: 1 }) 2019-09-04T06:32:15.286+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578735, 1) 2019-09-04T06:32:15.286+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13861 2019-09-04T06:32:15.286+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578735, 1) } } ] } sort: {} projection: {} 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578735, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578735, 1) || First: notFirst: full path: ts 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578735, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
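The D5 QUERY entries around this point trace the subplanner splitting the $or over local.replset.minvalid into two children and, with only the _id index available, settling on a collection scan for each (the trace continues below). The same plan selection can be inspected with explain(); a sketch under the assumption of a direct shell connection to this node, with the $or predicate copied from the canonical query above:

    // Ask the planner for its chosen plan over local.replset.minvalid;
    // with only the _id index present, both $or branches should resolve
    // to COLLSCAN, matching the "outputting a collscan" lines in this log.
    db.getSiblingDB("local").getCollection("replset.minvalid").find({
      $or: [
        { t: { $lt: 1 } },
        { t: 1, ts: { $lt: Timestamp(1567578735, 1) } }
      ]
    }).explain("queryPlanner")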
2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578735, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578735, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578735, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:15.286+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13861 2019-09-04T06:32:15.286+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:15.286+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:15.286+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578735, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578735264), o: { $v: 1, $set: { ping: new Date(1567578735260), up: 2635 } } }, oplog application mode: Secondary 2019-09-04T06:32:15.286+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578735, 1) 2019-09-04T06:32:15.286+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 13863 2019-09-04T06:32:15.286+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:32:15.286+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:32:15.286+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 13863 2019-09-04T06:32:15.286+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:15.286+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578735, 1), t: 1 }({ ts: Timestamp(1567578735, 1), t: 1 }) 2019-09-04T06:32:15.286+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578735, 1) 2019-09-04T06:32:15.286+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13862 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:15.286+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:15.286+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:15.286+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13862 2019-09-04T06:32:15.286+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578735, 1) 2019-09-04T06:32:15.286+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:15.286+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13866 2019-09-04T06:32:15.286+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13866 2019-09-04T06:32:15.286+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:15.287+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 931 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:45.287+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: 
Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:15.287+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:45.286+0000 2019-09-04T06:32:15.286+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578735, 1), t: 1 }({ ts: Timestamp(1567578735, 1), t: 1 }) 2019-09-04T06:32:15.287+0000 D2 ASIO [RS] Request 931 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578735, 1) } 2019-09-04T06:32:15.287+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578728, 2), $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578735, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:15.287+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:15.287+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:45.287+0000 2019-09-04T06:32:15.287+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578735, 1), t: 1 } 2019-09-04T06:32:15.287+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 932 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:25.287+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578728, 2), t: 1 } } 2019-09-04T06:32:15.287+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:45.287+0000 2019-09-04T06:32:15.288+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:15.288+0000 D2 ASIO [RS] Request 932 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpApplied: { ts: Timestamp(1567578735, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, 
$gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578735, 1), $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578735, 1) } 2019-09-04T06:32:15.288+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:15.288+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpApplied: { ts: Timestamp(1567578735, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578735, 1), $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578735, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:15.288+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:15.289+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 933 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:45.289+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, durableWallTime: new Date(1567578728547), appliedOpTime: { ts: Timestamp(1567578728, 2), t: 1 }, appliedWallTime: new Date(1567578728547), memberId: 2, cfgver: 2 } ], $replData: { term: 1, 
lastOpCommitted: { ts: Timestamp(1567578728, 2), t: 1 }, lastCommittedWall: new Date(1567578728547), lastOpVisible: { ts: Timestamp(1567578728, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:15.289+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:15.289+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:15.289+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578735, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a6f02d1a496712d723c'), operName: "", parentOperId: "5d6f5a6f02d1a496712d723a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 4E6C68D9A13249114A9A68C6253AD2637BDB74C0), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578735, 1), t: 1 } }, $db: "config" } 2019-09-04T06:32:15.289+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578730, 1) 2019-09-04T06:32:15.289+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:25.483+0000 2019-09-04T06:32:15.289+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:25.442+0000 2019-09-04T06:32:15.289+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f5a6f02d1a496712d723a|5d6f5a6f02d1a496712d723c 2019-09-04T06:32:15.289+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578735, 1), t: 1 } } } 2019-09-04T06:32:15.289+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:32:15.289+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578735, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a6f02d1a496712d723c'), operName: "", parentOperId: "5d6f5a6f02d1a496712d723a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 4E6C68D9A13249114A9A68C6253AD2637BDB74C0), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578735, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578735, 1) 2019-09-04T06:32:15.289+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:45.288+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn308] Got notified of new snapshot: { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn308] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000 2019-09-04T06:32:15.289+0000 D2 QUERY [conn21] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:32:15.289+0000 D3 REPL [conn309] Got notified of new snapshot: { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn309] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.423+0000 2019-09-04T06:32:15.289+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:15.289+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:44.839+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 934 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:25.289+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578735, 1), t: 1 } } 2019-09-04T06:32:15.289+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn282] Got notified of new snapshot: { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn282] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.424+0000 2019-09-04T06:32:15.289+0000 D2 ASIO [RS] Request 933 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578735, 1), $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578735, 1) } 2019-09-04T06:32:15.289+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578735, 1), $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), 
keyId: 0 } }, operationTime: Timestamp(1567578735, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:15.289+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000 2019-09-04T06:32:15.289+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:45.288+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000 2019-09-04T06:32:15.289+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:15.289+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:32:15.289+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:45.288+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn284] Got notified of new snapshot: { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn284] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.440+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn307] Got notified of new snapshot: { ts: Timestamp(1567578735, 1), t: 1 }, 2019-09-04T06:32:15.264+0000 2019-09-04T06:32:15.289+0000 D3 REPL [conn307] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:16.417+0000 2019-09-04T06:32:15.289+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578735, 1), t: 1 } }, limit: 1, 
maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a6f02d1a496712d723c'), operName: "", parentOperId: "5d6f5a6f02d1a496712d723a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 4E6C68D9A13249114A9A68C6253AD2637BDB74C0), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578735, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:32:15.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:15.306+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:15.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:15.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:15.381+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:15.385+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578735, 1)
2019-09-04T06:32:15.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:15.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:15.482+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:15.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:15.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:15.582+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:15.643+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:15.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:15.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:15.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:15.682+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:15.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:15.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:15.782+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:15.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:15.807+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:15.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:15.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:15.882+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:15.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:15.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:15.982+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:16.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:16.038+0000 D2 COMMAND [conn285] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:16.038+0000 I COMMAND [conn285] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:16.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:16.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:16.083+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:16.143+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:16.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:16.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:16.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:16.183+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:16.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:16.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:16.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 4E6C68D9A13249114A9A68C6253AD2637BDB74C0), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:16.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:32:16.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 4E6C68D9A13249114A9A68C6253AD2637BDB74C0), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:16.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 4E6C68D9A13249114A9A68C6253AD2637BDB74C0), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:16.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), opTime: { ts: Timestamp(1567578735, 1), t: 1 }, wallTime: new Date(1567578735264) }
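
The replSetHeartbeat exchange above is the replica set's liveness protocol: member 2 (cmodb804) probes this node roughly every two seconds, and the response reports state: 2 (SECONDARY), the current sync source (syncingTo) and the node's applied/durable opTimes. The same view is available interactively; a minimal mongo-shell sketch, run against any configrs member (field names as in 4.2):

    // stateStr, syncingTo and optime mirror the fields carried in the
    // heartbeat response logged above.
    var s = db.adminCommand({ replSetGetStatus: 1 });
    s.members.forEach(function (m) {
        print(m.name, m.stateStr, m.syncingTo || "-", tojson(m.optime));
    });
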
2019-09-04T06:32:16.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 4E6C68D9A13249114A9A68C6253AD2637BDB74C0), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:16.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:16.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:32:16.283+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:16.285+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:16.285+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:16.285+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578735, 1)
2019-09-04T06:32:16.285+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13889
2019-09-04T06:32:16.285+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:16.285+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:16.285+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13889
2019-09-04T06:32:16.287+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13892
2019-09-04T06:32:16.287+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13892
2019-09-04T06:32:16.287+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578735, 1), t: 1 }({ ts: Timestamp(1567578735, 1), t: 1 })
2019-09-04T06:32:16.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:16.306+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:16.344+0000 D2 COMMAND [conn125] run command admin.$cmd { ping: 1, $db: "admin" }
2019-09-04T06:32:16.344+0000 I COMMAND [conn125] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:16.355+0000 D2 COMMAND [conn178] run command config.$cmd { ping: 1.0, lsid: { id: UUID("4ca3bc30-0f16-4335-a15f-3e7d48b5566e") }, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:32:16.355+0000 I COMMAND [conn178] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("4ca3bc30-0f16-4335-a15f-3e7d48b5566e") }, $clusterTime: { clusterTime: Timestamp(1567578675, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
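
The ReplBatcher entries above show the batcher re-reading the catalog entry for local.oplog.rs under a short-lived WiredTiger snapshot: a capped collection of 1073741824 bytes, which is exactly the oplogSizeMB: 1024 from this node's startup options. The same metadata can be confirmed from a shell; a sketch using 4.2 shell helpers:

    // The oplog is an ordinary capped collection in the local database.
    var oplog = db.getSiblingDB("local").getCollection("oplog.rs");
    printjson(oplog.exists().options);   // { capped: true, size: 1073741824, autoIndexId: false }
    printjson(db.getReplicationInfo());  // logSizeMB, usedMB and the time window the oplog covers
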
2019-09-04T06:32:16.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:16.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:16.383+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:16.401+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34314 #323 (82 connections now open)
2019-09-04T06:32:16.401+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:32:16.401+0000 D2 COMMAND [conn323] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:32:16.401+0000 I NETWORK [conn323] received client metadata from 10.108.2.57:34314 conn323: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:32:16.401+0000 I COMMAND [conn323] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:32:16.412+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52206 #324 (83 connections now open)
2019-09-04T06:32:16.412+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:32:16.412+0000 D2 COMMAND [conn324] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:32:16.412+0000 I NETWORK [conn324] received client metadata from 10.108.2.58:52206 conn324: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:32:16.412+0000 I COMMAND [conn324] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
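
Two distinct isMaster patterns are interleaved here. The steady ~500 ms drumbeat on conn5, conn6, conn13, conn17, conn18, conn19 and conn26 is server-discovery monitoring: each mongos/driver monitor thread polls isMaster on its own connection to track topology state. The handshakes on the freshly accepted conn323-conn326, by contrast, carry the one-time client metadata document (driver NetworkInterfaceTL 4.2.0, i.e. other cluster nodes, plus compression offers and internalClient wire versions) that a connection sends only on its first command. Both receive the same topology reply (reslen 907/947 above); a sketch of requesting it by hand:

    // The reply carries the fields monitors poll for.
    var r = db.adminCommand({ isMaster: 1 });
    print(r.ismaster, r.secondary, r.setName, tojson(r.hosts), r.maxWireVersion);
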
2019-09-04T06:32:16.412+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38742 #325 (84 connections now open)
2019-09-04T06:32:16.412+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:32:16.413+0000 D2 COMMAND [conn325] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:32:16.413+0000 I NETWORK [conn325] received client metadata from 10.108.2.44:38742 conn325: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:32:16.413+0000 I COMMAND [conn325] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:32:16.417+0000 I NETWORK [listener] connection accepted from 10.108.2.51:59214 #326 (85 connections now open)
2019-09-04T06:32:16.417+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:32:16.417+0000 D2 COMMAND [conn326] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:32:16.417+0000 I NETWORK [conn326] received client metadata from 10.108.2.51:59214 conn326: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:32:16.417+0000 I COMMAND [conn326] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:32:16.418+0000 I COMMAND [conn307] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 530FD6D8E211D30BDB1163BF3162E191BBCC422A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:32:16.419+0000 D1 - [conn307] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:32:16.419+0000 W - [conn307] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:16.424+0000 I COMMAND [conn308] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578703, 1), signature: { hash: BinData(0, FE5CD33D4FD4AD3E642E53447F4EC0589DAAF02E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:32:16.424+0000 D1 - [conn308] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:32:16.424+0000 W - [conn308] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:16.424+0000 I COMMAND [conn309] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 530FD6D8E211D30BDB1163BF3162E191BBCC422A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:32:16.424+0000 D1 - [conn309] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:32:16.424+0000 W - [conn309] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:16.424+0000 I COMMAND [conn282] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578699, 1), signature: { hash: BinData(0, FDAB2A63E94489DE9A0BA601ABF193CC5AED761D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578699, 1), signature: { hash: BinData(0, FDAB2A63E94489DE9A0BA601ABF193CC5AED761D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:16.424+0000 D1 - [conn282] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:16.424+0000 W - [conn282] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:16.435+0000 I NETWORK [listener] connection accepted from 10.108.2.47:56608 #327 (86 connections now open) 2019-09-04T06:32:16.435+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:16.435+0000 D2 COMMAND [conn327] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:16.435+0000 I NETWORK [conn327] received client metadata from 10.108.2.47:56608 conn327: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:16.435+0000 I COMMAND [conn327] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:16.436+0000 I - [conn307] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:16.436+0000 D1 COMMAND [conn307] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 530FD6D8E211D30BDB1163BF3162E191BBCC422A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:16.436+0000 D1 - [conn307] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:16.436+0000 W - [conn307] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:16.441+0000 I COMMAND [conn284] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, D5CCCC4E77C8D415872AB251DCD35BEAE0CDE47F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:16.442+0000 D1 - [conn284] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:16.442+0000 W - [conn284] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:16.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:16.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:16.454+0000 I - [conn309] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19
ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : 
"45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] 
mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:16.455+0000 D1 COMMAND [conn309] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 530FD6D8E211D30BDB1163BF3162E191BBCC422A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:16.455+0000 D1 - [conn309] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:16.455+0000 W - [conn309] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:16.473+0000 I - [conn308] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mon
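
The queried collection, admin.system.keys, holds the HMAC keys used to sign and validate $clusterTime gossip (the keyId values visible in every signed command above); every node periodically refreshes these keys from the config servers with exactly this find on { purpose: "HMAC", expiresAt: { $gt: <now> } }, which is why the identical query arrives on several connections at once and carries the same stale afterOpTime. The collection can be inspected directly on a CSRS member; a sketch:

    // Signing keys for cluster-time gossip live on the config servers.
    db.getSiblingDB("admin").system.keys
      .find({ purpose: "HMAC", expiresAt: { $gt: Timestamp(1567578735, 0) } })
      .sort({ expiresAt: 1 });
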
2019-09-04T06:32:16.473+0000 D1 COMMAND [conn308] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578703, 1), signature: { hash: BinData(0, FE5CD33D4FD4AD3E642E53447F4EC0589DAAF02E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:16.473+0000 D1 - [conn308] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:32:16.473+0000 W - [conn308] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:16.483+0000 D4 STORAGE [WTJournalFlusher] flushed journal
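
Each victim operation actually asserts twice. The first MaxTimeMSExpired fires inside waitForReadConcern (service_entry_point_mongod.cpp 89); the second fires from lock_state.cpp 884 when the already-expired operation reacquires the global lock in CurOp::completeAndLogOperation to write its own slow-operation log line -- that is what conn284's distinct second backtrace below (frames LockerImpl::lock, Lock::GlobalLock, CurOp::completeAndLogOperation) shows, a logging side effect rather than a new failure. While such operations are still inside their 30-second wait they are visible to currentOp; a sketch of finding (and, if needed, killing) them:

    // Operations stuck waiting for an unsatisfiable read concern.
    db.currentOp({ active: true, secs_running: { $gt: 25 } }).inprog.forEach(function (op) {
        print(op.opid, op.op, op.ns, op.secs_running);
        // db.killOp(op.opid);  // uncomment to abort instead of waiting out maxTimeMS
    });
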
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:16.512+0000 D1 COMMAND [conn284] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, D5CCCC4E77C8D415872AB251DCD35BEAE0CDE47F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:16.512+0000 D1 - [conn284] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:16.512+0000 W - [conn284] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:16.547+0000 I - [conn284] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:16.547+0000 W COMMAND [conn284] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:16.547+0000 I COMMAND [conn284] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, D5CCCC4E77C8D415872AB251DCD35BEAE0CDE47F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30082ms 2019-09-04T06:32:16.547+0000 D2 NETWORK [conn284] Session from 10.108.2.47:56568 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:16.547+0000 I NETWORK [conn284] end connection 10.108.2.47:56568 (85 connections now open) 2019-09-04T06:32:16.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:16.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:16.569+0000 I - [conn307] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s
":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : 
"7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:16.569+0000 W COMMAND [conn307] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:16.569+0000 I COMMAND [conn307] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 530FD6D8E211D30BDB1163BF3162E191BBCC422A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:32:16.569+0000 D2 NETWORK [conn307] Session from 10.108.2.57:34296 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:16.569+0000 I NETWORK [conn307] end connection 10.108.2.57:34296 (84 connections now open) 2019-09-04T06:32:16.569+0000 I - [conn282] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:16.570+0000 D1 COMMAND [conn282] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578699, 1), signature: { hash: BinData(0, FDAB2A63E94489DE9A0BA601ABF193CC5AED761D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:16.570+0000 D1 - [conn282] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:16.570+0000 W - [conn282] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:16.583+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:16.589+0000 I - [conn308] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 
0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : 
"9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" 
}, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:16.589+0000 W COMMAND [conn308] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:16.589+0000 I COMMAND [conn308] 
command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578703, 1), signature: { hash: BinData(0, FE5CD33D4FD4AD3E642E53447F4EC0589DAAF02E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30060ms 2019-09-04T06:32:16.590+0000 D2 NETWORK [conn308] Session from 10.108.2.58:52188 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:16.590+0000 I NETWORK [conn308] end connection 10.108.2.58:52188 (83 connections now open) 2019-09-04T06:32:16.599+0000 I - [conn309] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"
_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" 
: "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:16.599+0000 W COMMAND [conn309] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:32:16.599+0000 I COMMAND [conn309] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578698, 1), signature: { hash: BinData(0, 530FD6D8E211D30BDB1163BF3162E191BBCC422A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30041ms 2019-09-04T06:32:16.599+0000 D2 NETWORK [conn309] Session from 10.108.2.44:38726 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:16.599+0000 I NETWORK [conn309] end connection 10.108.2.44:38726 (82 connections now open) 2019-09-04T06:32:16.612+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:16.612+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:16.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:16.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:16.612+0000 D2 COMMAND [conn324] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578733, 1), signature: { hash: BinData(0, 45E6BA95D4A46C041D1AB6A238AC46A21C109958), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:16.612+0000 D1 REPL [conn324] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- 
current snapshot: { ts: Timestamp(1567578735, 1), t: 1 } 2019-09-04T06:32:16.612+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000 2019-09-04T06:32:16.613+0000 D2 COMMAND [conn316] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578729, 1), signature: { hash: BinData(0, 0C0420C4E529BE8AB49DB9C1B2EBD0DB69E0B59C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:16.613+0000 D1 REPL [conn316] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578735, 1), t: 1 } 2019-09-04T06:32:16.613+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000 2019-09-04T06:32:16.614+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:16.614+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:16.614+0000 I NETWORK [listener] connection accepted from 10.108.2.46:41056 #328 (83 connections now open) 2019-09-04T06:32:16.614+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:16.615+0000 D2 COMMAND [conn328] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:16.615+0000 I NETWORK [conn328] received client metadata from 10.108.2.46:41056 conn328: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:16.615+0000 I COMMAND [conn328] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:16.615+0000 D2 COMMAND [conn328] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 58E67B65554B3CCD2C041972A16DBE3FCED4CE23), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 
2019-09-04T06:32:16.615+0000 D1 REPL [conn328] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578735, 1), t: 1 } 2019-09-04T06:32:16.615+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000 2019-09-04T06:32:16.617+0000 I - [conn282] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D",
"s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, 
"buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:16.617+0000 W COMMAND [conn282] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:32:16.617+0000 I COMMAND [conn282] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578699, 1), signature: { hash: BinData(0, FDAB2A63E94489DE9A0BA601ABF193CC5AED761D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30155ms 2019-09-04T06:32:16.617+0000 D2 NETWORK [conn282] Session from 10.108.2.51:59182 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:16.617+0000 I NETWORK [conn282] end connection 10.108.2.51:59182 (82 connections now open) 2019-09-04T06:32:16.627+0000 D2 COMMAND [conn304] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 58E67B65554B3CCD2C041972A16DBE3FCED4CE23), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:16.627+0000 D1 REPL [conn304] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578735, 1), t: 1 } 2019-09-04T06:32:16.627+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000 2019-09-04T06:32:16.628+0000 D2 COMMAND [conn305] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 2AE057265C24734D9B4E4568BCA1F32820325A81), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:16.629+0000 D1 REPL [conn305] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578735, 1), t: 1 } 2019-09-04T06:32:16.629+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000 2019-09-04T06:32:16.629+0000 D2 COMMAND [conn306] run command admin.$cmd { find: "system.keys", filter: { purpose: 
"HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578730, 1), signature: { hash: BinData(0, B6D82C58D1F28CF1D765C9B40350BC7D309BEF8B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:16.629+0000 D1 REPL [conn306] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578735, 1), t: 1 } 2019-09-04T06:32:16.629+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000 2019-09-04T06:32:16.630+0000 D2 COMMAND [conn285] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 1), signature: { hash: BinData(0, D40B466E4FDFBEAA54D0D3B68CF775A1034DDCB7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:16.630+0000 D1 REPL [conn285] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578735, 1), t: 1 } 2019-09-04T06:32:16.630+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000 2019-09-04T06:32:16.642+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:16.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:16.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:16.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:16.683+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:16.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:16.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:16.783+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:16.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:16.807+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:16.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:16.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 935) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:16.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 935 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:26.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 
1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:16.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:44.839+0000 2019-09-04T06:32:16.838+0000 D2 ASIO [Replication] Request 935 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), opTime: { ts: Timestamp(1567578735, 1), t: 1 }, wallTime: new Date(1567578735264), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578735, 1), $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578735, 1) } 2019-09-04T06:32:16.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), opTime: { ts: Timestamp(1567578735, 1), t: 1 }, wallTime: new Date(1567578735264), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578735, 1), $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578735, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:16.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:16.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 935) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), opTime: { ts: Timestamp(1567578735, 1), t: 1 }, wallTime: new Date(1567578735264), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578735, 1), $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578735, 1) } 2019-09-04T06:32:16.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:16.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to 
cmodb804.togewa.com:27019 at 2019-09-04T06:32:18.838Z 2019-09-04T06:32:16.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:44.839+0000 2019-09-04T06:32:16.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:16.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 936) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:16.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 936 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:26.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:16.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:44.839+0000 2019-09-04T06:32:16.839+0000 D2 ASIO [Replication] Request 936 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), opTime: { ts: Timestamp(1567578735, 1), t: 1 }, wallTime: new Date(1567578735264), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578735, 1), $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578735, 1) } 2019-09-04T06:32:16.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), opTime: { ts: Timestamp(1567578735, 1), t: 1 }, wallTime: new Date(1567578735264), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578735, 1), $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578735, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:16.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:16.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 936) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), opTime: { ts: Timestamp(1567578735, 1), t: 1 }, wallTime: new Date(1567578735264), $replData: { term: 1, 
lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578735, 1), $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578735, 1) } 2019-09-04T06:32:16.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:16.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:32:25.442+0000 2019-09-04T06:32:16.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:32:27.121+0000 2019-09-04T06:32:16.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:16.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:16.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:46.839+0000 2019-09-04T06:32:16.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:18.839Z 2019-09-04T06:32:16.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:46.839+0000 2019-09-04T06:32:16.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:16.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:16.884+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:16.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:16.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:16.984+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:16.990+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:16.990+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:17.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.063+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:17.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:32:16.839+0000 2019-09-04T06:32:17.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:32:16.838+0000 2019-09-04T06:32:17.063+0000 D3 REPL [replexec-3] stalest member MemberId(2) date: 2019-09-04T06:32:16.838+0000 2019-09-04T06:32:17.063+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:32:26.838+0000 2019-09-04T06:32:17.063+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:46.839+0000 
2019-09-04T06:32:17.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 4E6C68D9A13249114A9A68C6253AD2637BDB74C0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:17.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:17.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 4E6C68D9A13249114A9A68C6253AD2637BDB74C0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:17.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 4E6C68D9A13249114A9A68C6253AD2637BDB74C0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:17.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), opTime: { ts: Timestamp(1567578735, 1), t: 1 }, wallTime: new Date(1567578735264) } 2019-09-04T06:32:17.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 4E6C68D9A13249114A9A68C6253AD2637BDB74C0), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.084+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:17.111+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.112+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.113+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.113+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.143+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:32:17.184+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:17.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:17.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:32:17.284+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:17.286+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:17.286+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:17.286+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578735, 1) 2019-09-04T06:32:17.286+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13933 2019-09-04T06:32:17.286+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:17.286+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:17.286+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13933 2019-09-04T06:32:17.287+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13936 2019-09-04T06:32:17.287+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13936 2019-09-04T06:32:17.287+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578735, 1), t: 1 }({ ts: Timestamp(1567578735, 1), t: 1 }) 2019-09-04T06:32:17.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.307+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.384+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:17.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.484+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:17.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.565+0000 D2 ASIO [RS] Request 934 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578737, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: 
"cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578737545), o: { $v: 1, $set: { ping: new Date(1567578737544) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpApplied: { ts: Timestamp(1567578737, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578735, 1), $clusterTime: { clusterTime: Timestamp(1567578737, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 1) } 2019-09-04T06:32:17.566+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578737, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578737545), o: { $v: 1, $set: { ping: new Date(1567578737544) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpApplied: { ts: Timestamp(1567578737, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578735, 1), $clusterTime: { clusterTime: Timestamp(1567578737, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:17.566+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:17.566+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578737, 1) and ending at ts: Timestamp(1567578737, 1) 2019-09-04T06:32:17.566+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:27.121+0000 2019-09-04T06:32:17.566+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:28.089+0000 2019-09-04T06:32:17.566+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:17.566+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578737, 1), t: 1 } 2019-09-04T06:32:17.566+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:17.566+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:17.566+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578735, 1) 2019-09-04T06:32:17.566+0000 D3 STORAGE [ReplBatcher] WT 
begin_transaction for snapshot id 13943 2019-09-04T06:32:17.566+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:17.566+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:17.566+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13943 2019-09-04T06:32:17.566+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:17.566+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:32:17.566+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:17.566+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578735, 1) 2019-09-04T06:32:17.566+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13946 2019-09-04T06:32:17.566+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578737, 1) } 2019-09-04T06:32:17.566+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:46.839+0000 2019-09-04T06:32:17.566+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:17.566+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:17.566+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13946 2019-09-04T06:32:17.566+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13937 2019-09-04T06:32:17.566+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13937 2019-09-04T06:32:17.566+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13949 2019-09-04T06:32:17.566+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13949 2019-09-04T06:32:17.566+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:17.566+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 13951 2019-09-04T06:32:17.566+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578737, 1) 2019-09-04T06:32:17.566+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578737, 1) 2019-09-04T06:32:17.566+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 13951 2019-09-04T06:32:17.566+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:17.566+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:32:17.566+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13950 2019-09-04T06:32:17.566+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot 
id 13950 2019-09-04T06:32:17.566+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13953 2019-09-04T06:32:17.566+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13953 2019-09-04T06:32:17.566+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578737, 1), t: 1 }({ ts: Timestamp(1567578737, 1), t: 1 }) 2019-09-04T06:32:17.566+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578737, 1) 2019-09-04T06:32:17.566+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13954 2019-09-04T06:32:17.566+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578737, 1) } } ] } sort: {} projection: {} 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578737, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578737, 1) || First: notFirst: full path: ts 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578737, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578737, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578737, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:17.566+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578737, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:17.566+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13954 2019-09-04T06:32:17.566+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:17.566+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:17.566+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578737, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578737545), o: { $v: 1, $set: { ping: new Date(1567578737544) } } }, oplog application mode: Secondary 2019-09-04T06:32:17.567+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578737, 1) 2019-09-04T06:32:17.567+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 13956 2019-09-04T06:32:17.567+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" } 2019-09-04T06:32:17.567+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:32:17.567+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 13956 2019-09-04T06:32:17.567+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:17.567+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578737, 1), t: 1 }({ ts: Timestamp(1567578737, 1), t: 1 }) 2019-09-04T06:32:17.567+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578737, 1) 2019-09-04T06:32:17.567+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13955 2019-09-04T06:32:17.567+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:32:17.567+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:17.567+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:32:17.567+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:17.567+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:17.567+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:17.567+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13955 2019-09-04T06:32:17.567+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578737, 1) 2019-09-04T06:32:17.567+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13959 2019-09-04T06:32:17.567+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13959 2019-09-04T06:32:17.567+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578737, 1), t: 1 }({ ts: Timestamp(1567578737, 1), t: 1 }) 2019-09-04T06:32:17.567+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:17.567+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578737, 1), t: 1 }, appliedWallTime: new Date(1567578737545), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:17.567+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 937 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:47.567+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578737, 1), t: 1 }, appliedWallTime: new Date(1567578737545), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 2, cfgver: 2 } ], 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:17.567+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:47.567+0000 2019-09-04T06:32:17.567+0000 D2 ASIO [RS] Request 937 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578735, 1), $clusterTime: { clusterTime: Timestamp(1567578737, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 1) } 2019-09-04T06:32:17.567+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578735, 1), $clusterTime: { clusterTime: Timestamp(1567578737, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:17.567+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:17.567+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:47.567+0000 2019-09-04T06:32:17.568+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578737, 1), t: 1 } 2019-09-04T06:32:17.568+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 938 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:27.568+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578735, 1), t: 1 } } 2019-09-04T06:32:17.568+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:47.567+0000 2019-09-04T06:32:17.571+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:17.571+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:17.571+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 1), t: 1 }, durableWallTime: new Date(1567578737545), appliedOpTime: { ts: Timestamp(1567578737, 1), t: 1 }, appliedWallTime: new 
Date(1567578737545), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:17.571+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 939 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:47.571+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 1), t: 1 }, durableWallTime: new Date(1567578737545), appliedOpTime: { ts: Timestamp(1567578737, 1), t: 1 }, appliedWallTime: new Date(1567578737545), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:17.571+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:47.567+0000 2019-09-04T06:32:17.571+0000 D2 ASIO [RS] Request 939 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578735, 1), $clusterTime: { clusterTime: Timestamp(1567578737, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 1) } 2019-09-04T06:32:17.571+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578735, 1), t: 1 }, lastCommittedWall: new Date(1567578735264), lastOpVisible: { ts: Timestamp(1567578735, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578735, 1), $clusterTime: { clusterTime: Timestamp(1567578737, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:17.571+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:17.571+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement 
date is 2019-09-04T06:32:47.567+0000 2019-09-04T06:32:17.571+0000 D2 ASIO [RS] Request 938 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 1), t: 1 }, lastCommittedWall: new Date(1567578737545), lastOpVisible: { ts: Timestamp(1567578737, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578737, 1), t: 1 }, lastCommittedWall: new Date(1567578737545), lastOpApplied: { ts: Timestamp(1567578737, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578737, 1), $clusterTime: { clusterTime: Timestamp(1567578737, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 1) } 2019-09-04T06:32:17.571+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 1), t: 1 }, lastCommittedWall: new Date(1567578737545), lastOpVisible: { ts: Timestamp(1567578737, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578737, 1), t: 1 }, lastCommittedWall: new Date(1567578737545), lastOpApplied: { ts: Timestamp(1567578737, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578737, 1), $clusterTime: { clusterTime: Timestamp(1567578737, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:17.571+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:17.571+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:17.571+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.571+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578732, 1) 2019-09-04T06:32:17.572+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:28.089+0000 2019-09-04T06:32:17.572+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:28.305+0000 2019-09-04T06:32:17.572+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 940 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:27.572+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578737, 1), t: 1 } } 2019-09-04T06:32:17.572+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn313] waitUntilOpTime: 
waiting for a new snapshot until 2019-09-04T06:32:22.595+0000 2019-09-04T06:32:17.572+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:47.567+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000 2019-09-04T06:32:17.572+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:17.572+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:46.839+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 
2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578737, 1), t: 1 }, 2019-09-04T06:32:17.545+0000 2019-09-04T06:32:17.572+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000 2019-09-04T06:32:17.585+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:17.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.642+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.666+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578737, 1) 2019-09-04T06:32:17.685+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:17.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.785+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:17.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.806+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.828+0000 D2 ASIO [RS] Request 940 finished with 
response: { cursor: { nextBatch: [ { ts: Timestamp(1567578737, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578737811), o: { $v: 1, $set: { ping: new Date(1567578737805) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 1), t: 1 }, lastCommittedWall: new Date(1567578737545), lastOpVisible: { ts: Timestamp(1567578737, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578737, 1), t: 1 }, lastCommittedWall: new Date(1567578737545), lastOpApplied: { ts: Timestamp(1567578737, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578737, 1), $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 2) } 2019-09-04T06:32:17.828+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578737, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578737811), o: { $v: 1, $set: { ping: new Date(1567578737805) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 1), t: 1 }, lastCommittedWall: new Date(1567578737545), lastOpVisible: { ts: Timestamp(1567578737, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578737, 1), t: 1 }, lastCommittedWall: new Date(1567578737545), lastOpApplied: { ts: Timestamp(1567578737, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578737, 1), $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:17.828+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:17.828+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578737, 2) and ending at ts: Timestamp(1567578737, 2) 2019-09-04T06:32:17.828+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:28.305+0000 2019-09-04T06:32:17.828+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:28.557+0000 2019-09-04T06:32:17.828+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:17.828+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:46.839+0000 2019-09-04T06:32:17.828+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578737, 2), t: 1 } 2019-09-04T06:32:17.828+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 
4, provided timestamp: none 2019-09-04T06:32:17.828+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:17.828+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578737, 1) 2019-09-04T06:32:17.828+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13968 2019-09-04T06:32:17.828+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:17.828+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:17.828+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13968 2019-09-04T06:32:17.828+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:17.828+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:32:17.828+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:17.828+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578737, 2) } 2019-09-04T06:32:17.828+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578737, 1) 2019-09-04T06:32:17.828+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 13971 2019-09-04T06:32:17.828+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:17.828+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:17.828+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 13971 2019-09-04T06:32:17.828+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13960 2019-09-04T06:32:17.828+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13960 2019-09-04T06:32:17.828+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13974 2019-09-04T06:32:17.828+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13974 2019-09-04T06:32:17.828+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:17.828+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 13976 2019-09-04T06:32:17.828+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578737, 2) 2019-09-04T06:32:17.828+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578737, 2) 2019-09-04T06:32:17.828+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 13976 2019-09-04T06:32:17.828+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:17.828+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 
2019-09-04T06:32:17.828+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13975 2019-09-04T06:32:17.828+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13975 2019-09-04T06:32:17.828+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13978 2019-09-04T06:32:17.828+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13978 2019-09-04T06:32:17.828+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578737, 2), t: 1 }({ ts: Timestamp(1567578737, 2), t: 1 }) 2019-09-04T06:32:17.828+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578737, 2) 2019-09-04T06:32:17.828+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13979 2019-09-04T06:32:17.828+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578737, 2) } } ] } sort: {} projection: {} 2019-09-04T06:32:17.828+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578737, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578737, 2) || First: notFirst: full path: ts 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578737, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578737, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578737, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578737, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:17.829+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13979 2019-09-04T06:32:17.829+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:17.829+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:17.829+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578737, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578737811), o: { $v: 1, $set: { ping: new Date(1567578737805) } } }, oplog application mode: Secondary 2019-09-04T06:32:17.829+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578737, 2) 2019-09-04T06:32:17.829+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 13981 2019-09-04T06:32:17.829+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" } 2019-09-04T06:32:17.829+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:32:17.829+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 13981 2019-09-04T06:32:17.829+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:17.829+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578737, 2), t: 1 }({ ts: Timestamp(1567578737, 2), t: 1 }) 2019-09-04T06:32:17.829+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578737, 2) 2019-09-04T06:32:17.829+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13980 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:17.829+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:17.829+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:17.829+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 13980 2019-09-04T06:32:17.829+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578737, 2) 2019-09-04T06:32:17.829+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 13984 2019-09-04T06:32:17.829+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 13984 2019-09-04T06:32:17.829+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578737, 2), t: 1 }({ ts: Timestamp(1567578737, 2), t: 1 }) 2019-09-04T06:32:17.829+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:17.829+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 1), t: 1 }, durableWallTime: new Date(1567578737545), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 1), t: 1 }, lastCommittedWall: new Date(1567578737545), lastOpVisible: { ts: Timestamp(1567578737, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:17.829+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 941 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:47.829+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 1), t: 1 }, durableWallTime: new Date(1567578737545), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 2, cfgver: 2 } ], 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 1), t: 1 }, lastCommittedWall: new Date(1567578737545), lastOpVisible: { ts: Timestamp(1567578737, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:17.829+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:47.829+0000 2019-09-04T06:32:17.830+0000 D2 ASIO [RS] Request 941 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 1), t: 1 }, lastCommittedWall: new Date(1567578737545), lastOpVisible: { ts: Timestamp(1567578737, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578737, 1), $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 2) } 2019-09-04T06:32:17.830+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 1), t: 1 }, lastCommittedWall: new Date(1567578737545), lastOpVisible: { ts: Timestamp(1567578737, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578737, 1), $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:17.830+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:17.830+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:47.830+0000 2019-09-04T06:32:17.830+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578737, 2), t: 1 } 2019-09-04T06:32:17.830+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 942 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:27.830+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578737, 1), t: 1 } } 2019-09-04T06:32:17.830+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:47.830+0000 2019-09-04T06:32:17.836+0000 D2 ASIO [RS] Request 942 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpVisible: { ts: Timestamp(1567578737, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpApplied: { ts: Timestamp(1567578737, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578737, 2), $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 2) } 2019-09-04T06:32:17.836+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpVisible: { ts: Timestamp(1567578737, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpApplied: { ts: Timestamp(1567578737, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578737, 2), $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:17.836+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:17.836+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:17.836+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.836+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.836+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578732, 2) 2019-09-04T06:32:17.836+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.836+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:32:17.836+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.836+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000 2019-09-04T06:32:17.836+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000 
2019-09-04T06:32:17.837+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000 2019-09-04T06:32:17.837+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:28.557+0000 2019-09-04T06:32:17.837+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:28.156+0000 2019-09-04T06:32:17.837+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:17.837+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:46.839+0000 2019-09-04T06:32:17.837+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 943 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:27.837+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578737, 2), t: 1 } } 2019-09-04T06:32:17.837+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000 2019-09-04T06:32:17.837+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:47.830+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 
2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578737, 2), t: 1 }, 2019-09-04T06:32:17.811+0000 2019-09-04T06:32:17.837+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000 2019-09-04T06:32:17.844+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:17.844+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:17.844+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpVisible: { ts: Timestamp(1567578737, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:17.844+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 944 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:47.844+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, durableWallTime: new Date(1567578735264), appliedOpTime: { ts: Timestamp(1567578735, 1), t: 1 }, appliedWallTime: new Date(1567578735264), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpVisible: { ts: Timestamp(1567578737, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:17.844+0000 D3 EXECUTOR [replication-0] Not 
reaping because the earliest retirement date is 2019-09-04T06:32:47.830+0000 2019-09-04T06:32:17.844+0000 D2 ASIO [RS] Request 944 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpVisible: { ts: Timestamp(1567578737, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578737, 2), $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 2) } 2019-09-04T06:32:17.844+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpVisible: { ts: Timestamp(1567578737, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578737, 2), $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:17.844+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:17.844+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:47.830+0000 2019-09-04T06:32:17.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.885+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:17.928+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578737, 2) 2019-09-04T06:32:17.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:17.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:17.985+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:18.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:18.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:18.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:18.085+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:18.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:18.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:18.130+0000 D2 COMMAND [conn20] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:18.130+0000 I COMMAND [conn20] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms 2019-09-04T06:32:18.142+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:18.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:18.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:18.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:18.185+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:18.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:18.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:18.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, DCB6130DFBB9663950EACFB0FAA93E87243597A3), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:18.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:18.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, DCB6130DFBB9663950EACFB0FAA93E87243597A3), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:18.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, DCB6130DFBB9663950EACFB0FAA93E87243597A3), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:18.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), opTime: { ts: Timestamp(1567578737, 2), t: 1 }, wallTime: new Date(1567578737811) } 2019-09-04T06:32:18.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, DCB6130DFBB9663950EACFB0FAA93E87243597A3), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:18.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:18.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:18.285+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:18.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:18.306+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:18.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:18.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:18.385+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:18.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:18.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:18.486+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:18.516+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:32:18.516+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:32:18.516+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:18.516+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:32:18.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:18.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:18.586+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:18.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:18.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:18.642+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:18.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:18.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:18.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:18.686+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:18.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:18.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:18.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:18.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:18.786+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:18.806+0000 
2019-09-04T06:32:18.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:18.806+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:18.828+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:18.828+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:18.828+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578737, 2)
2019-09-04T06:32:18.828+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14010
2019-09-04T06:32:18.828+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:18.828+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:18.828+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14010
2019-09-04T06:32:18.828+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/28--6194257481163143499 -> { numRecords: 22, dataSize: 1828 }
2019-09-04T06:32:18.828+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:local/collection/16--6194257481163143499 -> { numRecords: 1493, dataSize: 336700 }
2019-09-04T06:32:18.828+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer flush took 49 µs
2019-09-04T06:32:18.829+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14013
2019-09-04T06:32:18.829+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14013
2019-09-04T06:32:18.829+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578737, 2), t: 1 }({ ts: Timestamp(1567578737, 2), t: 1 })
2019-09-04T06:32:18.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:18.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 945) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:18.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 945 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:28.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:18.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:46.839+0000
2019-09-04T06:32:18.838+0000 D2 ASIO [Replication] Request 945 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), opTime: { ts: Timestamp(1567578737, 2), t: 1 }, wallTime: new Date(1567578737811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpVisible: { ts: Timestamp(1567578737, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578737, 2), $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 2) }
2019-09-04T06:32:18.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), opTime: { ts: Timestamp(1567578737, 2), t: 1 }, wallTime: new Date(1567578737811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpVisible: { ts: Timestamp(1567578737, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578737, 2), $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:18.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:18.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 945) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), opTime: { ts: Timestamp(1567578737, 2), t: 1 }, wallTime: new Date(1567578737811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpVisible: { ts: Timestamp(1567578737, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578737, 2), $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 2) }
2019-09-04T06:32:18.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:32:18.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:20.838Z
2019-09-04T06:32:18.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:46.839+0000
2019-09-04T06:32:18.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:18.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 946) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:18.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 946 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:28.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
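The heartbeat response logged for request 945 carries the fields the receiver cares about: member state, term, applied and durable optimes, and the sync source. A minimal sketch (not server code) of digesting such a response document, with the values copied from the log above; states follow the replica-set convention (1 = PRIMARY, 2 = SECONDARY):

from bson.timestamp import Timestamp

resp = {
    "ok": 1.0, "state": 2, "set": "configrs",
    "syncingTo": "cmodb802.togewa.com:27019", "term": 1,
    "durableOpTime": {"ts": Timestamp(1567578737, 2), "t": 1},
    "opTime": {"ts": Timestamp(1567578737, 2), "t": 1},
}

def summarize(resp):
    state = {1: "PRIMARY", 2: "SECONDARY"}.get(resp["state"], str(resp["state"]))
    return (f"{resp['set']}: {state}, term {resp['term']}, "
            f"applied {resp['opTime']['ts']}, durable {resp['durableOpTime']['ts']}, "
            f"syncing from {resp.get('syncingTo', '-')}")

print(summarize(resp))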
2019-09-04T06:32:18.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:46.839+0000
2019-09-04T06:32:18.839+0000 D2 ASIO [Replication] Request 946 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), opTime: { ts: Timestamp(1567578737, 2), t: 1 }, wallTime: new Date(1567578737811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpVisible: { ts: Timestamp(1567578737, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578737, 2), $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 2) }
2019-09-04T06:32:18.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), opTime: { ts: Timestamp(1567578737, 2), t: 1 }, wallTime: new Date(1567578737811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpVisible: { ts: Timestamp(1567578737, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578737, 2), $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:32:18.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:18.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 946) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), opTime: { ts: Timestamp(1567578737, 2), t: 1 }, wallTime: new Date(1567578737811), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpVisible: { ts: Timestamp(1567578737, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578737, 2), $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 2) }
2019-09-04T06:32:18.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:32:18.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:32:28.156+0000
2019-09-04T06:32:18.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:32:30.264+0000
2019-09-04T06:32:18.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:32:18.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:20.839Z
2019-09-04T06:32:18.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:18.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000
2019-09-04T06:32:18.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000
2019-09-04T06:32:18.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:18.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:18.889+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:18.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:18.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:18.989+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:19.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:19.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, DCB6130DFBB9663950EACFB0FAA93E87243597A3), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:19.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:32:19.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, DCB6130DFBB9663950EACFB0FAA93E87243597A3), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:19.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, DCB6130DFBB9663950EACFB0FAA93E87243597A3), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:19.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), opTime: { ts: Timestamp(1567578737, 2), t: 1 }, wallTime: new Date(1567578737811) }
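The ELECTION entries above show the pattern behind replica-set failover detection: each heartbeat from the primary cancels the pending election timeout and reschedules it roughly electionTimeoutMillis plus a small random offset into the future (18.839 + 10s base, rescheduled to 30.264). An illustrative sketch of that rescheduling; the 10s base matches the default electionTimeoutMillis, but the offset fraction is an assumption, not a reading of the server source:

import random
from datetime import datetime, timedelta

ELECTION_TIMEOUT = timedelta(milliseconds=10_000)  # default electionTimeoutMillis
OFFSET_FRACTION = 0.15  # assumed cap; randomized so members don't stand simultaneously

def next_election_timeout(now: datetime) -> datetime:
    offset = ELECTION_TIMEOUT * OFFSET_FRACTION * random.random()
    return now + ELECTION_TIMEOUT + offset

now = datetime.fromisoformat("2019-09-04T06:32:18.839")
print("election timeout callback at", next_election_timeout(now))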
2019-09-04T06:32:19.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578737, 2), signature: { hash: BinData(0, DCB6130DFBB9663950EACFB0FAA93E87243597A3), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.089+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:19.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.142+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.153+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.189+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:19.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:19.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:32:19.264+0000 D2 ASIO [RS] Request 943 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578739, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578739261), o: { $v: 1, $set: { ping: new Date(1567578739258) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpVisible: { ts: Timestamp(1567578737, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpApplied: { ts: Timestamp(1567578739, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578737, 2), $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 1) }
2019-09-04T06:32:19.264+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578739, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578739261), o: { $v: 1, $set: { ping: new Date(1567578739258) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpVisible: { ts: Timestamp(1567578737, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpApplied: { ts: Timestamp(1567578739, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578737, 2), $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.264+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.264+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578739, 1) and ending at ts: Timestamp(1567578739, 1)
2019-09-04T06:32:19.264+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:30.264+0000
2019-09-04T06:32:19.264+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:30.149+0000
2019-09-04T06:32:19.264+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:19.264+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000
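The batch fetched above contains a single "u" (update) oplog entry against config.lockpings: match on o2._id, apply the $set in o. A minimal sketch of what applying it amounts to, using a plain dict as a stand-in for the collection (the real server does this through the storage engine inside a WT transaction, as the surrounding entries show):

from datetime import datetime, timezone

oplog_entry = {
    "op": "u",
    "ns": "config.lockpings",
    "o2": {"_id": "cmodb801.togewa.com:27017:1567576097:5449009950928943792"},
    "o": {"$v": 1, "$set": {"ping": datetime.fromtimestamp(1567578739.258, tz=timezone.utc)}},
}

# Toy in-memory stand-in keyed by _id.
collection = {
    "cmodb801.togewa.com:27017:1567576097:5449009950928943792": {"ping": None},
}

def apply_update(entry, coll):
    doc = coll[entry["o2"]["_id"]]           # idhack lookup by _id, as logged
    for field, value in entry["o"].get("$set", {}).items():
        doc[field] = value                   # apply the $set modifier

apply_update(oplog_entry, collection)
print(collection)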
2019-09-04T06:32:19.264+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578739, 1), t: 1 }
2019-09-04T06:32:19.264+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:19.264+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:19.264+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578737, 2)
2019-09-04T06:32:19.264+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14027
2019-09-04T06:32:19.264+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:19.264+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:19.264+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14027
2019-09-04T06:32:19.264+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:19.264+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:19.264+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:32:19.264+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578737, 2)
2019-09-04T06:32:19.264+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14030
2019-09-04T06:32:19.264+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:19.264+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578739, 1) }
2019-09-04T06:32:19.264+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:19.264+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14030
2019-09-04T06:32:19.264+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14014
2019-09-04T06:32:19.264+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14014
2019-09-04T06:32:19.264+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14033
2019-09-04T06:32:19.264+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14033
2019-09-04T06:32:19.264+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:19.264+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 14035
2019-09-04T06:32:19.264+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578739, 1)
2019-09-04T06:32:19.264+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578739, 1)
2019-09-04T06:32:19.264+0000 D2 STORAGE [repl-writer-worker-6] WiredTigerSizeStorer::store Marking table:local/collection/16--6194257481163143499 dirty, numRecords: 1494, dataSize: 336936, use_count: 3
2019-09-04T06:32:19.264+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 14035
2019-09-04T06:32:19.264+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:19.264+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:19.264+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14034
2019-09-04T06:32:19.264+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14034
2019-09-04T06:32:19.264+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14037
2019-09-04T06:32:19.264+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14037
2019-09-04T06:32:19.265+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578739, 1), t: 1 }({ ts: Timestamp(1567578739, 1), t: 1 })
2019-09-04T06:32:19.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.265+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 1)
2019-09-04T06:32:19.265+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14038
2019-09-04T06:32:19.265+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578739, 1) } } ] } sort: {} projection: {}
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
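Taken together, the rsSync-0 and repl-writer-worker entries above trace the secondary's batch-application order: mark the oplog suspect with a truncate-after point, write and apply the batch, clear the mark, then advance the consistency waterlines. A schematic sketch of that order (not server code; the data structures are stand-ins):

from bson.timestamp import Timestamp

def apply_batch(batch, local_oplog, truncate_point, marks, apply_op):
    last = batch[-1]["ts"]
    truncate_point["ts"] = last             # "setting oplog truncate after point"
    local_oplog.extend(batch)               # oplog writes at each entry's timestamp
    for entry in batch:                     # repl writer workers apply each op
        apply_op(entry)
    truncate_point["ts"] = Timestamp(0, 0)  # batch fully applied; clear the mark
    marks["minvalid"] = last                # "setting minvalid to at least"
    marks["appliedThrough"] = last          # "setting appliedThrough to"

batch = [{"ts": Timestamp(1567578739, 1), "op": "u", "ns": "config.lockpings"}]
marks, point, oplog = {}, {"ts": Timestamp(0, 0)}, []
apply_batch(batch, oplog, point, marks, lambda e: None)
print(point, marks)

If the node crashes between the two truncate-point writes, recovery can truncate any partially applied suffix of the oplog, which is the point of the mark.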
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and t $eq 1 ts $lt Timestamp(1567578739, 1) Sort: {} Proj: {} =============================
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578739, 1) || First: notFirst: full path: ts
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578739, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $or $and t $eq 1 ts $lt Timestamp(1567578739, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578739, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578739, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:19.265+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14038
2019-09-04T06:32:19.265+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:19.265+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:19.265+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578739, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578739261), o: { $v: 1, $set: { ping: new Date(1567578739258) } } }, oplog application mode: Secondary
2019-09-04T06:32:19.265+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578739, 1)
2019-09-04T06:32:19.265+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 14041
2019-09-04T06:32:19.265+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }
2019-09-04T06:32:19.265+0000 D2 STORAGE [repl-writer-worker-4] WiredTigerSizeStorer::store Marking table:config/collection/28--6194257481163143499 dirty, numRecords: 22, dataSize: 1828, use_count: 3
2019-09-04T06:32:19.265+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:19.265+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 14041
2019-09-04T06:32:19.265+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:19.265+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578739, 1), t: 1 }({ ts: Timestamp(1567578739, 1), t: 1 })
2019-09-04T06:32:19.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.265+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 1)
2019-09-04T06:32:19.265+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14040
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and Sort: {} Proj: {} =============================
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.265+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
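The $or filter the subplanner is splitting above encodes "optime earlier than { t: 1, ts: Timestamp(1567578739, 1) }": either a lower term, or the same term with a lower timestamp. For orientation, the same filter as an ordinary find against local.replset.minvalid (an internal collection; reading it directly is for inspection only):

from pymongo import MongoClient
from bson.timestamp import Timestamp

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
cursor = client.local["replset.minvalid"].find({
    "$or": [
        {"t": {"$lt": 1}},                                  # earlier term
        {"t": 1, "ts": {"$lt": Timestamp(1567578739, 1)}},  # same term, earlier ts
    ]
})
for doc in cursor:
    print(doc)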
2019-09-04T06:32:19.265+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:19.265+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14040
2019-09-04T06:32:19.265+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578739, 1)
2019-09-04T06:32:19.266+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14044
2019-09-04T06:32:19.266+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14044
2019-09-04T06:32:19.266+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578739, 1), t: 1 }({ ts: Timestamp(1567578739, 1), t: 1 })
2019-09-04T06:32:19.266+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:19.266+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578739, 1), t: 1 }, appliedWallTime: new Date(1567578739261), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpVisible: { ts: Timestamp(1567578737, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.266+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 947 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:49.266+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578739, 1), t: 1 }, appliedWallTime: new Date(1567578739261), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578737, 2), t: 1 }, lastCommittedWall: new Date(1567578737811), lastOpVisible: { ts: Timestamp(1567578737, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.266+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.265+0000
2019-09-04T06:32:19.266+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578739, 1), t: 1 }
2019-09-04T06:32:19.266+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 948 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:29.266+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578737, 2), t: 1 } }
2019-09-04T06:32:19.266+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.265+0000
2019-09-04T06:32:19.266+0000 D2 ASIO [RS] Request 947 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 1), $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 1) }
2019-09-04T06:32:19.266+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 1), $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.266+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:19.266+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.265+0000
2019-09-04T06:32:19.266+0000 D2 ASIO [RS] Request 948 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpApplied: { ts: Timestamp(1567578739, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 1), $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 1) }
2019-09-04T06:32:19.266+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpApplied: { ts: Timestamp(1567578739, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 1), $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.266+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.266+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:32:19.266+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.266+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.266+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578734, 1)
2019-09-04T06:32:19.267+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:30.149+0000
2019-09-04T06:32:19.267+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:29.947+0000
2019-09-04T06:32:19.267+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:19.267+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000
2019-09-04T06:32:19.267+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 949 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:29.267+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 1), t: 1 } }
2019-09-04T06:32:19.267+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.265+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000
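The waitUntilOpTime entries above are requests parked until this member's committed snapshot reaches an opTime the client has already observed; this is the mechanism that backs afterClusterTime reads. A generic pymongo sketch of triggering such waits from the client side via a causally consistent session (illustrative usage, not a reconstruction of these specific connections):

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
with client.start_session(causal_consistency=True) as session:
    coll = client.config.lockpings
    coll.find_one({}, session=session)      # session records the operation time
    # A later read in this session carries afterClusterTime >= that optime,
    # so a lagging member waits (as logged above) before answering.
    print(session.operation_time)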
2019-09-04T06:32:19.267+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578739, 1), t: 1 }, 2019-09-04T06:32:19.261+0000
2019-09-04T06:32:19.267+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000
2019-09-04T06:32:19.278+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:32:19.278+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:19.278+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 1), t: 1 }, durableWallTime: new Date(1567578739261), appliedOpTime: { ts: Timestamp(1567578739, 1), t: 1 }, appliedWallTime: new Date(1567578739261), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.278+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 950 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:49.278+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 1), t: 1 }, durableWallTime: new Date(1567578739261), appliedOpTime: { ts: Timestamp(1567578739, 1), t: 1 }, appliedWallTime: new Date(1567578739261), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.278+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.265+0000
2019-09-04T06:32:19.278+0000 D2 ASIO [RS] Request 950 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 1), $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 1) }
2019-09-04T06:32:19.278+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 1), $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.278+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.278+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.265+0000
2019-09-04T06:32:19.289+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:19.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.306+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.317+0000 D3 STORAGE [FreeMonProcessor] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:19.321+0000 D3 INDEX [TTLMonitor] thread awake
2019-09-04T06:32:19.321+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms
2019-09-04T06:32:19.321+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms
2019-09-04T06:32:19.321+0000 D2 - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager
2019-09-04T06:32:19.322+0000 D3 COMMAND [PeriodicTaskRunner] task: UnusedLockCleaner took: 0ms
2019-09-04T06:32:19.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions.
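The replSetUpdatePosition payload above reports, per member, the optimes this node believes have been applied and made durable. A small sketch that digests the optimes array from request 950 into a per-member lag summary (values copied verbatim from the log; the lag arithmetic is the editor's illustration):

from bson.timestamp import Timestamp

optimes = [
    {"memberId": 0, "appliedOpTime": {"ts": Timestamp(1567578737, 2), "t": 1}},
    {"memberId": 1, "appliedOpTime": {"ts": Timestamp(1567578739, 1), "t": 1}},
    {"memberId": 2, "appliedOpTime": {"ts": Timestamp(1567578737, 2), "t": 1}},
]

newest = max(o["appliedOpTime"]["ts"].time for o in optimes)
for o in optimes:
    ts = o["appliedOpTime"]["ts"]
    print(f"member {o['memberId']}: applied through {ts}, "
          f"{newest - ts.time}s behind the newest reported optime")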
2019-09-04T06:32:19.330+0000 D1 SH_REFR [LogicalSessionCacheRefresh] Refreshing chunks for collection config.system.sessions; current collection version is 1|0||5d5e4a1c7fc690fd4e5fb282
2019-09-04T06:32:19.330+0000 D1 EXECUTOR [ConfigServerCatalogCacheLoader-1] starting thread in pool ConfigServerCatalogCacheLoader
2019-09-04T06:32:19.330+0000 D3 EXECUTOR [ConfigServerCatalogCacheLoader-1] Executing a task on behalf of pool ConfigServerCatalogCacheLoader
2019-09-04T06:32:19.330+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:32:19.330+0000 D2 COMMAND [ConfigServerCatalogCacheLoader-1] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, ntoreturn: 1, singleBatch: true, $readPreference: { mode: "nearest", tags: [] }, $db: "config" }
2019-09-04T06:32:19.330+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:19.330+0000 D2 QUERY [ConfigServerCatalogCacheLoader-1] Using idhack: query: { _id: "config.system.sessions" } sort: {} projection: {} ntoreturn=1
2019-09-04T06:32:19.330+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] begin_transaction on local snapshot Timestamp(1567578739, 1)
2019-09-04T06:32:19.330+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] WT begin_transaction for snapshot id 14054
2019-09-04T06:32:19.330+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] WT rollback_transaction for snapshot id 14054
2019-09-04T06:32:19.330+0000 I COMMAND [ConfigServerCatalogCacheLoader-1] command config.collections command: find { find: "collections", filter: { _id: "config.system.sessions" }, ntoreturn: 1, singleBatch: true, $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: IDHACK keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:469 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:32:19.330+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:32:19.330+0000 D2 COMMAND [ConfigServerCatalogCacheLoader-1] run command config.$cmd { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(1, 0) } }, sort: { lastmod: 1 }, $readPreference: { mode: "nearest", tags: [] }, $db: "config" }
2019-09-04T06:32:19.330+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:19.330+0000 D2 QUERY [ConfigServerCatalogCacheLoader-1] Not using cached entry for key: an[eqns,gelastmod]~alastmod<111><1> since it is inactive
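The two catalog-cache reads above, reissued as ordinary driver queries for inspection (the filters and sort are taken directly from the logged find commands; read-only access to sharding metadata):

from pymongo import MongoClient
from bson.timestamp import Timestamp

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
config = client.config

# Collection entry for the sessions collection.
print(config.collections.find_one({"_id": "config.system.sessions"}))

# Chunks modified at or after version (1, 0), in lastmod order.
for chunk in config.chunks.find(
    {"ns": "config.system.sessions", "lastmod": {"$gte": Timestamp(1, 0)}}
).sort("lastmod", 1):
    print(chunk["_id"], chunk["lastmod"])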
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.chunks Tree: $and ns $eq "config.system.sessions" lastmod $gte Timestamp(1, 0) Sort: { lastmod: 1 } Proj: {} =============================
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Predicate over field 'lastmod'
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Predicate over field 'ns'
2019-09-04T06:32:19.330+0000 D2 QUERY [ConfigServerCatalogCacheLoader-1] Relevant index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:32:19.330+0000 D2 QUERY [ConfigServerCatalogCacheLoader-1] Relevant index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:32:19.330+0000 D2 QUERY [ConfigServerCatalogCacheLoader-1] Relevant index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Rated tree: $and ns $eq "config.system.sessions" || First: 0 1 2 notFirst: full path: ns lastmod $gte Timestamp(1, 0) || First: notFirst: 2 full path: lastmod
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Tagging memoID 1
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Enumerator: memo just before moving: [Node #1]: AND enumstate counter 0 choice 0: subnodes: idx[2] pos 0 pred ns $eq "config.system.sessions" pos 1 pred lastmod $gte Timestamp(1, 0) choice 1: subnodes: idx[0] pos 0 pred ns $eq "config.system.sessions" choice 2: subnodes: idx[1] pos 0 pred ns $eq "config.system.sessions"
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] About to build solntree from tagged tree: $and ns $eq "config.system.sessions" || Selected Index #2 pos 0 combine 1 lastmod $gte Timestamp(1, 0) || Selected Index #2 pos 1 combine 1
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Planner: adding solution: FETCH ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ] ---Child: ------IXSCAN ---------indexName = ns_1_lastmod_1 keyPattern = { ns: 1, lastmod: 1 } ---------direction = 1 ---------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['lastmod']: [Timestamp(1, 0), Timestamp(4294967295, 4294967295)] ---------fetched = 0 ---------sortedByDiskLoc = 0 ---------getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ]
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Tagging memoID 1
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Enumerator: memo just before moving: [Node #1]: AND enumstate counter 1 choice 0: subnodes: idx[2] pos 0 pred ns $eq "config.system.sessions" pos 1 pred lastmod $gte Timestamp(1, 0) choice 1: subnodes: idx[0] pos 0 pred ns $eq "config.system.sessions" choice 2: subnodes: idx[1] pos 0 pred ns $eq "config.system.sessions"
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] About to build solntree from tagged tree: $and ns $eq "config.system.sessions" || Selected Index #0 pos 0 combine 1 lastmod $gte Timestamp(1, 0)
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Planner: adding solution: SORT ---pattern = { lastmod: 1 } ---limit = 0 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] ---Child: ------SORT_KEY_GENERATOR ---------sortSpec = { lastmod: 1 } ---------fetched = 1 ---------sortedByDiskLoc = 0 ---------getSort = [{ min: 1 }, { ns: 1 }, { ns: 1, min: 1 }, ] ---------Child: ------------FETCH ---------------filter: lastmod $gte Timestamp(1, 0) ---------------fetched = 1 ---------------sortedByDiskLoc = 0 ---------------getSort = [{ min: 1 }, { ns: 1 }, { ns: 1, min: 1 }, ] ---------------Child: ------------------IXSCAN ---------------------indexName = ns_1_min_1 keyPattern = { ns: 1, min: 1 } ---------------------direction = 1 ---------------------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['min']: [MinKey, MaxKey] ---------------------fetched = 0 ---------------------sortedByDiskLoc = 0 ---------------------getSort = [{ min: 1 }, { ns: 1 }, { ns: 1, min: 1 }, ]
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Tagging memoID 1
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Enumerator: memo just before moving: [Node #1]: AND enumstate counter 2 choice 0: subnodes: idx[2] pos 0 pred ns $eq "config.system.sessions" pos 1 pred lastmod $gte Timestamp(1, 0) choice 1: subnodes: idx[0] pos 0 pred ns $eq "config.system.sessions" choice 2: subnodes: idx[1] pos 0 pred ns $eq "config.system.sessions"
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] About to build solntree from tagged tree: $and ns $eq "config.system.sessions" || Selected Index #1 pos 0 combine 1 lastmod $gte Timestamp(1, 0)
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Planner: adding solution: SORT ---pattern = { lastmod: 1 } ---limit = 0 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] ---Child: ------SORT_KEY_GENERATOR ---------sortSpec = { lastmod: 1 } ---------fetched = 1 ---------sortedByDiskLoc = 0 ---------getSort = [{ ns: 1 }, { ns: 1, shard: 1 }, { ns: 1, shard: 1, min: 1 }, { shard: 1 }, { shard: 1, min: 1 }, ] ---------Child: ------------FETCH ---------------filter: lastmod $gte Timestamp(1, 0) ---------------fetched = 1 ---------------sortedByDiskLoc = 0 ---------------getSort = [{ ns: 1 }, { ns: 1, shard: 1 }, { ns: 1, shard: 1, min: 1 }, { shard: 1 }, { shard: 1, min: 1 }, ] ---------------Child: ------------------IXSCAN ---------------------indexName = ns_1_shard_1_min_1 keyPattern = { ns: 1, shard: 1, min: 1 } ---------------------direction = 1 ---------------------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['shard']: [MinKey, MaxKey], field #2['min']: [MinKey, MaxKey] ---------------------fetched = 0 ---------------------sortedByDiskLoc = 0 ---------------------getSort = [{ ns: 1 }, { ns: 1, shard: 1 }, { ns: 1, shard: 1, min: 1 }, { shard: 1 }, { shard: 1, min: 1 }, ]
2019-09-04T06:32:19.330+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Planner: outputted 3 indexed solutions.
2019-09-04T06:32:19.330+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] begin_transaction on local snapshot Timestamp(1567578739, 1)
2019-09-04T06:32:19.330+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] WT begin_transaction for snapshot id 14055
2019-09-04T06:32:19.331+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Scoring plan 0: FETCH ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ] ---Child: ------IXSCAN ---------indexName = ns_1_lastmod_1 keyPattern = { ns: 1, lastmod: 1 } ---------direction = 1 ---------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['lastmod']: [Timestamp(1, 0), Timestamp(4294967295, 4294967295)] ---------fetched = 0 ---------sortedByDiskLoc = 0 ---------getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ] Stats: { "stage" : "FETCH", "nReturned" : 1, "executionTimeMillisEstimate" : 0, "works" : 2, "advanced" : 1, "needTime" : 0, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 1, "docsExamined" : 1, "alreadyHasObj" : 0, "inputStage" : { "stage" : "IXSCAN", "nReturned" : 1, "executionTimeMillisEstimate" : 0, "works" : 2, "advanced" : 1, "needTime" : 0, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 1, "keyPattern" : { "ns" : 1, "lastmod" : 1 }, "indexName" : "ns_1_lastmod_1", "isMultiKey" : false, "multiKeyPaths" : { "ns" : [], "lastmod" : [] }, "isUnique" : true, "isSparse" : false, "isPartial" : false, "indexVersion" : 2, "direction" : "forward", "indexBounds" : { "ns" : [ "[\"config.system.sessions\", \"config.system.sessions\"]" ], "lastmod" : [ "[Timestamp(1, 0), Timestamp(4294967295, 4294967295)]" ] }, "keysExamined" : 1, "seeks" : 1, "dupsTested" : 0, "dupsDropped" : 0 } }
2019-09-04T06:32:19.331+0000 D2 QUERY [ConfigServerCatalogCacheLoader-1] Scoring query plan: IXSCAN { ns: 1, lastmod: 1 } planHitEOF=1
2019-09-04T06:32:19.331+0000 D2 QUERY [ConfigServerCatalogCacheLoader-1] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
2019-09-04T06:32:19.331+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] score = 1.5003
2019-09-04T06:32:19.331+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Adding +1 EOF bonus to score.
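[Editor's note] The score(...) breakdown above is the multi-planner's ranking formula: a base score of 1, plus productivity (documents advanced per unit of work during the trial), plus tiny tie-breaker bonuses, with a further +1 when the trial run reached EOF. A minimal sketch of the arithmetic in mongo-shell JavaScript, using the figures logged for plan 0 (the variable names are descriptive, not the server's internal identifiers):

    // Recompute plan 0's score from the logged trial statistics.
    var baseScore = 1;
    var advanced = 1, works = 2;
    var productivity = advanced / works;                 // 0.5
    var tieBreakers = 0.0001 + 0.0001 + 0.0001;          // noFetch + noSort + noIxisect bonuses
    var score = baseScore + productivity + tieBreakers;  // 1.5003, matching the log line
    score += 1;                                          // EOF bonus: the plan exhausted its cursor in the trial
    print(score);                                        // 2.5003 effective ranking score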
2019-09-04T06:32:19.331+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Scoring plan 1: SORT ---pattern = { lastmod: 1 } ---limit = 0 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] ---Child: ------SORT_KEY_GENERATOR ---------sortSpec = { lastmod: 1 } ---------fetched = 1 ---------sortedByDiskLoc = 0 ---------getSort = [{ min: 1 }, { ns: 1 }, { ns: 1, min: 1 }, ] ---------Child: ------------FETCH ---------------filter: lastmod $gte Timestamp(1, 0) ---------------fetched = 1 ---------------sortedByDiskLoc = 0 ---------------getSort = [{ min: 1 }, { ns: 1 }, { ns: 1, min: 1 }, ] ---------------Child: ------------------IXSCAN ---------------------indexName = ns_1_min_1 keyPattern = { ns: 1, min: 1 } ---------------------direction = 1 ---------------------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['min']: [MinKey, MaxKey] ---------------------fetched = 0 ---------------------sortedByDiskLoc = 0 ---------------------getSort = [{ min: 1 }, { ns: 1 }, { ns: 1, min: 1 }, ] Stats: { "stage" : "SORT", "nReturned" : 0, "executionTimeMillisEstimate" : 0, "works" : 2, "advanced" : 0, "needTime" : 2, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "sortPattern" : { "lastmod" : 1 }, "memUsage" : 244, "memLimit" : 33554432, "inputStage" : { "stage" : "SORT_KEY_GENERATOR", "nReturned" : 1, "executionTimeMillisEstimate" : 0, "works" : 2, "advanced" : 1, "needTime" : 1, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "inputStage" : { "stage" : "FETCH", "filter" : { "lastmod" : { "$gte" : { "$timestamp" : { "t" : 1, "i" : 0 } } } }, "nReturned" : 1, "executionTimeMillisEstimate" : 0, "works" : 1, "advanced" : 1, "needTime" : 0, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "docsExamined" : 1, "alreadyHasObj" : 0, "inputStage" : { "stage" : "IXSCAN", "nReturned" : 1, "executionTimeMillisEstimate" : 0, "works" : 1, "advanced" : 1, "needTime" : 0, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "keyPattern" : { "ns" : 1, "min" : 1 }, "indexName" : "ns_1_min_1", "isMultiKey" : false, "multiKeyPaths" : { "ns" : [], "min" : [] }, "isUnique" : true, "isSparse" : false, "isPartial" : false, "indexVersion" : 2, "direction" : "forward", "indexBounds" : { "ns" : [ "[\"config.system.sessions\", \"config.system.sessions\"]" ], "min" : [ "[MinKey, MaxKey]" ] }, "keysExamined" : 1, "seeks" : 1, "dupsTested" : 0, "dupsDropped" : 0 } } } }
2019-09-04T06:32:19.331+0000 D2 QUERY [ConfigServerCatalogCacheLoader-1] Scoring query plan: IXSCAN { ns: 1, min: 1 } planHitEOF=0
2019-09-04T06:32:19.331+0000 D2 QUERY [ConfigServerCatalogCacheLoader-1] score(1.0002) = baseScore(1) + productivity((0 advanced)/(2 works) = 0) + tieBreakers(0.0001 noFetchBonus + 0 noSortBonus + 0.0001 noIxisectBonus = 0.0002)
2019-09-04T06:32:19.331+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] score = 1.0002
2019-09-04T06:32:19.331+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Scoring plan 2: SORT ---pattern = { lastmod: 1 } ---limit = 0 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] ---Child: ------SORT_KEY_GENERATOR ---------sortSpec = { lastmod: 1 } ---------fetched = 1 ---------sortedByDiskLoc = 0 ---------getSort = [{ ns: 1 }, { ns: 1, shard: 1 }, { ns: 1, shard: 1, min: 1 }, { shard: 1 }, { shard: 1, min: 1 }, ] ---------Child: ------------FETCH ---------------filter: lastmod $gte Timestamp(1, 0) ---------------fetched = 1 ---------------sortedByDiskLoc = 0 ---------------getSort = [{ ns: 1 }, { ns: 1, shard: 1 }, { ns: 1, shard: 1, min: 1 }, { shard: 1 }, { shard: 1, min: 1 }, ] ---------------Child: ------------------IXSCAN ---------------------indexName = ns_1_shard_1_min_1 keyPattern = { ns: 1, shard: 1, min: 1 } ---------------------direction = 1 ---------------------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['shard']: [MinKey, MaxKey], field #2['min']: [MinKey, MaxKey] ---------------------fetched = 0 ---------------------sortedByDiskLoc = 0 ---------------------getSort = [{ ns: 1 }, { ns: 1, shard: 1 }, { ns: 1, shard: 1, min: 1 }, { shard: 1 }, { shard: 1, min: 1 }, ] Stats: { "stage" : "SORT", "nReturned" : 0, "executionTimeMillisEstimate" : 0, "works" : 2, "advanced" : 0, "needTime" : 2, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "sortPattern" : { "lastmod" : 1 }, "memUsage" : 244, "memLimit" : 33554432, "inputStage" : { "stage" : "SORT_KEY_GENERATOR", "nReturned" : 1, "executionTimeMillisEstimate" : 0, "works" : 2, "advanced" : 1, "needTime" : 1, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "inputStage" : { "stage" : "FETCH", "filter" : { "lastmod" : { "$gte" : { "$timestamp" : { "t" : 1, "i" : 0 } } } }, "nReturned" : 1, "executionTimeMillisEstimate" : 0, "works" : 1, "advanced" : 1, "needTime" : 0, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "docsExamined" : 1, "alreadyHasObj" : 0, "inputStage" : { "stage" : "IXSCAN", "nReturned" : 1, "executionTimeMillisEstimate" : 0, "works" : 1, "advanced" : 1, "needTime" : 0, "needYield" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "keyPattern" : { "ns" : 1, "shard" : 1, "min" : 1 }, "indexName" : "ns_1_shard_1_min_1", "isMultiKey" : false, "multiKeyPaths" : { "ns" : [], "shard" : [], "min" : [] }, "isUnique" : true, "isSparse" : false, "isPartial" : false, "indexVersion" : 2, "direction" : "forward", "indexBounds" : { "ns" : [ "[\"config.system.sessions\", \"config.system.sessions\"]" ], "shard" : [ "[MinKey, MaxKey]" ], "min" : [ "[MinKey, MaxKey]" ] }, "keysExamined" : 1, "seeks" : 1, "dupsTested" : 0, "dupsDropped" : 0 } } } }
2019-09-04T06:32:19.331+0000 D2 QUERY [ConfigServerCatalogCacheLoader-1] Scoring query plan: IXSCAN { ns: 1, shard: 1, min: 1 } planHitEOF=0
2019-09-04T06:32:19.331+0000 D2 QUERY [ConfigServerCatalogCacheLoader-1] score(1.0002) = baseScore(1) + productivity((0 advanced)/(2 works) = 0) + tieBreakers(0.0001 noFetchBonus + 0 noSortBonus + 0.0001 noIxisectBonus = 0.0002)
2019-09-04T06:32:19.331+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] score = 1.0002
2019-09-04T06:32:19.331+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Winning solution: FETCH ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ] ---Child: ------IXSCAN ---------indexName = ns_1_lastmod_1 keyPattern = { ns: 1, lastmod: 1 } ---------direction = 1 ---------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['lastmod']: [Timestamp(1, 0), Timestamp(4294967295, 4294967295)] ---------fetched = 0 ---------sortedByDiskLoc = 0 ---------getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ]
2019-09-04T06:32:19.331+0000 D2 QUERY [ConfigServerCatalogCacheLoader-1] Winning plan: IXSCAN { ns: 1, lastmod: 1 }
2019-09-04T06:32:19.331+0000 D1 QUERY [ConfigServerCatalogCacheLoader-1] Inactive cache entry for query query: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(1, 0) } } sort: { lastmod: 1 } projection: {} queryHash 1DDA71BE planCacheKey 167D77D5 with works 2 is being promoted to active entry with works value 2
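[Editor's note] The { ns: 1, lastmod: 1 } index scan wins because it satisfies both the filter and the sort, and the inactive plan-cache entry for this shape is promoted to active. Running the same query by hand against this config server with explain should, assuming the indexes are unchanged, report the same winning plan (a sketch, not output from the log):

    // Re-run the catalog cache loader's chunk refresh query with explain.
    db.getSiblingDB("config").chunks
      .find({ ns: "config.system.sessions", lastmod: { $gte: Timestamp(1, 0) } })
      .sort({ lastmod: 1 })
      .explain("executionStats")
    // Expected: winningPlan FETCH -> IXSCAN with indexName "ns_1_lastmod_1",
    // matching the planSummary IXSCAN { ns: 1, lastmod: 1 } logged below.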
2019-09-04T06:32:19.331+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] WT rollback_transaction for snapshot id 14055
2019-09-04T06:32:19.331+0000 I COMMAND [ConfigServerCatalogCacheLoader-1] command config.chunks command: find { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(1, 0) } }, sort: { lastmod: 1 }, $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 fromMultiPlanner:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:1DDA71BE planCacheKey:167D77D5 reslen:555 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:32:19.331+0000 D1 SH_REFR [ConfigServerCatalogCacheLoader-1] Refresh for collection config.system.sessions from version 1|0||5d5e4a1c7fc690fd4e5fb282 to version 1|0||5d5e4a1c7fc690fd4e5fb282 took 1 ms
2019-09-04T06:32:19.331+0000 D3 EXECUTOR [ConfigServerCatalogCacheLoader-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.330+0000
2019-09-04T06:32:19.331+0000 D3 EXECUTOR [LogicalSessionCacheRefresh] Scheduling remote command request: RemoteCommand 951 -- target:[cmodb806.togewa.com:27018] db:config cmd:{ createIndexes: "system.sessions", indexes: [ { key: { lastUse: 1 }, name: "lsidTTLIndex", expireAfterSeconds: 1800 } ], allowImplicitCollectionCreation: false }
2019-09-04T06:32:19.331+0000 D3 EXECUTOR [LogicalSessionCacheRefresh] Scheduling remote command request: RemoteCommand 952 -- target:[cmodb808.togewa.com:27018] db:config cmd:{ createIndexes: "system.sessions", indexes: [ { key: { lastUse: 1 }, name: "lsidTTLIndex", expireAfterSeconds: 1800 } ], allowImplicitCollectionCreation: false }
2019-09-04T06:32:19.331+0000 I CONNPOOL [TaskExecutorPool-0] Connecting to cmodb806.togewa.com:27018
2019-09-04T06:32:19.332+0000 D2 ASIO [TaskExecutorPool-0] Finished connection setup.
2019-09-04T06:32:19.332+0000 D1 SH_REFR [LogicalSessionCacheReap] Refreshing chunks for collection config.system.sessions; current collection version is 1|0||5d5e4a1c7fc690fd4e5fb282
2019-09-04T06:32:19.332+0000 D3 EXECUTOR [LogicalSessionCacheRefresh] Scheduling remote command request: RemoteCommand 953 -- target:[cmodb810.togewa.com:27018] db:config cmd:{ createIndexes: "system.sessions", indexes: [ { key: { lastUse: 1 }, name: "lsidTTLIndex", expireAfterSeconds: 1800 } ], allowImplicitCollectionCreation: false }
2019-09-04T06:32:19.332+0000 I CONNPOOL [TaskExecutorPool-0] Connecting to cmodb808.togewa.com:27018
2019-09-04T06:32:19.332+0000 D2 ASIO [TaskExecutorPool-0] Finished connection setup.
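[Editor's note] The LogicalSessionCacheRefresh thread fans the same createIndexes command out to each shard so config.system.sessions carries a TTL index that expires idle sessions 1800 seconds (30 minutes) after their last use. The shell equivalent of the command being scheduled (illustrative; the server sends it through the task executor, not the shell):

    // Shell form of the scheduled createIndexes: a TTL index on lastUse.
    db.getSiblingDB("config").system.sessions.createIndex(
        { lastUse: 1 },
        { name: "lsidTTLIndex", expireAfterSeconds: 1800 }
    )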
2019-09-04T06:32:19.332+0000 D3 EXECUTOR [ConfigServerCatalogCacheLoader-1] Executing a task on behalf of pool ConfigServerCatalogCacheLoader
2019-09-04T06:32:19.332+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:32:19.332+0000 D2 COMMAND [ConfigServerCatalogCacheLoader-1] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, ntoreturn: 1, singleBatch: true, $readPreference: { mode: "nearest", tags: [] }, $db: "config" }
2019-09-04T06:32:19.332+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:19.332+0000 D2 QUERY [ConfigServerCatalogCacheLoader-1] Using idhack: query: { _id: "config.system.sessions" } sort: {} projection: {} ntoreturn=1
2019-09-04T06:32:19.332+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] begin_transaction on local snapshot Timestamp(1567578739, 1)
2019-09-04T06:32:19.332+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] WT begin_transaction for snapshot id 14058
2019-09-04T06:32:19.332+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] WT rollback_transaction for snapshot id 14058
2019-09-04T06:32:19.332+0000 I COMMAND [ConfigServerCatalogCacheLoader-1] command config.collections command: find { find: "collections", filter: { _id: "config.system.sessions" }, ntoreturn: 1, singleBatch: true, $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: IDHACK keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:469 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:32:19.332+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:32:19.332+0000 D2 COMMAND [ConfigServerCatalogCacheLoader-1] run command config.$cmd { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(1, 0) } }, sort: { lastmod: 1 }, $readPreference: { mode: "nearest", tags: [] }, $db: "config" }
2019-09-04T06:32:19.332+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:19.332+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Tagging the match expression according to cache data: Filter: $and ns $eq "config.system.sessions" lastmod $gte Timestamp(1, 0) Cache data: (index-tagged expression tree: tree=Node ---Leaf (ns_1_lastmod_1, ), pos: 0, can combine? 1 ---Leaf (ns_1_lastmod_1, ), pos: 1, can combine? 1 )
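[Editor's note] The _id-equality lookup on config.collections above bypasses the full planner entirely (planSummary: IDHACK). A quick hand check of that fast path (a sketch; the exact explain output shape can vary by shell version):

    // Point lookups on _id skip plan enumeration and use the _id index directly.
    db.getSiblingDB("config").collections
      .find({ _id: "config.system.sessions" })
      .limit(1)
      .explain()
      .queryPlanner.winningPlan    // expected stage: "IDHACK"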
2019-09-04T06:32:19.332+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Index 0: (ns_1_min_1, )
2019-09-04T06:32:19.332+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Index 1: (ns_1_shard_1_min_1, )
2019-09-04T06:32:19.332+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Index 2: (ns_1_lastmod_1, )
2019-09-04T06:32:19.332+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Index 3: (_id_, )
2019-09-04T06:32:19.332+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Tagged tree: $and ns $eq "config.system.sessions" || Selected Index #2 pos 0 combine 1 lastmod $gte Timestamp(1, 0) || Selected Index #2 pos 1 combine 1
2019-09-04T06:32:19.332+0000 D5 QUERY [ConfigServerCatalogCacheLoader-1] Planner: solution constructed from the cache: FETCH ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ] ---Child: ------IXSCAN ---------indexName = ns_1_lastmod_1 keyPattern = { ns: 1, lastmod: 1 } ---------direction = 1 ---------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['lastmod']: [Timestamp(1, 0), Timestamp(4294967295, 4294967295)] ---------fetched = 0 ---------sortedByDiskLoc = 0 ---------getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ]
2019-09-04T06:32:19.332+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] begin_transaction on local snapshot Timestamp(1567578739, 1)
2019-09-04T06:32:19.332+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] WT begin_transaction for snapshot id 14059
2019-09-04T06:32:19.332+0000 D2 QUERY [ConfigServerCatalogCacheLoader-1] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003)
2019-09-04T06:32:19.332+0000 D3 STORAGE [ConfigServerCatalogCacheLoader-1] WT rollback_transaction for snapshot id 14059
2019-09-04T06:32:19.332+0000 I COMMAND [ConfigServerCatalogCacheLoader-1] command config.chunks command: find { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(1, 0) } }, sort: { lastmod: 1 }, $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:1DDA71BE planCacheKey:167D77D5 reslen:555 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:32:19.332+0000 D1 SH_REFR [ConfigServerCatalogCacheLoader-1] Refresh for collection config.system.sessions from version 1|0||5d5e4a1c7fc690fd4e5fb282 to version 1|0||5d5e4a1c7fc690fd4e5fb282 took 0 ms
2019-09-04T06:32:19.332+0000 D3 EXECUTOR [ConfigServerCatalogCacheLoader-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.332+0000
2019-09-04T06:32:19.332+0000 D2 WRITE [LogicalSessionCacheReap] Beginning scanSessions. Scanning 0 sessions.
2019-09-04T06:32:19.332+0000 I CONNPOOL [TaskExecutorPool-0] Connecting to cmodb810.togewa.com:27018
2019-09-04T06:32:19.332+0000 D2 ASIO [TaskExecutorPool-0] Finished connection setup.
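[Editor's note] This second refresh skips multi-planning: the solution is rebuilt from the now-active cache entry (note the absence of fromMultiPlanner:1 in the COMMAND line, and the refresh dropping from 1 ms to 0 ms). On 4.2 the cached entry can be inspected with the $planCacheStats aggregation stage (a sketch; exact field availability depends on server version):

    // Look up the cached plan for this query shape by its logged planCacheKey.
    db.getSiblingDB("config").chunks.aggregate([
        { $planCacheStats: {} },
        { $match: { planCacheKey: "167D77D5" } }
    ])
    // Expect an entry with queryHash "1DDA71BE", isActive: true, works: 2.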
2019-09-04T06:32:19.333+0000 D2 ASIO [LogicalSessionCacheRefresh] Request 951 finished with response: { numIndexesBefore: 2, numIndexesAfter: 2, note: "all indexes already exist", ok: 1.0, $gleStats: { lastOpTime: { ts: Timestamp(1567578733, 1), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578733, 1), $configServerState: { opTime: { ts: Timestamp(1567578739, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578733, 1) }
2019-09-04T06:32:19.333+0000 D3 EXECUTOR [LogicalSessionCacheRefresh] Received remote response: RemoteOnAnyResponse -- cmd:{ numIndexesBefore: 2, numIndexesAfter: 2, note: "all indexes already exist", ok: 1.0, $gleStats: { lastOpTime: { ts: Timestamp(1567578733, 1), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578733, 1), $configServerState: { opTime: { ts: Timestamp(1567578739, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578733, 1) } target: cmodb806.togewa.com:27018
2019-09-04T06:32:19.339+0000 D2 ASIO [RS] Request 949 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578739, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578739335), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5a73ac9313827bca5321'), when: new Date(1567578739335) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpApplied: { ts: Timestamp(1567578739, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 1), $clusterTime: { clusterTime: Timestamp(1567578739, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 2) }
2019-09-04T06:32:19.339+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578739, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578739335), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5a73ac9313827bca5321'), when: new Date(1567578739335) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpApplied: { ts: Timestamp(1567578739, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 1), $clusterTime: { clusterTime: Timestamp(1567578739, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.339+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:19.339+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578739, 2) and ending at ts: Timestamp(1567578739, 2)
2019-09-04T06:32:19.340+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:29.947+0000
2019-09-04T06:32:19.340+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:30.003+0000
2019-09-04T06:32:19.340+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578739, 2), t: 1 }
2019-09-04T06:32:19.340+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:19.340+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000
2019-09-04T06:32:19.340+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:19.340+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:19.340+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 1)
2019-09-04T06:32:19.340+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14062
2019-09-04T06:32:19.340+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:19.340+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:19.340+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14062
2019-09-04T06:32:19.340+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:19.340+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:32:19.340+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:19.340+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 1)
2019-09-04T06:32:19.340+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578739, 2) }
2019-09-04T06:32:19.340+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14065
2019-09-04T06:32:19.340+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:19.340+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:19.340+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14065
2019-09-04T06:32:19.340+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14045
2019-09-04T06:32:19.340+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14045
2019-09-04T06:32:19.340+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14068
2019-09-04T06:32:19.340+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14068
2019-09-04T06:32:19.340+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:19.340+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 14070
2019-09-04T06:32:19.340+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578739, 2)
2019-09-04T06:32:19.340+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578739, 2)
2019-09-04T06:32:19.340+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 14070
2019-09-04T06:32:19.340+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:19.340+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:19.340+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14069
2019-09-04T06:32:19.340+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14069
2019-09-04T06:32:19.340+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14072
2019-09-04T06:32:19.340+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14072
2019-09-04T06:32:19.340+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578739, 2), t: 1 }({ ts: Timestamp(1567578739, 2), t: 1 })
2019-09-04T06:32:19.340+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 2)
2019-09-04T06:32:19.340+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14073
2019-09-04T06:32:19.340+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578739, 2) } } ] } sort: {} projection: {}
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and t $eq 1 ts $lt Timestamp(1567578739, 2) Sort: {} Proj: {} =============================
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578739, 2) || First: notFirst: full path: ts
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
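[Editor's note] Around each applied batch, rsSync-0 brackets the oplog write with recovery markers: it sets the oplog truncate-after point before writing, clears it back to Timestamp(0, 0) once the batch is safely down, and advances minvalid. These markers live in ordinary local-database collections and can be inspected directly (a sketch; collection names as used by this server version):

    // Inspect the batch-boundary markers the applier maintains.
    var local = db.getSiblingDB("local");
    local.replset.oplogTruncateAfterPoint.findOne();  // holds the truncate-after timestamp; reset once the batch is durable
    local.replset.minvalid.findOne();                 // { ts, t }: the node is consistent once applied through this optime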
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578739, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $or $and t $eq 1 ts $lt Timestamp(1567578739, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578739, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
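[Editor's note] The $or the subplanner is working through is a lexicographic optime comparison: the stored minvalid { t, ts } is strictly less than the new optime { t: 1, ts: Timestamp(1567578739, 2) } iff its term is lower, or the terms are equal and its timestamp is lower. Written out as a find, this is what the applier's predicate amounts to (illustrative):

    // (t, ts) < (1, Timestamp(1567578739, 2)), expressed as the logged $or.
    db.getSiblingDB("local").replset.minvalid.find({
        $or: [
            { t: { $lt: 1 } },                                // strictly older term
            { t: 1, ts: { $lt: Timestamp(1567578739, 2) } }   // same term, older timestamp
        ]
    })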
2019-09-04T06:32:19.340+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578739, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:19.340+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14073
2019-09-04T06:32:19.340+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:19.340+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:19.341+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578739, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578739335), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5a73ac9313827bca5321'), when: new Date(1567578739335) } } }, oplog application mode: Secondary
2019-09-04T06:32:19.341+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578739, 2)
2019-09-04T06:32:19.341+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 14075
2019-09-04T06:32:19.341+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "config" }
2019-09-04T06:32:19.341+0000 D2 STORAGE [repl-writer-worker-0] WiredTigerSizeStorer::store Marking table:config/collection/42--6194257481163143499 dirty, numRecords: 2, dataSize: 308, use_count: 3
2019-09-04T06:32:19.341+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:19.341+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 14075
2019-09-04T06:32:19.341+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:19.341+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578739, 2), t: 1 }({ ts: Timestamp(1567578739, 2), t: 1 })
2019-09-04T06:32:19.341+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 2)
2019-09-04T06:32:19.341+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14074
2019-09-04T06:32:19.341+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and Sort: {} Proj: {} =============================
2019-09-04T06:32:19.341+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.341+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:32:19.341+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.341+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:19.341+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:19.341+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14074
2019-09-04T06:32:19.341+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578739, 2)
2019-09-04T06:32:19.341+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.341+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 1), t: 1 }, durableWallTime: new Date(1567578739261), appliedOpTime: { ts: Timestamp(1567578739, 2), t: 1 }, appliedWallTime: new Date(1567578739335), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.341+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 954 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:49.341+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 1), t: 1 }, durableWallTime: new Date(1567578739261), appliedOpTime: { ts: Timestamp(1567578739, 2), t: 1 }, appliedWallTime: new Date(1567578739335), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.341+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.341+0000
2019-09-04T06:32:19.341+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14078
2019-09-04T06:32:19.341+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14078
2019-09-04T06:32:19.341+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578739, 2), t: 1 }({ ts: Timestamp(1567578739, 2), t: 1 })
2019-09-04T06:32:19.341+0000 D2 ASIO [RS] Request 954 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 1), $clusterTime: { clusterTime: Timestamp(1567578739, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 2) }
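[Editor's note] The Reporter's replSetUpdatePosition call is how this secondary advertises its durable and applied optimes to its sync source; the same per-member optimes surface in rs.status(). A quick way to eyeball them from any member (sketch):

    // Summarize each member's applied optime, as carried by the
    // replSetUpdatePosition traffic shown above.
    rs.status().members.map(function (m) {
        return { name: m.name, state: m.stateStr, optime: m.optimeDate };
    })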
2019-09-04T06:32:19.341+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 1), $clusterTime: { clusterTime: Timestamp(1567578739, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.341+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.341+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.341+0000
2019-09-04T06:32:19.342+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578739, 2), t: 1 }
2019-09-04T06:32:19.342+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 955 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:29.342+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 1), t: 1 } }
2019-09-04T06:32:19.342+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.341+0000
2019-09-04T06:32:19.343+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:32:19.343+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.343+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 2), t: 1 }, durableWallTime: new Date(1567578739335), appliedOpTime: { ts: Timestamp(1567578739, 2), t: 1 }, appliedWallTime: new Date(1567578739335), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.343+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 956 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:49.343+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 2), t: 1 }, durableWallTime: new Date(1567578739335), appliedOpTime: { ts: Timestamp(1567578739, 2), t: 1 }, appliedWallTime: new Date(1567578739335), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.343+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.341+0000
2019-09-04T06:32:19.343+0000 D2 ASIO [RS] Request 956 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 1), $clusterTime: { clusterTime: Timestamp(1567578739, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 2) }
2019-09-04T06:32:19.344+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 1), t: 1 }, lastCommittedWall: new Date(1567578739261), lastOpVisible: { ts: Timestamp(1567578739, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 1), $clusterTime: { clusterTime: Timestamp(1567578739, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.344+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:19.344+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.341+0000
2019-09-04T06:32:19.344+0000 D2 ASIO [RS] Request 955 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 2), t: 1 }, lastCommittedWall: new Date(1567578739335), lastOpVisible: { ts: Timestamp(1567578739, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 2), t: 1 }, lastCommittedWall: new Date(1567578739335), lastOpApplied: { ts: Timestamp(1567578739, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 2), $clusterTime: { clusterTime: Timestamp(1567578739, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 2) }
2019-09-04T06:32:19.344+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 2), t: 1 }, lastCommittedWall: new Date(1567578739335), lastOpVisible: { ts: Timestamp(1567578739, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 2), t: 1 }, lastCommittedWall: new Date(1567578739335), lastOpApplied: { ts: Timestamp(1567578739, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 2), $clusterTime: { clusterTime: Timestamp(1567578739, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.344+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.344+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:32:19.344+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.344+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.344+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578734, 2)
2019-09-04T06:32:19.344+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:30.003+0000
2019-09-04T06:32:19.344+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:30.307+0000
2019-09-04T06:32:19.344+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 957 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:29.344+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 2), t: 1 } }
2019-09-04T06:32:19.344+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.344+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.341+0000
2019-09-04T06:32:19.344+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000
2019-09-04T06:32:19.344+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.344+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:19.344+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.344+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000
2019-09-04T06:32:19.344+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
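[Editor's note] After the empty getMore confirms the majority point, the node advances its committed and stable optimes and lets WiredTiger move oldest_timestamp forward (here trailing by about five seconds). The last stable recovery point is visible through replSetGetStatus (a sketch; field names as exposed by this server generation):

    // The stable optime set above is what checkpoints/recovery can use.
    db.adminCommand({ replSetGetStatus: 1 }).lastStableRecoveryTimestamp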
2019-09-04T06:32:19.344+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.344+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000
2019-09-04T06:32:19.344+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000
2019-09-04T06:32:19.344+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.344+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000
2019-09-04T06:32:19.344+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.344+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000
2019-09-04T06:32:19.344+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.344+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000
2019-09-04T06:32:19.344+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.344+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000
2019-09-04T06:32:19.344+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.344+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578739, 2), t: 1 }, 2019-09-04T06:32:19.335+0000
2019-09-04T06:32:19.345+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000
2019-09-04T06:32:19.352+0000 D2 ASIO [RS] Request 957 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578739, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578739345), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5a73ac9313827bca5327'), when: new Date(1567578739345) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 2), t: 1 }, lastCommittedWall: new Date(1567578739335), lastOpVisible: { ts: Timestamp(1567578739, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 2), t: 1 }, lastCommittedWall: new Date(1567578739335), lastOpApplied: { ts: Timestamp(1567578739, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 2), $clusterTime: { clusterTime: Timestamp(1567578739, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 3) }
2019-09-04T06:32:19.352+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578739, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578739345), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5a73ac9313827bca5327'), when: new Date(1567578739345) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 2), t: 1 }, lastCommittedWall: new Date(1567578739335), lastOpVisible: { ts: Timestamp(1567578739, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 2), t: 1 }, lastCommittedWall: new Date(1567578739335), lastOpApplied: { ts: Timestamp(1567578739, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 2), $clusterTime: { clusterTime: Timestamp(1567578739, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 3) } target: cmodb804.togewa.com:27019
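[Editor's note] Requests 955 and 957 are the oplog fetcher's awaitData getMores on the sync source's oplog cursor; each new config.locks update arrives as a one-document nextBatch. A rough shell analogue of what the fetcher is reading (illustrative; the real fetcher uses a tailable awaitData cursor carrying the term and lastKnownCommittedOpTime):

    // Read the same region of the sync source's oplog by hand.
    db.getSiblingDB("local").oplog.rs
      .find({ ns: "config.locks", ts: { $gte: Timestamp(1567578739, 2) } })
      .sort({ $natural: 1 })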
keyId: 0 } }, operationTime: Timestamp(1567578739, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:19.352+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:19.352+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578739, 3) and ending at ts: Timestamp(1567578739, 3) 2019-09-04T06:32:19.352+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:30.307+0000 2019-09-04T06:32:19.352+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:29.896+0000 2019-09-04T06:32:19.352+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:19.352+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000 2019-09-04T06:32:19.352+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:19.352+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:19.352+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 2) 2019-09-04T06:32:19.352+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14082 2019-09-04T06:32:19.352+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:19.352+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:19.352+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14082 2019-09-04T06:32:19.352+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:19.352+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:32:19.352+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:19.352+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 2) 2019-09-04T06:32:19.352+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578739, 3) } 2019-09-04T06:32:19.353+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14085 2019-09-04T06:32:19.353+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:19.353+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:19.353+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14085 2019-09-04T06:32:19.352+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578739, 3), t: 1 } 2019-09-04T06:32:19.353+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14080 2019-09-04T06:32:19.353+0000 D3 STORAGE [rsSync-0] WT 
commit_transaction for snapshot id 14080 2019-09-04T06:32:19.353+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14088 2019-09-04T06:32:19.353+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14088 2019-09-04T06:32:19.353+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:19.353+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 14090 2019-09-04T06:32:19.353+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578739, 3) 2019-09-04T06:32:19.353+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578739, 3) 2019-09-04T06:32:19.353+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 14090 2019-09-04T06:32:19.353+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:19.353+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:32:19.353+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14089 2019-09-04T06:32:19.353+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14089 2019-09-04T06:32:19.353+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14092 2019-09-04T06:32:19.353+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14092 2019-09-04T06:32:19.353+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578739, 3), t: 1 }({ ts: Timestamp(1567578739, 3), t: 1 }) 2019-09-04T06:32:19.353+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 3) 2019-09-04T06:32:19.353+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14093 2019-09-04T06:32:19.353+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578739, 3) } } ] } sort: {} projection: {} 2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578739, 3) Sort: {} Proj: {} ============================= 2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578739, 3) || First: notFirst: full path: ts 2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
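Note: the `setting minvalid to at least` step above only persists the new optime when the stored one is older, and the `$or` in the `Running query as sub-queries` entry is exactly that ordering test, term-major and timestamp-minor. A minimal sketch of the comparison in Python (names are illustrative, not from the MongoDB codebase):

```python
# Sketch of the optime ordering encoded by the $or predicate above:
#   { $or: [ { t: { $lt: T } }, { t: T, ts: { $lt: TS } } ] }
# An existing (t, ts) pair matches when it is strictly older than the
# candidate (T, TS) under term-major, timestamp-minor ordering.

def optime_is_older(existing: tuple, candidate: tuple) -> bool:
    """Return True when `existing` (term, (secs, inc)) sorts before `candidate`."""
    t_old, ts_old = existing
    t_new, ts_new = candidate
    return t_old < t_new or (t_old == t_new and ts_old < ts_new)

# The update in this log would match a stored minvalid older than
# term 1 / Timestamp(1567578739, 3):
assert optime_is_older((1, (1567578739, 2)), (1, (1567578739, 3)))
assert not optime_is_older((1, (1567578739, 3)), (1, (1567578739, 3)))
```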
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578739, 3)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578739, 3)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578739, 3) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
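Note: both `$or` children above degrade to COLLSCAN because `local.replset.minvalid` carries only the `_id_` index and neither `t` nor `ts` is indexed; the subplanner plans each child separately, then the rooted `$or` as a whole. A toy sketch of that fallback rule, simplified to a leading-field-only match and not the real planner:

```python
# Toy model of the subplanner decision visible above: a child of a rooted
# $or gets an indexed plan only if some index's leading field appears in
# its predicates; otherwise the planner falls back to a collection scan.

def plan_child(predicate_fields: set, indexes: list) -> str:
    relevant = [ix for ix in indexes if ix[0] in predicate_fields]
    return f"IXSCAN {relevant[0]}" if relevant else "COLLSCAN"

indexes = [["_id"]]                      # local.replset.minvalid has only _id_
print(plan_child({"t", "ts"}, indexes))  # COLLSCAN  (child 0: t $eq 1, ts $lt ...)
print(plan_child({"t"}, indexes))        # COLLSCAN  (child 1: t $lt 1)
```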
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578739, 3)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:19.353+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14093
2019-09-04T06:32:19.353+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:19.353+0000 D3 STORAGE [repl-writer-worker-1] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:19.353+0000 D3 REPL [repl-writer-worker-1] applying op: { ts: Timestamp(1567578739, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578739345), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5a73ac9313827bca5327'), when: new Date(1567578739345) } } }, oplog application mode: Secondary
2019-09-04T06:32:19.353+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578739, 3)
2019-09-04T06:32:19.353+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 14095
2019-09-04T06:32:19.353+0000 D2 QUERY [repl-writer-worker-1] Using idhack: { _id: "config.system.sessions" }
2019-09-04T06:32:19.353+0000 D4 WRITE [repl-writer-worker-1] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:19.353+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 14095
2019-09-04T06:32:19.353+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:19.353+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578739, 3), t: 1 }({ ts: Timestamp(1567578739, 3), t: 1 })
2019-09-04T06:32:19.353+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 3)
2019-09-04T06:32:19.353+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14094
2019-09-04T06:32:19.353+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:19.354+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.354+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:32:19.354+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.354+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:19.354+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:19.354+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14094
2019-09-04T06:32:19.354+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578739, 3)
2019-09-04T06:32:19.354+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.354+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 2), t: 1 }, durableWallTime: new Date(1567578739335), appliedOpTime: { ts: Timestamp(1567578739, 3), t: 1 }, appliedWallTime: new Date(1567578739345), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 2), t: 1 }, lastCommittedWall: new Date(1567578739335), lastOpVisible: { ts: Timestamp(1567578739, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.354+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14098
2019-09-04T06:32:19.354+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 958 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:49.354+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 2), t: 1 }, durableWallTime: new Date(1567578739335), appliedOpTime: { ts: Timestamp(1567578739, 3), t: 1 }, appliedWallTime: new Date(1567578739345), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 2), t: 1 }, lastCommittedWall: new Date(1567578739335), lastOpVisible: { ts: Timestamp(1567578739, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.354+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14098
2019-09-04T06:32:19.354+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578739, 3), t: 1 }({ ts: Timestamp(1567578739, 3), t: 1 })
2019-09-04T06:32:19.354+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.354+0000
2019-09-04T06:32:19.354+0000 D2 ASIO [RS] Request 958 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 2), t: 1 }, lastCommittedWall: new Date(1567578739335), lastOpVisible: { ts: Timestamp(1567578739, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 2), $clusterTime: { clusterTime: Timestamp(1567578739, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 3) }
2019-09-04T06:32:19.354+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 2), t: 1 }, lastCommittedWall: new Date(1567578739335), lastOpVisible: { ts: Timestamp(1567578739, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 2), $clusterTime: { clusterTime: Timestamp(1567578739, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.354+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.354+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.354+0000
2019-09-04T06:32:19.355+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:32:19.355+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578739, 3), t: 1 }
2019-09-04T06:32:19.355+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 959 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:29.355+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 2), t: 1 } }
2019-09-04T06:32:19.355+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.355+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 3), t: 1 }, durableWallTime: new Date(1567578739345), appliedOpTime: { ts: Timestamp(1567578739, 3), t: 1 }, appliedWallTime: new Date(1567578739345), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 2), t: 1 }, lastCommittedWall: new Date(1567578739335), lastOpVisible: { ts: Timestamp(1567578739, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.355+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.355+0000
2019-09-04T06:32:19.355+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 960 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:49.355+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 3), t: 1 }, durableWallTime: new Date(1567578739345), appliedOpTime: { ts: Timestamp(1567578739, 3), t: 1 }, appliedWallTime: new Date(1567578739345), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 2), t: 1 }, lastCommittedWall: new Date(1567578739335), lastOpVisible: { ts: Timestamp(1567578739, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.355+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.355+0000
2019-09-04T06:32:19.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.355+0000 D2 ASIO [RS] Request 960 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 2), t: 1 }, lastCommittedWall: new Date(1567578739335), lastOpVisible: { ts: Timestamp(1567578739, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 2), $clusterTime: { clusterTime: Timestamp(1567578739, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 3) }
2019-09-04T06:32:19.355+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 2), t: 1 }, lastCommittedWall: new Date(1567578739335), lastOpVisible: { ts: Timestamp(1567578739, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 2), $clusterTime: { clusterTime: Timestamp(1567578739, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.355+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:19.355+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.355+0000
2019-09-04T06:32:19.356+0000 D2 ASIO [RS] Request 959 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 3), t: 1 }, lastCommittedWall: new Date(1567578739345), lastOpVisible: { ts: Timestamp(1567578739, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 3), t: 1 }, lastCommittedWall: new Date(1567578739345), lastOpApplied: { ts: Timestamp(1567578739, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 3), $clusterTime: { clusterTime: Timestamp(1567578739, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 3) }
2019-09-04T06:32:19.356+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 3), t: 1 }, lastCommittedWall: new Date(1567578739345), lastOpVisible: { ts: Timestamp(1567578739, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 3), t: 1 }, lastCommittedWall: new Date(1567578739345), lastOpApplied: { ts: Timestamp(1567578739, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 3), $clusterTime: { clusterTime: Timestamp(1567578739, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.356+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.356+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:32:19.356+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.356+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.356+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578734, 3)
2019-09-04T06:32:19.356+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:29.896+0000
2019-09-04T06:32:19.356+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:30.817+0000
2019-09-04T06:32:19.356+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:19.356+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000
2019-09-04T06:32:19.356+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 961 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:29.356+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 3), t: 1 } }
2019-09-04T06:32:19.356+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.356+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.355+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.356+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000
2019-09-04T06:32:19.357+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.357+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000
2019-09-04T06:32:19.357+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.357+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000
2019-09-04T06:32:19.357+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.357+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:19.357+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.357+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000
2019-09-04T06:32:19.357+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.357+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000
2019-09-04T06:32:19.357+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.357+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000
2019-09-04T06:32:19.357+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.357+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000
2019-09-04T06:32:19.357+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578739, 3), t: 1 }, 2019-09-04T06:32:19.345+0000
2019-09-04T06:32:19.357+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000
2019-09-04T06:32:19.364+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578739, 3)
2019-09-04T06:32:19.366+0000 D2 ASIO [RS] Request 961 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578739, 4), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578739356), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 3), t: 1 }, lastCommittedWall: new Date(1567578739345), lastOpVisible: { ts: Timestamp(1567578739, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 3), t: 1 }, lastCommittedWall: new Date(1567578739345), lastOpApplied: { ts: Timestamp(1567578739, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 3), $clusterTime: { clusterTime: Timestamp(1567578739, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 4) }
2019-09-04T06:32:19.366+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578739, 4), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578739356), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 3), t: 1 }, lastCommittedWall: new Date(1567578739345), lastOpVisible: { ts: Timestamp(1567578739, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 3), t: 1 }, lastCommittedWall: new Date(1567578739345), lastOpApplied: { ts: Timestamp(1567578739, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 3), $clusterTime: { clusterTime: Timestamp(1567578739, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 4) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.366+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:19.366+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578739, 4) and ending at ts: Timestamp(1567578739, 4)
2019-09-04T06:32:19.366+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:30.817+0000
2019-09-04T06:32:19.366+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:30.021+0000
2019-09-04T06:32:19.366+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:19.366+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000
2019-09-04T06:32:19.366+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578739, 4), t: 1 }
2019-09-04T06:32:19.366+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:19.366+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:19.366+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 3)
2019-09-04T06:32:19.366+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14103
2019-09-04T06:32:19.366+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:19.366+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:19.366+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14103
2019-09-04T06:32:19.366+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:19.366+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:32:19.366+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:19.366+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578739, 4) }
2019-09-04T06:32:19.366+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 3)
2019-09-04T06:32:19.366+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14106
2019-09-04T06:32:19.366+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:19.366+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
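Note: each `replSetUpdatePosition` above carries one `optimes` entry per replica-set member, so the spread between a member's `appliedOpTime.ts` and the newest applied timestamp gives a rough per-member lag. A small sketch, with values hand-copied from the report at 06:32:19.355; the tuple layout is mine, not a driver type:

```python
# Quick lag estimate from the optimes array of a replSetUpdatePosition
# command like the ones above. Timestamps are (seconds, increment) pairs.

optimes = [
    {"memberId": 0, "appliedOpTime": {"ts": (1567578737, 2), "t": 1}},
    {"memberId": 1, "appliedOpTime": {"ts": (1567578739, 3), "t": 1}},
    {"memberId": 2, "appliedOpTime": {"ts": (1567578737, 2), "t": 1}},
]

newest = max(o["appliedOpTime"]["ts"] for o in optimes)
for o in optimes:
    secs, _inc = o["appliedOpTime"]["ts"]
    print(f"member {o['memberId']}: ~{newest[0] - secs}s behind the newest applied op")
# Here members 0 and 2 trail member 1 (this node) by about 2 seconds.
```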
2019-09-04T06:32:19.366+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14106
2019-09-04T06:32:19.366+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14099
2019-09-04T06:32:19.366+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14099
2019-09-04T06:32:19.366+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14109
2019-09-04T06:32:19.366+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14109
2019-09-04T06:32:19.366+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:19.366+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 14111
2019-09-04T06:32:19.366+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578739, 4)
2019-09-04T06:32:19.366+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578739, 4)
2019-09-04T06:32:19.366+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 14111
2019-09-04T06:32:19.366+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:19.366+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:19.366+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14110
2019-09-04T06:32:19.366+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14110
2019-09-04T06:32:19.366+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14113
2019-09-04T06:32:19.366+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14113
2019-09-04T06:32:19.366+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578739, 4), t: 1 }({ ts: Timestamp(1567578739, 4), t: 1 })
2019-09-04T06:32:19.366+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 4)
2019-09-04T06:32:19.366+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14114
2019-09-04T06:32:19.366+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578739, 4) } } ] } sort: {} projection: {}
2019-09-04T06:32:19.366+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.366+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:32:19.366+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578739, 4)
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:19.366+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.366+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.366+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:19.366+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578739, 4) || First: notFirst: full path: ts
2019-09-04T06:32:19.366+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.366+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578739, 4)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:19.366+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:19.366+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578739, 4)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578739, 4) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578739, 4) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:19.367+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14114 2019-09-04T06:32:19.367+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:19.367+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:19.367+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578739, 4), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578739356), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary 2019-09-04T06:32:19.367+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578739, 4) 2019-09-04T06:32:19.367+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 14116 2019-09-04T06:32:19.367+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "config.system.sessions" } 2019-09-04T06:32:19.367+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:32:19.367+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 14116 2019-09-04T06:32:19.367+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:19.367+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578739, 4), t: 1 }({ ts: Timestamp(1567578739, 4), t: 1 }) 2019-09-04T06:32:19.367+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 4) 2019-09-04T06:32:19.367+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14115 2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:19.367+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:19.367+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:19.367+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14115 2019-09-04T06:32:19.367+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578739, 4) 2019-09-04T06:32:19.367+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14119 2019-09-04T06:32:19.367+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14119 2019-09-04T06:32:19.367+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578739, 4), t: 1 }({ ts: Timestamp(1567578739, 4), t: 1 }) 2019-09-04T06:32:19.367+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:19.367+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 3), t: 1 }, durableWallTime: new Date(1567578739345), appliedOpTime: { ts: Timestamp(1567578739, 4), t: 1 }, appliedWallTime: new Date(1567578739356), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 3), t: 1 }, lastCommittedWall: new Date(1567578739345), lastOpVisible: { ts: Timestamp(1567578739, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:19.367+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 962 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:49.367+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 3), t: 1 }, durableWallTime: new Date(1567578739345), appliedOpTime: { ts: Timestamp(1567578739, 4), t: 1 }, appliedWallTime: new Date(1567578739356), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 3), t: 1 }, lastCommittedWall: new Date(1567578739345), lastOpVisible: { ts: Timestamp(1567578739, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:19.367+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.367+0000 2019-09-04T06:32:19.368+0000 D2 ASIO [RS] Request 962 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 3), t: 1 }, lastCommittedWall: new Date(1567578739345), lastOpVisible: { ts: Timestamp(1567578739, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 3), $clusterTime: { clusterTime: Timestamp(1567578739, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 4) } 2019-09-04T06:32:19.368+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 3), t: 1 }, lastCommittedWall: new Date(1567578739345), lastOpVisible: { ts: Timestamp(1567578739, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 3), $clusterTime: { clusterTime: Timestamp(1567578739, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 4) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:19.368+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:19.368+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.368+0000 2019-09-04T06:32:19.368+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578739, 4), t: 1 } 2019-09-04T06:32:19.368+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 963 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:29.368+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 3), t: 1 } } 2019-09-04T06:32:19.368+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.368+0000 2019-09-04T06:32:19.369+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:19.369+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:19.369+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 4), t: 1 }, durableWallTime: new Date(1567578739356), appliedOpTime: { ts: Timestamp(1567578739, 4), t: 1 }, appliedWallTime: new Date(1567578739356), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 3), t: 1 }, lastCommittedWall: new Date(1567578739345), lastOpVisible: { ts: Timestamp(1567578739, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:19.369+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 964 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:49.369+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: 
Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 4), t: 1 }, durableWallTime: new Date(1567578739356), appliedOpTime: { ts: Timestamp(1567578739, 4), t: 1 }, appliedWallTime: new Date(1567578739356), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 3), t: 1 }, lastCommittedWall: new Date(1567578739345), lastOpVisible: { ts: Timestamp(1567578739, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:19.369+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.368+0000 2019-09-04T06:32:19.369+0000 D2 ASIO [RS] Request 964 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 3), t: 1 }, lastCommittedWall: new Date(1567578739345), lastOpVisible: { ts: Timestamp(1567578739, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 3), $clusterTime: { clusterTime: Timestamp(1567578739, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 4) } 2019-09-04T06:32:19.369+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 3), t: 1 }, lastCommittedWall: new Date(1567578739345), lastOpVisible: { ts: Timestamp(1567578739, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 3), $clusterTime: { clusterTime: Timestamp(1567578739, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 4) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:19.369+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:19.369+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.368+0000 2019-09-04T06:32:19.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry 2019-09-04T06:32:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:32:19.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } 2019-09-04T06:32:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:19.370+0000 D5 QUERY [shard-registry-reload] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:32:19.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:32:19.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:32:19.370+0000 D2 ASIO [RS] Request 963 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 4), t: 1 }, lastCommittedWall: new Date(1567578739356), lastOpVisible: { ts: Timestamp(1567578739, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 4), t: 1 }, lastCommittedWall: new Date(1567578739356), lastOpApplied: { ts: Timestamp(1567578739, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 4), $clusterTime: { clusterTime: Timestamp(1567578739, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 4) } 2019-09-04T06:32:19.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and 2019-09-04T06:32:19.370+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 4), t: 1 }, lastCommittedWall: new Date(1567578739356), lastOpVisible: { ts: Timestamp(1567578739, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 4), t: 1 }, lastCommittedWall: new Date(1567578739356), lastOpApplied: { ts: Timestamp(1567578739, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 4), $clusterTime: { clusterTime: Timestamp(1567578739, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 4) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:19.370+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:19.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:19.370+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:19.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578739, 4) 2019-09-04T06:32:19.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 14122 2019-09-04T06:32:19.370+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 14122 2019-09-04T06:32:19.370+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578734, 4) 2019-09-04T06:32:19.370+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:30.021+0000 2019-09-04T06:32:19.370+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:30.666+0000 2019-09-04T06:32:19.370+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:19.370+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000 2019-09-04T06:32:19.370+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 965 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:29.370+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 4), t: 1 } } 2019-09-04T06:32:19.370+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000 2019-09-04T06:32:19.370+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.368+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000 
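The shard-registry-reload entries above show a find on config.shards with read preference "nearest", answered by a COLLSCAN over an empty filter (the COMMAND entry just below records nreturned:3). A minimal pymongo sketch of that same query, assuming client access to the config server named in this log; the client code is illustrative and not part of the server:

from pymongo import MongoClient, ReadPreference

# Config server host/port taken from this log's startup options.
client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
config_db = client.get_database("config", read_preference=ReadPreference.NEAREST)
for shard in config_db.shards.find({}):  # empty filter, hence the COLLSCAN above
    print(shard["_id"], shard["host"])
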
2019-09-04T06:32:19.370+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:646 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:32:19.370+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D1 SHARDING [shard-registry-reload] found 3 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578739, 4), t: 1 } 2019-09-04T06:32:19.370+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000 2019-09-04T06:32:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:32:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:32:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:32:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:32:19.370+0000 D1 NETWORK [shard-registry-reload] 
Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:32:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:32:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS 2019-09-04T06:32:19.370+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578739, 4), t: 1 }, 2019-09-04T06:32:19.356+0000 2019-09-04T06:32:19.370+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000 2019-09-04T06:32:19.373+0000 D2 ASIO [RS] Request 965 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578739, 5), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578739370), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 4), t: 1 }, lastCommittedWall: new Date(1567578739356), lastOpVisible: { ts: Timestamp(1567578739, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 4), t: 1 }, lastCommittedWall: new Date(1567578739356), lastOpApplied: { ts: Timestamp(1567578739, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 4), $clusterTime: { clusterTime: Timestamp(1567578739, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 5) } 2019-09-04T06:32:19.373+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578739, 5), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578739370), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 4), t: 1 }, lastCommittedWall: new Date(1567578739356), lastOpVisible: { ts: Timestamp(1567578739, 4), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 4), t: 1 }, lastCommittedWall: new Date(1567578739356), lastOpApplied: { ts: Timestamp(1567578739, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 4), $clusterTime: { clusterTime: Timestamp(1567578739, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 5) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:19.373+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:19.373+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578739, 5) and ending at ts: Timestamp(1567578739, 5) 2019-09-04T06:32:19.373+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:30.666+0000 2019-09-04T06:32:19.373+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:29.905+0000 2019-09-04T06:32:19.373+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578739, 5), t: 1 } 2019-09-04T06:32:19.373+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:19.373+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000 2019-09-04T06:32:19.373+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:19.373+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:19.373+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 4) 2019-09-04T06:32:19.373+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14125 2019-09-04T06:32:19.373+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:19.373+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:19.373+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14125 2019-09-04T06:32:19.373+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:32:19.373+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:19.373+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578739, 5) } 2019-09-04T06:32:19.373+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:19.373+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 4) 2019-09-04T06:32:19.373+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14128 2019-09-04T06:32:19.373+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14120 2019-09-04T06:32:19.373+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: 
"local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:19.373+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:19.373+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14128 2019-09-04T06:32:19.373+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14120 2019-09-04T06:32:19.373+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14131 2019-09-04T06:32:19.373+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14131 2019-09-04T06:32:19.373+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:19.373+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 14133 2019-09-04T06:32:19.373+0000 D4 STORAGE [repl-writer-worker-7] inserting record with timestamp Timestamp(1567578739, 5) 2019-09-04T06:32:19.374+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578739, 5) 2019-09-04T06:32:19.374+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 14133 2019-09-04T06:32:19.374+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:19.374+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:32:19.374+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14132 2019-09-04T06:32:19.374+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14132 2019-09-04T06:32:19.374+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14135 2019-09-04T06:32:19.374+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14135 2019-09-04T06:32:19.374+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578739, 5), t: 1 }({ ts: Timestamp(1567578739, 5), t: 1 }) 2019-09-04T06:32:19.374+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 5) 2019-09-04T06:32:19.374+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14136 2019-09-04T06:32:19.374+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578739, 5) } } ] } sort: {} projection: {} 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and t $eq 1 ts $lt Timestamp(1567578739, 5) Sort: {} Proj: {} ============================= 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578739, 5) || First: notFirst: full path: ts 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578739, 5) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $or $and t $eq 1 ts $lt Timestamp(1567578739, 5) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578739, 5) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
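The sub-query planning above is rsSync-0 checking local.replset.minvalid with an $or over (t, ts); the collection only carries the _id index, so every branch can only be answered by a COLLSCAN. A hedged pymongo sketch of the equivalent client-side read, assuming direct access to this node's local database; the bson.Timestamp value is copied from the log:

from bson import Timestamp
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
# Same $or the subplanner splits into two children above.
minvalid_filter = {
    "$or": [
        {"t": {"$lt": 1}},
        {"t": 1, "ts": {"$lt": Timestamp(1567578739, 5)}},
    ]
}
print(client.local["replset.minvalid"].find_one(minvalid_filter))
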
2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578739, 5) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:19.374+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14136 2019-09-04T06:32:19.374+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:19.374+0000 D3 STORAGE [repl-writer-worker-11] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:19.374+0000 D3 REPL [repl-writer-worker-11] applying op: { ts: Timestamp(1567578739, 5), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578739370), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary 2019-09-04T06:32:19.374+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578739, 5) 2019-09-04T06:32:19.374+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 14138 2019-09-04T06:32:19.374+0000 D2 QUERY [repl-writer-worker-11] Using idhack: { _id: "config" } 2019-09-04T06:32:19.374+0000 D4 WRITE [repl-writer-worker-11] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:32:19.374+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 14138 2019-09-04T06:32:19.374+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:19.374+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578739, 5), t: 1 }({ ts: Timestamp(1567578739, 5), t: 1 }) 2019-09-04T06:32:19.374+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 5) 2019-09-04T06:32:19.374+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14137 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:32:19.374+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:19.374+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14137 2019-09-04T06:32:19.374+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578739, 5) 2019-09-04T06:32:19.374+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14141 2019-09-04T06:32:19.374+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14141 2019-09-04T06:32:19.374+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578739, 5), t: 1 }({ ts: Timestamp(1567578739, 5), t: 1 }) 2019-09-04T06:32:19.374+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:19.374+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 4), t: 1 }, durableWallTime: new Date(1567578739356), appliedOpTime: { ts: Timestamp(1567578739, 5), t: 1 }, appliedWallTime: new Date(1567578739370), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 4), t: 1 }, lastCommittedWall: new Date(1567578739356), lastOpVisible: { ts: Timestamp(1567578739, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:19.374+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 966 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:49.374+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 4), t: 1 }, durableWallTime: new Date(1567578739356), appliedOpTime: { ts: Timestamp(1567578739, 5), t: 1 }, appliedWallTime: new Date(1567578739370), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 4), t: 1 }, lastCommittedWall: new Date(1567578739356), lastOpVisible: { ts: Timestamp(1567578739, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:19.374+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.374+0000 2019-09-04T06:32:19.374+0000 D2 ASIO [RS] Request 966 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 4), t: 1 }, lastCommittedWall: new Date(1567578739356), lastOpVisible: { ts: Timestamp(1567578739, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 4), $clusterTime: { clusterTime: Timestamp(1567578739, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 5) } 2019-09-04T06:32:19.374+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 4), t: 1 }, lastCommittedWall: new Date(1567578739356), lastOpVisible: { ts: Timestamp(1567578739, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 4), $clusterTime: { clusterTime: Timestamp(1567578739, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 5) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:19.375+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:19.375+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.374+0000 2019-09-04T06:32:19.375+0000 D2 ASIO [LogicalSessionCacheRefresh] Request 953 finished with response: { operationTime: Timestamp(1567578729, 1), ok: 0.0, errmsg: "request doesn't allow collection to be created implicitly", code: 227, codeName: "CannotImplicitlyCreateCollection", ns: "config.system.sessions", $gleStats: { lastOpTime: { ts: Timestamp(1567578729, 1), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578729, 1), $configServerState: { opTime: { ts: Timestamp(1567578739, 5), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578739, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } 2019-09-04T06:32:19.375+0000 D3 EXECUTOR [LogicalSessionCacheRefresh] Received remote response: RemoteOnAnyResponse -- cmd:{ operationTime: Timestamp(1567578729, 1), ok: 0.0, errmsg: "request doesn't allow collection to be created implicitly", code: 227, codeName: "CannotImplicitlyCreateCollection", ns: "config.system.sessions", $gleStats: { lastOpTime: { ts: Timestamp(1567578729, 1), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578729, 1), $configServerState: { opTime: { ts: Timestamp(1567578739, 5), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578739, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } target: cmodb810.togewa.com:27018 2019-09-04T06:32:19.375+0000 D1 - [LogicalSessionCacheRefresh] User Assertion: CannotImplicitlyCreateCollection{ ns: "config.system.sessions" }: request doesn't allow collection to be created implicitly src/mongo/s/cluster_commands_helpers.cpp 246 2019-09-04T06:32:19.375+0000 W - [LogicalSessionCacheRefresh] DBException thrown :: caused by :: CannotImplicitlyCreateCollection{ ns: "config.system.sessions" }: request doesn't allow collection to be created implicitly 2019-09-04T06:32:19.375+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578739, 5), t: 1 } 2019-09-04T06:32:19.375+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 967 -- target:[cmodb804.togewa.com:27019] db:local 
expDate:2019-09-04T06:32:29.375+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 4), t: 1 } } 2019-09-04T06:32:19.375+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.374+0000 2019-09-04T06:32:19.378+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:19.378+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:19.379+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 5), t: 1 }, durableWallTime: new Date(1567578739370), appliedOpTime: { ts: Timestamp(1567578739, 5), t: 1 }, appliedWallTime: new Date(1567578739370), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 4), t: 1 }, lastCommittedWall: new Date(1567578739356), lastOpVisible: { ts: Timestamp(1567578739, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:19.379+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 968 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:49.379+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 5), t: 1 }, durableWallTime: new Date(1567578739370), appliedOpTime: { ts: Timestamp(1567578739, 5), t: 1 }, appliedWallTime: new Date(1567578739370), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 4), t: 1 }, lastCommittedWall: new Date(1567578739356), lastOpVisible: { ts: Timestamp(1567578739, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:19.379+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.374+0000 2019-09-04T06:32:19.379+0000 D2 ASIO [RS] Request 967 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578739, 6), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578739375), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5a73ac9313827bca533f'), when: new Date(1567578739375), who: "ConfigServer:conn9511" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: 
Timestamp(1567578739, 5), t: 1 }, lastCommittedWall: new Date(1567578739370), lastOpVisible: { ts: Timestamp(1567578739, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 5), t: 1 }, lastCommittedWall: new Date(1567578739370), lastOpApplied: { ts: Timestamp(1567578739, 6), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 5), $clusterTime: { clusterTime: Timestamp(1567578739, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 6) } 2019-09-04T06:32:19.379+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578739, 6), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578739375), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5a73ac9313827bca533f'), when: new Date(1567578739375), who: "ConfigServer:conn9511" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 5), t: 1 }, lastCommittedWall: new Date(1567578739370), lastOpVisible: { ts: Timestamp(1567578739, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 5), t: 1 }, lastCommittedWall: new Date(1567578739370), lastOpApplied: { ts: Timestamp(1567578739, 6), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 5), $clusterTime: { clusterTime: Timestamp(1567578739, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 6) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:19.379+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:19.379+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578739, 6) and ending at ts: Timestamp(1567578739, 6) 2019-09-04T06:32:19.379+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.379+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.379+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578734, 5) 2019-09-04T06:32:19.379+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:29.905+0000 2019-09-04T06:32:19.379+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:29.465+0000 2019-09-04T06:32:19.379+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578739, 6), t: 1 } 2019-09-04T06:32:19.379+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000 
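The getMore requests in this stretch (965 and 967 above, 969 below) are the oplog fetcher tailing local.oplog.rs on its sync source cmodb804 with maxTimeMS: 5000. A rough client-side approximation using a tailable awaitData cursor in pymongo, leaving out the internal term and lastKnownCommittedOpTime fields that only replica-set members send:

from pymongo import CursorType, MongoClient

# Sync source taken from the RemoteCommand targets in this log.
sync_source = MongoClient("mongodb://cmodb804.togewa.com:27019/")
cursor = sync_source.local["oplog.rs"].find(
    {},
    cursor_type=CursorType.TAILABLE_AWAIT,  # keep the cursor open like the fetcher
    max_await_time_ms=5000,                 # mirrors the maxTimeMS: 5000 above
)
for entry in cursor:
    print(entry["ts"], entry["op"], entry.get("ns"))
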
2019-09-04T06:32:19.379+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000 2019-09-04T06:32:19.379+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:19.379+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.379+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000 2019-09-04T06:32:19.380+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.380+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000 2019-09-04T06:32:19.380+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.380+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000 2019-09-04T06:32:19.381+0000 D3 REPL [conn306] Got 
notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.381+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000 2019-09-04T06:32:19.381+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.381+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000 2019-09-04T06:32:19.381+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:19.381+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:19.381+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 5) 2019-09-04T06:32:19.381+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14145 2019-09-04T06:32:19.381+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:19.381+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:19.381+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14145 2019-09-04T06:32:19.381+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:19.381+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:19.381+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 5) 2019-09-04T06:32:19.381+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14148 2019-09-04T06:32:19.381+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:19.381+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:19.381+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14148 2019-09-04T06:32:19.381+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:32:19.381+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578739, 6) } 2019-09-04T06:32:19.381+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14143 2019-09-04T06:32:19.381+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14143 2019-09-04T06:32:19.381+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14151 2019-09-04T06:32:19.381+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14151 2019-09-04T06:32:19.381+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:19.381+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 14153 2019-09-04T06:32:19.381+0000 D4 STORAGE 
[repl-writer-worker-13] inserting record with timestamp Timestamp(1567578739, 6) 2019-09-04T06:32:19.381+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578739, 6) 2019-09-04T06:32:19.381+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 14153 2019-09-04T06:32:19.381+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:19.381+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:32:19.381+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14152 2019-09-04T06:32:19.381+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14152 2019-09-04T06:32:19.381+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14155 2019-09-04T06:32:19.381+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14155 2019-09-04T06:32:19.381+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578739, 6), t: 1 }({ ts: Timestamp(1567578739, 6), t: 1 }) 2019-09-04T06:32:19.381+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 6) 2019-09-04T06:32:19.381+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14156 2019-09-04T06:32:19.381+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578739, 6) } } ] } sort: {} projection: {} 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and t $eq 1 ts $lt Timestamp(1567578739, 6) Sort: {} Proj: {} ============================= 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578739, 6) || First: notFirst: full path: ts 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578739, 6) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Beginning planning...
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $or $and t $eq 1 ts $lt Timestamp(1567578739, 6) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578739, 6) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578739, 6) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:19.381+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14156 2019-09-04T06:32:19.381+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:19.381+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:19.381+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578739, 6), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578739375), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5a73ac9313827bca533f'), when: new Date(1567578739375), who: "ConfigServer:conn9511" } } }, oplog application mode: Secondary 2019-09-04T06:32:19.381+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578739, 6) 2019-09-04T06:32:19.381+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 14158 2019-09-04T06:32:19.381+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "config" } 2019-09-04T06:32:19.381+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:32:19.381+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 14158 2019-09-04T06:32:19.381+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:19.381+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578739, 6), t: 1 } 2019-09-04T06:32:19.381+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 969 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:29.381+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 5), t: 1 } } 2019-09-04T06:32:19.381+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.374+0000 2019-09-04T06:32:19.381+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578739, 6), t: 1 }({ ts: Timestamp(1567578739, 6), t: 1 }) 2019-09-04T06:32:19.381+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 6) 2019-09-04T06:32:19.381+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14157 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
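The op applied above is a "u" (update) oplog entry on config.locks: o2 names the target _id and o carries a $v:1 $set modifier, and "Using idhack" means the secondary applies it as an _id point update. A hedged pymongo sketch of roughly the write that produced it on the configrs primary (cmodb804 in this log); this is for illustration only and should not be replayed against a live cluster, since config.locks is cluster metadata:

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb804.togewa.com:27019/")
client.config.locks.update_one(
    {"_id": "config"},       # o2: the document the oplog entry targets
    {"$set": {"state": 2}},  # o: the $set modifier (ts/when/who fields elided)
)
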
2019-09-04T06:32:19.381+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:19.381+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:19.382+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14157 2019-09-04T06:32:19.382+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578739, 6) 2019-09-04T06:32:19.382+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14161 2019-09-04T06:32:19.382+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14161 2019-09-04T06:32:19.382+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578739, 6), t: 1 }({ ts: Timestamp(1567578739, 6), t: 1 }) 2019-09-04T06:32:19.382+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.382+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000 2019-09-04T06:32:19.383+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.383+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000 2019-09-04T06:32:19.383+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:19.384+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578739, 5), t: 1 }, 2019-09-04T06:32:19.370+0000 2019-09-04T06:32:19.385+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000 2019-09-04T06:32:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000 2019-09-04T06:32:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 970 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:32:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:32:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 971 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:32:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:32:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002 2019-09-04T06:32:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 972 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:32:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:32:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 973 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:32:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:32:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001 2019-09-04T06:32:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 974 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:32:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:32:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 975 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:32:24.385+0000 cmd:{ isMaster: 1 } 
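Requests 970 through 975 above are the ReplicaSetMonitor probing every member of shard0000, shard0001, and shard0002 with isMaster; the replies below carry the setName, ismaster, primary, and hosts fields it uses to refresh each set's view. The same probe issued by hand against one member, assuming pymongo 3.12+ for the directConnection option:

from pymongo import MongoClient

# Probe a single member directly instead of discovering the whole set.
member = MongoClient("mongodb://cmodb810.togewa.com:27018/", directConnection=True)
reply = member.admin.command("isMaster")
print(reply["setName"], reply["ismaster"], reply.get("primary"))
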
2019-09-04T06:32:19.385+0000 D2 ASIO [RS] Request 968 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 5), t: 1 }, lastCommittedWall: new Date(1567578739370), lastOpVisible: { ts: Timestamp(1567578739, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 5), $clusterTime: { clusterTime: Timestamp(1567578739, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 6) }
2019-09-04T06:32:19.385+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 5), t: 1 }, lastCommittedWall: new Date(1567578739370), lastOpVisible: { ts: Timestamp(1567578739, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 5), $clusterTime: { clusterTime: Timestamp(1567578739, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 6) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.385+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.385+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 6), t: 1 }, durableWallTime: new Date(1567578739375), appliedOpTime: { ts: Timestamp(1567578739, 6), t: 1 }, appliedWallTime: new Date(1567578739375), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 5), t: 1 }, lastCommittedWall: new Date(1567578739370), lastOpVisible: { ts: Timestamp(1567578739, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.385+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 976 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:49.385+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 6), t: 1 }, durableWallTime: new Date(1567578739375), appliedOpTime: { ts: Timestamp(1567578739, 6), t: 1 }, appliedWallTime: new Date(1567578739375), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 5), t: 1 }, lastCommittedWall: new Date(1567578739370), lastOpVisible: { ts: Timestamp(1567578739, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.385+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.374+0000
2019-09-04T06:32:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 972 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578729, 1), t: 1 }, lastWriteDate: new Date(1567578729000), majorityOpTime: { ts: Timestamp(1567578729, 1), t: 1 }, majorityWriteDate: new Date(1567578729000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578739386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578729, 1), $configServerState: { opTime: { ts: Timestamp(1567578739, 5), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578739, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578729, 1) }
2019-09-04T06:32:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578729, 1), t: 1 }, lastWriteDate: new Date(1567578729000), majorityOpTime: { ts: Timestamp(1567578729, 1), t: 1 }, majorityWriteDate: new Date(1567578729000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578739386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578729, 1), $configServerState: { opTime: { ts: Timestamp(1567578739, 5), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578739, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578729, 1) } target: cmodb810.togewa.com:27018
2019-09-04T06:32:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 970 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578733, 1), t: 1 }, lastWriteDate: new Date(1567578733000), majorityOpTime: { ts: Timestamp(1567578733, 1), t: 1 }, majorityWriteDate: new Date(1567578733000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578739385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578733, 1), $configServerState: { opTime: { ts: Timestamp(1567578739, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578733, 1) }
2019-09-04T06:32:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578733, 1), t: 1 }, lastWriteDate: new Date(1567578733000), majorityOpTime: { ts: Timestamp(1567578733, 1), t: 1 }, majorityWriteDate: new Date(1567578733000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578739385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578733, 1), $configServerState: { opTime: { ts: Timestamp(1567578739, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578733, 1) } target: cmodb806.togewa.com:27018
2019-09-04T06:32:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 974 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578737, 1), t: 1 }, lastWriteDate: new Date(1567578737000), majorityOpTime: { ts: Timestamp(1567578737, 1), t: 1 }, majorityWriteDate: new Date(1567578737000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578739386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578737, 1), $configServerState: { opTime: { ts: Timestamp(1567578739, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 1) }
2019-09-04T06:32:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578737, 1), t: 1 }, lastWriteDate: new Date(1567578737000), majorityOpTime: { ts: Timestamp(1567578737, 1), t: 1 }, majorityWriteDate: new Date(1567578737000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578739386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578737, 1), $configServerState: { opTime: { ts: Timestamp(1567578739, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 1) } target: cmodb808.togewa.com:27018
2019-09-04T06:32:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 971 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578733, 1), t: 1 }, lastWriteDate: new Date(1567578733000), majorityOpTime: { ts: Timestamp(1567578733, 1), t: 1 }, majorityWriteDate: new Date(1567578733000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578739385), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578733, 1), $configServerState: { opTime: { ts: Timestamp(1567578728, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578733, 1) }
2019-09-04T06:32:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578733, 1), t: 1 }, lastWriteDate: new Date(1567578733000), majorityOpTime: { ts: Timestamp(1567578733, 1), t: 1 }, majorityWriteDate: new Date(1567578733000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578739385), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578733, 1), $configServerState: { opTime: { ts: Timestamp(1567578728, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578733, 1) } target: cmodb807.togewa.com:27018
2019-09-04T06:32:19.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms
2019-09-04T06:32:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 975 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578737, 1), t: 1 }, lastWriteDate: new Date(1567578737000), majorityOpTime: { ts: Timestamp(1567578737, 1), t: 1 }, majorityWriteDate: new Date(1567578737000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578739386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578737, 1), $configServerState: { opTime: { ts: Timestamp(1567578720, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578737, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 1) }
2019-09-04T06:32:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578737, 1), t: 1 }, lastWriteDate: new Date(1567578737000), majorityOpTime: { ts: Timestamp(1567578737, 1), t: 1 }, majorityWriteDate: new Date(1567578737000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578739386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578737, 1), $configServerState: { opTime: { ts: Timestamp(1567578720, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578737, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578737, 1) } target: cmodb809.togewa.com:27018
2019-09-04T06:32:19.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms
2019-09-04T06:32:19.385+0000 D2 ASIO [RS] Request 969 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpVisible: { ts: Timestamp(1567578739, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpApplied: { ts: Timestamp(1567578739, 6), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 6), $clusterTime: { clusterTime: Timestamp(1567578739, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 6) }
2019-09-04T06:32:19.385+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpVisible: { ts: Timestamp(1567578739, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpApplied: { ts: Timestamp(1567578739, 6), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 6), $clusterTime: { clusterTime: Timestamp(1567578739, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 6) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.385+0000 D2 ASIO [RS] Request 976 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpVisible: { ts: Timestamp(1567578739, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 6), $clusterTime: { clusterTime: Timestamp(1567578739, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 6) }
2019-09-04T06:32:19.385+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpVisible: { ts: Timestamp(1567578739, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 6), $clusterTime: { clusterTime: Timestamp(1567578739, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 6) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.385+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:19.385+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:32:19.385+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.385+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578734, 6)
2019-09-04T06:32:19.386+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:29.465+0000
2019-09-04T06:32:19.386+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:29.908+0000
2019-09-04T06:32:19.386+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 977 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:29.386+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 6), t: 1 } }
2019-09-04T06:32:19.386+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:19.386+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.385+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000
2019-09-04T06:32:19.386+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:19.386+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.386+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000
2019-09-04T06:32:19.386+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.385+0000
2019-09-04T06:32:19.386+0000 I - [LogicalSessionCacheRefresh] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c37259 0x561749c42521 0x561749a7a54b 0x56174a3c2b67 0x56174a0ae262 0x56174a0ae6da 0x56174a47a143 0x56174a47d086 0x56174a22c215 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CAF259","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"AF254B"},{"b":"561748F88000","o":"143AB67","s":"_ZN5mongo35scatterGatherOnlyVersionIfUnshardedEPNS_16OperationContextERKNS_15NamespaceStringERKNS_7BSONObjERKNS_21ReadPreferenceSettingENS_5Shard11RetryPolicyERKSt3setINS_10ErrorCodes5ErrorESt4lessISF_ESaISF_EE"},{"b":"561748F88000","o":"1126262","s":"_ZN5mongo30SessionsCollectionConfigServer24_generateIndexesIfNeededEPNS_16OperationContextE"},{"b":"561748F88000","o":"11266DA","s":"_ZN5mongo30SessionsCollectionConfigServer23setupSessionsCollectionEPNS_16OperationContextE"},{"b":"561748F88000","o":"14F2143","s":"_ZN5mongo23LogicalSessionCacheImpl8_refreshEPNS_6ClientE"},{"b":"561748F88000","o":"14F5086","s":"_ZN5mongo23LogicalSessionCacheImpl16_periodicRefreshEPNS_6ClientE"},{"b":"561748F88000","o":"12A4215"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x5557) [0x561749c37259]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(+0xAF254B) [0x561749a7a54b]
 mongod(_ZN5mongo35scatterGatherOnlyVersionIfUnshardedEPNS_16OperationContextERKNS_15NamespaceStringERKNS_7BSONObjERKNS_21ReadPreferenceSettingENS_5Shard11RetryPolicyERKSt3setINS_10ErrorCodes5ErrorESt4lessISF_ESaISF_EE+0x3B7) [0x56174a3c2b67]
 mongod(_ZN5mongo30SessionsCollectionConfigServer24_generateIndexesIfNeededEPNS_16OperationContextE+0x92) [0x56174a0ae262]
 mongod(_ZN5mongo30SessionsCollectionConfigServer23setupSessionsCollectionEPNS_16OperationContextE+0x1BA) [0x56174a0ae6da]
 mongod(_ZN5mongo23LogicalSessionCacheImpl8_refreshEPNS_6ClientE+0x103) [0x56174a47a143]
 mongod(_ZN5mongo23LogicalSessionCacheImpl16_periodicRefreshEPNS_6ClientE+0x26) [0x56174a47d086]
 mongod(+0x12A4215) [0x56174a22c215]
 mongod(+0x28A5BBF) [0x56174b82dbbf]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:32:19.386+0000 D2 ASIO [LogicalSessionCacheRefresh] Canceling operation; original request was: RemoteCommand 952 -- target:[cmodb808.togewa.com:27018] db:config cmd:{ createIndexes: "system.sessions", indexes: [ { key: { lastUse: 1 }, name: "lsidTTLIndex", expireAfterSeconds: 1800 } ], allowImplicitCollectionCreation: false }
2019-09-04T06:32:19.386+0000 D1 - [LogicalSessionCacheRefresh] User Assertion: CallbackCanceled: Callback was canceled src/mongo/executor/network_interface_tl.cpp 400
2019-09-04T06:32:19.386+0000 W - [LogicalSessionCacheRefresh] DBException thrown :: caused by :: CallbackCanceled: Callback was canceled
2019-09-04T06:32:19.388+0000 D2 ASIO [RS] Request 977 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578739, 7), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578739383), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5a73ac9313827bca5347'), when: new Date(1567578739383), who: "ConfigServer:conn9511" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpVisible: { ts: Timestamp(1567578739, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpApplied: { ts: Timestamp(1567578739, 7), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 6), $clusterTime: { clusterTime: Timestamp(1567578739, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 7) }
2019-09-04T06:32:19.388+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578739, 7), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578739383), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5a73ac9313827bca5347'), when: new Date(1567578739383), who: "ConfigServer:conn9511" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpVisible: { ts: Timestamp(1567578739, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpApplied: { ts: Timestamp(1567578739, 7), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 6), $clusterTime: { clusterTime: Timestamp(1567578739, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 7) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.388+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:19.388+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578739, 7) and ending at ts: Timestamp(1567578739, 7)
2019-09-04T06:32:19.388+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:29.908+0000
2019-09-04T06:32:19.388+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:29.703+0000
2019-09-04T06:32:19.388+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578739, 7), t: 1 }
2019-09-04T06:32:19.388+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:19.388+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000
2019-09-04T06:32:19.388+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:19.388+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:19.388+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 6)
2019-09-04T06:32:19.389+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14165
2019-09-04T06:32:19.389+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:19.389+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:19.389+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14165
2019-09-04T06:32:19.389+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:19.389+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:19.389+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 6)
2019-09-04T06:32:19.389+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14168
2019-09-04T06:32:19.389+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:19.389+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:19.389+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14168
2019-09-04T06:32:19.389+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:32:19.389+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578739, 7) }
2019-09-04T06:32:19.389+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14162
2019-09-04T06:32:19.389+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14162
2019-09-04T06:32:19.389+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14171
2019-09-04T06:32:19.389+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14171
2019-09-04T06:32:19.389+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:19.389+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 14173
2019-09-04T06:32:19.389+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578739, 7)
2019-09-04T06:32:19.389+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578739, 7)
2019-09-04T06:32:19.389+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 14173
2019-09-04T06:32:19.389+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:19.389+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:19.389+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14172
2019-09-04T06:32:19.389+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14172
2019-09-04T06:32:19.389+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14175
2019-09-04T06:32:19.389+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14175
2019-09-04T06:32:19.389+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578739, 7), t: 1 }({ ts: Timestamp(1567578739, 7), t: 1 })
2019-09-04T06:32:19.389+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 7)
2019-09-04T06:32:19.389+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14176
2019-09-04T06:32:19.389+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578739, 7) } } ] } sort: {} projection: {}
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578739, 7)
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Rated tree:
$and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578739, 7) || First: notFirst: full path: ts
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578739, 7)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Rated tree:
t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578739, 7)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Rated tree:
$or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578739, 7) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578739, 7) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:19.389+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14176 2019-09-04T06:32:19.389+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:19.389+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:19.389+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578739, 7), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578739383), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5a73ac9313827bca5347'), when: new Date(1567578739383), who: "ConfigServer:conn9511" } } }, oplog application mode: Secondary 2019-09-04T06:32:19.389+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578739, 7) 2019-09-04T06:32:19.389+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 14178 2019-09-04T06:32:19.389+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "config.system.sessions" } 2019-09-04T06:32:19.389+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:32:19.389+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 14178 2019-09-04T06:32:19.389+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:19.389+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578739, 7), t: 1 }({ ts: Timestamp(1567578739, 7), t: 1 }) 2019-09-04T06:32:19.389+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 7) 2019-09-04T06:32:19.389+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14177 2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:19.389+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:19.389+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:19.389+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14177 2019-09-04T06:32:19.389+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578739, 7) 2019-09-04T06:32:19.389+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14181 2019-09-04T06:32:19.390+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14181 2019-09-04T06:32:19.390+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578739, 7), t: 1 }({ ts: Timestamp(1567578739, 7), t: 1 }) 2019-09-04T06:32:19.390+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:19.390+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 6), t: 1 }, durableWallTime: new Date(1567578739375), appliedOpTime: { ts: Timestamp(1567578739, 7), t: 1 }, appliedWallTime: new Date(1567578739383), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpVisible: { ts: Timestamp(1567578739, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:19.390+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 978 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:49.390+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 6), t: 1 }, durableWallTime: new Date(1567578739375), appliedOpTime: { ts: Timestamp(1567578739, 7), t: 1 }, appliedWallTime: new Date(1567578739383), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpVisible: { ts: Timestamp(1567578739, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:19.390+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.389+0000 2019-09-04T06:32:19.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 973 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: 
Timestamp(1567578729, 1), t: 1 }, lastWriteDate: new Date(1567578729000), majorityOpTime: { ts: Timestamp(1567578729, 1), t: 1 }, majorityWriteDate: new Date(1567578729000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578739386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578729, 1), $configServerState: { opTime: { ts: Timestamp(1567578728, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578729, 1) } 2019-09-04T06:32:19.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578729, 1), t: 1 }, lastWriteDate: new Date(1567578729000), majorityOpTime: { ts: Timestamp(1567578729, 1), t: 1 }, majorityWriteDate: new Date(1567578729000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578739386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578729, 1), $configServerState: { opTime: { ts: Timestamp(1567578728, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578729, 1) } target: cmodb811.togewa.com:27018 2019-09-04T06:32:19.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms 2019-09-04T06:32:19.390+0000 D2 ASIO [RS] Request 978 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpVisible: { ts: Timestamp(1567578739, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 6), $clusterTime: { clusterTime: Timestamp(1567578739, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 7) } 2019-09-04T06:32:19.390+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpVisible: { ts: Timestamp(1567578739, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 6), $clusterTime: { clusterTime: 
2019-09-04T06:32:19.390+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.390+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.390+0000
2019-09-04T06:32:19.390+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578739, 6), t: 1 }, 2019-09-04T06:32:19.375+0000
2019-09-04T06:32:19.390+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000
2019-09-04T06:32:19.391+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578739, 7), t: 1 }
2019-09-04T06:32:19.391+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 979 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:29.391+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 6), t: 1 } }
2019-09-04T06:32:19.391+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.390+0000
2019-09-04T06:32:19.391+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:32:19.391+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:19.391+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.391+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 7), t: 1 }, durableWallTime: new Date(1567578739383), appliedOpTime: { ts: Timestamp(1567578739, 7), t: 1 }, appliedWallTime: new Date(1567578739383), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpVisible: { ts: Timestamp(1567578739, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.391+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 980 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:49.391+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 7), t: 1 }, durableWallTime: new Date(1567578739383), appliedOpTime: { ts: Timestamp(1567578739, 7), t: 1 }, appliedWallTime: new Date(1567578739383), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpVisible: { ts: Timestamp(1567578739, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.391+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.390+0000
2019-09-04T06:32:19.391+0000 D2 ASIO [RS] Request 980 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpVisible: { ts: Timestamp(1567578739, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 6), $clusterTime: { clusterTime: Timestamp(1567578739, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 7) }
2019-09-04T06:32:19.391+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 6), t: 1 }, lastCommittedWall: new Date(1567578739375), lastOpVisible: { ts: Timestamp(1567578739, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 6), $clusterTime: { clusterTime: Timestamp(1567578739, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 7) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.391+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:19.391+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.390+0000
2019-09-04T06:32:19.391+0000 D2 ASIO [RS] Request 979 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 7), t: 1 }, lastCommittedWall: new Date(1567578739383), lastOpVisible: { ts: Timestamp(1567578739, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 7), t: 1 }, lastCommittedWall: new Date(1567578739383), lastOpApplied: { ts: Timestamp(1567578739, 7), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 7), $clusterTime: { clusterTime: Timestamp(1567578739, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 7) }
2019-09-04T06:32:19.391+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 7), t: 1 }, lastCommittedWall: new Date(1567578739383), lastOpVisible: { ts: Timestamp(1567578739, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 7), t: 1 }, lastCommittedWall: new Date(1567578739383), lastOpApplied: { ts: Timestamp(1567578739, 7), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 7), $clusterTime: { clusterTime: Timestamp(1567578739, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 7) } target: cmodb804.togewa.com:27019
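
RemoteCommand 980 above is this secondary pushing its view of every member's durable/applied optimes to its sync source via replSetUpdatePosition, while request 979 is the concurrent oplog getMore. The same per-member optimes are visible on any member through replSetGetStatus; a small sketch in the mongo shell:

    // Print each member's applied optime, the data replSetUpdatePosition carries.
    var st = db.adminCommand({ replSetGetStatus: 1 });
    st.members.forEach(function (m) {
        print(m.name + "  " + m.stateStr + "  applied=" + tojson(m.optime));
    });
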
2019-09-04T06:32:19.391+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.391+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:32:19.391+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.391+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.391+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578734, 7)
2019-09-04T06:32:19.391+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:29.703+0000
2019-09-04T06:32:19.391+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:30.008+0000
2019-09-04T06:32:19.391+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 981 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:29.391+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 7), t: 1 } }
2019-09-04T06:32:19.392+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.390+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000
2019-09-04T06:32:19.392+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
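
The fetch loop above is a tailable, awaitData getMore on local.oplog.rs: maxTimeMS: 5000 bounds each empty poll, and lastKnownCommittedOpTime lets the sync source piggyback commit-point advances on the reply. When the commit point moves, the node also advances its stable optime and lets WiredTiger's oldest_timestamp trail it (1567578734 vs 1567578739 above, about five seconds). A rough shell analogue of the tail, using the legacy cursor options:

    // Tail the oplog from a known timestamp, as the fetcher's getMore loop does.
    var oplog = db.getSiblingDB("local").oplog.rs;
    var cur = oplog.find({ ts: { $gt: Timestamp(1567578739, 7) } })
                   .addOption(DBQuery.Option.tailable)
                   .addOption(DBQuery.Option.awaitData);
    while (cur.hasNext()) { printjson(cur.next()); }
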
2019-09-04T06:32:19.392+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578739, 7), t: 1 }, 2019-09-04T06:32:19.383+0000
2019-09-04T06:32:19.392+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000
2019-09-04T06:32:19.396+0000 D2 ASIO [RS] Request 981 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578739, 8), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578739392), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 7), t: 1 }, lastCommittedWall: new Date(1567578739383), lastOpVisible: { ts: Timestamp(1567578739, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 7), t: 1 }, lastCommittedWall: new Date(1567578739383), lastOpApplied: { ts: Timestamp(1567578739, 8), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 7), $clusterTime: { clusterTime: Timestamp(1567578739, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 8) }
2019-09-04T06:32:19.396+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578739, 8), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578739392), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 7), t: 1 }, lastCommittedWall: new Date(1567578739383), lastOpVisible: { ts: Timestamp(1567578739, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 7), t: 1 }, lastCommittedWall: new Date(1567578739383), lastOpApplied: { ts: Timestamp(1567578739, 8), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 7), $clusterTime: { clusterTime: Timestamp(1567578739, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 8) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.396+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:19.396+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578739, 8) and ending at ts: Timestamp(1567578739, 8)
2019-09-04T06:32:19.396+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:30.008+0000
2019-09-04T06:32:19.396+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:30.314+0000
2019-09-04T06:32:19.397+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578739, 8), t: 1 }
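
The batch fetched by request 981 holds a single entry, and its shape is the whole oplog contract: ts/t order the write, op: "u" marks an update, ns and ui name the collection, o2 carries the target _id, and o carries the mods. Here it appears to be the config server's distributed-lock document for config.system.sessions being released (state: 0). Entries of the same shape can be inspected directly from the shell:

    // Show the latest update ops against config.locks, newest first.
    db.getSiblingDB("local").oplog.rs
      .find({ ns: "config.locks", op: "u" })
      .sort({ $natural: -1 })
      .limit(3)
      .forEach(printjson);
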
2019-09-04T06:32:19.397+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:19.397+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000
2019-09-04T06:32:19.397+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:19.397+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:19.397+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 7)
2019-09-04T06:32:19.397+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14185
2019-09-04T06:32:19.397+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:19.397+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:19.397+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14185
2019-09-04T06:32:19.397+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:19.397+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:19.397+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 7)
2019-09-04T06:32:19.397+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14188
2019-09-04T06:32:19.397+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:19.397+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:19.397+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14188
2019-09-04T06:32:19.397+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:32:19.397+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578739, 8) }
2019-09-04T06:32:19.397+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14182
2019-09-04T06:32:19.397+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14182
2019-09-04T06:32:19.397+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14191
2019-09-04T06:32:19.397+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14191
2019-09-04T06:32:19.397+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:19.397+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 14193
2019-09-04T06:32:19.397+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578739, 8)
2019-09-04T06:32:19.397+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578739, 8)
2019-09-04T06:32:19.397+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 14193
2019-09-04T06:32:19.397+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:19.397+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:19.397+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14192
2019-09-04T06:32:19.397+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14192
2019-09-04T06:32:19.397+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14195
2019-09-04T06:32:19.397+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14195
2019-09-04T06:32:19.397+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578739, 8), t: 1 }({ ts: Timestamp(1567578739, 8), t: 1 })
2019-09-04T06:32:19.397+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 8)
2019-09-04T06:32:19.397+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14196
2019-09-04T06:32:19.397+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578739, 8) } } ] } sort: {} projection: {}
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578739, 8)
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578739, 8) || First: notFirst: full path: ts
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578739, 8)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578739, 8)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578739, 8) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
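
The D5 QUERY walls above are the subplanner handling the minvalid bookkeeping read: the $or is split into two children, each child is rated against the only available index (_id_), neither predicate field (t, ts) is indexed, so every branch degenerates to a COLLSCAN, which is harmless on a one-document collection. The same decision can be reproduced with explain; a sketch (timestamp taken from the log):

    // Reproduce the logged plan for the minvalid read (expect COLLSCAN).
    db.getSiblingDB("local").getCollection("replset.minvalid")
      .find({ $or: [ { t: { $lt: 1 } },
                     { t: 1, ts: { $lt: Timestamp(1567578739, 8) } } ] })
      .explain("queryPlanner");
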
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578739, 8)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:19.397+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14196
2019-09-04T06:32:19.397+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:19.397+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:19.397+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578739, 8), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578739392), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary
2019-09-04T06:32:19.397+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578739, 8)
2019-09-04T06:32:19.397+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 14198
2019-09-04T06:32:19.397+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "config.system.sessions" }
2019-09-04T06:32:19.397+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:19.397+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 14198
2019-09-04T06:32:19.397+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:19.397+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578739, 8), t: 1 }({ ts: Timestamp(1567578739, 8), t: 1 })
2019-09-04T06:32:19.397+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 8)
2019-09-04T06:32:19.397+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14197
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.397+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:19.397+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:19.398+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14197
2019-09-04T06:32:19.398+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578739, 8)
2019-09-04T06:32:19.398+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14201
2019-09-04T06:32:19.398+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14201
2019-09-04T06:32:19.398+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578739, 8), t: 1 }({ ts: Timestamp(1567578739, 8), t: 1 })
2019-09-04T06:32:19.398+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.398+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 7), t: 1 }, durableWallTime: new Date(1567578739383), appliedOpTime: { ts: Timestamp(1567578739, 8), t: 1 }, appliedWallTime: new Date(1567578739392), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 7), t: 1 }, lastCommittedWall: new Date(1567578739383), lastOpVisible: { ts: Timestamp(1567578739, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.398+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 982 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:49.398+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 7), t: 1 }, durableWallTime: new Date(1567578739383), appliedOpTime: { ts: Timestamp(1567578739, 8), t: 1 }, appliedWallTime: new Date(1567578739392), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 7), t: 1 }, lastCommittedWall: new Date(1567578739383), lastOpVisible: { ts: Timestamp(1567578739, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.398+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.398+0000
2019-09-04T06:32:19.398+0000 D2 ASIO [RS] Request 982 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 7), t: 1 }, lastCommittedWall: new Date(1567578739383), lastOpVisible: { ts: Timestamp(1567578739, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 7), $clusterTime: { clusterTime: Timestamp(1567578739, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 8) }
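
Each applied batch is bracketed by durable markers: the oplog truncate-after point is set before the oplog write and cleared after it, while minvalid and appliedThrough record how far this secondary may be considered consistent. Both live in small local collections that can be read between batches; a sketch (the truncate-after collection name is the one 4.2 uses on disk):

    // Read the recovery bookkeeping documents the batch logic maintains.
    var local = db.getSiblingDB("local");
    printjson(local.getCollection("replset.minvalid").findOne());
    printjson(local.getCollection("replset.oplogTruncateAfterPoint").findOne());
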
2019-09-04T06:32:19.398+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 7), t: 1 }, lastCommittedWall: new Date(1567578739383), lastOpVisible: { ts: Timestamp(1567578739, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 7), $clusterTime: { clusterTime: Timestamp(1567578739, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 8) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.398+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.398+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.398+0000
2019-09-04T06:32:19.399+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578739, 8), t: 1 }
2019-09-04T06:32:19.399+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 983 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:29.399+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 7), t: 1 } }
2019-09-04T06:32:19.399+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.398+0000
2019-09-04T06:32:19.399+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:32:19.399+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.399+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 8), t: 1 }, durableWallTime: new Date(1567578739392), appliedOpTime: { ts: Timestamp(1567578739, 8), t: 1 }, appliedWallTime: new Date(1567578739392), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 7), t: 1 }, lastCommittedWall: new Date(1567578739383), lastOpVisible: { ts: Timestamp(1567578739, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.399+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 984 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:49.399+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 8), t: 1 }, durableWallTime: new Date(1567578739392), appliedOpTime: { ts: Timestamp(1567578739, 8), t: 1 }, appliedWallTime: new Date(1567578739392), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 7), t: 1 }, lastCommittedWall: new Date(1567578739383), lastOpVisible: { ts: Timestamp(1567578739, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:19.399+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.398+0000
2019-09-04T06:32:19.399+0000 D2 ASIO [RS] Request 984 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 7), t: 1 }, lastCommittedWall: new Date(1567578739383), lastOpVisible: { ts: Timestamp(1567578739, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 7), $clusterTime: { clusterTime: Timestamp(1567578739, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 8) }
2019-09-04T06:32:19.399+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 7), t: 1 }, lastCommittedWall: new Date(1567578739383), lastOpVisible: { ts: Timestamp(1567578739, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 7), $clusterTime: { clusterTime: Timestamp(1567578739, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 8) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.399+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:19.399+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.398+0000
2019-09-04T06:32:19.400+0000 D2 ASIO [RS] Request 983 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 8), t: 1 }, lastCommittedWall: new Date(1567578739392), lastOpVisible: { ts: Timestamp(1567578739, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 8), t: 1 }, lastCommittedWall: new Date(1567578739392), lastOpApplied: { ts: Timestamp(1567578739, 8), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 8), $clusterTime: { clusterTime: Timestamp(1567578739, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 8) }
2019-09-04T06:32:19.400+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 8), t: 1 }, lastCommittedWall: new Date(1567578739392), lastOpVisible: { ts: Timestamp(1567578739, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 8), t: 1 }, lastCommittedWall: new Date(1567578739392), lastOpApplied: { ts: Timestamp(1567578739, 8), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 8), $clusterTime: { clusterTime: Timestamp(1567578739, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 8) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.400+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.400+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:32:19.400+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578734, 8)
2019-09-04T06:32:19.400+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:30.314+0000
2019-09-04T06:32:19.400+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:30.158+0000
2019-09-04T06:32:19.400+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 985 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:29.400+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 8), t: 1 } }
2019-09-04T06:32:19.400+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.398+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000
2019-09-04T06:32:19.400+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:19.400+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578739, 8), t: 1 }, 2019-09-04T06:32:19.392+0000
2019-09-04T06:32:19.400+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000
2019-09-04T06:32:19.404+0000 D2 ASIO [RS] Request 985 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578739, 9), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578739400), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 8), t: 1 }, lastCommittedWall: new Date(1567578739392), lastOpVisible: { ts: Timestamp(1567578739, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 8), t: 1 }, lastCommittedWall: new Date(1567578739392), lastOpApplied: { ts: Timestamp(1567578739, 9), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 8), $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) }
2019-09-04T06:32:19.404+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578739, 9), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578739400), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 8), t: 1 }, lastCommittedWall: new Date(1567578739392), lastOpVisible: { ts: Timestamp(1567578739, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 8), t: 1 }, lastCommittedWall: new Date(1567578739392), lastOpApplied: { ts: Timestamp(1567578739, 9), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 8), $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.404+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
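
The long runs of conn* lines are parked readers: each waitUntilOpTime entry is a client whose requested read point is still ahead of the last committed snapshot, and every commit-point advance wakes them all for a re-check. A causally consistent read produces exactly this kind of waiter when the secondary lags; an illustrative command (cluster time taken from the log above):

    // A read that blocks until this member's majority snapshot reaches
    // the given cluster time, then returns.
    db.getSiblingDB("config").runCommand({
        find: "locks",
        filter: { _id: "config" },
        readConcern: { level: "majority", afterClusterTime: Timestamp(1567578739, 8) }
    });
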
2019-09-04T06:32:19.404+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578739, 9) and ending at ts: Timestamp(1567578739, 9)
2019-09-04T06:32:19.404+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:30.158+0000
2019-09-04T06:32:19.404+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:29.532+0000
2019-09-04T06:32:19.404+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578739, 9), t: 1 }
2019-09-04T06:32:19.404+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:19.404+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000
2019-09-04T06:32:19.404+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:19.404+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:19.404+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 8)
2019-09-04T06:32:19.404+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14205
2019-09-04T06:32:19.404+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:19.404+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:19.404+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14205
2019-09-04T06:32:19.404+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:19.404+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:19.404+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 8)
2019-09-04T06:32:19.404+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14208
2019-09-04T06:32:19.404+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:19.404+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:19.404+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14208
2019-09-04T06:32:19.404+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:32:19.404+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578739, 9) }
2019-09-04T06:32:19.404+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14202
2019-09-04T06:32:19.405+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14202
2019-09-04T06:32:19.405+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14211
2019-09-04T06:32:19.405+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14211
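
The ReplBatcher metadata lookups above expose the oplog's collection options: a capped collection of size 1073741824 bytes, matching the oplogSizeMB: 1024 this node was started with, with autoIndexId: false and no secondary indexes. The same options are visible through ordinary collection metadata:

    // List the oplog's collection options (capped, 1 GiB).
    printjson(db.getSiblingDB("local").getCollectionInfos({ name: "oplog.rs" }));
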
2019-09-04T06:32:19.405+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:19.405+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 14213
2019-09-04T06:32:19.405+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578739, 9)
2019-09-04T06:32:19.405+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578739, 9)
2019-09-04T06:32:19.405+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 14213
2019-09-04T06:32:19.405+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:19.405+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:19.405+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14212
2019-09-04T06:32:19.405+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14212
2019-09-04T06:32:19.405+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14215
2019-09-04T06:32:19.405+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14215
2019-09-04T06:32:19.405+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578739, 9), t: 1 }({ ts: Timestamp(1567578739, 9), t: 1 })
2019-09-04T06:32:19.405+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 9)
2019-09-04T06:32:19.405+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14216
2019-09-04T06:32:19.405+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578739, 9) } } ] } sort: {} projection: {}
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578739, 9)
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578739, 9) || First: notFirst: full path: ts
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578739, 9)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578739, 9)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578739, 9) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578739, 9)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:19.405+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14216
2019-09-04T06:32:19.405+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:19.405+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:19.405+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578739, 9), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578739400), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary
2019-09-04T06:32:19.405+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578739, 9)
2019-09-04T06:32:19.405+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 14218
2019-09-04T06:32:19.405+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "config" }
2019-09-04T06:32:19.405+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:19.405+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 14218
2019-09-04T06:32:19.405+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:19.405+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578739, 9), t: 1 }({ ts: Timestamp(1567578739, 9), t: 1 })
2019-09-04T06:32:19.405+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578739, 9)
2019-09-04T06:32:19.405+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14217
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:19.405+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:19.405+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
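
Applying the fetched op on this secondary goes through the idhack fast path: an exact _id equality skips the full planner and targets the _id index directly, and the UpdateResult line confirms one document matched and modified. The op corresponds to an ordinary point update on the primary; purely as an illustration of its shape (do not run this against a live cluster's config database):

    // Shape of the update whose oplog entry was just applied.
    db.getSiblingDB("config").locks.updateOne(
        { _id: "config" },
        { $set: { state: 0 } }
    );
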
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:19.405+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14217 2019-09-04T06:32:19.405+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578739, 9) 2019-09-04T06:32:19.405+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14221 2019-09-04T06:32:19.405+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14221 2019-09-04T06:32:19.405+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578739, 9), t: 1 }({ ts: Timestamp(1567578739, 9), t: 1 }) 2019-09-04T06:32:19.405+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:19.406+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 8), t: 1 }, durableWallTime: new Date(1567578739392), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 8), t: 1 }, lastCommittedWall: new Date(1567578739392), lastOpVisible: { ts: Timestamp(1567578739, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:19.406+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 986 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:49.406+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 8), t: 1 }, durableWallTime: new Date(1567578739392), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 8), t: 1 }, lastCommittedWall: new Date(1567578739392), lastOpVisible: { ts: Timestamp(1567578739, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:19.406+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.405+0000 2019-09-04T06:32:19.406+0000 D2 ASIO [RS] Request 986 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 8), t: 1 }, lastCommittedWall: new Date(1567578739392), lastOpVisible: { ts: Timestamp(1567578739, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 8), $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } 2019-09-04T06:32:19.406+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 8), t: 1 }, lastCommittedWall: new Date(1567578739392), lastOpVisible: { ts: Timestamp(1567578739, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 8), $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:19.406+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:19.406+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.406+0000 2019-09-04T06:32:19.406+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578739, 9), t: 1 } 2019-09-04T06:32:19.406+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 987 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:29.406+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 8), t: 1 } } 2019-09-04T06:32:19.406+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.406+0000 2019-09-04T06:32:19.407+0000 D2 ASIO [RS] Request 987 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpApplied: { ts: Timestamp(1567578739, 9), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } 2019-09-04T06:32:19.407+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new 
Date(1567578739400), lastOpApplied: { ts: Timestamp(1567578739, 9), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.407+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.407+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:32:19.407+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578734, 9)
2019-09-04T06:32:19.407+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:29.532+0000
2019-09-04T06:32:19.407+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:30.178+0000
2019-09-04T06:32:19.407+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 988 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:29.407+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 9), t: 1 } }
2019-09-04T06:32:19.407+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.406+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn313] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn313] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:22.595+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn312] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn312] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.767+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn315] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn315] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:25.060+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn301] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn301] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn310] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn310] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.660+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn314] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn314] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:24.152+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn311] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn311] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:21.661+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578739, 9), t: 1 }, 2019-09-04T06:32:19.400+0000
2019-09-04T06:32:19.407+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000
2019-09-04T06:32:19.407+0000 D3 EXECUTOR [replexec-3] Executing a
task on behalf of pool replexec 2019-09-04T06:32:19.407+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000 2019-09-04T06:32:19.414+0000 I - [LogicalSessionCacheRefresh] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c34665 0x561749c42521 0x561749b4da31 0x56174ae4d9d2 0x56174a01c507 0x56174ae92b4d 0x56174a01c507 0x56174ae92193 0x56174a01c507 0x56174ae9269a 0x56174a01c507 0x56174ae91dfa 0x56174a01c507 0x56174ae908d8 0x56174a01c507 0x56174ae82458 0x56174a01c507 0x56174a22d0d9 0x56174ae719ec 0x56174ae6a810 0x561749b4d680 0x56174ae377b2 0x56174a3e8caa 0x56174a0b6a04 0x561749a7a4ed 0x56174a3c2b67 0x56174a0ae262 0x56174a0ae6da 0x56174a47a143 0x56174a47d086 0x56174a22c215 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CAC665","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"BC5A31"},{"b":"561748F88000","o":"1EC59D2"},{"b":"561748F88000","o":"1094507","s":"_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv"},{"b":"561748F88000","o":"1F0AB4D"},{"b":"561748F88000","o":"1094507","s":"_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv"},{"b":"561748F88000","o":"1F0A193"},{"b":"561748F88000","o":"1094507","s":"_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv"},{"b":"561748F88000","o":"1F0A69A"},{"b":"561748F88000","o":"1094507","s":"_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv"},{"b":"561748F88000","o":"1F09DFA"},{"b":"561748F88000","o":"1094507","s":"_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv"},{"b":"561748F88000","o":"1F088D8","s":"_ZZN5mongo15unique_functionIFvPNS_14future_details15SharedStateBaseEEE8makeImplIZNS1_10FutureImplINS1_8FakeVoidEE16makeContinuationINS_7MessageEZZNOS9_4thenIZNS_9transport18TransportLayerASIO11ASIOSession17sourceMessageImplERKSt10shared_ptrINS_5BatonEEEUlvE_EEDaOT_ENKUlvE1_clEvEUlPNS1_15SharedStateImplIS8_EEPNSP_ISB_EEE_EENS7_ISM_EEOT0_EUlS3_E_EEDaSN_EN12SpecificImpl4callEOS3_"},{"b":"561748F88000","o":"1094507","s":"_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv"},{"b":"561748F88000","o":"1EFA458","s":"_ZZN5mongo15unique_functionIFvPNS_14future_details15SharedStateBaseEEE8makeImplIZNS1_10FutureImplINS1_8FakeVoidEE16makeContinuationIvZZNOS9_4thenIZNS_9transport18TransportLayerASIO11ASIOSession17opportunisticReadIN4asio19basic_stream_socketINSG_7generic15stream_protocolEEENSG_17mutable_buffers_1EEENS_6FutureIvEERT_RKT0_RKSt10shared_ptrINS_5BatonEEEUlvE_EEDaOSO_ENKUlvE1_clEvEUlPNS1_15SharedStateImplIS8_EES13_E_EENS7_ISO_EEOSQ_EUlS3_E_EEDaSZ_EN12SpecificImpl4callEOS3_"},{"b":"561748F88000","o":"1094507","s":"_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv"},{"b":"561748F88000","o":"12A50D9","s":"_ZN5mongo7PromiseIvED1Ev"},{"b":"561748F88000","o":"1EE99EC","s":"_ZN5mongo9transport18TransportLayerASIO9BatonASIO13cancelSessionERNS0_7SessionE"},{"b":"561748F88000","o":"1EE2810","s":"_ZN5mongo9transport18TransportLayerASIO11ASIOSession21cancelAsyncOperationsERKSt10shared
_ptrINS_5BatonEE"},{"b":"561748F88000","o":"BC5680"},{"b":"561748F88000","o":"1EAF7B2","s":"_ZN5mongo8executor22ThreadPoolTaskExecutor6cancelERKNS0_12TaskExecutor14CallbackHandleE"},{"b":"561748F88000","o":"1460CAA","s":"_ZN5mongo8executor18ScopedTaskExecutorD1Ev"},{"b":"561748F88000","o":"112EA04","s":"_ZN5mongo19AsyncRequestsSenderD1Ev"},{"b":"561748F88000","o":"AF24ED"},{"b":"561748F88000","o":"143AB67","s":"_ZN5mongo35scatterGatherOnlyVersionIfUnshardedEPNS_16OperationContextERKNS_15NamespaceStringERKNS_7BSONObjERKNS_21ReadPreferenceSettingENS_5Shard11RetryPolicyERKSt3setINS_10ErrorCodes5ErrorESt4lessISF_ESaISF_EE"},{"b":"561748F88000","o":"1126262","s":"_ZN5mongo30SessionsCollectionConfigServer24_generateIndexesIfNeededEPNS_16OperationContextE"},{"b":"561748F88000","o":"11266DA","s":"_ZN5mongo30SessionsCollectionConfigServer23setupSessionsCollectionEPNS_16OperationContextE"},{"b":"561748F88000","o":"14F2143","s":"_ZN5mongo23LogicalSessionCacheImpl8_refreshEPNS_6ClientE"},{"b":"561748F88000","o":"14F5086","s":"_ZN5mongo23LogicalSessionCacheImpl16_periodicRefreshEPNS_6ClientE"},{"b":"561748F88000","o":"12A4215"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : 
"7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x2963) [0x561749c34665] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xBC5A31) [0x561749b4da31] mongod(+0x1EC59D2) [0x56174ae4d9d2] mongod(_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv+0x197) [0x56174a01c507] mongod(+0x1F0AB4D) [0x56174ae92b4d] mongod(_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv+0x197) [0x56174a01c507] mongod(+0x1F0A193) [0x56174ae92193] mongod(_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv+0x197) [0x56174a01c507] mongod(+0x1F0A69A) [0x56174ae9269a] 
mongod(_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv+0x197) [0x56174a01c507] mongod(+0x1F09DFA) [0x56174ae91dfa] mongod(_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv+0x197) [0x56174a01c507] mongod(_ZZN5mongo15unique_functionIFvPNS_14future_details15SharedStateBaseEEE8makeImplIZNS1_10FutureImplINS1_8FakeVoidEE16makeContinuationINS_7MessageEZZNOS9_4thenIZNS_9transport18TransportLayerASIO11ASIOSession17sourceMessageImplERKSt10shared_ptrINS_5BatonEEEUlvE_EEDaOT_ENKUlvE1_clEvEUlPNS1_15SharedStateImplIS8_EEPNSP_ISB_EEE_EENS7_ISM_EEOT0_EUlS3_E_EEDaSN_EN12SpecificImpl4callEOS3_+0x48) [0x56174ae908d8] mongod(_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv+0x197) [0x56174a01c507] mongod(_ZZN5mongo15unique_functionIFvPNS_14future_details15SharedStateBaseEEE8makeImplIZNS1_10FutureImplINS1_8FakeVoidEE16makeContinuationIvZZNOS9_4thenIZNS_9transport18TransportLayerASIO11ASIOSession17opportunisticReadIN4asio19basic_stream_socketINSG_7generic15stream_protocolEEENSG_17mutable_buffers_1EEENS_6FutureIvEERT_RKT0_RKSt10shared_ptrINS_5BatonEEEUlvE_EEDaOSO_ENKUlvE1_clEvEUlPNS1_15SharedStateImplIS8_EES13_E_EENS7_ISO_EEOSQ_EUlS3_E_EEDaSZ_EN12SpecificImpl4callEOS3_+0x48) [0x56174ae82458] mongod(_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv+0x197) [0x56174a01c507] mongod(_ZN5mongo7PromiseIvED1Ev+0x69) [0x56174a22d0d9] mongod(_ZN5mongo9transport18TransportLayerASIO9BatonASIO13cancelSessionERNS0_7SessionE+0x21C) [0x56174ae719ec] mongod(_ZN5mongo9transport18TransportLayerASIO11ASIOSession21cancelAsyncOperationsERKSt10shared_ptrINS_5BatonEE+0x70) [0x56174ae6a810] mongod(+0xBC5680) [0x561749b4d680] mongod(_ZN5mongo8executor22ThreadPoolTaskExecutor6cancelERKNS0_12TaskExecutor14CallbackHandleE+0x1F2) [0x56174ae377b2] mongod(_ZN5mongo8executor18ScopedTaskExecutorD1Ev+0x65A) [0x56174a3e8caa] mongod(_ZN5mongo19AsyncRequestsSenderD1Ev+0x64) [0x56174a0b6a04] mongod(+0xAF24ED) [0x561749a7a4ed] mongod(_ZN5mongo35scatterGatherOnlyVersionIfUnshardedEPNS_16OperationContextERKNS_15NamespaceStringERKNS_7BSONObjERKNS_21ReadPreferenceSettingENS_5Shard11RetryPolicyERKSt3setINS_10ErrorCodes5ErrorESt4lessISF_ESaISF_EE+0x3B7) [0x56174a3c2b67] mongod(_ZN5mongo30SessionsCollectionConfigServer24_generateIndexesIfNeededEPNS_16OperationContextE+0x92) [0x56174a0ae262] mongod(_ZN5mongo30SessionsCollectionConfigServer23setupSessionsCollectionEPNS_16OperationContextE+0x1BA) [0x56174a0ae6da] mongod(_ZN5mongo23LogicalSessionCacheImpl8_refreshEPNS_6ClientE+0x103) [0x56174a47a143] mongod(_ZN5mongo23LogicalSessionCacheImpl16_periodicRefreshEPNS_6ClientE+0x26) [0x56174a47d086] mongod(+0x12A4215) [0x56174a22c215] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:19.415+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:19.415+0000 I CONNPOOL [LogicalSessionCacheRefresh] Ending connection to host cmodb808.togewa.com:27018 due to bad connection status: CallbackCanceled: Callback was canceled; 0 connections to that host remain open 2019-09-04T06:32:19.415+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:19.415+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: 
Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:19.415+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 989 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:49.415+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, durableWallTime: new Date(1567578737811), appliedOpTime: { ts: Timestamp(1567578737, 2), t: 1 }, appliedWallTime: new Date(1567578737811), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:19.415+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.406+0000 2019-09-04T06:32:19.415+0000 D2 CONNPOOL [TaskExecutorPool-0] Connecting to cmodb808.togewa.com:27018 2019-09-04T06:32:19.415+0000 D2 ASIO [TaskExecutorPool-0] Finished connection setup. 
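The replSetUpdatePosition requests above are this secondary reporting each member's durable and applied optimes upstream to cmodb804.togewa.com:27019. The same optimes can be read back from any member with replSetGetStatus; a short pymongo sketch (host and port are assumptions, not taken from this log):

    from pymongo import MongoClient

    # Assumed host; any replica-set member answers replSetGetStatus.
    client = MongoClient("mongodb://localhost:27019", directConnection=True)

    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # Comparing optimeDate across members estimates replication lag,
        # the same quantity tracked in the optimes arrays logged above.
        print(member["name"], member["stateStr"], member.get("optimeDate"))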
2019-09-04T06:32:19.415+0000 I CONTROL [LogicalSessionCacheRefresh] Failed to generate TTL index for config.system.sessions on all shards, will try again on the next refresh interval
2019-09-04T06:32:19.415+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: request doesn't allow collection to be created implicitly
2019-09-04T06:32:19.415+0000 D2 ASIO [RS] Request 989 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) }
2019-09-04T06:32:19.415+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:19.415+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:19.415+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.406+0000
2019-09-04T06:32:19.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.466+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578739, 9)
2019-09-04T06:32:19.466+0000 D3 STORAGE [WTCheckpointThread] setting timestamp read source: 6, provided timestamp: Timestamp(1567578739, 9)
2019-09-04T06:32:19.466+0000 D3 STORAGE [WTCheckpointThread] WT begin_transaction for snapshot id 14225
2019-09-04T06:32:19.466+0000 D3 STORAGE [WTCheckpointThread] WT rollback_transaction for snapshot id 14225
2019-09-04T06:32:19.466+0000 D2 RECOVERY [WTCheckpointThread] Performing stable checkpoint. StableTimestamp: Timestamp(1567578739, 9), OplogNeededForRollback: Timestamp(1567578739, 9)
2019-09-04T06:32:19.492+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:19.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.541+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578739541) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }
2019-09-04T06:32:19.541+0000 D4 - [replSetDistLockPinger] Taking ticket. Available: 1000000000
2019-09-04T06:32:19.541+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178
2019-09-04T06:32:19.541+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:32:19.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.560+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : 
"EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : 
"9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xADB043) [0x561749a63043] mongod(+0x13B2606) [0x56174a33a606] mongod(+0x13B3A55) [0x56174a33ba55] mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894] mongod(+0x10FA899) [0x56174a082899] mongod(+0x10FBF53) [0x56174a083f53] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(+0x1FBD2EE) [0x56174af452ee] mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa] mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2] mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b] mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e] mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc] mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1] mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:19.560+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. 
OpTime: { ts: Timestamp(1567578739, 9), t: 1 }, write concern: { w: "majority", wtimeout: 15000 }
2019-09-04T06:32:19.560+0000 D4 STORAGE [replSetDistLockPinger] flushed journal
2019-09-04T06:32:19.560+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578739541) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:32:19.560+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578739541) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 19ms
2019-09-04T06:32:19.592+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:19.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.643+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.692+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:19.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.792+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:19.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:19.806+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:19.830+0000 D2 COMMAND [conn326] run
command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, FC302E736E3E0A04685B3CCCCAB671E279A74EEA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:19.830+0000 D1 REPL [conn326] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578739, 9), t: 1 } 2019-09-04T06:32:19.830+0000 D3 REPL [conn326] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:49.840+0000 2019-09-04T06:32:19.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:19.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:19.892+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:19.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:19.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:19.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:19.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:19.992+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:19.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:19.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:20.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:32:20.003+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:32:20.003+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:32:20.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:32:20.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:32:20.020+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:32:20.020+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:32:20.020+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:32:20.020+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 
reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:32:20.036+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:32:20.036+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:20.038+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:32:20.038+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:32:20.038+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:32:20.038+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:32:20.038+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:20.038+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:20.038+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:32:20.038+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:32:20.038+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:32:20.038+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:32:20.038+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:32:20.038+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:32:20.038+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:20.038+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:20.038+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:20.038+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578739, 9)
2019-09-04T06:32:20.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14248
2019-09-04T06:32:20.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14248
2019-09-04T06:32:20.039+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:20.039+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:32:20.039+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:32:20.039+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:32:20.039+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:20.039+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:32:20.039+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:32:20.039+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:32:20.039+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578739, 9)
2019-09-04T06:32:20.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14251
2019-09-04T06:32:20.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14251
2019-09-04T06:32:20.039+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:20.039+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:32:20.039+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:20.039+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:32:20.039+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:32:20.039+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:32:20.039+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578739, 9)
2019-09-04T06:32:20.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14253
2019-09-04T06:32:20.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14253
2019-09-04T06:32:20.039+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:514 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:20.039+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:32:20.039+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:32:20.040+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2019-09-04T06:32:20.040+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:32:20.040+0000 D2 COMMAND [conn90] command: listDatabases
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14256
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14256
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14257
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14257
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14258
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14258
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14259
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14259
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14260
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14260
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14261
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14261
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14262
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14262
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14263
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14263
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14264
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14264
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14265
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14265
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14266
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:20.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14266
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14267
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14267
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14268
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14268
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14269
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14269
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14270
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14270 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14271 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14271 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14272 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14272 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14273 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14273 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14274 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 14274 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14275 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14275 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14276 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14276 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14277 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14277 2019-09-04T06:32:20.041+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:32:20.041+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14279 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14279 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14280 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14280 2019-09-04T06:32:20.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14281 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14281 2019-09-04T06:32:20.042+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:20.042+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14283 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14283 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14284 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14284 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14285 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14285 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14286 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14286 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14287 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14287 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14288 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14288 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14289 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14289 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 14290 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14290 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14291 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14291 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14292 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14292 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14293 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14293 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14294 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14294 2019-09-04T06:32:20.042+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:20.042+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14296 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14296 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14297 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14297 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14298 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14298 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14299 2019-09-04T06:32:20.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14299 2019-09-04T06:32:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14300 2019-09-04T06:32:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14300 2019-09-04T06:32:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14301 2019-09-04T06:32:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14301 2019-09-04T06:32:20.043+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:20.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.092+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:20.111+0000 D2 COMMAND [conn14] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.143+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.192+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:20.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 7B0F7FBB31926278759832DF1F99ECE6D56654AF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:20.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:20.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 7B0F7FBB31926278759832DF1F99ECE6D56654AF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:20.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 7B0F7FBB31926278759832DF1F99ECE6D56654AF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:20.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400) } 2019-09-04T06:32:20.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 7B0F7FBB31926278759832DF1F99ECE6D56654AF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed 
samples. Num: 0 2019-09-04T06:32:20.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999999 Now: 1000000000 2019-09-04T06:32:20.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.293+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:20.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.306+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.331+0000 I CONNPOOL [TaskExecutorPool-0] Connecting to cmodb807.togewa.com:27018 2019-09-04T06:32:20.331+0000 D2 ASIO [TaskExecutorPool-0] Finished connection setup. 2019-09-04T06:32:20.332+0000 I CONNPOOL [TaskExecutorPool-0] Connecting to cmodb809.togewa.com:27018 2019-09-04T06:32:20.332+0000 D2 ASIO [TaskExecutorPool-0] Finished connection setup. 2019-09-04T06:32:20.332+0000 I CONNPOOL [TaskExecutorPool-0] Connecting to cmodb811.togewa.com:27018 2019-09-04T06:32:20.332+0000 D2 ASIO [TaskExecutorPool-0] Finished connection setup. 2019-09-04T06:32:20.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.393+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:20.405+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:20.405+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:20.405+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 9) 2019-09-04T06:32:20.405+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14314 2019-09-04T06:32:20.405+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:20.405+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:20.405+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14314 2019-09-04T06:32:20.405+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14317 2019-09-04T06:32:20.405+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14317 2019-09-04T06:32:20.405+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578739, 9), t: 1 }({ ts: Timestamp(1567578739, 9), t: 1 }) 2019-09-04T06:32:20.444+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.444+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 
1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.493+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:20.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.593+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:20.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.643+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.693+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:20.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.793+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:20.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:20.806+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:20.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:20.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 990) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:20.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 990 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:30.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:20.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 
2019-09-04T06:32:20.838+0000 D2 ASIO [Replication] Request 990 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) }
2019-09-04T06:32:20.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:20.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:20.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 990) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) }
2019-09-04T06:32:20.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:32:20.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:22.838Z
2019-09-04T06:32:20.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000
2019-09-04T06:32:20.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:20.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 991) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:20.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 991 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:30.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:20.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:48.839+0000
2019-09-04T06:32:20.839+0000 D2 ASIO [Replication] Request 991 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) }
2019-09-04T06:32:20.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } target: cmodb802.togewa.com:27019
2019-09-04T06:32:20.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:20.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 991) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) }
2019-09-04T06:32:20.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:32:20.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:32:30.178+0000
2019-09-04T06:32:20.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:32:31.662+0000
2019-09-04T06:32:20.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:32:20.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:22.839Z
2019-09-04T06:32:20.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:20.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:50.839+0000
2019-09-04T06:32:20.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:50.839+0000
2019-09-04T06:32:20.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:20.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:20.893+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:20.944+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:20.944+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:20.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:20.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:20.993+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:20.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:20.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:21.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:21.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:21.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:21.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 7B0F7FBB31926278759832DF1F99ECE6D56654AF), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:21.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:32:21.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 7B0F7FBB31926278759832DF1F99ECE6D56654AF), keyId: 6727891476899954718 } }, $db: "admin" }
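[editor's note] Requests 990 and 991 above are the 2-second replica-set heartbeats this node (cmodb803, fromId: 1) exchanges with cmodb804 (state: 2, a secondary syncing from cmodb802) and cmodb802 (state: 1, the primary); the inbound conn34 traffic is the same protocol in the other direction. The membership view carried in those heartbeat responses can also be read with replSetGetStatus. A sketch assuming PyMongo; all client-side names are illustrative:

    from pymongo import MongoClient

    # Connect straight to one configrs member (host/port from the log).
    client = MongoClient("cmodb803.togewa.com", 27019)
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # stateStr mirrors the numeric `state` field in the heartbeat
        # responses above (1 = PRIMARY, 2 = SECONDARY).
        print(member["name"], member["stateStr"], member.get("syncingTo", "-"))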
2019-09-04T06:32:21.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 7B0F7FBB31926278759832DF1F99ECE6D56654AF), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:21.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400) }
2019-09-04T06:32:21.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 7B0F7FBB31926278759832DF1F99ECE6D56654AF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:21.094+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:21.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:21.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:21.116+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35762 #336 (83 connections now open)
2019-09-04T06:32:21.116+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:32:21.116+0000 D2 COMMAND [conn336] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:32:21.116+0000 I NETWORK [conn336] received client metadata from 10.108.2.56:35762 conn336: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:32:21.116+0000 I COMMAND [conn336] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:32:21.120+0000 D2 COMMAND [conn336] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578731, 1), signature: { hash: BinData(0, 403A7CDEC6D36A1AA08331185731CC5F9C84C762), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:32:21.120+0000 D1 REPL [conn336] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578739, 9), t: 1 }
2019-09-04T06:32:21.120+0000 D3 REPL [conn336] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.130+0000
2019-09-04T06:32:21.143+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:21.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:21.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:21.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:21.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:21.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:21.194+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:21.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:21.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:21.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:21.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:32:21.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:21.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:21.294+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:21.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:21.306+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:21.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:21.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:21.394+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:21.405+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:21.405+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:21.405+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 9)
2019-09-04T06:32:21.405+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14348
2019-09-04T06:32:21.405+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:21.405+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:21.405+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14348
2019-09-04T06:32:21.406+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14351
2019-09-04T06:32:21.406+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14351
2019-09-04T06:32:21.406+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578739, 9), t: 1 }({ ts: Timestamp(1567578739, 9), t: 1 })
2019-09-04T06:32:21.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:21.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:21.494+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:21.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:21.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:21.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:21.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:21.594+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:21.602+0000 D2 COMMAND [conn126] run command admin.$cmd { ping: 1, $db: "admin" }
2019-09-04T06:32:21.602+0000 I COMMAND [conn126] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
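[editor's note] The [ReplBatcher] block repeats once per second: the batcher opens a WiredTiger snapshot, re-reads the catalog metadata for local.oplog.rs (a capped collection of size 1073741824.0 bytes, i.e. 1 GiB, per the metadata above), then rolls the transaction back. The same catalog options are visible from a client without scanning any data; a sketch assuming PyMongo:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    # Collection.options() reads catalog metadata only, no collection scan.
    opts = client.local["oplog.rs"].options()
    print(opts)  # expected on this node: {'capped': True, 'size': 1073741824, ...}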
admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:21.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:21.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:21.613+0000 D2 COMMAND [conn182] run command config.$cmd { ping: 1.0, lsid: { id: UUID("02492cc9-cb3a-4cd4-9c2e-0d7430e82ce2") }, $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:32:21.613+0000 I COMMAND [conn182] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("02492cc9-cb3a-4cd4-9c2e-0d7430e82ce2") }, $clusterTime: { clusterTime: Timestamp(1567578679, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:21.634+0000 D2 COMMAND [conn325] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578738, 1), signature: { hash: BinData(0, 0D0BB0ED5B61061781383AFC683690CFB9762D5E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:21.634+0000 D1 REPL [conn325] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578739, 9), t: 1 } 2019-09-04T06:32:21.634+0000 D3 REPL [conn325] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.644+0000 2019-09-04T06:32:21.642+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:21.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:21.650+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42172 #337 (84 connections now open) 2019-09-04T06:32:21.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:21.650+0000 D2 COMMAND [conn337] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:21.650+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45814 #338 (85 connections now open) 2019-09-04T06:32:21.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:21.650+0000 I NETWORK [conn337] received client metadata from 10.108.2.48:42172 conn337: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" 
} } 2019-09-04T06:32:21.650+0000 I COMMAND [conn337] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:21.651+0000 D2 COMMAND [conn338] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:21.651+0000 I NETWORK [conn338] received client metadata from 10.108.2.72:45814 conn338: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:21.651+0000 I COMMAND [conn338] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:21.651+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52212 #339 (86 connections now open) 2019-09-04T06:32:21.651+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:21.651+0000 D2 COMMAND [conn339] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:21.651+0000 I NETWORK [conn339] received client metadata from 10.108.2.58:52212 conn339: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:21.651+0000 I COMMAND [conn339] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:21.651+0000 D2 COMMAND [conn339] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, 
readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578733, 1), signature: { hash: BinData(0, 45E6BA95D4A46C041D1AB6A238AC46A21C109958), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:21.651+0000 D1 REPL [conn339] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578739, 9), t: 1 } 2019-09-04T06:32:21.651+0000 D3 REPL [conn339] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.661+0000 2019-09-04T06:32:21.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:21.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:21.652+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52222 #340 (87 connections now open) 2019-09-04T06:32:21.652+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:21.652+0000 D2 COMMAND [conn340] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:21.652+0000 I NETWORK [conn340] received client metadata from 10.108.2.73:52222 conn340: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:21.652+0000 I COMMAND [conn340] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:21.652+0000 D2 COMMAND [conn340] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 2AE057265C24734D9B4E4568BCA1F32820325A81), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:32:21.652+0000 D1 REPL [conn340] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578739, 9), t: 1 } 2019-09-04T06:32:21.652+0000 D3 REPL [conn340] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.662+0000 2019-09-04T06:32:21.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 
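[editor's note] Each of these config.settings reads arrives from a router with readConcern { level: "majority", afterOpTime: { ..., t: 92 } }, and waitUntilOpTime then blocks: the requested term (92) is higher than the term of the current majority snapshot (1), so the wait can never be satisfied. One plausible reading is that the routers' cached $configServerState opTime predates a re-initialized config replica set, so every such read sits out its full maxTimeMS of 30000 ms and fails, as the timeouts below show. From a driver only the read-concern level and the time limit are settable (afterOpTime is internal to sharding). A sketch of the equivalent client-side read and of handling the resulting timeout, assuming PyMongo; the connection string is illustrative, built from the three configrs members seen in this log:

    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient(
        "mongodb://cmodb802.togewa.com:27019,cmodb803.togewa.com:27019,"
        "cmodb804.togewa.com:27019/?replicaSet=configrs"
    )
    settings = client.config.get_collection(
        "settings", read_concern=ReadConcern("majority")
    )
    try:
        # max_time_ms mirrors the maxTimeMS: 30000 in the logged commands.
        doc = settings.find_one({"_id": "balancer"}, max_time_ms=30000)
    except ExecutionTimeout:
        # Server-side MaxTimeMSExpired (error code 50) surfaces in PyMongo
        # as ExecutionTimeout -- the same failure the assertions below log.
        doc = None
    print(doc)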
2019-09-04T06:32:21.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:21.662+0000 I COMMAND [conn301] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578708, 1), signature: { hash: BinData(0, B5AEC13859CF2AE093A83653359419B1C5F526D1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:32:21.662+0000 I COMMAND [conn310] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, A3FF2599D574559E1DAE48C1FCBADF51073EB20B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:32:21.662+0000 D1 - [conn301] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:32:21.662+0000 D1 - [conn310] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:32:21.662+0000 I COMMAND [conn311] Command on database config timed out waiting for read concern to be satisfied.
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, ECF4B5774F4DA596CBFAEF2B306A437976FBCFAD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:21.662+0000 D1 - [conn311] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:21.662+0000 W - [conn311] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:21.662+0000 W - [conn301] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:21.662+0000 W - [conn310] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:21.679+0000 I - [conn311] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","
o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", 
"elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] 
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:21.679+0000 D1 COMMAND [conn311] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, ECF4B5774F4DA596CBFAEF2B306A437976FBCFAD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:21.679+0000 D1 - [conn311] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:21.679+0000 W - [conn311] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:21.694+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:21.696+0000 I - [conn310] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B"
,"s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : 
"45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] 
mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:21.696+0000 D1 COMMAND [conn310] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, A3FF2599D574559E1DAE48C1FCBADF51073EB20B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:21.696+0000 D1 - [conn310] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:21.696+0000 W - [conn310] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:21.716+0000 I - [conn311] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26Serv
iceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : 
"/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:21.716+0000 W COMMAND [conn311] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:21.716+0000 I COMMAND [conn311] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578702, 1), signature: { hash: BinData(0, ECF4B5774F4DA596CBFAEF2B306A437976FBCFAD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:32:21.717+0000 D2 NETWORK [conn311] Session from 10.108.2.72:45788 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:21.717+0000 I NETWORK [conn311] end connection 10.108.2.72:45788 (86 connections now open) 2019-09-04T06:32:21.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:21.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:21.740+0000 I - [conn301] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:21.740+0000 D1 COMMAND [conn301] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578708, 1), signature: { hash: BinData(0, B5AEC13859CF2AE093A83653359419B1C5F526D1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:21.740+0000 D1 - [conn301] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:21.740+0000 W - [conn301] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:21.743+0000 D2 COMMAND [conn320] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 
2AE057265C24734D9B4E4568BCA1F32820325A81), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:32:21.743+0000 D1 REPL [conn320] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578739, 9), t: 1 }
2019-09-04T06:32:21.743+0000 D3 REPL [conn320] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.753+0000
2019-09-04T06:32:21.754+0000 I - [conn310] ----- BEGIN BACKTRACE ----- [ duplicate of the conn311 backtrace above (identical frames and somap); omitted ] ----- END BACKTRACE -----
2019-09-04T06:32:21.754+0000 W COMMAND [conn310] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:32:21.754+0000 I COMMAND [conn310] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, A3FF2599D574559E1DAE48C1FCBADF51073EB20B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30045ms
2019-09-04T06:32:21.754+0000 D2 NETWORK [conn310] Session from 10.108.2.48:42150 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:32:21.754+0000 I NETWORK [conn310] end connection 10.108.2.48:42150 (85 connections now open)
2019-09-04T06:32:21.756+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48418 #341 (86 connections now open)
2019-09-04T06:32:21.756+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:32:21.756+0000 D2 COMMAND [conn341] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:32:21.756+0000 I NETWORK [conn341] received client metadata from 10.108.2.59:48418 conn341: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:32:21.756+0000 I COMMAND [conn341] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:32:21.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:21.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
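The slow reads above all share one shape: a find on config.settings for the balancer document, sent with readConcern level "majority" anchored at afterOpTime { ts: Timestamp(1566459168, 1), t: 92 } and maxTimeMS: 30000. The conn320 waitUntilOpTime lines suggest why they stall: the requested opTime carries term 92 while the node's newest majority snapshot is at term 1, so the wait appears unsatisfiable within the 30000ms budget and each command fails with MaxTimeMSExpired. A minimal reproduction sketch, assuming a mongo shell connected to this config server; the query shape and the 30000ms limit are copied from the records above, and readConcern() and maxTimeMS() are standard shell cursor helpers (this sketch necessarily omits the afterOpTime clause, which mongos attaches internally rather than through a shell helper):

    db.getSiblingDB("config").settings
        .find({ _id: "balancer" })   // same filter as the logged find
        .limit(1)
        .readConcern("majority")     // level only; afterOpTime is driver-internal
        .maxTimeMS(30000)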
$db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:21.769+0000 I COMMAND [conn312] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, 1171ABEBFE541B8950624F669563FA93C6C7DEA2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:32:21.769+0000 D1 - [conn312] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:21.769+0000 W - [conn312] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:21.786+0000 I - [conn312] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"56
1748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:21.786+0000 D1 COMMAND [conn312] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, 1171ABEBFE541B8950624F669563FA93C6C7DEA2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:21.786+0000 D1 - [conn312] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:21.786+0000 W - [conn312] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:21.791+0000 I - [conn301] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F8
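Two distinct throw sites recur in these backtraces: waitForReadConcern at src/mongo/db/service_entry_point_mongod.cpp 89, where the command gives up waiting for a majority snapshot that satisfies the requested afterOpTime, and global lock acquisition at src/mongo/db/concurrency/lock_state.cpp 884, reached from CurOp::completeAndLogOperation when the slow-operation logger cannot take the global lock either (hence the "Unable to gather storage statistics" warnings). A hedged triage sketch for a node in this state, assuming shell access; db.currentOp() and its inprog/secs_running fields are standard, and the 30-second threshold is an assumption chosen to match the maxTimeMS in these records:

    // list operations that have outlived the 30s maxTimeMS seen in the log
    db.currentOp({ secs_running: { $gt: 30 } }).inprog.forEach(function (op) {
        print(op.opid + "  " + op.secs_running + "s  " + (op.ns || "") + "  " + op.desc);
    })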
8000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", 
"elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:21.791+0000 W COMMAND [conn301] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:21.791+0000 I COMMAND [conn301] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578708, 1), signature: { hash: BinData(0, B5AEC13859CF2AE093A83653359419B1C5F526D1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30090ms 2019-09-04T06:32:21.791+0000 D2 NETWORK [conn301] Session from 10.108.2.54:49232 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:21.791+0000 I NETWORK [conn301] end connection 10.108.2.54:49232 (85 connections now open) 2019-09-04T06:32:21.794+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:21.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:21.806+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:21.811+0000 I - [conn312] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:21.811+0000 W COMMAND [conn312] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:21.811+0000 I COMMAND [conn312] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578701, 1), signature: { hash: BinData(0, 1171ABEBFE541B8950624F669563FA93C6C7DEA2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:32:21.811+0000 D2 NETWORK [conn312] Session from 10.108.2.59:48400 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:21.811+0000 I NETWORK [conn312] end connection 10.108.2.59:48400 (84 connections now open) 2019-09-04T06:32:21.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:21.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:21.895+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:21.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:21.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:21.995+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:21.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:21.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:22.043+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50186 #342 (85 connections now open) 2019-09-04T06:32:22.043+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:22.043+0000 D2 COMMAND [conn342] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:22.043+0000 I NETWORK [conn342] received client metadata from 10.108.2.50:50186 conn342: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:22.043+0000 I COMMAND [conn342] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:22.044+0000 D2 COMMAND [conn342] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 
1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, FC302E736E3E0A04685B3CCCCAB671E279A74EEA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:22.044+0000 D1 REPL [conn342] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578739, 9), t: 1 } 2019-09-04T06:32:22.044+0000 D3 REPL [conn342] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:52.054+0000 2019-09-04T06:32:22.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:22.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.095+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:22.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:22.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.143+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:22.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:22.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:22.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.195+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:22.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:22.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 7456093C02D01CB4D58BCD4E896E56E721B0F461), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:22.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:22.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 7456093C02D01CB4D58BCD4E896E56E721B0F461), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:22.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: 
Timestamp(1567578740, 1), signature: { hash: BinData(0, 7456093C02D01CB4D58BCD4E896E56E721B0F461), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:22.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400) } 2019-09-04T06:32:22.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 7456093C02D01CB4D58BCD4E896E56E721B0F461), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:22.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:32:22.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:22.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.295+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:22.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:22.306+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:22.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.395+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:22.405+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:22.405+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:22.405+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 9) 2019-09-04T06:32:22.405+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14391 2019-09-04T06:32:22.405+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:22.405+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:22.405+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14391 2019-09-04T06:32:22.406+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14394 2019-09-04T06:32:22.406+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14394 2019-09-04T06:32:22.406+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578739, 9), t: 1 }({ ts: 
Timestamp(1567578739, 9), t: 1 }) 2019-09-04T06:32:22.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:22.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.495+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:22.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:22.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:22.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.584+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51852 #343 (86 connections now open) 2019-09-04T06:32:22.584+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:22.585+0000 D2 COMMAND [conn343] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:22.585+0000 I NETWORK [conn343] received client metadata from 10.108.2.74:51852 conn343: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:22.585+0000 I COMMAND [conn343] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:22.595+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:22.598+0000 I COMMAND [conn313] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, D5CCCC4E77C8D415872AB251DCD35BEAE0CDE47F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:32:22.598+0000 D1 - [conn313] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:22.598+0000 W - [conn313] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:22.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:22.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.615+0000 I - [conn313] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b"
:"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : 
"/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] 
mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:22.615+0000 D1 COMMAND [conn313] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, D5CCCC4E77C8D415872AB251DCD35BEAE0CDE47F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:22.615+0000 D1 - [conn313] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:22.615+0000 W - [conn313] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:22.636+0000 I - [conn313] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13
ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : 
"084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:32:22.636+0000 W COMMAND [conn313] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:32:22.636+0000 I COMMAND [conn313] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, D5CCCC4E77C8D415872AB251DCD35BEAE0CDE47F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms
2019-09-04T06:32:22.636+0000 D2 NETWORK [conn313] Session from 10.108.2.74:51836 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:32:22.636+0000 I NETWORK [conn313] end connection 10.108.2.74:51836 (85 connections now open)
2019-09-04T06:32:22.642+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:22.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:22.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:22.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:22.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:22.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:22.696+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:22.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:22.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:22.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster:
1, $db: "admin" } 2019-09-04T06:32:22.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.796+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:22.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:22.806+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:22.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 992) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:22.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 992 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:32.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:22.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:50.839+0000 2019-09-04T06:32:22.838+0000 D2 ASIO [Replication] Request 992 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } 2019-09-04T06:32:22.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:22.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:22.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 992) from 
cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } 2019-09-04T06:32:22.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:22.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:24.838Z 2019-09-04T06:32:22.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:50.839+0000 2019-09-04T06:32:22.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:22.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 993) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:22.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 993 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:32.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:22.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:50.839+0000 2019-09-04T06:32:22.839+0000 D2 ASIO [Replication] Request 993 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } 2019-09-04T06:32:22.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, 
lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:22.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:22.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 993) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } 2019-09-04T06:32:22.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:22.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:32:31.662+0000 2019-09-04T06:32:22.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:32:34.264+0000 2019-09-04T06:32:22.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:22.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:24.839Z 2019-09-04T06:32:22.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:22.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:52.839+0000 2019-09-04T06:32:22.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:52.839+0000 2019-09-04T06:32:22.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:22.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.896+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:22.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:22.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:22.996+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:22.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:22.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 
1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:23.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 7456093C02D01CB4D58BCD4E896E56E721B0F461), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:23.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:23.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 7456093C02D01CB4D58BCD4E896E56E721B0F461), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:23.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 7456093C02D01CB4D58BCD4E896E56E721B0F461), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:23.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400) } 2019-09-04T06:32:23.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 7456093C02D01CB4D58BCD4E896E56E721B0F461), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.096+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:23.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.143+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 
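At this verbosity the capture is dominated by routine traffic: isMaster polls answered in 0ms on a dozen connections, WTJournalFlusher confirmations roughly every 100ms, and replica-set heartbeats rescheduled every two seconds. Completed commands always end with a "protocol:<name> <elapsed>ms" field, which makes the one slow operation in this stretch (conn314, below) easy to surface mechanically. A minimal sketch, assuming only the Python standard library; the 100ms threshold is illustrative:

    import re
    import sys

    # Entries begin with an ISO-8601 timestamp; split on it so the script
    # works whether or not the capture kept one entry per line.
    ENTRY = re.compile(r"(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}\+0000 )")
    # Completed commands end with "... protocol:<name> <elapsed>ms".
    DURATION = re.compile(r"protocol:\S+ (\d+)ms$")

    with open(sys.argv[1]) as fh:
        text = fh.read()
    for entry in ENTRY.split(text):
        entry = " ".join(entry.split())  # collapse wrapped whitespace
        m = DURATION.search(entry)
        if m and int(m.group(1)) >= 100:  # illustrative threshold
            print(f"{m.group(1):>6}ms  {entry[:140]}")

Against this stretch of the log it would surface exactly one entry: the 30027ms find on config.settings recorded below.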
2019-09-04T06:32:23.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.196+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:23.224+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.224+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:23.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:32:23.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.296+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:23.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.306+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.396+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:23.405+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:23.405+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:23.405+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 9) 2019-09-04T06:32:23.405+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14423 2019-09-04T06:32:23.405+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:23.405+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:23.405+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14423 2019-09-04T06:32:23.406+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14426 2019-09-04T06:32:23.406+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14426 2019-09-04T06:32:23.406+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578739, 9), t: 1 }({ ts: Timestamp(1567578739, 9), t: 1 }) 2019-09-04T06:32:23.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.496+0000 I COMMAND [conn15] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.497+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:23.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.597+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:23.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.643+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.697+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:23.724+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.724+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.797+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:23.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.806+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.897+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:23.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:23.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:23.997+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:24.000+0000 D3 STORAGE [ftdc] setting 
timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:24.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.097+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:24.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.143+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.153+0000 I COMMAND [conn314] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, D5CCCC4E77C8D415872AB251DCD35BEAE0CDE47F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:32:24.153+0000 D1 - [conn314] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:24.153+0000 W - [conn314] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:24.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.169+0000 I - [conn314] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:24.169+0000 D1 COMMAND [conn314] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, D5CCCC4E77C8D415872AB251DCD35BEAE0CDE47F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:24.169+0000 D1 - [conn314] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:24.169+0000 W - [conn314] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:24.189+0000 I - [conn314] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:24.189+0000 W COMMAND [conn314] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:24.189+0000 I COMMAND [conn314] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578705, 1), signature: { hash: BinData(0, D5CCCC4E77C8D415872AB251DCD35BEAE0CDE47F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:32:24.189+0000 D2 NETWORK [conn314] Session from 10.108.2.46:41040 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:24.190+0000 I NETWORK [conn314] end connection 10.108.2.46:41040 (84 connections now open) 2019-09-04T06:32:24.197+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:24.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 7456093C02D01CB4D58BCD4E896E56E721B0F461), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:24.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:24.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 7456093C02D01CB4D58BCD4E896E56E721B0F461), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:24.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 7456093C02D01CB4D58BCD4E896E56E721B0F461), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:24.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400) } 2019-09-04T06:32:24.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 7456093C02D01CB4D58BCD4E896E56E721B0F461), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:24.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:24.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.297+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:24.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.306+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.398+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:24.406+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:24.406+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:24.406+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 9) 2019-09-04T06:32:24.406+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14453 2019-09-04T06:32:24.406+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:24.406+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:24.406+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14453 2019-09-04T06:32:24.406+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14456 2019-09-04T06:32:24.406+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14456 2019-09-04T06:32:24.406+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578739, 9), t: 1 }({ ts: Timestamp(1567578739, 9), t: 1 }) 2019-09-04T06:32:24.407+0000 D2 ASIO [RS] Request 988 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpApplied: { ts: Timestamp(1567578739, 9), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } 2019-09-04T06:32:24.407+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: 
[], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpApplied: { ts: Timestamp(1567578739, 9), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:24.407+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:24.407+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:24.407+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:34.264+0000 2019-09-04T06:32:24.407+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:35.513+0000 2019-09-04T06:32:24.407+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:24.407+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:52.839+0000 2019-09-04T06:32:24.407+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 994 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:34.407+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 9), t: 1 } } 2019-09-04T06:32:24.408+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.406+0000 2019-09-04T06:32:24.415+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:24.415+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:24.415+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 995 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:54.415+0000 
cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:24.415+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.406+0000 2019-09-04T06:32:24.415+0000 D2 ASIO [RS] Request 995 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } 2019-09-04T06:32:24.415+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:24.415+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:24.415+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:49.406+0000 2019-09-04T06:32:24.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.498+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:24.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 
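The failure recorded above is self-consistent: conn314's find on config.settings (the balancer document) carried readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, yet every optime in the surrounding heartbeats and replSetUpdatePosition calls is in term 1. Optimes compare by term first, so a term-1 member can never satisfy an afterOpTime from term 92; waitForReadConcern therefore blocks until maxTimeMS (30000ms) lapses and the command fails with MaxTimeMSExpired (errCode 50) after 30027ms. The stale term most plausibly comes from a mongos whose $configServerState still remembers an earlier incarnation of this config server replica set. A hedged PyMongo reproduction of the same server command (host reused from the log; against a healthy set this simply returns the balancer document):

    from bson.timestamp import Timestamp
    from pymongo import MongoClient, ReadPreference
    from pymongo.errors import ExecutionTimeout

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

    # The command document conn314 received: a majority read that must wait
    # for an optime from term 92, which this term-1 replica set never reaches.
    cmd = {
        "find": "settings",
        "filter": {"_id": "balancer"},
        "limit": 1,
        "maxTimeMS": 30000,
        "readConcern": {
            "level": "majority",
            "afterOpTime": {"ts": Timestamp(1566459161, 3), "t": 92},
        },
    }

    try:
        client["config"].command(cmd, read_preference=ReadPreference.NEAREST)
    except ExecutionTimeout as exc:  # MaxTimeMSExpired, server error code 50
        print("timed out waiting for read concern:", exc)

The read preference mirrors the { mode: "nearest" } in the logged command, which is also what lets the find target this secondary in the first place.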
2019-09-04T06:32:24.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.556+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:32:24.556+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:32:24.556+0000 D2 COMMAND [conn90] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:24.556+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:32:24.598+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:24.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.642+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.698+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:24.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.798+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:24.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.806+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:24.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 996) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:24.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 996 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:34.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:24.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:52.839+0000 2019-09-04T06:32:24.838+0000 D2 ASIO [Replication] Request 996 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", 
syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } 2019-09-04T06:32:24.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:24.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:24.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 996) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } 2019-09-04T06:32:24.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:24.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:26.838Z 2019-09-04T06:32:24.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:52.839+0000 2019-09-04T06:32:24.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:24.839+0000 D2 REPL_HB 
[replexec-3] Sending heartbeat (requestId: 997) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:24.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 997 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:34.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:24.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:52.839+0000 2019-09-04T06:32:24.839+0000 D2 ASIO [Replication] Request 997 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } 2019-09-04T06:32:24.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:24.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:24.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 997) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578739, 9) } 2019-09-04T06:32:24.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:24.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:32:35.513+0000 2019-09-04T06:32:24.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:32:35.780+0000 2019-09-04T06:32:24.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:24.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:26.839Z 2019-09-04T06:32:24.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:24.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:54.839+0000 2019-09-04T06:32:24.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:54.839+0000 2019-09-04T06:32:24.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.898+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:24.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:24.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:24.998+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:25.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:25.049+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36732 #344 (85 connections now open) 2019-09-04T06:32:25.049+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:25.049+0000 D2 COMMAND [conn344] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:25.049+0000 I NETWORK [conn344] received client metadata from 10.108.2.55:36732 conn344: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:25.049+0000 I COMMAND [conn344] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", 
architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:25.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:25.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:25.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 7456093C02D01CB4D58BCD4E896E56E721B0F461), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:25.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:25.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 7456093C02D01CB4D58BCD4E896E56E721B0F461), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:25.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 7456093C02D01CB4D58BCD4E896E56E721B0F461), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:25.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), opTime: { ts: Timestamp(1567578739, 9), t: 1 }, wallTime: new Date(1567578739400) } 2019-09-04T06:32:25.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 7456093C02D01CB4D58BCD4E896E56E721B0F461), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:25.066+0000 I COMMAND [conn315] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578711, 1), signature: { hash: BinData(0, 6FC68FAD4499724D64BA3144ABE5F4A5DAEAA379), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:25.066+0000 D1 - [conn315] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:25.066+0000 W - [conn315] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:25.082+0000 I - [conn315] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:25.082+0000 D1 COMMAND [conn315] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578711, 1), signature: { hash: BinData(0, 6FC68FAD4499724D64BA3144ABE5F4A5DAEAA379), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:25.082+0000 D1 - [conn315] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:25.082+0000 W - [conn315] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:25.098+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:25.102+0000 I - [conn315] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15Servic
eExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : 
"/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:25.102+0000 W COMMAND [conn315] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:25.102+0000 I COMMAND [conn315] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578711, 1), signature: { hash: BinData(0, 6FC68FAD4499724D64BA3144ABE5F4A5DAEAA379), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms 2019-09-04T06:32:25.102+0000 D2 NETWORK [conn315] Session from 10.108.2.55:36712 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:25.102+0000 I NETWORK [conn315] end connection 10.108.2.55:36712 (84 connections now open) 2019-09-04T06:32:25.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:25.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:25.143+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:25.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:25.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:25.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:25.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:25.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:25.198+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:25.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:25.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:25.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:25.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:25.295+0000 D2 ASIO [RS] Request 994 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578745, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578745292), o: { $v: 1, $set: { ping: new Date(1567578745289), up: 2645 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpApplied: { ts: Timestamp(1567578745, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578745, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 1) } 2019-09-04T06:32:25.295+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578745, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578745292), o: { $v: 1, $set: { ping: new Date(1567578745289), up: 2645 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpApplied: { ts: Timestamp(1567578745, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578745, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:25.295+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:25.295+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578745, 1) and ending at ts: Timestamp(1567578745, 1) 2019-09-04T06:32:25.295+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:35.780+0000 2019-09-04T06:32:25.295+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:35.775+0000 2019-09-04T06:32:25.295+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 
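The stretch above captures two overlapping stories: routine replSetHeartbeat round trips among the three configrs members (requests 996 and 997, each answered within a millisecond, with the election timeout repeatedly postponed because the primary cmodb802 keeps responding), and a genuine failure on conn315. That find on config.settings asked for readConcern level "majority" with afterOpTime { ts: Timestamp(1566459168, 1), t: 92 } — an optime from term 92, while this replica set is running in term 1, so the wait can never be satisfied — and the command burned its full maxTimeMS budget (30000 ms, logged as 30032ms) before failing with MaxTimeMSExpired. The two backtraces are trace output for a user assertion, not a crash: the first is thrown from waitForReadConcern, the second from the slow-operation logger timing out on a global-lock acquisition while gathering storage statistics. A minimal shell sketch of the same request, assuming direct access to a configrs member (NumberLong is used because optime terms are 64-bit integers):

    // Sketch (hedged): re-issue conn315's query from the mongo shell. The stale
    // afterOpTime (term 92 vs. the current term 1) should make the majority-read
    // wait run out the 30000 ms budget and fail with MaxTimeMSExpired.
    db.getSiblingDB("config").runCommand({
      find: "settings",
      filter: { _id: "balancer" },
      limit: 1,
      readConcern: {
        level: "majority",
        afterOpTime: { ts: Timestamp(1566459168, 1), t: NumberLong(92) }
      },
      maxTimeMS: 30000
    })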
2019-09-04T06:32:25.295+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:54.839+0000 2019-09-04T06:32:25.295+0000 D2 REPL [replication-0] oplog buffer has 0 bytes 2019-09-04T06:32:25.295+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578745, 1), t: 1 } 2019-09-04T06:32:25.295+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:25.295+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:25.295+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 9) 2019-09-04T06:32:25.295+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14483 2019-09-04T06:32:25.295+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:25.295+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:25.295+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14483 2019-09-04T06:32:25.295+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:25.295+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:32:25.295+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:25.295+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578739, 9) 2019-09-04T06:32:25.295+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578745, 1) } 2019-09-04T06:32:25.295+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14486 2019-09-04T06:32:25.295+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:25.295+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:25.295+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14486 2019-09-04T06:32:25.295+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14457 2019-09-04T06:32:25.295+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14457 2019-09-04T06:32:25.295+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14489 2019-09-04T06:32:25.295+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14489 2019-09-04T06:32:25.295+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:25.295+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 14491 2019-09-04T06:32:25.295+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578745, 1) 2019-09-04T06:32:25.295+0000 D3 STORAGE [repl-writer-worker-2] WT set 
timestamp of future write operations to Timestamp(1567578745, 1) 2019-09-04T06:32:25.295+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 14491 2019-09-04T06:32:25.295+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:25.295+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:32:25.295+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14490 2019-09-04T06:32:25.295+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14490 2019-09-04T06:32:25.295+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14493 2019-09-04T06:32:25.295+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14493 2019-09-04T06:32:25.295+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578745, 1), t: 1 }({ ts: Timestamp(1567578745, 1), t: 1 }) 2019-09-04T06:32:25.295+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578745, 1) 2019-09-04T06:32:25.295+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14494 2019-09-04T06:32:25.295+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578745, 1) } } ] } sort: {} projection: {} 2019-09-04T06:32:25.295+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:25.295+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:32:25.295+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578745, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:32:25.295+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:25.295+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:25.295+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:25.295+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578745, 1) || First: notFirst: full path: ts 2019-09-04T06:32:25.295+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578745, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578745, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578745, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
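The D5 QUERY lines here are the subplanner at work while rsSync-0 advances minvalid: the $or filter over local.replset.minvalid is split into its two branches, each branch is rated against the only available index (_id_), and since neither t nor ts is indexed every branch — and the $or as a whole — falls back to a collection scan. On a one-document internal collection this is the expected, cheap outcome rather than a planning problem. A sketch of how the same decision looks through explain(), purely for illustration (local.replset.minvalid is replication-internal and not something to query in production):

    // Hedged illustration: ask the planner to explain the same $or predicate.
    // Expect winningPlan.stage to be "COLLSCAN", matching the log's
    // "Planner: outputting a collscan" lines, since only the _id index exists.
    db.getSiblingDB("local").getCollection("replset.minvalid").find({
      $or: [
        { t: { $lt: 1 } },
        { t: 1, ts: { $lt: Timestamp(1567578745, 1) } }
      ]
    }).explain("queryPlanner")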
2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578745, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:25.296+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14494 2019-09-04T06:32:25.296+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:25.296+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:25.296+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578745, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578745292), o: { $v: 1, $set: { ping: new Date(1567578745289), up: 2645 } } }, oplog application mode: Secondary 2019-09-04T06:32:25.296+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578745, 1) 2019-09-04T06:32:25.296+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 14496 2019-09-04T06:32:25.296+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:32:25.296+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:32:25.296+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 14496 2019-09-04T06:32:25.296+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:25.296+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578745, 1), t: 1 }({ ts: Timestamp(1567578745, 1), t: 1 }) 2019-09-04T06:32:25.296+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578745, 1) 2019-09-04T06:32:25.296+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14495 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:25.296+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:25.296+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:25.296+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14495 2019-09-04T06:32:25.296+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578745, 1) 2019-09-04T06:32:25.296+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14500 2019-09-04T06:32:25.296+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14500 2019-09-04T06:32:25.296+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578745, 1), t: 1 }({ ts: Timestamp(1567578745, 1), t: 1 }) 2019-09-04T06:32:25.296+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:25.296+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578745, 1), t: 1 }, appliedWallTime: new Date(1567578745292), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:25.296+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 998 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:55.296+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578745, 1), t: 1 }, appliedWallTime: new Date(1567578745292), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:25.296+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:55.296+0000 2019-09-04T06:32:25.296+0000 D2 ASIO [RS] Request 998 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578745, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 1) } 2019-09-04T06:32:25.296+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578745, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:25.297+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:25.297+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:55.297+0000 2019-09-04T06:32:25.297+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578745, 1), t: 1 } 2019-09-04T06:32:25.297+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 999 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:35.297+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578739, 9), t: 1 } } 2019-09-04T06:32:25.297+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:55.297+0000 2019-09-04T06:32:25.305+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:25.306+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:25.306+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:25.306+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578745, 1), t: 1 }, durableWallTime: new Date(1567578745292), appliedOpTime: { ts: Timestamp(1567578745, 1), t: 1 }, appliedWallTime: new Date(1567578745292), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:25.306+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1000 -- target:[cmodb804.togewa.com:27019] db:admin 
expDate:2019-09-04T06:32:55.306+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578745, 1), t: 1 }, durableWallTime: new Date(1567578745292), appliedOpTime: { ts: Timestamp(1567578745, 1), t: 1 }, appliedWallTime: new Date(1567578745292), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:25.306+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:55.297+0000 2019-09-04T06:32:25.306+0000 D2 ASIO [RS] Request 1000 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578745, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 1) } 2019-09-04T06:32:25.306+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578739, 9), t: 1 }, lastCommittedWall: new Date(1567578739400), lastOpVisible: { ts: Timestamp(1567578739, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578739, 9), $clusterTime: { clusterTime: Timestamp(1567578745, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:25.306+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:25.306+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:55.297+0000 2019-09-04T06:32:25.306+0000 D2 ASIO [RS] Request 999 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 1), t: 1 }, lastCommittedWall: new Date(1567578745292), lastOpVisible: { ts: Timestamp(1567578745, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578745, 1), t: 1 }, lastCommittedWall: new Date(1567578745292), lastOpApplied: { ts: Timestamp(1567578745, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, 
$gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 1), $clusterTime: { clusterTime: Timestamp(1567578745, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 1) } 2019-09-04T06:32:25.306+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 1), t: 1 }, lastCommittedWall: new Date(1567578745292), lastOpVisible: { ts: Timestamp(1567578745, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578745, 1), t: 1 }, lastCommittedWall: new Date(1567578745292), lastOpApplied: { ts: Timestamp(1567578745, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 1), $clusterTime: { clusterTime: Timestamp(1567578745, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:25.306+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:25.306+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:25.306+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000 2019-09-04T06:32:25.306+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000 2019-09-04T06:32:25.306+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578740, 1) 2019-09-04T06:32:25.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:25.306+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:35.775+0000 2019-09-04T06:32:25.307+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:36.337+0000 2019-09-04T06:32:25.307+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000 2019-09-04T06:32:25.307+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000 2019-09-04T06:32:25.307+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:25.307+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000 2019-09-04T06:32:25.307+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:54.839+0000 2019-09-04T06:32:25.307+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000 2019-09-04T06:32:25.307+0000 D3 REPL [conn326] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000 2019-09-04T06:32:25.307+0000 D3 REPL [conn326] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:49.840+0000 2019-09-04T06:32:25.307+0000 D3 REPL [conn336] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 
2019-09-04T06:32:25.307+0000 D3 REPL [conn336] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.130+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn325] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn325] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.644+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn339] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn339] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.661+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000
2019-09-04T06:32:25.307+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:25.307+0000 D3 REPL [conn340] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn340] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.662+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn320] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn320] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.753+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn342] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn342] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:52.054+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000
2019-09-04T06:32:25.307+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1001 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:35.307+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578745, 1), t: 1 } }
2019-09-04T06:32:25.307+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000
2019-09-04T06:32:25.307+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:55.297+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578745, 1), t: 1 }, 2019-09-04T06:32:25.292+0000
2019-09-04T06:32:25.307+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000
2019-09-04T06:32:25.307+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578745, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a7902d1a496712d7248'), operName: "", parentOperId: "5d6f5a7902d1a496712d7245" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 1), signature: { hash: BinData(0, 80ECB12492EB246DD8838B87D77BE589FB20238B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578745, 1), t: 1 } }, $db: "config" }
2019-09-04T06:32:25.307+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f5a7902d1a496712d7245|5d6f5a7902d1a496712d7248
2019-09-04T06:32:25.307+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578745, 1), t: 1 } } }
2019-09-04T06:32:25.307+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:32:25.307+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578745, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a7902d1a496712d7248'), operName: "", parentOperId: "5d6f5a7902d1a496712d7245" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 1), signature: { hash: BinData(0, 80ECB12492EB246DD8838B87D77BE589FB20238B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578745, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578745, 1)
2019-09-04T06:32:25.307+0000 D2 QUERY [conn21] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1
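conn21's find arrives with readConcern { level: "majority", afterOpTime: ... }, so the server first waits for a committed snapshot at or beyond that optime and then pins the read at readTs (1567578745, 1). The tracking_info and $configServerState fields are added by mongos; a plain client can reproduce only the read-concern part. A rough PyMongo analogue (hostname taken from this log, not the internal command):

    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
    settings = client.get_database("config", read_concern=ReadConcern("majority"))["settings"]
    # config.settings does not exist yet on this cluster, so, as with the
    # EOF plan above, the read matches nothing and returns None.
    print(settings.find_one({"_id": "chunksize"}))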
2019-09-04T06:32:25.307+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578745, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a7902d1a496712d7248'), operName: "", parentOperId: "5d6f5a7902d1a496712d7245" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 1), signature: { hash: BinData(0, 80ECB12492EB246DD8838B87D77BE589FB20238B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578745, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:32:25.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:25.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:25.395+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578745, 1)
2019-09-04T06:32:25.406+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:25.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:25.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:25.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:25.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:25.506+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:25.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:25.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:25.606+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:25.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:25.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:25.642+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:25.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:25.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:25.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:25.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:25.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:25.706+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:25.715+0000 D2 ASIO [RS] Request 1001 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578745, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578745711), o: { $v: 1, $set: { ping: new Date(1567578745711) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 1), t: 1 }, lastCommittedWall: new Date(1567578745292), lastOpVisible: { ts: Timestamp(1567578745, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578745, 1), t: 1 }, lastCommittedWall: new Date(1567578745292), lastOpApplied: { ts: Timestamp(1567578745, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 1), $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) }
2019-09-04T06:32:25.715+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578745, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578745711), o: { $v: 1, $set: { ping: new Date(1567578745711) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 1), t: 1 }, lastCommittedWall: new Date(1567578745292), lastOpVisible: { ts: Timestamp(1567578745, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578745, 1), t: 1 }, lastCommittedWall: new Date(1567578745292), lastOpApplied: { ts: Timestamp(1567578745, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 1), $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:25.715+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:25.715+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578745, 2) and ending at ts: Timestamp(1567578745, 2)
2019-09-04T06:32:25.715+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:36.337+0000
2019-09-04T06:32:25.715+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:35.880+0000
2019-09-04T06:32:25.715+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:25.715+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:54.839+0000
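Request 1001 is the oplog fetcher's getMore returning one entry: an op: "u" update of config.lockpings produced upstream. The fetcher loop (requests 999/1001/1003/1005) is essentially a tailable cursor over local.oplog.rs plus internal extras (term, lastKnownCommittedOpTime, the 13981010-byte batch cap) that drivers do not send. A rough external approximation in PyMongo, assuming direct access to the sync source:

    from pymongo import MongoClient, CursorType
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb804.togewa.com:27019", directConnection=True)
    oplog = client.local["oplog.rs"]
    last_seen = Timestamp(1567578745, 1)  # resume point, as in the log
    cursor = oplog.find({"ts": {"$gt": last_seen}},
                        cursor_type=CursorType.TAILABLE_AWAIT)
    for entry in cursor:
        # e.g. the config.lockpings update (op: "u") fetched above
        print(entry["ts"], entry["op"], entry["ns"])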
2019-09-04T06:32:25.715+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578745, 2), t: 1 }
2019-09-04T06:32:25.715+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:25.715+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:25.715+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578745, 1)
2019-09-04T06:32:25.715+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14513
2019-09-04T06:32:25.715+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:25.715+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:25.715+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14513
2019-09-04T06:32:25.715+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:25.715+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:32:25.715+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:25.715+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578745, 1)
2019-09-04T06:32:25.715+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14516
2019-09-04T06:32:25.715+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578745, 2) }
2019-09-04T06:32:25.715+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:25.715+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:25.715+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14516
2019-09-04T06:32:25.715+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14501
2019-09-04T06:32:25.715+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14501
2019-09-04T06:32:25.715+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14519
2019-09-04T06:32:25.715+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14519
2019-09-04T06:32:25.715+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:25.715+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 14521
2019-09-04T06:32:25.715+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578745, 2)
2019-09-04T06:32:25.715+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578745, 2)
2019-09-04T06:32:25.715+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 14521
2019-09-04T06:32:25.715+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:25.715+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:25.715+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14520
2019-09-04T06:32:25.715+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14520
2019-09-04T06:32:25.715+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14523
2019-09-04T06:32:25.715+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14523
2019-09-04T06:32:25.715+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578745, 2), t: 1 }({ ts: Timestamp(1567578745, 2), t: 1 })
2019-09-04T06:32:25.715+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578745, 2)
2019-09-04T06:32:25.715+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14524
2019-09-04T06:32:25.715+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578745, 2) } } ] } sort: {} projection: {}
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578745, 2) Sort: {} Proj: {} =============================
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578745, 2) || First: notFirst: full path: ts
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578745, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
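The D5 QUERY lines are the subplanner handling the minvalid read: each $or branch is rated against the only index (_id_), neither branch is indexable, and each falls back to a collection scan. The same plan choice is visible from a client via explain(); a sketch, assuming direct access to this node's local database:

    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
    minvalid = client.local["replset.minvalid"]
    plan = minvalid.find({"$or": [{"t": {"$lt": 1}},
                                  {"t": 1, "ts": {"$lt": Timestamp(1567578745, 2)}}]}).explain()
    # Expect a COLLSCAN winning plan, matching "Planner: outputting a collscan".
    print(plan["queryPlanner"]["winningPlan"])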
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578745, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578745, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578745, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:25.716+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14524
2019-09-04T06:32:25.716+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:25.716+0000 D3 STORAGE [repl-writer-worker-1] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:25.716+0000 D3 REPL [repl-writer-worker-1] applying op: { ts: Timestamp(1567578745, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578745711), o: { $v: 1, $set: { ping: new Date(1567578745711) } } }, oplog application mode: Secondary
2019-09-04T06:32:25.716+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578745, 2)
2019-09-04T06:32:25.716+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 14526
2019-09-04T06:32:25.716+0000 D2 QUERY [repl-writer-worker-1] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }
2019-09-04T06:32:25.716+0000 D4 WRITE [repl-writer-worker-1] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:25.716+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 14526
2019-09-04T06:32:25.716+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:25.716+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578745, 2), t: 1 }({ ts: Timestamp(1567578745, 2), t: 1 })
2019-09-04T06:32:25.716+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578745, 2)
2019-09-04T06:32:25.716+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14525
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:25.716+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:25.716+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:25.716+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14525
2019-09-04T06:32:25.716+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578745, 2)
2019-09-04T06:32:25.716+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14530
2019-09-04T06:32:25.716+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14530
2019-09-04T06:32:25.716+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578745, 2), t: 1 }({ ts: Timestamp(1567578745, 2), t: 1 })
2019-09-04T06:32:25.716+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:25.716+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578745, 1), t: 1 }, durableWallTime: new Date(1567578745292), appliedOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, appliedWallTime: new Date(1567578745711), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 1), t: 1 }, lastCommittedWall: new Date(1567578745292), lastOpVisible: { ts: Timestamp(1567578745, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:25.716+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1002 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:55.716+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578745, 1), t: 1 }, durableWallTime: new Date(1567578745292), appliedOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, appliedWallTime: new Date(1567578745711), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 1), t: 1 }, lastCommittedWall: new Date(1567578745292), lastOpVisible: { ts: Timestamp(1567578745, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
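repl-writer-worker-1 applied the fetched entry by direct _id lookup ("Using idhack"), then rsSync-0 advanced appliedThrough and re-read minvalid. The oplog entry itself (op: "u" with an o2 _id and a $set) is the translated form of an ordinary update-by-_id; on the primary, the originating write would have been roughly equivalent to the following (illustrative only; lockpings are maintained by the cluster's own components, not by user code):

    from datetime import datetime, timezone
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb804.togewa.com:27019")  # current sync source
    client.config.lockpings.update_one(
        {"_id": "cmodb807.togewa.com:27018:1566460180:7657529699693886924"},
        {"$set": {"ping": datetime.now(timezone.utc)}},
    )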
2019-09-04T06:32:25.716+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:55.716+0000
2019-09-04T06:32:25.717+0000 D2 ASIO [RS] Request 1002 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 1), t: 1 }, lastCommittedWall: new Date(1567578745292), lastOpVisible: { ts: Timestamp(1567578745, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 1), $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) }
2019-09-04T06:32:25.717+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 1), t: 1 }, lastCommittedWall: new Date(1567578745292), lastOpVisible: { ts: Timestamp(1567578745, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 1), $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:25.717+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:25.717+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:55.717+0000
2019-09-04T06:32:25.717+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578745, 2), t: 1 }
2019-09-04T06:32:25.717+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1003 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:35.717+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578745, 1), t: 1 } }
2019-09-04T06:32:25.717+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:55.717+0000
2019-09-04T06:32:25.718+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:32:25.718+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:25.718+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), appliedOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, appliedWallTime: new Date(1567578745711), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 1), t: 1 }, lastCommittedWall: new Date(1567578745292), lastOpVisible: { ts: Timestamp(1567578745, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:25.718+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1004 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:55.718+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), appliedOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, appliedWallTime: new Date(1567578745711), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, durableWallTime: new Date(1567578739400), appliedOpTime: { ts: Timestamp(1567578739, 9), t: 1 }, appliedWallTime: new Date(1567578739400), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 1), t: 1 }, lastCommittedWall: new Date(1567578745292), lastOpVisible: { ts: Timestamp(1567578745, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:25.718+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:55.717+0000
2019-09-04T06:32:25.718+0000 D2 ASIO [RS] Request 1004 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 1), t: 1 }, lastCommittedWall: new Date(1567578745292), lastOpVisible: { ts: Timestamp(1567578745, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 1), $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) }
2019-09-04T06:32:25.718+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 1), t: 1 }, lastCommittedWall: new Date(1567578745292), lastOpVisible: { ts: Timestamp(1567578745, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 1), $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:25.718+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:25.718+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:55.717+0000
2019-09-04T06:32:25.718+0000 D2 ASIO [RS] Request 1003 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpApplied: { ts: Timestamp(1567578745, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 2), $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) }
2019-09-04T06:32:25.718+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpApplied: { ts: Timestamp(1567578745, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 2), $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:25.718+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:25.718+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:32:25.719+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578740, 2)
2019-09-04T06:32:25.719+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:35.880+0000
2019-09-04T06:32:25.719+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:36.775+0000
2019-09-04T06:32:25.719+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:25.719+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:54.839+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000
2019-09-04T06:32:25.719+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1005 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:35.719+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578745, 2), t: 1 } }
2019-09-04T06:32:25.719+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:32:55.717+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn320] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn320] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.753+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn336] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn336] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.130+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn339] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn339] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.661+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn340] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn340] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.662+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn325] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn325] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.644+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn342] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn342] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:52.054+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn326] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn326] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:49.840+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578745, 2), t: 1 }, 2019-09-04T06:32:25.711+0000
2019-09-04T06:32:25.719+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000
2019-09-04T06:32:25.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:25.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:25.806+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:25.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:25.806+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:25.815+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578745, 2)
2019-09-04T06:32:25.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:25.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:25.906+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:25.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:25.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:25.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:25.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:26.006+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:26.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.094+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.106+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:26.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.143+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.207+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:26.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 80ECB12492EB246DD8838B87D77BE589FB20238B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:26.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:32:26.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 80ECB12492EB246DD8838B87D77BE589FB20238B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:26.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 80ECB12492EB246DD8838B87D77BE589FB20238B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:26.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), opTime: { ts: Timestamp(1567578745, 2), t: 1 }, wallTime: new Date(1567578745711) }
2019-09-04T06:32:26.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 80ECB12492EB246DD8838B87D77BE589FB20238B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:26.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:32:26.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.298+0000 I NETWORK [listener] connection accepted from 10.108.2.60:44926 #345 (85 connections now open)
2019-09-04T06:32:26.298+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:32:26.298+0000 D2 COMMAND [conn345] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:32:26.298+0000 I NETWORK [conn345] received client metadata from 10.108.2.60:44926 conn345: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:32:26.298+0000 I COMMAND [conn345] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:32:26.302+0000 D2 COMMAND [conn345] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578743, 1), signature: { hash: BinData(0, 7EB74EF80679BE75277EB9A98AFF21A12F4B2DD4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:32:26.302+0000 D1 REPL [conn345] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578745, 2), t: 1 }
2019-09-04T06:32:26.302+0000 D3 REPL [conn345] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:56.312+0000
2019-09-04T06:32:26.306+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.306+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.307+0000 D4 STORAGE [WTJournalFlusher] flushed journal
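The steady isMaster traffic above (conn6, conn14, conn15, conn17, conn18, conn19, conn23, conn25, conn26, conn29, conn42, conn45, conn46, each polling roughly twice per second) is topology monitoring by peer nodes and mongos instances, and the conn345 handshake shows the client metadata a new connection presents. A driver sees the same response and can tag its own connections' metadata with an application name; a sketch ("example-app" is an arbitrary label, not from this log):

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019",
                         appname="example-app", directConnection=True)
    reply = client.admin.command("isMaster")  # the "hello" command on modern servers
    print(reply["ismaster"], reply.get("secondary"), reply.get("setName"))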
2019-09-04T06:32:26.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.360+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.360+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.407+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:26.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.507+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:26.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.601+0000 D2 COMMAND [conn49] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578739, 9), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 7B0F7FBB31926278759832DF1F99ECE6D56654AF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578739, 9), t: 1 } }, $db: "config" }
2019-09-04T06:32:26.601+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578739, 9), t: 1 } } }
2019-09-04T06:32:26.601+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:32:26.601+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578739, 9), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 7B0F7FBB31926278759832DF1F99ECE6D56654AF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578739, 9), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578745, 2)
2019-09-04T06:32:26.601+0000 D2 QUERY [conn49] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1
2019-09-04T06:32:26.601+0000 I COMMAND [conn49] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578739, 9), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 7B0F7FBB31926278759832DF1F99ECE6D56654AF), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578739, 9), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:32:26.607+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:26.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.642+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.707+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:26.715+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:26.715+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:26.715+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578745, 2)
2019-09-04T06:32:26.715+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14562
2019-09-04T06:32:26.715+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:26.715+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:26.715+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14562
2019-09-04T06:32:26.716+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14565
2019-09-04T06:32:26.716+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14565
2019-09-04T06:32:26.716+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578745, 2), t: 1 }({ ts: Timestamp(1567578745, 2), t: 1 })
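The conn28 heartbeat earlier, like the outgoing heartbeat 1006 below, carries configVersion: 2 and is answered with v: 2: members compare replica-set config versions on every heartbeat exchange. The config being compared can be inspected with replSetGetConfig; a sketch, run against any member:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
    cfg = client.admin.command("replSetGetConfig")["config"]
    print(cfg["_id"], cfg["version"])  # "configrs", 2
    for m in cfg["members"]:
        print(m["_id"], m["host"])     # member ids and hosts for configrs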
returning minvalid: { ts: Timestamp(1567578745, 2), t: 1 }({ ts: Timestamp(1567578745, 2), t: 1 }) 2019-09-04T06:32:26.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:26.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:26.806+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:26.806+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:26.807+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:26.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:26.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:26.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:26.838+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:32:25.063+0000 2019-09-04T06:32:26.838+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:32:26.232+0000 2019-09-04T06:32:26.838+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:32:25.063+0000 2019-09-04T06:32:26.838+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:32:35.063+0000 2019-09-04T06:32:26.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:26.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:56.838+0000 2019-09-04T06:32:26.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1006) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:26.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1006 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:36.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:26.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:56.838+0000 2019-09-04T06:32:26.838+0000 D2 ASIO [Replication] Request 1006 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), opTime: { ts: Timestamp(1567578745, 2), t: 1 }, wallTime: new Date(1567578745711), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 2), $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) } 2019-09-04T06:32:26.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", 
syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), opTime: { ts: Timestamp(1567578745, 2), t: 1 }, wallTime: new Date(1567578745711), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 2), $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:26.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:26.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1006) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), opTime: { ts: Timestamp(1567578745, 2), t: 1 }, wallTime: new Date(1567578745711), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 2), $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) } 2019-09-04T06:32:26.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:26.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:28.838Z 2019-09-04T06:32:26.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:56.838+0000 2019-09-04T06:32:26.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:26.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1007) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:26.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1007 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:36.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:26.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:56.838+0000 2019-09-04T06:32:26.839+0000 D2 ASIO [Replication] Request 1007 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), opTime: { ts: Timestamp(1567578745, 2), t: 1 }, wallTime: new 
Date(1567578745711), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578745, 2), $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) }
2019-09-04T06:32:26.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), opTime: { ts: Timestamp(1567578745, 2), t: 1 }, wallTime: new Date(1567578745711), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578745, 2), $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:32:26.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:26.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1007) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), opTime: { ts: Timestamp(1567578745, 2), t: 1 }, wallTime: new Date(1567578745711), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578745, 2), $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) }
2019-09-04T06:32:26.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:32:26.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:32:36.775+0000
2019-09-04T06:32:26.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:32:37.013+0000
2019-09-04T06:32:26.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:32:26.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:28.839Z
2019-09-04T06:32:26.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:26.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:56.839+0000
2019-09-04T06:32:26.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:56.839+0000
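This is one full heartbeat round as seen from a secondary: the replexec threads send replSetHeartbeat to each other member (requests 1006 and 1007), the responses report state: 2 (SECONDARY, cmodb804) and state: 1 (PRIMARY, cmodb802) with matching durable and applied optimes, and the response from the primary postpones the election timeout (the callback pending at 06:32:36.775 is canceled and rescheduled for 06:32:37.013) before the next round is booked two seconds later. The same membership picture is available on demand through the replSetGetStatus server command; a hedged one-shot sketch, with the connection string assumed from this log:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # name, stateStr, and optimeDate are standard replSetGetStatus fields.
        print(member["name"], member["stateStr"], member.get("optimeDate"))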
2019-09-04T06:32:26.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.907+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:26.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:26.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:26.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:27.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:27.007+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:27.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:27.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:27.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 80ECB12492EB246DD8838B87D77BE589FB20238B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:27.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:32:27.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 80ECB12492EB246DD8838B87D77BE589FB20238B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:27.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0,
80ECB12492EB246DD8838B87D77BE589FB20238B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:27.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), opTime: { ts: Timestamp(1567578745, 2), t: 1 }, wallTime: new Date(1567578745711) } 2019-09-04T06:32:27.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 80ECB12492EB246DD8838B87D77BE589FB20238B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.108+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:27.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.143+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.208+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:27.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:27.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:27.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.308+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:27.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.325+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.408+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:27.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.508+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:27.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.560+0000 I NETWORK [listener] connection accepted from 10.108.2.61:37996 #346 (86 connections now open) 2019-09-04T06:32:27.560+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:27.560+0000 D2 COMMAND [conn346] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:27.560+0000 I NETWORK [conn346] received client metadata from 10.108.2.61:37996 conn346: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:27.560+0000 I COMMAND [conn346] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, 
maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:27.564+0000 D2 COMMAND [conn346] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578741, 1), signature: { hash: BinData(0, 6623B4D362DDEA79EDD3F88245C2B01A5792EA1E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:27.564+0000 D1 REPL [conn346] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578745, 2), t: 1 } 2019-09-04T06:32:27.564+0000 D3 REPL [conn346] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:57.574+0000 2019-09-04T06:32:27.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.608+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:27.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.642+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.708+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:27.716+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:27.716+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:27.716+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578745, 2) 2019-09-04T06:32:27.716+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14600 2019-09-04T06:32:27.716+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:27.716+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:27.716+0000 D3 STORAGE [ReplBatcher] WT 
rollback_transaction for snapshot id 14600 2019-09-04T06:32:27.716+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14603 2019-09-04T06:32:27.716+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14603 2019-09-04T06:32:27.716+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578745, 2), t: 1 }({ ts: Timestamp(1567578745, 2), t: 1 }) 2019-09-04T06:32:27.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.784+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.784+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.808+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:27.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.825+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.909+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:27.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:27.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:27.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:28.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:28.009+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:28.022+0000 D2 COMMAND [conn50] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:28.022+0000 I COMMAND [conn50] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:28.022+0000 D2 COMMAND [conn50] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578720, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578747, 1), signature: { hash: BinData(0, 547E60E7A684B157C05E096A752A860FE6BD559C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578720, 3), t: 1 } }, $db: "config" } 2019-09-04T06:32:28.022+0000 D1 COMMAND [conn50] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578720, 3), t: 
1 } } }
2019-09-04T06:32:28.022+0000 D3 STORAGE [conn50] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:32:28.022+0000 D1 COMMAND [conn50] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578720, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578747, 1), signature: { hash: BinData(0, 547E60E7A684B157C05E096A752A860FE6BD559C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578720, 3), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578745, 2)
2019-09-04T06:32:28.022+0000 D5 QUERY [conn50] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:28.022+0000 D5 QUERY [conn50] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:32:28.022+0000 D5 QUERY [conn50] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:32:28.022+0000 D5 QUERY [conn50] Rated tree: $and
2019-09-04T06:32:28.022+0000 D5 QUERY [conn50] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:28.022+0000 D5 QUERY [conn50] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:28.022+0000 D2 QUERY [conn50] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:28.022+0000 D3 STORAGE [conn50] WT begin_transaction for snapshot id 14614
2019-09-04T06:32:28.022+0000 D3 STORAGE [conn50] WT rollback_transaction for snapshot id 14614
2019-09-04T06:32:28.022+0000 I COMMAND [conn50] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578720, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578747, 1), signature: { hash: BinData(0, 547E60E7A684B157C05E096A752A860FE6BD559C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578720, 3), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:32:28.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:28.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:28.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:28.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:28.109+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:28.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:28.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
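conn50's find on config.shards shows the D5 planner trace end to end: two candidate indexes exist ({ host: 1 } unique and { _id: 1 }), but an unfiltered find rates no indexed solutions, so the planner emits the only possible plan, a collection scan, and skips the plan cache. The I COMMAND summary confirms it: planSummary: COLLSCAN, docsExamined:3, nreturned:3, i.e. three shards are registered. The same plan choice can be inspected with pymongo's cursor explain(), sketched here under the same connection assumption as above:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    plan = client.config.shards.find({}).explain()
    # For an unfiltered read the winning plan stage is COLLSCAN.
    print(plan["queryPlanner"]["winningPlan"]["stage"])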
2019-09-04T06:32:28.143+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:28.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:28.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:28.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:28.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:28.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:28.209+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:28.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 80ECB12492EB246DD8838B87D77BE589FB20238B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:28.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:32:28.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 80ECB12492EB246DD8838B87D77BE589FB20238B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:28.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 80ECB12492EB246DD8838B87D77BE589FB20238B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:28.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), opTime: { ts: Timestamp(1567578745, 2), t: 1 }, wallTime: new Date(1567578745711) }
2019-09-04T06:32:28.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 2), signature: { hash: BinData(0, 80ECB12492EB246DD8838B87D77BE589FB20238B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:28.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:28.235+0000 D4 - [FlowControlRefresher] Refreshing tickets.
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:28.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:28.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:28.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:28.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:28.309+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:28.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:28.325+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:28.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:28.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:28.409+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:28.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:28.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:28.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:28.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:28.509+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:28.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:28.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:28.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:28.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:28.610+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:28.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:28.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:28.643+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:28.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:28.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:28.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:28.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:28.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:32:28.710+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:28.716+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:28.716+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:28.716+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578745, 2) 2019-09-04T06:32:28.716+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14636 2019-09-04T06:32:28.716+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:28.716+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:28.716+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14636 2019-09-04T06:32:28.717+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14639 2019-09-04T06:32:28.717+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14639 2019-09-04T06:32:28.717+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578745, 2), t: 1 }({ ts: Timestamp(1567578745, 2), t: 1 }) 2019-09-04T06:32:28.746+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46696 #347 (87 connections now open) 2019-09-04T06:32:28.746+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:28.746+0000 D2 COMMAND [conn347] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:28.746+0000 I NETWORK [conn347] received client metadata from 10.108.2.64:46696 conn347: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:28.746+0000 I COMMAND [conn347] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:28.750+0000 D2 COMMAND [conn347] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 1), signature: { hash: BinData(0, 4BC8AEE07FF90D8B732E450D3B6393D8C5E79E39), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:28.750+0000 D1 REPL [conn347] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578745, 2), t: 1 } 2019-09-04T06:32:28.750+0000 D3 REPL [conn347] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:58.760+0000 2019-09-04T06:32:28.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:28.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:28.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:28.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:28.810+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:28.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:28.825+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:28.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:28.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1008) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:28.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1008 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:38.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:28.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:56.839+0000 2019-09-04T06:32:28.838+0000 D2 ASIO [Replication] Request 1008 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), opTime: { ts: Timestamp(1567578745, 2), t: 1 }, wallTime: new Date(1567578745711), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 2), $clusterTime: { clusterTime: Timestamp(1567578747, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) } 2019-09-04T06:32:28.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), opTime: { ts: Timestamp(1567578745, 2), t: 1 }, wallTime: new Date(1567578745711), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), 
lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 2), $clusterTime: { clusterTime: Timestamp(1567578747, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:28.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:28.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1008) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), opTime: { ts: Timestamp(1567578745, 2), t: 1 }, wallTime: new Date(1567578745711), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 2), $clusterTime: { clusterTime: Timestamp(1567578747, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) } 2019-09-04T06:32:28.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:28.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:30.838Z 2019-09-04T06:32:28.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:56.839+0000 2019-09-04T06:32:28.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:28.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1009) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:28.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1009 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:38.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:28.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:56.839+0000 2019-09-04T06:32:28.839+0000 D2 ASIO [Replication] Request 1009 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), opTime: { ts: Timestamp(1567578745, 2), t: 1 }, wallTime: new Date(1567578745711), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578745, 2), $clusterTime: { clusterTime: Timestamp(1567578747, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) } 2019-09-04T06:32:28.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), opTime: { ts: Timestamp(1567578745, 2), t: 1 }, wallTime: new Date(1567578745711), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578745, 2), $clusterTime: { clusterTime: Timestamp(1567578747, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:28.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:28.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1009) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), opTime: { ts: Timestamp(1567578745, 2), t: 1 }, wallTime: new Date(1567578745711), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578745, 2), $clusterTime: { clusterTime: Timestamp(1567578747, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578745, 2) } 2019-09-04T06:32:28.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:28.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:32:37.013+0000 2019-09-04T06:32:28.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:32:39.805+0000 2019-09-04T06:32:28.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:28.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:30.839Z 2019-09-04T06:32:28.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:28.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:58.839+0000 2019-09-04T06:32:28.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:58.839+0000 2019-09-04T06:32:28.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:28.859+0000 I 
COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:28.910+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:28.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:28.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:28.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:28.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:29.010+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:29.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578747, 1), signature: { hash: BinData(0, 547E60E7A684B157C05E096A752A860FE6BD559C), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:29.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:32:29.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578747, 1), signature: { hash: BinData(0, 547E60E7A684B157C05E096A752A860FE6BD559C), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:29.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578747, 1), signature: { hash: BinData(0, 547E60E7A684B157C05E096A752A860FE6BD559C), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:29.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), opTime: { ts: Timestamp(1567578745, 2), t: 1 }, wallTime: new Date(1567578745711) }
2019-09-04T06:32:29.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578747, 1), signature: { hash: BinData(0, 547E60E7A684B157C05E096A752A860FE6BD559C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.110+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:29.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.142+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.210+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:29.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:29.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:32:29.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.311+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:29.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.325+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.411+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:29.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.511+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:29.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.611+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:29.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.643+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.711+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:29.716+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:29.716+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:29.716+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578745, 2)
2019-09-04T06:32:29.716+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14671
2019-09-04T06:32:29.716+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:29.716+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:29.716+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14671
2019-09-04T06:32:29.717+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14674
2019-09-04T06:32:29.717+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14674
2019-09-04T06:32:29.717+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578745, 2), t: 1 }({ ts: Timestamp(1567578745, 2), t: 1 })
2019-09-04T06:32:29.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.735+0000 I NETWORK [listener] connection accepted from 10.108.2.49:53444 #348 (88 connections now open)
2019-09-04T06:32:29.735+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:32:29.735+0000 D2 COMMAND [conn348] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:32:29.735+0000 I NETWORK [conn348] received client metadata from 10.108.2.49:53444 conn348: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:32:29.735+0000 I COMMAND [conn348] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:32:29.740+0000 D2 COMMAND [conn348] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 436A2CA78160F4792AF24C20BF715D9579B6F362), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:32:29.740+0000 D1 REPL [conn348] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578745, 2), t: 1 }
2019-09-04T06:32:29.740+0000 D3 REPL [conn348] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:59.750+0000
2019-09-04T06:32:29.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.811+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:29.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:29.911+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:29.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
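The pattern above is the routine topology probe of a sharded cluster: each mongos and intra-cluster connection (conn14..conn52) re-issues isMaster roughly once per second, replica-set peers exchange replSetHeartbeat carrying configVersion, term, $clusterTime, and the responder's durable/applied optimes, and conn348 is a freshly accepted internal client ("NetworkInterfaceTL") whose config.shards read blocks in waitUntilOpTime until a majority snapshot at least as new as its afterOpTime exists. A minimal pymongo sketch of the same liveness probe; the host and port come from this log, while the connection options and expected output are illustrative assumptions, not part of the original:

    from pymongo import MongoClient

    # Connect directly to this config server, bypassing replica-set discovery.
    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)

    # Same command the pollers above send to admin.$cmd.
    reply = client.admin.command("isMaster")
    print(reply.get("ismaster"), reply.get("secondary"), reply.get("setName"))
    # On this node one would expect: False True configrs
    # (it answers heartbeats with state: 2, i.e. SECONDARY, syncing from cmodb804)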
2019-09-04T06:32:29.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:29.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:30.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:30.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:32:30.003+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:32:30.003+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms
2019-09-04T06:32:30.018+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:30.018+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:32:30.018+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:32:30.033+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:32:30.033+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:32:30.033+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456
2019-09-04T06:32:30.033+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:32:30.034+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:32:30.034+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:30.047+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:32:30.047+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:32:30.047+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:32:30.049+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:32:30.049+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:30.049+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:30.049+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:32:30.049+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:32:30.049+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:32:30.049+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:32:30.049+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:32:30.049+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:32:30.049+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:30.049+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:30.049+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:30.049+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578745, 2)
2019-09-04T06:32:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14691
2019-09-04T06:32:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14691
2019-09-04T06:32:30.049+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:30.049+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:32:30.049+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:32:30.049+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:32:30.049+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:30.049+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:32:30.050+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:32:30.050+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578745, 2)
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14694
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14694
2019-09-04T06:32:30.050+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:30.050+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:30.050+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:32:30.050+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:32:30.050+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578745, 2)
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14696
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14696
2019-09-04T06:32:30.050+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:30.050+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:32:30.050+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:32:30.050+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2019-09-04T06:32:30.050+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:32:30.050+0000 D2 COMMAND [conn90] command: listDatabases
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14699
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14699
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14700
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14700
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14701
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14701
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14702
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" }
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14702
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14703
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
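conn90 is a monitoring session: it authenticates as dba_root with a three-leg SCRAM-SHA-1 exchange (saslStart plus two saslContinue calls; payloads are redacted as "xxx"), then samples serverStatus, replSetGetStatus, a count of jumbo chunks, shardConnPoolStats, and the first and last oplog entries via the $natural-sorted finds that the planner turns into single-document COLLSCANs; the listDatabases catalog walk it triggers continues below. A hedged reconstruction of that probe in pymongo; the password is a placeholder, since the log shows only the user (dba_root@admin) and mechanism:

    from pymongo import MongoClient

    # Hypothetical password; authSource and mechanism match the log.
    client = MongoClient("cmodb803.togewa.com", 27019,
                         username="dba_root", password="xxx",
                         authSource="admin", authMechanism="SCRAM-SHA-1",
                         directConnection=True)

    # Mirrors the legacy { count: "chunks", query: { jumbo: true } } command above.
    jumbo = client.config.command("count", "chunks", query={"jumbo": True})["n"]

    # First and last oplog entries, as in the two $natural-sorted finds above.
    oplog = client.local["oplog.rs"]
    first = oplog.find({"ts": {"$exists": True}}).sort("$natural", 1).limit(1).next()
    last = oplog.find({"ts": {"$exists": True}}).sort("$natural", -1).limit(1).next()
    window = last["ts"].time - first["ts"].time  # seconds of history the capped oplog retains
    print(jumbo, window)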
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14703
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14704
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14704
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14705
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14705
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14706
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14706
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14707
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14707
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14708
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14708
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14709
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14709
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14710
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14710
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14711
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14711
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14712
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14712
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14713
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14713
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14714
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14714
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14715
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14715
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14716
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14716
2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:32:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14717 2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14717 2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14718 2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14718 2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14719 2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] 
looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14719
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14720
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14720
2019-09-04T06:32:30.052+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:32:30.052+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14722
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14722
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14723
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14723
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14724
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14724
2019-09-04T06:32:30.052+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:30.052+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14726
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14726
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14727
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14727
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14728
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14728
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14729
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14729
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14730
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14730
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14731
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14731
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14732
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14732
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14733
2019-09-04T06:32:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14733
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14734
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14734
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14735
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14735
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14736
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14736
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14737
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14737
2019-09-04T06:32:30.053+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:30.053+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14739
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14739
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14740
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14740
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14741
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14741
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14742
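
The conn90 sequence above reads as a monitoring-style stats sweep: one listDatabases (reslen:459, 1ms over op_query) followed by a dbStats per database, all tagged $readPreference: { mode: "secondaryPreferred" }, each dbStats opening and immediately rolling back a handful of short-lived WiredTiger snapshots (the begin_transaction/rollback_transaction pairs). A minimal pymongo sketch of the same sweep, assuming only the host, port, and disabled authorization visible in this log; the client code itself is illustrative, not recovered from the log:

    from pymongo import MongoClient
    from pymongo.read_preferences import ReadPreference

    # Direct connection to the node this log belongs to (auth is disabled).
    client = MongoClient("cmodb803.togewa.com", 27019)
    admin = client.get_database(
        "admin", read_preference=ReadPreference.SECONDARY_PREFERRED)
    listing = admin.command("listDatabases")   # the reslen:459 reply above
    for entry in listing["databases"]:
        db = client.get_database(
            entry["name"],
            read_preference=ReadPreference.SECONDARY_PREFERRED)
        stats = db.command("dbStats")          # one run per database, as logged
        print(entry["name"], stats["objects"], stats["dataSize"])
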
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14742
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14743
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14743
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 14744
2019-09-04T06:32:30.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 14744
2019-09-04T06:32:30.053+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:32:30.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:30.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:30.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:30.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:30.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:30.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:30.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:30.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:30.118+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:30.143+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:30.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:30.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:30.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:30.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:30.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:30.192+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:30.192+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:30.218+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:30.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:30.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
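
The run of isMaster calls above (conn14 through conn52, each a 907-byte reply in 0ms) is ordinary topology monitoring by peers and routers; comparing conn17's polls here and later in the log, each monitoring connection re-asks roughly every half second. A hedged sketch of the same check from a client, assuming the host/port from this log; the printed fields are standard isMaster reply fields rather than values recovered from the log:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    reply = client.admin.command("isMaster")
    # On this node the expected answer is a healthy secondary of "configrs".
    print(reply["ismaster"], reply["secondary"], reply.get("setName"))
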
2019-09-04T06:32:30.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578747, 1), signature: { hash: BinData(0, 547E60E7A684B157C05E096A752A860FE6BD559C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:30.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:30.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578747, 1), signature: { hash: BinData(0, 547E60E7A684B157C05E096A752A860FE6BD559C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:30.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578747, 1), signature: { hash: BinData(0, 547E60E7A684B157C05E096A752A860FE6BD559C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:30.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), opTime: { ts: Timestamp(1567578745, 2), t: 1 }, wallTime: new Date(1567578745711) } 2019-09-04T06:32:30.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578747, 1), signature: { hash: BinData(0, 547E60E7A684B157C05E096A752A860FE6BD559C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:30.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:30.251+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49260 #349 (89 connections now open) 2019-09-04T06:32:30.251+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:30.251+0000 D2 COMMAND [conn349] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:30.251+0000 I NETWORK [conn349] received client metadata from 10.108.2.54:49260 conn349: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:30.251+0000 I COMMAND [conn349] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:30.251+0000 D2 COMMAND [conn349] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578748, 1), signature: { hash: BinData(0, 4288F60C47116EFABC53087AB62874B09C6AEA93), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:30.251+0000 D1 REPL [conn349] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578745, 2), t: 1 } 2019-09-04T06:32:30.251+0000 D3 REPL [conn349] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.261+0000 2019-09-04T06:32:30.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.318+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:30.319+0000 D2 ASIO [RS] Request 1005 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578750, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578750292), o: { $v: 1, $set: { ping: new Date(1567578750292) } } }, { ts: Timestamp(1567578750, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: 
UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578750292), o: { $v: 1, $set: { ping: new Date(1567578750291) } } }, { ts: Timestamp(1567578750, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578750292), o: { $v: 1, $set: { ping: new Date(1567578750292) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpApplied: { ts: Timestamp(1567578750, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 2), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } 2019-09-04T06:32:30.319+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578750, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578750292), o: { $v: 1, $set: { ping: new Date(1567578750292) } } }, { ts: Timestamp(1567578750, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578750292), o: { $v: 1, $set: { ping: new Date(1567578750291) } } }, { ts: Timestamp(1567578750, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578750292), o: { $v: 1, $set: { ping: new Date(1567578750292) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpApplied: { ts: Timestamp(1567578750, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578745, 2), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:30.319+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:30.320+0000 D2 REPL [replication-0] oplog fetcher read 3 operations from remote oplog starting at ts: 
Timestamp(1567578750, 1) and ending at ts: Timestamp(1567578750, 3) 2019-09-04T06:32:30.320+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:39.805+0000 2019-09-04T06:32:30.320+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:40.792+0000 2019-09-04T06:32:30.320+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:30.320+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578750, 3), t: 1 } 2019-09-04T06:32:30.320+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:30.320+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:30.320+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578745, 2) 2019-09-04T06:32:30.320+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14762 2019-09-04T06:32:30.320+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:30.320+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:30.320+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14762 2019-09-04T06:32:30.320+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:30.320+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:30.320+0000 D2 REPL [rsSync-0] replication batch size is 3 2019-09-04T06:32:30.320+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:58.839+0000 2019-09-04T06:32:30.320+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578745, 2) 2019-09-04T06:32:30.320+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14765 2019-09-04T06:32:30.320+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578750, 1) } 2019-09-04T06:32:30.320+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:30.320+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:30.320+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14765 2019-09-04T06:32:30.320+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14675 2019-09-04T06:32:30.320+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14675 2019-09-04T06:32:30.320+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14768 2019-09-04T06:32:30.320+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14768 2019-09-04T06:32:30.320+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf 
of pool repl writer worker Pool
2019-09-04T06:32:30.320+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 14770
2019-09-04T06:32:30.320+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578750, 1)
2019-09-04T06:32:30.320+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578750, 1)
2019-09-04T06:32:30.320+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578750, 2)
2019-09-04T06:32:30.320+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578750, 2)
2019-09-04T06:32:30.320+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578750, 3)
2019-09-04T06:32:30.320+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578750, 3)
2019-09-04T06:32:30.320+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 14770
2019-09-04T06:32:30.320+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:30.320+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:30.320+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14769
2019-09-04T06:32:30.320+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14769
2019-09-04T06:32:30.320+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14772
2019-09-04T06:32:30.320+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14772
2019-09-04T06:32:30.320+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578750, 3), t: 1 }({ ts: Timestamp(1567578750, 3), t: 1 })
2019-09-04T06:32:30.320+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578750, 3)
2019-09-04T06:32:30.320+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14773
2019-09-04T06:32:30.320+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578750, 3) } } ] } sort: {} projection: {}
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578750, 3)
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Rated tree:
$and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578750, 3) || First: notFirst: full path: ts
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578750, 3)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Rated tree:
t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578750, 3)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Rated tree:
$or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578750, 3) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
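
The planner trace above is rooted-$or subplanning: each branch of the $or is planned independently, the only available index is _id_, neither branch's predicates (t, ts) can use it, so every branch falls back to a collection scan; that is harmless here, since local.replset.minvalid is a tiny internal bookkeeping collection. The same decision can be observed from a client with explain(); the sketch below reuses the minvalid filter shape but runs it against a hypothetical user collection (test.minvalid_demo), since the internal collection should not be queried directly:

    from pymongo import MongoClient
    from pymongo.read_preferences import ReadPreference
    from bson.timestamp import Timestamp

    client = MongoClient("cmodb803.togewa.com", 27019)
    coll = client.get_database(
        "test", read_preference=ReadPreference.SECONDARY_PREFERRED
    )["minvalid_demo"]                      # hypothetical stand-in collection
    cursor = coll.find({"$or": [
        {"t": {"$lt": 1}},
        {"t": 1, "ts": {"$lt": Timestamp(1567578750, 3)}},
    ]})
    plan = cursor.explain()
    # With only the _id index, expect collection scans (wrapped in a
    # SUBPLAN stage for a rooted $or).
    print(plan["queryPlanner"]["winningPlan"])
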
2019-09-04T06:32:30.320+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578750, 3)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:30.320+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14773
2019-09-04T06:32:30.321+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:30.321+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:30.321+0000 D3 STORAGE [repl-writer-worker-11] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:30.321+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:30.321+0000 D3 REPL [repl-writer-worker-11] applying op: { ts: Timestamp(1567578750, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578750292), o: { $v: 1, $set: { ping: new Date(1567578750292) } } }, oplog application mode: Secondary
2019-09-04T06:32:30.321+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578750, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578750292), o: { $v: 1, $set: { ping: new Date(1567578750291) } } }, oplog application mode: Secondary
2019-09-04T06:32:30.321+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578750, 3)
2019-09-04T06:32:30.321+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578750, 2)
2019-09-04T06:32:30.321+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 14775
2019-09-04T06:32:30.321+0000 D2 QUERY [repl-writer-worker-11] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }
2019-09-04T06:32:30.321+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 14776
2019-09-04T06:32:30.321+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }
2019-09-04T06:32:30.321+0000 D4 WRITE [repl-writer-worker-11] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:30.321+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 14775
2019-09-04T06:32:30.321+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:30.321+0000 D3 STORAGE [repl-writer-worker-11] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:30.321+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:30.321+0000 D3 REPL [repl-writer-worker-11] applying op: { ts: Timestamp(1567578750, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578750292), o: { $v: 1, $set: { ping: new Date(1567578750292) } } }, oplog application mode: Secondary
2019-09-04T06:32:30.321+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 14776
2019-09-04T06:32:30.321+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578750, 1)
2019-09-04T06:32:30.321+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 14778
2019-09-04T06:32:30.321+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:30.321+0000 D2 QUERY [repl-writer-worker-11] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }
2019-09-04T06:32:30.321+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:30.321+0000 D4 WRITE [repl-writer-worker-11] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:30.321+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 14778
2019-09-04T06:32:30.321+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:30.321+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578750, 3), t: 1 }({ ts: Timestamp(1567578750, 3), t: 1 })
2019-09-04T06:32:30.321+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578750, 3)
2019-09-04T06:32:30.321+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14774
2019-09-04T06:32:30.321+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:30.321+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:30.321+0000 D5 QUERY [rsSync-0] Rated tree:
$and
2019-09-04T06:32:30.321+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:30.321+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:30.321+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:30.321+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14774 2019-09-04T06:32:30.321+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578750, 3) 2019-09-04T06:32:30.321+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14783 2019-09-04T06:32:30.321+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14783 2019-09-04T06:32:30.321+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578750, 3), t: 1 }({ ts: Timestamp(1567578750, 3), t: 1 }) 2019-09-04T06:32:30.321+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:30.321+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), appliedOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, appliedWallTime: new Date(1567578745711), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), appliedOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, appliedWallTime: new Date(1567578750292), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), appliedOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, appliedWallTime: new Date(1567578745711), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:30.321+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1010 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:00.321+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), appliedOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, appliedWallTime: new Date(1567578745711), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), appliedOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, appliedWallTime: new Date(1567578750292), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), appliedOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, appliedWallTime: new Date(1567578745711), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578745, 2), t: 1 }, lastCommittedWall: new Date(1567578745711), lastOpVisible: { ts: Timestamp(1567578745, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:30.321+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:00.321+0000 2019-09-04T06:32:30.321+0000 D2 ASIO [RS] Request 1010 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } 2019-09-04T06:32:30.321+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:30.322+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:30.322+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:00.321+0000 2019-09-04T06:32:30.322+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578750, 3), t: 1 } 2019-09-04T06:32:30.322+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1011 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:40.322+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578745, 2), t: 1 } } 2019-09-04T06:32:30.322+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:00.321+0000 2019-09-04T06:32:30.322+0000 D2 ASIO [RS] Request 1011 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpApplied: { ts: Timestamp(1567578750, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } 2019-09-04T06:32:30.322+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new 
Date(1567578750292), lastOpApplied: { ts: Timestamp(1567578750, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:30.322+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:30.322+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:30.322+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.322+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.322+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578745, 3) 2019-09-04T06:32:30.322+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:40.792+0000 2019-09-04T06:32:30.322+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:41.723+0000 2019-09-04T06:32:30.322+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1012 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:40.322+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578750, 3), t: 1 } } 2019-09-04T06:32:30.322+0000 D3 REPL [conn320] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn320] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.753+0000 2019-09-04T06:32:30.322+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:00.321+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn336] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn336] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.130+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn319] 
Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn346] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn346] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:57.574+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn342] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn342] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:52.054+0000 2019-09-04T06:32:30.322+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:30.322+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.322+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:58.839+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.322+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn339] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn339] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.661+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn325] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn325] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.644+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn326] Got notified of new snapshot: { ts: Timestamp(1567578750, 
3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn326] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:49.840+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn345] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn345] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:56.312+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn340] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn340] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.662+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn347] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn347] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:58.760+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn348] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn348] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:59.750+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn349] Got notified of new snapshot: { ts: Timestamp(1567578750, 3), t: 1 }, 2019-09-04T06:32:30.292+0000 2019-09-04T06:32:30.323+0000 D3 REPL [conn349] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.261+0000 2019-09-04T06:32:30.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.332+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:30.332+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:30.332+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), appliedOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, appliedWallTime: new Date(1567578745711), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), appliedOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, appliedWallTime: new Date(1567578750292), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), appliedOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, appliedWallTime: new Date(1567578745711), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:30.332+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1013 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:00.332+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), appliedOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, appliedWallTime: new Date(1567578745711), 
memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), appliedOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, appliedWallTime: new Date(1567578750292), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, durableWallTime: new Date(1567578745711), appliedOpTime: { ts: Timestamp(1567578745, 2), t: 1 }, appliedWallTime: new Date(1567578745711), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:30.332+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:00.321+0000 2019-09-04T06:32:30.333+0000 D2 ASIO [RS] Request 1013 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } 2019-09-04T06:32:30.333+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:30.333+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:30.333+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:00.321+0000 2019-09-04T06:32:30.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.418+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:30.420+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578750, 3) 2019-09-04T06:32:30.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" 
} numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.518+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:30.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.618+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:30.643+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.718+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:30.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.753+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50190 #350 (90 connections now open) 2019-09-04T06:32:30.753+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:30.753+0000 D2 COMMAND [conn350] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:30.753+0000 I NETWORK [conn350] received client metadata from 10.108.2.50:50190 conn350: { driver: { name: "NetworkInterfaceTL", 
version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:30.753+0000 I COMMAND [conn350] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:30.753+0000 D2 COMMAND [conn350] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578749, 1), signature: { hash: BinData(0, 0BA381EC49423BB6D573BE72099CAF4D3E399D41), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:30.753+0000 D1 REPL [conn350] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578750, 3), t: 1 } 2019-09-04T06:32:30.753+0000 D3 REPL [conn350] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.763+0000 2019-09-04T06:32:30.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.819+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:30.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:30.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1014) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:30.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1014 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:40.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:30.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:58.839+0000 2019-09-04T06:32:30.838+0000 D2 ASIO [Replication] Request 1014 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), opTime: { ts: Timestamp(1567578750, 3), t: 1 }, wallTime: new 
Date(1567578750292), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } 2019-09-04T06:32:30.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), opTime: { ts: Timestamp(1567578750, 3), t: 1 }, wallTime: new Date(1567578750292), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:30.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:30.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1014) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), opTime: { ts: Timestamp(1567578750, 3), t: 1 }, wallTime: new Date(1567578750292), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } 2019-09-04T06:32:30.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:30.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:32.838Z 2019-09-04T06:32:30.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:32:58.839+0000 2019-09-04T06:32:30.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:30.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1015) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:30.839+0000 D3 
EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1015 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:40.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:30.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:32:58.839+0000 2019-09-04T06:32:30.839+0000 D2 ASIO [Replication] Request 1015 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), opTime: { ts: Timestamp(1567578750, 3), t: 1 }, wallTime: new Date(1567578750292), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } 2019-09-04T06:32:30.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), opTime: { ts: Timestamp(1567578750, 3), t: 1 }, wallTime: new Date(1567578750292), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:30.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:30.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1015) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), opTime: { ts: Timestamp(1567578750, 3), t: 1 }, wallTime: new Date(1567578750292), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, 
operationTime: Timestamp(1567578750, 3) } 2019-09-04T06:32:30.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:30.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:32:41.723+0000 2019-09-04T06:32:30.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:32:41.042+0000 2019-09-04T06:32:30.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:30.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:32.839Z 2019-09-04T06:32:30.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:30.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:00.839+0000 2019-09-04T06:32:30.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:00.839+0000 2019-09-04T06:32:30.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.919+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:30.952+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52218 #351 (91 connections now open) 2019-09-04T06:32:30.952+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:30.952+0000 D2 COMMAND [conn351] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:30.952+0000 I NETWORK [conn351] received client metadata from 10.108.2.58:52218 conn351: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:30.952+0000 I COMMAND [conn351] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:30.952+0000 D2 COMMAND [conn351] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578743, 1), signature: { hash: BinData(0, 7EB74EF80679BE75277EB9A98AFF21A12F4B2DD4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:30.953+0000 D1 REPL [conn351] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- 
current snapshot: { ts: Timestamp(1567578750, 3), t: 1 } 2019-09-04T06:32:30.953+0000 D3 REPL [conn351] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.962+0000 2019-09-04T06:32:30.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:30.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:30.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:31.019+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:31.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 115A8A3C8ED90E3C1300B4DF6BF921B6B5BA5F34), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:31.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:31.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 115A8A3C8ED90E3C1300B4DF6BF921B6B5BA5F34), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:31.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 115A8A3C8ED90E3C1300B4DF6BF921B6B5BA5F34), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:31.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), opTime: { ts: Timestamp(1567578750, 3), t: 1 }, wallTime: new Date(1567578750292) } 2019-09-04T06:32:31.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 115A8A3C8ED90E3C1300B4DF6BF921B6B5BA5F34), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
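
The traffic above is the steady-state monitoring pattern on a config server: every pooled connection re-runs isMaster on its heartbeat interval (hence the identical 907-byte replies from conn14 through conn52), and each freshly accepted internal client (driver NetworkInterfaceTL) follows the handshake with a majority read of config.shards gated on afterOpTime, which is why conn350 and conn351 park in waitUntilOpTime until that optime appears in a majority-committed snapshot. The sketch below reproduces the externally visible half of this exchange with pymongo; the host and port are taken from this log, and afterOpTime/$replData/$configServerState are intra-cluster fields that ordinary drivers cannot set, so the read is shown with plain majority read concern instead.

    # Minimal sketch (pymongo), assuming the config server from this log is reachable.
    # isMaster is what each monitoring connection above keeps re-running; the
    # config.shards read uses readConcern "majority" like conn350/conn351, but
    # without the internal afterOpTime gate (reserved for intra-cluster clients).
    from pymongo import MongoClient, ReadPreference
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    print(client.admin.command("isMaster"))      # same handshake the log records

    config_db = client.get_database(
        "config",
        read_concern=ReadConcern("majority"),
        read_preference=ReadPreference.NEAREST,  # matches $readPreference: nearest
    )
    for shard in config_db["shards"].find():
        print(shard["_id"], shard["host"])
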
2019-09-04T06:32:31.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.119+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:31.142+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.219+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:31.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:31.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:31.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.319+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:31.320+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:31.320+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:31.320+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578750, 3) 2019-09-04T06:32:31.320+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14822 2019-09-04T06:32:31.320+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:31.320+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:31.320+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14822 2019-09-04T06:32:31.321+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14825 2019-09-04T06:32:31.321+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14825 2019-09-04T06:32:31.321+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578750, 3), t: 1 }({ ts: Timestamp(1567578750, 3), t: 1 }) 2019-09-04T06:32:31.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.419+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:31.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.519+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:32:31.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.620+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:31.643+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.720+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:31.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.820+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:31.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.858+0000 D2 COMMAND [conn22] run command 
admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.920+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:31.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:31.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:31.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:32.020+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:32.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.120+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:32.142+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.220+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:32:32.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 115A8A3C8ED90E3C1300B4DF6BF921B6B5BA5F34), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:32.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:32.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 115A8A3C8ED90E3C1300B4DF6BF921B6B5BA5F34), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:32.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 115A8A3C8ED90E3C1300B4DF6BF921B6B5BA5F34), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:32.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), opTime: { ts: Timestamp(1567578750, 3), t: 1 }, wallTime: new Date(1567578750292) } 2019-09-04T06:32:32.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 115A8A3C8ED90E3C1300B4DF6BF921B6B5BA5F34), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:32.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:32.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.320+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:32.320+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:32.320+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578750, 3) 2019-09-04T06:32:32.320+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14863 2019-09-04T06:32:32.320+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:32.320+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:32.320+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14863 2019-09-04T06:32:32.320+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:32.321+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14866 2019-09-04T06:32:32.321+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14866 2019-09-04T06:32:32.321+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578750, 3), t: 1 }({ ts: Timestamp(1567578750, 3), t: 1 }) 2019-09-04T06:32:32.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.421+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:32.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.521+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:32:32.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.621+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:32.643+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.721+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:32.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.821+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:32.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.838+0000 D3 EXECUTOR [replexec-3] Executing a 
task on behalf of pool replexec 2019-09-04T06:32:32.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1016) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:32.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1016 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:42.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:32.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:00.839+0000 2019-09-04T06:32:32.838+0000 D2 ASIO [Replication] Request 1016 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), opTime: { ts: Timestamp(1567578750, 3), t: 1 }, wallTime: new Date(1567578750292), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } 2019-09-04T06:32:32.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), opTime: { ts: Timestamp(1567578750, 3), t: 1 }, wallTime: new Date(1567578750292), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:32.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:32.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1016) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), opTime: { ts: Timestamp(1567578750, 3), t: 1 }, wallTime: new Date(1567578750292), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } 2019-09-04T06:32:32.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:32.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:34.838Z 2019-09-04T06:32:32.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:00.839+0000 2019-09-04T06:32:32.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:32.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1017) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:32.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1017 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:42.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:32.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:00.839+0000 2019-09-04T06:32:32.839+0000 D2 ASIO [Replication] Request 1017 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), opTime: { ts: Timestamp(1567578750, 3), t: 1 }, wallTime: new Date(1567578750292), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } 2019-09-04T06:32:32.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), opTime: { ts: Timestamp(1567578750, 3), t: 1 }, wallTime: new Date(1567578750292), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:32.839+0000 D3 EXECUTOR 
[replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:32.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1017) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), opTime: { ts: Timestamp(1567578750, 3), t: 1 }, wallTime: new Date(1567578750292), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578750, 3) } 2019-09-04T06:32:32.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:32.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:32:41.042+0000 2019-09-04T06:32:32.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:32:43.606+0000 2019-09-04T06:32:32.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:32.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:34.839Z 2019-09-04T06:32:32.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:32.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:02.839+0000 2019-09-04T06:32:32.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:02.839+0000 2019-09-04T06:32:32.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.921+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:32.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:32.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:32.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:33.021+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:33.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.053+0000 I COMMAND [conn17] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 115A8A3C8ED90E3C1300B4DF6BF921B6B5BA5F34), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:33.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:33.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 115A8A3C8ED90E3C1300B4DF6BF921B6B5BA5F34), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:33.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 115A8A3C8ED90E3C1300B4DF6BF921B6B5BA5F34), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:33.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), opTime: { ts: Timestamp(1567578750, 3), t: 1 }, wallTime: new Date(1567578750292) } 2019-09-04T06:32:33.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578750, 3), signature: { hash: BinData(0, 115A8A3C8ED90E3C1300B4DF6BF921B6B5BA5F34), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.122+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:33.143+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.152+0000 I COMMAND [conn23] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.222+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:33.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:33.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:32:33.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.321+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:33.321+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:33.321+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578750, 3) 2019-09-04T06:32:33.321+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14903 2019-09-04T06:32:33.321+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:33.321+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:33.321+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14903 2019-09-04T06:32:33.321+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14906 2019-09-04T06:32:33.322+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14906 2019-09-04T06:32:33.322+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578750, 3), t: 1 }({ ts: Timestamp(1567578750, 3), t: 1 }) 2019-09-04T06:32:33.322+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:33.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.358+0000 D2 COMMAND [conn22] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.422+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:33.468+0000 D2 COMMAND [conn318] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578752, 1), signature: { hash: BinData(0, 5FD87181FFEFAB4F27FB7C85B30A6D21FE89ECFB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:33.468+0000 D1 REPL [conn318] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578750, 3), t: 1 } 2019-09-04T06:32:33.468+0000 D3 REPL [conn318] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:03.478+0000 2019-09-04T06:32:33.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.504+0000 D2 ASIO [RS] Request 1012 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578753, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578753501), o: { $v: 1, $set: { ping: new Date(1567578753496) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpApplied: { ts: Timestamp(1567578753, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578753, 1) } 2019-09-04T06:32:33.504+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578753, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: 
new Date(1567578753501), o: { $v: 1, $set: { ping: new Date(1567578753496) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpApplied: { ts: Timestamp(1567578753, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578753, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:33.504+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:33.504+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578753, 1) and ending at ts: Timestamp(1567578753, 1) 2019-09-04T06:32:33.504+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:43.606+0000 2019-09-04T06:32:33.504+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:44.475+0000 2019-09-04T06:32:33.504+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:33.504+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:02.839+0000 2019-09-04T06:32:33.504+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578753, 1), t: 1 } 2019-09-04T06:32:33.504+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:33.504+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:33.504+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578750, 3) 2019-09-04T06:32:33.504+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14915 2019-09-04T06:32:33.504+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:33.504+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:33.504+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14915 2019-09-04T06:32:33.504+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:33.504+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:32:33.504+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:33.504+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578750, 3) 2019-09-04T06:32:33.504+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14918 
2019-09-04T06:32:33.504+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578753, 1) } 2019-09-04T06:32:33.504+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:33.504+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:33.504+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14918 2019-09-04T06:32:33.504+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14907 2019-09-04T06:32:33.505+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14907 2019-09-04T06:32:33.505+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14921 2019-09-04T06:32:33.505+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14921 2019-09-04T06:32:33.505+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:33.505+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 14923 2019-09-04T06:32:33.505+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578753, 1) 2019-09-04T06:32:33.505+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578753, 1) 2019-09-04T06:32:33.505+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 14923 2019-09-04T06:32:33.505+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:33.505+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:32:33.505+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14922 2019-09-04T06:32:33.505+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14922 2019-09-04T06:32:33.505+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14925 2019-09-04T06:32:33.505+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14925 2019-09-04T06:32:33.505+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578753, 1), t: 1 }({ ts: Timestamp(1567578753, 1), t: 1 }) 2019-09-04T06:32:33.505+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578753, 1) 2019-09-04T06:32:33.505+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14926 2019-09-04T06:32:33.505+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578753, 1) } } ] } sort: {} projection: {} 2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Beginning planning... 
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578753, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Rated tree:
$and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578753, 1) || First: notFirst: full path: ts
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578753, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578753, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Rated tree:
$or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578753, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578753, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:33.505+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14926
2019-09-04T06:32:33.505+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:33.505+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:33.505+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578753, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578753501), o: { $v: 1, $set: { ping: new Date(1567578753496) } } }, oplog application mode: Secondary
2019-09-04T06:32:33.505+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578753, 1)
2019-09-04T06:32:33.505+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 14928
2019-09-04T06:32:33.505+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }
2019-09-04T06:32:33.505+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:33.505+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 14928
2019-09-04T06:32:33.505+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:33.505+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578753, 1), t: 1 }({ ts: Timestamp(1567578753, 1), t: 1 })
2019-09-04T06:32:33.505+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578753, 1)
2019-09-04T06:32:33.505+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14927
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:33.505+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:33.505+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:33.505+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14927 2019-09-04T06:32:33.505+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578753, 1) 2019-09-04T06:32:33.506+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14931 2019-09-04T06:32:33.506+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14931 2019-09-04T06:32:33.506+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578753, 1), t: 1 }({ ts: Timestamp(1567578753, 1), t: 1 }) 2019-09-04T06:32:33.506+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:33.506+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), appliedOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, appliedWallTime: new Date(1567578750292), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), appliedOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, appliedWallTime: new Date(1567578753501), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), appliedOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, appliedWallTime: new Date(1567578750292), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:33.506+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1018 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:03.506+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), appliedOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, appliedWallTime: new Date(1567578750292), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), appliedOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, appliedWallTime: new Date(1567578753501), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), appliedOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, appliedWallTime: new Date(1567578750292), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:33.506+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:03.506+0000 2019-09-04T06:32:33.506+0000 D2 ASIO [RS] Request 1018 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578753, 1) } 2019-09-04T06:32:33.506+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578753, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:33.506+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:33.506+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:03.506+0000 2019-09-04T06:32:33.506+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578753, 1), t: 1 } 2019-09-04T06:32:33.506+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1019 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:43.506+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578750, 3), t: 1 } } 2019-09-04T06:32:33.506+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:03.506+0000 2019-09-04T06:32:33.514+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:33.514+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:33.514+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), appliedOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, appliedWallTime: new Date(1567578750292), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), appliedOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, appliedWallTime: new Date(1567578753501), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), appliedOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, appliedWallTime: new Date(1567578750292), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:33.514+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1020 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:03.514+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { 
ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), appliedOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, appliedWallTime: new Date(1567578750292), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), appliedOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, appliedWallTime: new Date(1567578753501), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, durableWallTime: new Date(1567578750292), appliedOpTime: { ts: Timestamp(1567578750, 3), t: 1 }, appliedWallTime: new Date(1567578750292), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:33.514+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:03.506+0000 2019-09-04T06:32:33.514+0000 D2 ASIO [RS] Request 1020 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578753, 1) } 2019-09-04T06:32:33.514+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578750, 3), t: 1 }, lastCommittedWall: new Date(1567578750292), lastOpVisible: { ts: Timestamp(1567578750, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578750, 3), $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578753, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:33.514+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:33.514+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:03.506+0000 2019-09-04T06:32:33.514+0000 D2 ASIO [RS] Request 1019 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpApplied: { ts: Timestamp(1567578753, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578753, 1), $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578753, 1) } 2019-09-04T06:32:33.514+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpApplied: { ts: Timestamp(1567578753, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578753, 1), $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578753, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:33.514+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:33.514+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:33.514+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.514+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.514+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578748, 1) 2019-09-04T06:32:33.514+0000 D3 REPL [conn318] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.514+0000 D3 REPL [conn318] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:03.478+0000 2019-09-04T06:32:33.514+0000 D3 REPL [conn346] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.514+0000 D3 REPL [conn346] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:57.574+0000 2019-09-04T06:32:33.514+0000 D3 REPL [conn351] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.514+0000 D3 REPL [conn351] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.962+0000 2019-09-04T06:32:33.514+0000 D3 REPL [conn345] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.514+0000 D3 REPL [conn345] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:56.312+0000 2019-09-04T06:32:33.514+0000 D3 REPL [conn349] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn349] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.261+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000 
2019-09-04T06:32:33.515+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn326] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn326] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:49.840+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn340] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn340] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.662+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn348] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn348] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:59.750+0000 2019-09-04T06:32:33.515+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:44.475+0000 2019-09-04T06:32:33.515+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:43.613+0000 2019-09-04T06:32:33.515+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:33.515+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:02.839+0000 2019-09-04T06:32:33.515+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1021 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:43.515+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578753, 1), t: 1 } } 2019-09-04T06:32:33.515+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000 2019-09-04T06:32:33.515+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:03.506+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn336] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn336] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.130+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn320] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 
2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn320] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.753+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn350] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn350] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.763+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn339] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn339] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.661+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn325] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn325] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.644+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn342] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn342] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:52.054+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn347] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn347] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:58.760+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578753, 1), t: 1 }, 2019-09-04T06:32:33.501+0000 2019-09-04T06:32:33.515+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000 2019-09-04T06:32:33.516+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:32:33.516+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:32:33.516+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: 
"admin" } 2019-09-04T06:32:33.516+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:32:33.522+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:33.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.604+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578753, 1) 2019-09-04T06:32:33.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.622+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:33.643+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.722+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:33.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:32:33.822+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:33.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.825+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.922+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:33.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:33.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:33.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:34.023+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:34.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.123+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:34.142+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { 
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.223+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:34.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 26B2AE4F7D5C316BED4EC235D69E56DA09B6B619), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:34.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:34.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 26B2AE4F7D5C316BED4EC235D69E56DA09B6B619), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:34.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 26B2AE4F7D5C316BED4EC235D69E56DA09B6B619), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:34.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), opTime: { ts: Timestamp(1567578753, 1), t: 1 }, wallTime: new Date(1567578753501) } 2019-09-04T06:32:34.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 26B2AE4F7D5C316BED4EC235D69E56DA09B6B619), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:34.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:34.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.323+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:34.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.423+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:34.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.505+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:34.505+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:34.505+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578753, 1) 2019-09-04T06:32:34.505+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 14972 2019-09-04T06:32:34.505+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:34.505+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:34.505+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 14972 2019-09-04T06:32:34.506+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14975 2019-09-04T06:32:34.506+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 14975 2019-09-04T06:32:34.506+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578753, 1), t: 1 }({ ts: Timestamp(1567578753, 1), t: 1 }) 2019-09-04T06:32:34.523+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:32:34.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.623+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:34.642+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.724+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:34.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.824+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:34.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:34.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:34.838+0000 D3 EXECUTOR [replexec-0] Executing a 
task on behalf of pool replexec
2019-09-04T06:32:34.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1022) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:34.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1022 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:44.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:34.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:02.839+0000
2019-09-04T06:32:34.838+0000 D2 ASIO [Replication] Request 1022 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), opTime: { ts: Timestamp(1567578753, 1), t: 1 }, wallTime: new Date(1567578753501), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578753, 1), $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578753, 1) }
2019-09-04T06:32:34.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), opTime: { ts: Timestamp(1567578753, 1), t: 1 }, wallTime: new Date(1567578753501), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578753, 1), $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578753, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:34.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:34.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1022) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), opTime: { ts: Timestamp(1567578753, 1), t: 1 }, wallTime: new Date(1567578753501), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578753, 1), $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578753, 1) }
2019-09-04T06:32:34.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:32:34.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:36.838Z
2019-09-04T06:32:34.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:02.839+0000
2019-09-04T06:32:34.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:34.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1023) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:34.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1023 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:44.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:34.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:02.839+0000
2019-09-04T06:32:34.839+0000 D2 ASIO [Replication] Request 1023 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), opTime: { ts: Timestamp(1567578753, 1), t: 1 }, wallTime: new Date(1567578753501), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578753, 1), $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578753, 1) }
2019-09-04T06:32:34.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), opTime: { ts: Timestamp(1567578753, 1), t: 1 }, wallTime: new Date(1567578753501), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578753, 1), $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578753, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:32:34.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:34.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1023) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), opTime: { ts: Timestamp(1567578753, 1), t: 1 }, wallTime: new Date(1567578753501), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578753, 1), $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578753, 1) }
2019-09-04T06:32:34.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:32:34.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:32:43.613+0000
2019-09-04T06:32:34.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:32:46.143+0000
2019-09-04T06:32:34.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:32:34.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:36.839Z
2019-09-04T06:32:34.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:34.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:04.839+0000
2019-09-04T06:32:34.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:04.839+0000
2019-09-04T06:32:34.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:34.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:34.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:34.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:34.924+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:34.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:34.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:34.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:34.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:35.024+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:35.053+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
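The exchange above is one round of the replica set heartbeat protocol: requestId 1022 goes to the secondary cmodb804 (state: 2), requestId 1023 to the primary cmodb802 (state: 1), and each good response reschedules the next heartbeat two seconds out and pushes the election timeout back. A minimal sketch for watching the same member states from outside, assuming pymongo and a direct connection (the hostnames/port are the ones this log uses; nothing in the log itself runs this):

    from pymongo import MongoClient

    # Connect directly to one config server member; directConnection skips
    # topology discovery so we see this node's own view of the set.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")

    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # stateStr is PRIMARY (state 1) or SECONDARY (state 2), matching the
        # "state: 1" / "state: 2" fields in the heartbeat responses above.
        print(member["name"], member["stateStr"], member["optimeDate"])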
2019-09-04T06:32:35.053+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.063+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:35.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:32:34.839+0000
2019-09-04T06:32:35.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:32:34.838+0000
2019-09-04T06:32:35.063+0000 D3 REPL [replexec-3] stalest member MemberId(2) date: 2019-09-04T06:32:34.838+0000
2019-09-04T06:32:35.063+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:32:44.838+0000
2019-09-04T06:32:35.063+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:04.839+0000
2019-09-04T06:32:35.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 26B2AE4F7D5C316BED4EC235D69E56DA09B6B619), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:35.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:32:35.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 26B2AE4F7D5C316BED4EC235D69E56DA09B6B619), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:35.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 26B2AE4F7D5C316BED4EC235D69E56DA09B6B619), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:35.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), opTime: { ts: Timestamp(1567578753, 1), t: 1 }, wallTime: new Date(1567578753501) }
2019-09-04T06:32:35.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578753, 1), signature: { hash: BinData(0, 26B2AE4F7D5C316BED4EC235D69E56DA09B6B619), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.124+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:35.143+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.143+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.157+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.157+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.224+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:35.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:35.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
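The steady stream of isMaster round-trips above (one per monitoring connection, repeating roughly every half second, each with the identical reslen:907 reply) is driver and mongos topology monitoring rather than application load. A hedged sketch of the same probe, assuming pymongo (the hostname is the one in this log):

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    # The same command the [connNN] lines record; on 4.2 monitors still send
    # isMaster (the "hello" alias arrived in later server versions).
    reply = client.admin.command("isMaster")
    print(reply["ismaster"], reply.get("secondary"), reply.get("setName"))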
2019-09-04T06:32:35.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.312+0000 D2 ASIO [RS] Request 1021 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578755, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578755309), o: { $v: 1, $set: { ping: new Date(1567578755306), up: 2655 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpApplied: { ts: Timestamp(1567578755, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578753, 1), $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578755, 1) }
2019-09-04T06:32:35.312+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578755, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578755309), o: { $v: 1, $set: { ping: new Date(1567578755306), up: 2655 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpApplied: { ts: Timestamp(1567578755, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578753, 1), $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578755, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:35.312+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:35.312+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578755, 1) and ending at ts: Timestamp(1567578755, 1)
2019-09-04T06:32:35.312+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:46.143+0000
2019-09-04T06:32:35.312+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:46.529+0000
2019-09-04T06:32:35.312+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:35.312+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578755, 1), t: 1 }
2019-09-04T06:32:35.312+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:35.312+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:35.312+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578753, 1)
2019-09-04T06:32:35.312+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15008
2019-09-04T06:32:35.312+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:35.312+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:35.312+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15008
2019-09-04T06:32:35.312+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:32:35.312+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:35.313+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:35.313+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578755, 1) }
2019-09-04T06:32:35.313+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578753, 1)
2019-09-04T06:32:35.313+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15011
2019-09-04T06:32:35.313+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:35.313+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:35.313+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15011
2019-09-04T06:32:35.313+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 14976
2019-09-04T06:32:35.312+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:04.839+0000
2019-09-04T06:32:35.313+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 14976
2019-09-04T06:32:35.313+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15014
2019-09-04T06:32:35.313+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15014
2019-09-04T06:32:35.313+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:35.313+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 15016
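Request 1021 is the oplog fetcher's getMore against the sync source's local.oplog.rs: it returns a single update (op: "u") to the mongos ping document in config.mongos, the ReplBatcher hands it to rsSync, and a repl-writer worker writes the oplog record at Timestamp(1567578755, 1). For ad-hoc debugging you can read the same collection directly; a sketch assuming pymongo (the internal fetcher uses a tailable cursor and getMore, this is just an equivalent one-off query):

    from pymongo import MongoClient, DESCENDING

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    oplog = client.local["oplog.rs"]

    # Newest entry first; for the batch above this would show op "u" on
    # ns "config.mongos" at ts Timestamp(1567578755, 1).
    last = oplog.find_one(sort=[("$natural", DESCENDING)])
    print(last["ts"], last["op"], last["ns"])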
2019-09-04T06:32:35.313+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578755, 1)
2019-09-04T06:32:35.313+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578755, 1)
2019-09-04T06:32:35.313+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 15016
2019-09-04T06:32:35.313+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:35.313+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:35.313+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15015
2019-09-04T06:32:35.313+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15015
2019-09-04T06:32:35.313+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15018
2019-09-04T06:32:35.313+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15018
2019-09-04T06:32:35.313+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578755, 1), t: 1 }({ ts: Timestamp(1567578755, 1), t: 1 })
2019-09-04T06:32:35.313+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578755, 1)
2019-09-04T06:32:35.313+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15019
2019-09-04T06:32:35.313+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578755, 1) } } ] } sort: {} projection: {}
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query: ns=local.replset.minvalid
Tree: $and t $eq 1 ts $lt Timestamp(1567578755, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578755, 1) || First: notFirst: full path: ts
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and t $eq 1 ts $lt Timestamp(1567578755, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query: ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query: ns=local.replset.minvalid
Tree: $or $and t $eq 1 ts $lt Timestamp(1567578755, 1) t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578755, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:35.313+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or $and t $eq 1 ts $lt Timestamp(1567578755, 1) t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:35.313+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15019
2019-09-04T06:32:35.313+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:35.313+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:35.313+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578755, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578755309), o: { $v: 1, $set: { ping: new Date(1567578755306), up: 2655 } } }, oplog application mode: Secondary
2019-09-04T06:32:35.313+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578755, 1)
2019-09-04T06:32:35.313+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 15021
2019-09-04T06:32:35.313+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:32:35.313+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:35.313+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 15021
2019-09-04T06:32:35.313+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:35.313+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578755, 1), t: 1 }({ ts: Timestamp(1567578755, 1), t: 1 })
2019-09-04T06:32:35.313+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578755, 1)
2019-09-04T06:32:35.313+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15020
2019-09-04T06:32:35.314+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query: ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:35.314+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:35.314+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:32:35.314+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:35.314+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:35.314+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
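The D5 QUERY lines above are the subplanner walking both $or branches of the minvalid bookkeeping query against local.replset.minvalid: only the _id index exists, neither branch can use it, so every child plan degenerates to a COLLSCAN. The same decision is visible from outside through explain(); a sketch assuming pymongo, with the filter taken from the log:

    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    coll = client.local["replset.minvalid"]

    plan = coll.find({"$or": [{"t": {"$lt": 1}},
                              {"t": 1, "ts": {"$lt": Timestamp(1567578755, 1)}}]}).explain()
    # Expect COLLSCAN: no index covers t/ts, matching the planner output above.
    print(plan["queryPlanner"]["winningPlan"]["stage"])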
2019-09-04T06:32:35.314+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15020
2019-09-04T06:32:35.314+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578755, 1)
2019-09-04T06:32:35.314+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:35.314+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15024
2019-09-04T06:32:35.314+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), appliedOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, appliedWallTime: new Date(1567578753501), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), appliedOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, appliedWallTime: new Date(1567578753501), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:35.314+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15024
2019-09-04T06:32:35.314+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1024 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:05.314+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), appliedOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, appliedWallTime: new Date(1567578753501), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), appliedOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, appliedWallTime: new Date(1567578753501), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:35.314+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578755, 1), t: 1 }({ ts: Timestamp(1567578755, 1), t: 1 })
2019-09-04T06:32:35.314+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:05.314+0000
2019-09-04T06:32:35.314+0000 D2 ASIO [RS] Request 1024 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578753, 1), $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578755, 1) }
2019-09-04T06:32:35.314+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578753, 1), $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578755, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:35.314+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:35.314+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:05.314+0000
2019-09-04T06:32:35.314+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578755, 1), t: 1 }
2019-09-04T06:32:35.314+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1025 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:45.314+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578753, 1), t: 1 } }
2019-09-04T06:32:35.314+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:05.314+0000
2019-09-04T06:32:35.323+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:32:35.323+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:35.323+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), appliedOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, appliedWallTime: new Date(1567578753501), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), appliedOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, appliedWallTime: new Date(1567578753501), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:35.324+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1026 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:05.324+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), appliedOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, appliedWallTime: new Date(1567578753501), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, durableWallTime: new Date(1567578753501), appliedOpTime: { ts: Timestamp(1567578753, 1), t: 1 }, appliedWallTime: new Date(1567578753501), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:35.324+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:05.314+0000
2019-09-04T06:32:35.324+0000 D2 ASIO [RS] Request 1026 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578753, 1), $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578755, 1) }
2019-09-04T06:32:35.324+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578753, 1), t: 1 }, lastCommittedWall: new Date(1567578753501), lastOpVisible: { ts: Timestamp(1567578753, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578753, 1), $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578755, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:35.324+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:35.324+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:05.314+0000
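RemoteCommand 1024 and 1026 are replSetUpdatePosition upcalls to the sync source: the first advances this member's appliedOpTime to Timestamp(1567578755, 1), the second its durableOpTime once the journal flush lands. These per-member positions are exactly what w:"majority" acknowledgement waits on. A hedged client-side sketch, assuming pymongo; the "testdb"/"probe" names are illustrative, not from the log:

    from pymongo import MongoClient
    from pymongo.write_concern import WriteConcern

    client = MongoClient("mongodb://cmodb802.togewa.com:27019/?replicaSet=configrs")
    # Illustrative namespace; insert_one returns only once position reports
    # like the ones above show a majority of members durable at the write.
    coll = client.testdb.get_collection(
        "probe", write_concern=WriteConcern(w="majority", wtimeout=5000))
    coll.insert_one({"ping": True})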
2019-09-04T06:32:35.324+0000 D2 ASIO [RS] Request 1025 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpVisible: { ts: Timestamp(1567578755, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpApplied: { ts: Timestamp(1567578755, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578755, 1), $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578755, 1) }
2019-09-04T06:32:35.324+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpVisible: { ts: Timestamp(1567578755, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpApplied: { ts: Timestamp(1567578755, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578755, 1), $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578755, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:35.324+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:35.324+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:35.324+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:32:35.324+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.324+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.324+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578755, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a8302d1a496712d724c'), operName: "", parentOperId: "5d6f5a8302d1a496712d724a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 1A578DEA7B142EFF21E3A86A7DEA8240F7DE4FEE), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578755, 1), t: 1 } }, $db: "config" }
2019-09-04T06:32:35.324+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578750, 1)
2019-09-04T06:32:35.325+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:46.529+0000
2019-09-04T06:32:35.325+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:45.517+0000
2019-09-04T06:32:35.325+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:35.325+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:04.839+0000
2019-09-04T06:32:35.325+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1027 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:45.325+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578755, 1), t: 1 } }
2019-09-04T06:32:35.325+0000 D3 REPL [conn318] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn318] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:03.478+0000
2019-09-04T06:32:35.325+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:05.314+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn346] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn346] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:57.574+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn345] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn345] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:56.312+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000
2019-09-04T06:32:35.325+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f5a8302d1a496712d724a|5d6f5a8302d1a496712d724c
2019-09-04T06:32:35.325+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn326] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn326] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:49.840+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn351] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn351] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.962+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn348] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn348] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:59.750+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn349] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn349] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.261+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn340] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn340] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.662+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn336] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn336] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.130+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn325] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn325] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.644+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn350] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn350] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.763+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn339] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn339] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.661+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn347] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn347] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:58.760+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn320] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn320] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.753+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn342] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn342] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:52.054+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578755, 1), t: 1 }, 2019-09-04T06:32:35.309+0000
2019-09-04T06:32:35.325+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000
2019-09-04T06:32:35.325+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578755, 1), t: 1 } } }
2019-09-04T06:32:35.325+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:32:35.325+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578755, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a8302d1a496712d724c'), operName: "", parentOperId: "5d6f5a8302d1a496712d724a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 1A578DEA7B142EFF21E3A86A7DEA8240F7DE4FEE), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578755, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578755, 1)
2019-09-04T06:32:35.325+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
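conn21's find on config.settings carries readConcern level "majority" with an afterOpTime, so the server parks it until the committed snapshot reaches Timestamp(1567578755, 1); the burst of "Got notified of new snapshot" lines is every other waiting reader being woken by the same advance. The client-side equivalent of that read, assuming pymongo (hostname and namespace from the log):

    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    settings = client.config.get_collection(
        "settings", read_concern=ReadConcern("majority"))
    # Blocks until a committed snapshot is available, then reads from it;
    # here it returns None because config.settings does not exist yet.
    print(settings.find_one({"_id": "balancer"}))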
2019-09-04T06:32:35.325+0000 D2 QUERY [conn21] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1
2019-09-04T06:32:35.326+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578755, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a8302d1a496712d724c'), operName: "", parentOperId: "5d6f5a8302d1a496712d724a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 1A578DEA7B142EFF21E3A86A7DEA8240F7DE4FEE), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578755, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:32:35.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.412+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578755, 1)
2019-09-04T06:32:35.424+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:35.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.525+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:35.553+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.553+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.625+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:35.642+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.643+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.657+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.657+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.725+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:35.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.825+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:35.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.925+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:35.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:35.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:35.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:36.025+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:36.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.125+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:36.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.225+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:36.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 1A578DEA7B142EFF21E3A86A7DEA8240F7DE4FEE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:36.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:32:36.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 1A578DEA7B142EFF21E3A86A7DEA8240F7DE4FEE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:36.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 1A578DEA7B142EFF21E3A86A7DEA8240F7DE4FEE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:36.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), opTime: { ts: Timestamp(1567578755, 1), t: 1 }, wallTime: new Date(1567578755309) }
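The FlowControlRefresher ticks once a second; with no majority-commit lag it leaves the ticket pool at its ceiling, which is why every refresh logs "Before: 1000000000 Now: 1000000000". Flow control state can also be read from serverStatus; a sketch assuming pymongo, with field names as documented for 4.2 (worth verifying on your build):

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    # serverStatus gained a flowControl section in 4.2; on an idle set
    # "isLagged" stays false, matching the refresh lines above.
    flow = client.admin.command("serverStatus")["flowControl"]
    print(flow["enabled"], flow.get("isLagged"))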
2019-09-04T06:32:36.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 1A578DEA7B142EFF21E3A86A7DEA8240F7DE4FEE), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:36.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:32:36.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.313+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:36.313+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:36.313+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578755, 1)
2019-09-04T06:32:36.313+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15061
2019-09-04T06:32:36.313+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:36.313+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:36.313+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15061
2019-09-04T06:32:36.314+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15064
2019-09-04T06:32:36.314+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15064
2019-09-04T06:32:36.314+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578755, 1), t: 1 }({ ts: Timestamp(1567578755, 1), t: 1 })
2019-09-04T06:32:36.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.325+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:36.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.425+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:36.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.526+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:36.553+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.553+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.626+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:36.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.726+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:36.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.826+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:36.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:36.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1028) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:36.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1028 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:46.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:36.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:04.839+0000
2019-09-04T06:32:36.838+0000 D2 ASIO [Replication] Request 1028 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), opTime: { ts: Timestamp(1567578755, 1), t: 1 }, wallTime: new Date(1567578755309), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpVisible: { ts: Timestamp(1567578755, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578755, 1), $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578755, 1) }
2019-09-04T06:32:36.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), opTime: { ts: Timestamp(1567578755, 1), t: 1 }, wallTime: new Date(1567578755309), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpVisible: { ts: Timestamp(1567578755, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578755, 1), $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578755, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:36.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:36.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1028) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), opTime: { ts: Timestamp(1567578755, 1), t: 1 }, wallTime: new Date(1567578755309), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpVisible: { ts: Timestamp(1567578755, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578755, 1), $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578755, 1) }
2019-09-04T06:32:36.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:32:36.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:38.838Z
2019-09-04T06:32:36.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:04.839+0000
2019-09-04T06:32:36.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:36.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1029) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:36.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1029 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:46.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:36.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:04.839+0000
2019-09-04T06:32:36.839+0000 D2 ASIO [Replication] Request 1029 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), opTime: { ts: Timestamp(1567578755, 1), t: 1 }, wallTime: new Date(1567578755309), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpVisible: { ts: Timestamp(1567578755, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578755, 1), $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578755, 1) }
2019-09-04T06:32:36.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), opTime: { ts: Timestamp(1567578755, 1), t: 1 }, wallTime: new Date(1567578755309), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpVisible: { ts: Timestamp(1567578755, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578755, 1), $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578755, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:32:36.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:36.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1029) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), opTime: { ts: Timestamp(1567578755, 1), t: 1 }, wallTime: new Date(1567578755309), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpVisible: { ts: Timestamp(1567578755, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578755, 1), $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578755, 1) }
2019-09-04T06:32:36.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:32:36.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:32:45.517+0000
2019-09-04T06:32:36.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:32:46.954+0000
2019-09-04T06:32:36.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:32:36.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:38.839Z
2019-09-04T06:32:36.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:36.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:06.839+0000
2019-09-04T06:32:36.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:06.839+0000
2019-09-04T06:32:36.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.926+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:36.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:36.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:36.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:37.026+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:37.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 1A578DEA7B142EFF21E3A86A7DEA8240F7DE4FEE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:37.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:32:37.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 1A578DEA7B142EFF21E3A86A7DEA8240F7DE4FEE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:37.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 1A578DEA7B142EFF21E3A86A7DEA8240F7DE4FEE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:37.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), opTime: { ts: Timestamp(1567578755, 1), t: 1 }, wallTime: new Date(1567578755309) }
2019-09-04T06:32:37.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 1A578DEA7B142EFF21E3A86A7DEA8240F7DE4FEE), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.126+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:37.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.226+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:37.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:37.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:32:37.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.313+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:37.313+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:37.313+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578755, 1)
2019-09-04T06:32:37.313+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15097
2019-09-04T06:32:37.313+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:37.313+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:37.313+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15097
2019-09-04T06:32:37.314+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15100
2019-09-04T06:32:37.314+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15100
2019-09-04T06:32:37.314+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578755, 1), t: 1 }({ ts: Timestamp(1567578755, 1), t: 1 })
2019-09-04T06:32:37.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.325+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.326+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:37.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.427+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:37.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.527+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:37.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.627+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:37.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.727+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:37.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.827+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:37.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.927+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:37.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:37.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:37.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:38.027+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:38.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.127+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:38.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.228+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:38.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 1A578DEA7B142EFF21E3A86A7DEA8240F7DE4FEE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:38.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:32:38.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 1A578DEA7B142EFF21E3A86A7DEA8240F7DE4FEE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:38.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 1A578DEA7B142EFF21E3A86A7DEA8240F7DE4FEE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:38.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), opTime: { ts: Timestamp(1567578755, 1), t: 1 }, wallTime: new Date(1567578755309) }
2019-09-04T06:32:38.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578755, 1), signature: { hash: BinData(0, 1A578DEA7B142EFF21E3A86A7DEA8240F7DE4FEE), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:38.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:32:38.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.313+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:38.313+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:38.313+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578755, 1)
2019-09-04T06:32:38.313+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15133
2019-09-04T06:32:38.313+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:38.313+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:38.313+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15133
2019-09-04T06:32:38.314+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15136
2019-09-04T06:32:38.314+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15136
2019-09-04T06:32:38.314+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578755, 1), t: 1 }({ ts: Timestamp(1567578755, 1), t: 1 })
2019-09-04T06:32:38.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.325+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.328+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:38.352+0000 D2 ASIO [RS] Request 1027 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578758, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578758350), o: { $v: 1, $set: { ping: new Date(1567578758349) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpVisible: { ts: Timestamp(1567578755, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpApplied: { ts: Timestamp(1567578758, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578755, 1), $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 1) }
2019-09-04T06:32:38.352+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578758, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578758350), o: { $v: 1, $set: { ping: new Date(1567578758349) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpVisible: { ts: Timestamp(1567578755, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpApplied: { ts: Timestamp(1567578758, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578755, 1), $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:38.352+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:38.352+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578758, 1) and ending at ts: Timestamp(1567578758, 1)
2019-09-04T06:32:38.352+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:46.954+0000
2019-09-04T06:32:38.352+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:48.387+0000
2019-09-04T06:32:38.352+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:38.352+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:06.839+0000
2019-09-04T06:32:38.352+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578758, 1), t: 1 }
2019-09-04T06:32:38.352+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:38.352+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:38.352+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578755, 1)
2019-09-04T06:32:38.352+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15140
2019-09-04T06:32:38.352+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:38.352+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:38.352+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15140
2019-09-04T06:32:38.352+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:38.352+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:32:38.352+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:38.352+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578755, 1)
2019-09-04T06:32:38.352+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15143
2019-09-04T06:32:38.352+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578758, 1) }
2019-09-04T06:32:38.352+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:38.352+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:38.352+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15143
2019-09-04T06:32:38.352+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15137
2019-09-04T06:32:38.353+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15137
2019-09-04T06:32:38.353+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15146
2019-09-04T06:32:38.353+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15146
2019-09-04T06:32:38.353+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:38.353+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 15148
2019-09-04T06:32:38.353+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578758, 1)
2019-09-04T06:32:38.353+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578758, 1)
2019-09-04T06:32:38.353+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 15148
2019-09-04T06:32:38.353+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:38.353+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:38.353+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15147
2019-09-04T06:32:38.353+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15147
2019-09-04T06:32:38.353+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15150
2019-09-04T06:32:38.353+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15150
2019-09-04T06:32:38.353+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578758, 1), t: 1 }({ ts: Timestamp(1567578758, 1), t: 1 })
2019-09-04T06:32:38.353+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578758, 1)
2019-09-04T06:32:38.353+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15151
2019-09-04T06:32:38.353+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578758, 1) } } ] } sort: {} projection: {}
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578758, 1) Sort: {} Proj: {} =============================
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578758, 1) || First: notFirst: full path: ts
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578758, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578758, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578758, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578758, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:38.353+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15151
2019-09-04T06:32:38.353+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:38.353+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:38.353+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578758, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578758350), o: { $v: 1, $set: { ping: new Date(1567578758349) } } }, oplog application mode: Secondary
2019-09-04T06:32:38.353+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578758, 1)
2019-09-04T06:32:38.353+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 15153
2019-09-04T06:32:38.353+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "ConfigServer" }
2019-09-04T06:32:38.353+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:38.353+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 15153
2019-09-04T06:32:38.353+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:38.353+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578758, 1), t: 1 }({ ts: Timestamp(1567578758, 1), t: 1 })
2019-09-04T06:32:38.353+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578758, 1)
2019-09-04T06:32:38.353+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15152
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:38.353+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:38.353+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:38.353+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15152
2019-09-04T06:32:38.353+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578758, 1)
2019-09-04T06:32:38.353+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:38.353+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15156
2019-09-04T06:32:38.353+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578758, 1), t: 1 }, appliedWallTime: new Date(1567578758350), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpVisible: { ts: Timestamp(1567578755, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:38.353+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15156
2019-09-04T06:32:38.354+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1030 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:08.353+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578758, 1), t: 1 }, appliedWallTime: new Date(1567578758350), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpVisible: { ts: Timestamp(1567578755, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:38.354+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578758, 1), t: 1 }({ ts: Timestamp(1567578758, 1), t: 1 })
2019-09-04T06:32:38.354+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.353+0000
2019-09-04T06:32:38.354+0000 D2 ASIO [RS] Request 1030 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpVisible: { ts: Timestamp(1567578755, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578755, 1), $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 1) }
2019-09-04T06:32:38.354+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578755, 1), t: 1 }, lastCommittedWall: new Date(1567578755309), lastOpVisible: { ts: Timestamp(1567578755, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578755, 1), $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:38.354+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:38.354+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.354+0000
2019-09-04T06:32:38.354+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578758, 1), t: 1 }
2019-09-04T06:32:38.354+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1031 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:48.354+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578755, 1), t: 1 } }
2019-09-04T06:32:38.354+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.354+0000
2019-09-04T06:32:38.355+0000 D2 ASIO [RS] Request 1031 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 1), t: 1 }, lastCommittedWall: new Date(1567578758350), lastOpVisible: { ts: Timestamp(1567578758, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578758, 1), t: 1 }, lastCommittedWall: new Date(1567578758350), lastOpApplied: { ts: Timestamp(1567578758, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 1), $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 1) }
2019-09-04T06:32:38.355+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 1), t: 1 }, lastCommittedWall: new Date(1567578758350), lastOpVisible: { ts: Timestamp(1567578758, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578758, 1), t: 1 }, lastCommittedWall: new Date(1567578758350), lastOpApplied: { ts: Timestamp(1567578758, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 1), $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:38.355+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:38.355+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:32:38.355+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578753, 1)
2019-09-04T06:32:38.355+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:48.387+0000
2019-09-04T06:32:38.355+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:48.401+0000
2019-09-04T06:32:38.355+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:38.355+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:06.839+0000
2019-09-04T06:32:38.355+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1032 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:48.355+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578758, 1), t: 1 } }
2019-09-04T06:32:38.355+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000
2019-09-04T06:32:38.355+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.354+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn346] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn346] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:57.574+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn345] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn345] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:56.312+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn340] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn340] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.662+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn325] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn325] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.644+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn350] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn350] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.763+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn342] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn342] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:52.054+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn347] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn347] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:58.760+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn320] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn320] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.753+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn318] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn318] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:03.478+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn351] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn351] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.962+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn349] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn349] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.261+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn326] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn326] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:49.840+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn348] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn348] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:59.750+0000
2019-09-04T06:32:38.355+0000 D3 REPL [conn324] Got notified of new snapshot: { ts:
Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000 2019-09-04T06:32:38.355+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000 2019-09-04T06:32:38.355+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000 2019-09-04T06:32:38.355+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000 2019-09-04T06:32:38.355+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000 2019-09-04T06:32:38.355+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000 2019-09-04T06:32:38.355+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000 2019-09-04T06:32:38.355+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000 2019-09-04T06:32:38.355+0000 D3 REPL [conn336] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000 2019-09-04T06:32:38.355+0000 D3 REPL [conn336] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.130+0000 2019-09-04T06:32:38.355+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000 2019-09-04T06:32:38.355+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000 2019-09-04T06:32:38.355+0000 D3 REPL [conn339] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000 2019-09-04T06:32:38.355+0000 D3 REPL [conn339] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.661+0000 2019-09-04T06:32:38.355+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000 2019-09-04T06:32:38.355+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000 2019-09-04T06:32:38.355+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578758, 1), t: 1 }, 2019-09-04T06:32:38.350+0000 2019-09-04T06:32:38.355+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000 2019-09-04T06:32:38.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:38.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:38.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:38.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:38.368+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:38.368+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:38.368+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578758, 1), t: 1 }, 
durableWallTime: new Date(1567578758350), appliedOpTime: { ts: Timestamp(1567578758, 1), t: 1 }, appliedWallTime: new Date(1567578758350), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 1), t: 1 }, lastCommittedWall: new Date(1567578758350), lastOpVisible: { ts: Timestamp(1567578758, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:38.368+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1033 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:08.368+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578758, 1), t: 1 }, durableWallTime: new Date(1567578758350), appliedOpTime: { ts: Timestamp(1567578758, 1), t: 1 }, appliedWallTime: new Date(1567578758350), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 1), t: 1 }, lastCommittedWall: new Date(1567578758350), lastOpVisible: { ts: Timestamp(1567578758, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:38.368+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.354+0000 2019-09-04T06:32:38.369+0000 D2 ASIO [RS] Request 1033 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 1), t: 1 }, lastCommittedWall: new Date(1567578758350), lastOpVisible: { ts: Timestamp(1567578758, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 1), $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 1) } 2019-09-04T06:32:38.369+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 1), t: 1 }, lastCommittedWall: new Date(1567578758350), lastOpVisible: { ts: Timestamp(1567578758, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 1), $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:38.369+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of 
pool replication 2019-09-04T06:32:38.369+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.354+0000 2019-09-04T06:32:38.428+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:38.452+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578758, 1) 2019-09-04T06:32:38.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:38.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:38.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:38.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:38.528+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:38.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:38.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:38.564+0000 D2 ASIO [RS] Request 1032 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578758, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578758562), o: { $v: 1, $set: { ping: new Date(1567578758561) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 1), t: 1 }, lastCommittedWall: new Date(1567578758350), lastOpVisible: { ts: Timestamp(1567578758, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578758, 1), t: 1 }, lastCommittedWall: new Date(1567578758350), lastOpApplied: { ts: Timestamp(1567578758, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 1), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } 2019-09-04T06:32:38.564+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578758, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578758562), o: { $v: 1, $set: { ping: new Date(1567578758561) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 1), t: 1 }, lastCommittedWall: new Date(1567578758350), lastOpVisible: { ts: Timestamp(1567578758, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578758, 1), t: 1 }, lastCommittedWall: new Date(1567578758350), lastOpApplied: { ts: Timestamp(1567578758, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: 
Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 1), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:38.564+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:38.564+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578758, 2) and ending at ts: Timestamp(1567578758, 2) 2019-09-04T06:32:38.564+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:48.401+0000 2019-09-04T06:32:38.564+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:48.640+0000 2019-09-04T06:32:38.564+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:38.564+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:06.839+0000 2019-09-04T06:32:38.564+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578758, 2), t: 1 } 2019-09-04T06:32:38.564+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:38.564+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:38.564+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578758, 1) 2019-09-04T06:32:38.564+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15165 2019-09-04T06:32:38.564+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:38.564+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:38.564+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15165 2019-09-04T06:32:38.564+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:38.564+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:32:38.564+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:38.564+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578758, 2) } 2019-09-04T06:32:38.564+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578758, 1) 2019-09-04T06:32:38.564+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15168 2019-09-04T06:32:38.564+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:38.564+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 
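Note (illustrative, not log output): the getMore reply above (request 1032) carried exactly one oplog entry, an op: "u" update against config.lockpings. As a rough sketch of what that entry encodes, it can be written out as a plain mongo shell document; the variable name op and the updateOne rendering are illustrative only, not part of the log:
    // Illustrative sketch only: the fetched oplog entry as a shell document.
    var op = {
        ts: Timestamp(1567578758, 2), t: NumberLong(1), v: 2, op: "u",
        ns: "config.lockpings",
        ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"),
        o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" },
        wall: new Date(1567578758562),
        o: { $v: 1, $set: { ping: new Date(1567578758561) } }
    };
    // Applying it amounts to an _id-point update, which is why the apply
    // trace further below reports "Using idhack" for this _id:
    db.getSiblingDB("config").lockpings.updateOne(op.o2, { $set: op.o.$set });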
2019-09-04T06:32:38.564+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15168
2019-09-04T06:32:38.564+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15157
2019-09-04T06:32:38.564+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15157
2019-09-04T06:32:38.564+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15171
2019-09-04T06:32:38.564+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15171
2019-09-04T06:32:38.564+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:38.564+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 15173
2019-09-04T06:32:38.564+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578758, 2)
2019-09-04T06:32:38.564+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578758, 2)
2019-09-04T06:32:38.564+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 15173
2019-09-04T06:32:38.564+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:38.564+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:38.564+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15172
2019-09-04T06:32:38.564+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15172
2019-09-04T06:32:38.564+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15175
2019-09-04T06:32:38.564+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15175
2019-09-04T06:32:38.564+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578758, 2), t: 1 }({ ts: Timestamp(1567578758, 2), t: 1 })
2019-09-04T06:32:38.564+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578758, 2)
2019-09-04T06:32:38.564+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15176
2019-09-04T06:32:38.564+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578758, 2) } } ] } sort: {} projection: {}
2019-09-04T06:32:38.564+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:38.564+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:32:38.564+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and t $eq 1 ts $lt Timestamp(1567578758, 2)
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:38.564+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578758, 2) || First: notFirst: full path: ts
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and t $eq 1 ts $lt Timestamp(1567578758, 2)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or $and t $eq 1 ts $lt Timestamp(1567578758, 2) t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578758, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
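Note (illustrative, not log output): the D5 QUERY trace above shows why each branch plans as a collection scan: local.replset.minvalid carries only the default _id index, so neither { t: 1, ts: ... } nor { t: { $lt: 1 } } can use an index and the $or falls back to COLLSCAN (the final plan is printed just below). A hedged shell sketch that should reproduce the same plan choice:
    // Illustrative sketch only: explain the same $or predicate the sync thread runs.
    db.getSiblingDB("local").getCollection("replset.minvalid")
        .find({ $or: [ { t: { $lt: 1 } },
                       { t: 1, ts: { $lt: Timestamp(1567578758, 2) } } ] })
        .explain("queryPlanner");   // winningPlan should report stage: "COLLSCAN"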
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or $and t $eq 1 ts $lt Timestamp(1567578758, 2) t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:38.565+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15176
2019-09-04T06:32:38.565+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:38.565+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:38.565+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578758, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578758562), o: { $v: 1, $set: { ping: new Date(1567578758561) } } }, oplog application mode: Secondary
2019-09-04T06:32:38.565+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578758, 2)
2019-09-04T06:32:38.565+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 15178
2019-09-04T06:32:38.565+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }
2019-09-04T06:32:38.565+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:38.565+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 15178
2019-09-04T06:32:38.565+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:38.565+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578758, 2), t: 1 }({ ts: Timestamp(1567578758, 2), t: 1 })
2019-09-04T06:32:38.565+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578758, 2)
2019-09-04T06:32:38.565+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15177
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:38.565+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:38.565+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:38.565+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15177
2019-09-04T06:32:38.565+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578758, 2)
2019-09-04T06:32:38.565+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15182
2019-09-04T06:32:38.565+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15182
2019-09-04T06:32:38.565+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578758, 2), t: 1 }({ ts: Timestamp(1567578758, 2), t: 1 })
2019-09-04T06:32:38.565+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:38.565+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578758, 1), t: 1 }, durableWallTime: new Date(1567578758350), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 1), t: 1 }, lastCommittedWall: new Date(1567578758350), lastOpVisible: { ts: Timestamp(1567578758, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:38.565+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1034 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:08.565+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578758, 1), t: 1 }, durableWallTime: new Date(1567578758350), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 1), t: 1 }, lastCommittedWall: new Date(1567578758350), lastOpVisible: { ts: Timestamp(1567578758, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:38.565+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.565+0000
2019-09-04T06:32:38.565+0000 D2 ASIO [RS] Request 1034 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 1), t: 1 }, lastCommittedWall: new Date(1567578758350), lastOpVisible: { ts: Timestamp(1567578758, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 1), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) }
2019-09-04T06:32:38.566+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 1), t: 1 }, lastCommittedWall: new Date(1567578758350), lastOpVisible: { ts: Timestamp(1567578758, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 1), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:38.566+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:38.566+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.566+0000
2019-09-04T06:32:38.566+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578758, 2), t: 1 }
2019-09-04T06:32:38.566+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1035 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:48.566+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578758, 1), t: 1 } }
2019-09-04T06:32:38.566+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.566+0000
2019-09-04T06:32:38.566+0000 D2 ASIO [RS] Request 1035 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpApplied: { ts: Timestamp(1567578758, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) }
2019-09-04T06:32:38.566+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpApplied: { ts: Timestamp(1567578758, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:38.567+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:38.567+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:32:38.567+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578753, 2)
2019-09-04T06:32:38.567+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:48.640+0000
2019-09-04T06:32:38.567+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:49.397+0000
2019-09-04T06:32:38.567+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1036 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:48.567+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578758, 2), t: 1 } }
2019-09-04T06:32:38.567+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.566+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn325] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn325] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.644+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn342] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn342] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:52.054+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000
2019-09-04T06:32:38.567+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:38.567+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:06.839+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn348] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn348] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:59.750+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn350] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn350] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.763+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn347] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn347] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:58.760+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn326] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn326] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:49.840+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn339] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn339] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.661+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn275] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn275] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.508+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn351] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn351] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.962+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn336] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn336] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.130+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn321] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn321] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:43.109+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn346] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn346] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:57.574+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn317] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn317] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.471+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn345] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn345] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:56.312+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn340] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn340] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.662+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn349] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn349] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.261+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn319] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn319] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.535+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn299] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn299] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:41.483+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn320] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn320] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.753+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn318] Got notified of new snapshot: { ts: Timestamp(1567578758, 2), t: 1 }, 2019-09-04T06:32:38.562+0000
2019-09-04T06:32:38.567+0000 D3 REPL [conn318] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:03.478+0000
2019-09-04T06:32:38.568+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:32:38.568+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:38.568+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:38.568+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1037 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:08.568+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, durableWallTime: new Date(1567578755309), appliedOpTime: { ts: Timestamp(1567578755, 1), t: 1 }, appliedWallTime: new Date(1567578755309), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:38.568+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.566+0000
2019-09-04T06:32:38.568+0000 D2 ASIO [RS] Request 1037 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) }
2019-09-04T06:32:38.568+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:38.568+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:38.568+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.566+0000
2019-09-04T06:32:38.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
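Note (illustrative, not log output): the replSetUpdatePosition round trips above (requests 1030, 1033, 1034, 1037) are how this secondary pushes each member's durable and applied optimes to its sync source cmodb804.togewa.com:27019. A hedged shell sketch for watching the same optimes converge; the field paths follow the documented replSetGetStatus output rather than this log:
    // Illustrative sketch only: print each member's name, state and applied optime.
    var s = db.adminCommand({ replSetGetStatus: 1 });
    s.members.forEach(function (m) {
        print(m.name, m.stateStr, tojson(m.optime));
    });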
2019-09-04T06:32:38.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.628+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:38.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.664+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578758, 2)
2019-09-04T06:32:38.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.728+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:38.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.828+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:38.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:38.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1038) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:38.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1038 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:48.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:38.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:06.839+0000
2019-09-04T06:32:38.838+0000 D2 ASIO [Replication] Request 1038 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) }
2019-09-04T06:32:38.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:38.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:38.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1038) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) }
2019-09-04T06:32:38.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:32:38.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:40.838Z
2019-09-04T06:32:38.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:06.839+0000
2019-09-04T06:32:38.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:38.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1039) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:38.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1039 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:48.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:38.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:06.839+0000
2019-09-04T06:32:38.839+0000 D2 ASIO [Replication] Request 1039 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) }
2019-09-04T06:32:38.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:32:38.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:38.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1039) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) }
2019-09-04T06:32:38.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:32:38.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:32:49.397+0000
2019-09-04T06:32:38.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:32:49.076+0000
2019-09-04T06:32:38.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:32:38.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:40.839Z
2019-09-04T06:32:38.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:38.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.839+0000
2019-09-04T06:32:38.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.839+0000
2019-09-04T06:32:38.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.928+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:38.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:38.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:38.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:39.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:39.029+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:39.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:39.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:39.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:39.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:32:39.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:39.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from 
cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:39.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562) } 2019-09-04T06:32:39.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.129+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:39.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.229+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:39.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:39.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:39.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.329+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:39.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.429+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:39.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.496+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.496+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.529+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:39.537+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578758, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578758, 2), t: 1 } }, $db: "config" } 2019-09-04T06:32:39.537+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578758, 2), t: 1 } } } 2019-09-04T06:32:39.537+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:32:39.537+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578758, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578758, 2), t: 1 } }, $db: "config" } with readTs: 
Timestamp(1567578758, 2) 2019-09-04T06:32:39.537+0000 D2 QUERY [conn61] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:32:39.537+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578758, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578758, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:32:39.538+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578758, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578758, 2), t: 1 } }, $db: "config" } 2019-09-04T06:32:39.538+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578758, 2), t: 1 } } } 2019-09-04T06:32:39.538+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:32:39.538+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578758, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578758, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578758, 2) 2019-09-04T06:32:39.538+0000 D2 QUERY [conn61] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:32:39.538+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578758, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578758, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:32:39.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.564+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:39.564+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:39.564+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578758, 2) 2019-09-04T06:32:39.564+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15220 2019-09-04T06:32:39.564+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:39.564+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:39.564+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15220 2019-09-04T06:32:39.565+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15223 2019-09-04T06:32:39.565+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15223 2019-09-04T06:32:39.565+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578758, 2), t: 1 }({ ts: Timestamp(1567578758, 2), t: 1 }) 2019-09-04T06:32:39.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.629+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:39.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:32:39.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.729+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:39.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.825+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.829+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:39.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.930+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:39.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:39.996+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:39.996+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:40.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:32:40.003+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:32:40.003+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:32:40.010+0000 D2 COMMAND [conn90] run command 
admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:32:40.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:32:40.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:32:40.011+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:32:40.011+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:32:40.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:32:40.011+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:40.012+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:40.012+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:40.012+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:32:40.013+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:32:40.014+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:32:40.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:40.014+0000 D5 QUERY [conn90] Beginning planning... 
============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:32:40.014+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:32:40.014+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:32:40.014+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:32:40.014+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:32:40.014+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:32:40.014+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:32:40.014+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:40.014+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:40.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:40.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578758, 2) 2019-09-04T06:32:40.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15244 2019-09-04T06:32:40.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15244 2019-09-04T06:32:40.014+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:40.014+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:40.014+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:32:40.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:32:40.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:40.014+0000 D5 QUERY [conn90] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:32:40.015+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:32:40.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578758, 2) 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15247 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15247 2019-09-04T06:32:40.015+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:40.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:40.015+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:32:40.015+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:32:40.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578758, 2) 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15249 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15249 2019-09-04T06:32:40.015+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:40.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:32:40.015+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:32:40.015+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:32:40.015+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:40.015+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15252 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:32:40.015+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15252 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15253 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15253 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15254 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15254 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15255 2019-09-04T06:32:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15255 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15256 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15256
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15257
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15257
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15258
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15258
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15259
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15259
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15260
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15260
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15261
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15261
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15262
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15262
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15263
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15263
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15264
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15264
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15265
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15265
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15266 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15266 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15267 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 
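
The config.tags metadata above lists a unique ns_1_min_1 index and a non-unique ns_1_tag_1 index. For illustration only, these specs correspond to what a client would build with createIndexes; a hypothetical pymongo sketch (the config server creates these indexes itself, and the address is a placeholder):

    from pymongo import ASCENDING, MongoClient

    client = MongoClient("mongodb://localhost:27019")  # illustrative address
    tags = client["config"]["tags"]

    # Matches { v: 2, unique: true, key: { ns: 1, min: 1 } } above.
    tags.create_index([("ns", ASCENDING), ("min", ASCENDING)],
                      name="ns_1_min_1", unique=True)
    # Matches { v: 2, key: { ns: 1, tag: 1 } } above.
    tags.create_index([("ns", ASCENDING), ("tag", ASCENDING)],
                      name="ns_1_tag_1")
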
2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15267 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15268 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15268 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15269 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15269 
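
local.startup_log above carries the options capped: true, size: 10485760 (10 MB). A sketch of declaring the same options from a client, using a hypothetical demo collection since the server manages local.startup_log itself:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")  # illustrative address
    db = client["test"]

    # capped: true, size: 10485760 -- the options shown in the CCE metadata.
    db.create_collection("startup_log_demo", capped=True,
                         size=10 * 1024 * 1024)
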
2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15270 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15270 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15271 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15271 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15272 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] 
looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15272 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15273 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15273 2019-09-04T06:32:40.017+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:32:40.017+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15275 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15275 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15276 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15276 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15277 2019-09-04T06:32:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15277 2019-09-04T06:32:40.017+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:40.037+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:40.041+0000 D2 COMMAND [conn69] run command 
admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.041+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.041+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15280 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15280 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15281 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15281 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15282 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15282 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15283 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15283 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15284 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15284 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15285 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15285 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15286 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15286 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15287 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15287 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15288 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15288 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15289 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15289 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15290 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15290 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15291 2019-09-04T06:32:40.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15291 2019-09-04T06:32:40.042+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:40.044+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:32:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15293 2019-09-04T06:32:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15293 2019-09-04T06:32:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15294 2019-09-04T06:32:40.044+0000 D3 
STORAGE [conn90] WT rollback_transaction for snapshot id 15294 2019-09-04T06:32:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15295 2019-09-04T06:32:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15295 2019-09-04T06:32:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15296 2019-09-04T06:32:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15296 2019-09-04T06:32:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15297 2019-09-04T06:32:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15297 2019-09-04T06:32:40.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15298 2019-09-04T06:32:40.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15298 2019-09-04T06:32:40.044+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:40.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.055+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.055+0000 I COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.138+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:40.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
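
The conn90 sequence above (listDatabases with $readPreference: { mode: "secondaryPreferred" }, then dbStats against admin, config, and local) is the polling pattern of a monitoring client. A minimal pymongo sketch of that client-side loop, under the same illustrative address assumption:

    from pymongo import MongoClient

    # readPreference as a URI option mirrors the $readPreference seen above.
    client = MongoClient(
        "mongodb://localhost:27019/?readPreference=secondaryPreferred")

    for name in client.list_database_names():    # listDatabases
        stats = client[name].command("dbStats")  # dbStats per database
        print(name, stats["collections"], stats["dataSize"])
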
2019-09-04T06:32:40.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:40.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:40.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:40.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:40.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562) } 2019-09-04T06:32:40.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:40.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:40.238+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:40.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.338+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:40.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.438+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:40.464+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578737, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578756, 1), signature: { hash: BinData(0, 096FE5E8EC5BE9591597E8E2C1886AC20C361C93), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578737, 1), t: 1 } }, $db: "config" } 2019-09-04T06:32:40.465+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578737, 1), t: 1 } } } 2019-09-04T06:32:40.465+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:32:40.465+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578737, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578756, 1), signature: { hash: BinData(0, 096FE5E8EC5BE9591597E8E2C1886AC20C361C93), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578737, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578758, 2) 2019-09-04T06:32:40.465+0000 D2 QUERY [conn71] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:32:40.465+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578737, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578756, 1), signature: { hash: BinData(0, 096FE5E8EC5BE9591597E8E2C1886AC20C361C93), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578737, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:32:40.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.538+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:40.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.564+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:40.564+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:40.564+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578758, 2) 2019-09-04T06:32:40.564+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15317 2019-09-04T06:32:40.565+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:40.565+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:40.565+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15317 2019-09-04T06:32:40.565+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15320 2019-09-04T06:32:40.565+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15320 2019-09-04T06:32:40.565+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578758, 2), t: 1 }({ ts: Timestamp(1567578758, 2), t: 1 }) 2019-09-04T06:32:40.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 
0ms 2019-09-04T06:32:40.638+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:40.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.721+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578758, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578758, 2), t: 1 } }, $db: "config" } 2019-09-04T06:32:40.721+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578758, 2), t: 1 } } } 2019-09-04T06:32:40.721+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:32:40.721+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578758, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578758, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578758, 2) 2019-09-04T06:32:40.721+0000 D2 QUERY [conn72] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:32:40.722+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578758, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578758, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:32:40.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.738+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:40.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:40.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1040) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:40.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1040 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:50.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:40.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.839+0000 2019-09-04T06:32:40.838+0000 D2 ASIO [Replication] Request 1040 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, 
$gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } 2019-09-04T06:32:40.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:40.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:40.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1040) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } 2019-09-04T06:32:40.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:40.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:42.838Z 2019-09-04T06:32:40.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.839+0000 2019-09-04T06:32:40.838+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:40.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:40.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1041) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:40.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1041 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:50.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:40.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.839+0000 2019-09-04T06:32:40.839+0000 D2 ASIO [Replication] Request 1041 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } 2019-09-04T06:32:40.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:40.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:40.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1041) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } 2019-09-04T06:32:40.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:40.839+0000 D4 REPL [replexec-3] Canceling election timeout 
callback at 2019-09-04T06:32:49.076+0000 2019-09-04T06:32:40.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:32:51.365+0000 2019-09-04T06:32:40.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:40.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:42.839Z 2019-09-04T06:32:40.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:40.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:10.839+0000 2019-09-04T06:32:40.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:10.839+0000 2019-09-04T06:32:40.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.938+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:40.944+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.944+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:40.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:40.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:41.039+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:41.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:41.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:41.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:41.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: 
BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:41.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562) } 2019-09-04T06:32:41.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 2), signature: { hash: BinData(0, A54912400E1F7E228A119016C5273F5772B1E46E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.139+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:41.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:41.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:41.239+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:41.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.273+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.279+0000 D2 COMMAND [conn113] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578753, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578759, 1), signature: { hash: BinData(0, DA429FF5DAC86212BA02883E42383BC314B3672F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578753, 1), t: 1 } }, $db: "config" } 2019-09-04T06:32:41.280+0000 D1 COMMAND [conn113] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578753, 1), t: 1 } } } 2019-09-04T06:32:41.280+0000 D3 STORAGE [conn113] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:32:41.280+0000 D1 COMMAND [conn113] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578753, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578759, 1), signature: { hash: BinData(0, DA429FF5DAC86212BA02883E42383BC314B3672F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578753, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578758, 2) 2019-09-04T06:32:41.280+0000 D5 QUERY [conn113] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shards Tree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:32:41.280+0000 D5 QUERY [conn113] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:32:41.280+0000 D5 QUERY [conn113] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:32:41.280+0000 D5 QUERY [conn113] Rated tree: $and 2019-09-04T06:32:41.280+0000 D5 QUERY [conn113] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:41.280+0000 D5 QUERY [conn113] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:41.280+0000 D2 QUERY [conn113] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:41.280+0000 D3 STORAGE [conn113] WT begin_transaction for snapshot id 15348 2019-09-04T06:32:41.280+0000 D3 STORAGE [conn113] WT rollback_transaction for snapshot id 15348 2019-09-04T06:32:41.280+0000 I COMMAND [conn113] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578753, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578759, 1), signature: { hash: BinData(0, DA429FF5DAC86212BA02883E42383BC314B3672F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578753, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:32:41.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.325+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.339+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:41.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.439+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:41.472+0000 I COMMAND [conn317] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578722, 1), signature: { hash: BinData(0, A7DB2B2BD110626557891F82D1C539FE660A4A4A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:32:41.472+0000 D1 - [conn317] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:41.472+0000 W - [conn317] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:41.486+0000 I COMMAND [conn299] Command on database admin timed out waiting for read concern to be satisfied. 
2019-09-04T06:32:41.486+0000 I COMMAND [conn299] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578721, 1), signature: { hash: BinData(0, F2F103EFE4A77E8DC9C0F29C8D39A027024FF7E0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:32:41.486+0000 D1 - [conn299] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:32:41.486+0000 W - [conn299] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:41.488+0000 I - [conn317] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
 mongod(+0x10FBF24) [0x56174a083f24]
 mongod(+0x10FCE0E) [0x56174a084e0e]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:32:41.488+0000 D1 COMMAND [conn317] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578722, 1), signature: { hash: BinData(0, A7DB2B2BD110626557891F82D1C539FE660A4A4A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:41.488+0000 D1 - [conn317] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:32:41.488+0000 W - [conn317] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:41.505+0000 I - [conn299] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE ----- [identical to the conn317 backtrace at 06:32:41.488 above; duplicate elided] ----- END BACKTRACE -----
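The BEGIN/END BACKTRACE blocks are mongod 4.2's native format: one line of raw return addresses, then one JSON document whose "backtrace" array gives each frame's module base "b", its offset "o", and, where resolvable, the mangled symbol "s" (plus a "processInfo" section with the build details and shared-object map), then the same frames symbolized one per line; the mangled names can be demangled with c++filt. A small stdlib-only sketch for pulling the frames out of such a JSON line (the helper is illustrative, not a MongoDB tool):

    # Extract (offset, symbol) pairs from a mongod backtrace JSON line.
    import json

    def frames(backtrace_line):
        doc = json.loads(backtrace_line)  # the whole line is one JSON document
        for f in doc["backtrace"]:
            yield f.get("o"), f.get("s", "<no symbol>")

    sample = ('{"backtrace":[{"b":"561748F88000","o":"CBA521",'
              '"s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},'
              '{"b":"561748F88000","o":"10FBF24"}],"processInfo":{}}')
    for off, sym in frames(sample):
        print(off, sym)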
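How stale is the opTime these key refreshes wait on? Comparing the requested afterOpTime against the $clusterTime carried by the same requests shows a gap of roughly thirteen days, consistent with clients chasing replication state from an old incarnation of this replica set. A quick stdlib check using the two values from the log (a Timestamp's first field is seconds since the Unix epoch):

    # Both values appear verbatim in the requests above.
    from datetime import datetime, timezone

    after_optime = 1566459161  # afterOpTime ts (term 92)
    cluster_time = 1567578722  # $clusterTime on the same request

    for label, secs in (("afterOpTime", after_optime), ("clusterTime", cluster_time)):
        print(label, datetime.fromtimestamp(secs, tz=timezone.utc).isoformat())
    print("gap_days:", round((cluster_time - after_optime) / 86400, 1))  # ~13.0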
2019-09-04T06:32:41.505+0000 D1 COMMAND [conn299] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578721, 1), signature: { hash: BinData(0, F2F103EFE4A77E8DC9C0F29C8D39A027024FF7E0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:41.505+0000 D1 - [conn299] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:32:41.505+0000 W - [conn299] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:41.510+0000 I COMMAND [conn275] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, B284AA1DA8C9D75820A3CFCF85E9C4196C90E245), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:32:41.511+0000 D1 - [conn275] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:32:41.511+0000 W - [conn275] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:41.525+0000 I - [conn317] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(+0xC90D34) [0x561749c18d34]
 mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
 mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
 mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
 mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
 mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:32:41.525+0000 W COMMAND [conn317] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:32:41.525+0000 I COMMAND [conn317] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578722, 1), signature: { hash: BinData(0, A7DB2B2BD110626557891F82D1C539FE660A4A4A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms
2019-09-04T06:32:41.525+0000 D2 NETWORK [conn317] Session from 10.108.2.73:52208 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:32:41.525+0000 I NETWORK [conn317] end connection 10.108.2.73:52208 (90 connections now open)
2019-09-04T06:32:41.538+0000 I COMMAND [conn319] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578721, 1), signature: { hash: BinData(0, F2F103EFE4A77E8DC9C0F29C8D39A027024FF7E0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:32:41.538+0000 D1 - [conn319] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:32:41.538+0000 W - [conn319] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:41.539+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:41.542+0000 I - [conn275] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE ----- [identical to the conn317 backtrace at 06:32:41.488 above; duplicate elided] ----- END BACKTRACE -----
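The second backtrace flavour, the one ending in LockerImpl::lock, Lock::GlobalLock, and CurOp::completeAndLogOperation, is thrown after the command has already failed: while writing the slow-operation line, mongod tries to take the global lock to collect storage statistics, the already-expired deadline interrupts that acquisition too, and the "Unable to gather storage statistics" warning is the result. Note also that by the time each find completes, the client has already hung up (HostUnreachable: Connection closed by peer on the next read) and retried from a fresh connection, which is why conn299, conn275, and conn319 all repeat the same request. When summarizing an incident like this, the completion lines (30027ms above, 30031ms below) are the easiest thing to mine; a rough stdlib sketch, with a regex that is illustrative rather than a full 4.2 log parser:

    # Pull connection, error name, and duration from 4.2 slow-op COMMAND lines.
    import re

    PAT = re.compile(r"\[(conn\d+)\] command .*?errName:(\w+).*? (\d+)ms$")

    line = ('2019-09-04T06:32:41.525+0000 I COMMAND [conn317] command admin.$cmd '
            'command: find { ... } numYields:0 ok:0 errMsg:"operation exceeded '
            'time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 '
            'locks:{} protocol:op_msg 30027ms')
    m = PAT.search(line)
    if m:
        print(m.groups())  # ('conn317', 'MaxTimeMSExpired', '30027')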
2019-09-04T06:32:41.543+0000 D1 COMMAND [conn275] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578723, 1), signature: { hash: BinData(0, B284AA1DA8C9D75820A3CFCF85E9C4196C90E245), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:41.543+0000 D1 - [conn275] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:32:41.543+0000 W - [conn275] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:41.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:41.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:41.562+0000 I - [conn299] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE ----- [identical to the conn317 backtrace at 06:32:41.525 above; duplicate elided] ----- END BACKTRACE -----
2019-09-04T06:32:41.562+0000 W COMMAND [conn299] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:32:41.562+0000 I COMMAND [conn299] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578721, 1), signature: { hash: BinData(0, F2F103EFE4A77E8DC9C0F29C8D39A027024FF7E0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms
2019-09-04T06:32:41.562+0000 D2 NETWORK [conn299] Session from 10.108.2.56:35738 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:32:41.562+0000 I NETWORK [conn299] end connection 10.108.2.56:35738 (89 connections now open)
2019-09-04T06:32:41.565+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:41.565+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:41.565+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578758, 2)
2019-09-04T06:32:41.565+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15355
2019-09-04T06:32:41.565+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:41.565+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:41.565+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15355
2019-09-04T06:32:41.565+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15358
2019-09-04T06:32:41.566+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15358
2019-09-04T06:32:41.566+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578758, 2), t: 1 }({ ts: Timestamp(1567578758, 2), t: 1 })
2019-09-04T06:32:41.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:41.568+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:41.581+0000 I - [conn275] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:41.582+0000 W COMMAND [conn275] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:41.582+0000 I COMMAND [conn275] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
2019-09-04T06:32:41.582+0000 D2 NETWORK [conn275] Session from 10.108.2.60:44880 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:32:41.582+0000 I NETWORK [conn275] end connection 10.108.2.60:44880 (88 connections now open)
2019-09-04T06:32:41.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:41.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:41.598+0000 I - [conn319] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ ... shared-library map identical to the backtrace above ... }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
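Unlike the neighboring traces, this conn319 stack fails inside ServiceEntryPointMongod::Hooks::waitForReadConcern, i.e. while waiting for the requested read concern to be satisfied rather than while logging a finished slow operation. Each frame in the JSON form carries the module base ("b"), the offset into that module ("o"), and the mangled symbol ("s") when one is known. A small sketch for making such frames readable; it assumes binutils' c++filt is on PATH, and the frames are copied from the array above:

    import subprocess

    # Frames copied from the "backtrace" array above: offset into the mongod
    # image ("o") plus the mangled symbol ("s") when one is available.
    frames = [
        {"o": "10ED070", "s": "_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},
        {"o": "CBA521", "s": "_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},
        {"o": "10FBF24"},  # no symbol; resolvable only against the binary, e.g. with addr2line
    ]

    def demangle(sym: str) -> str:
        # c++filt ships with binutils; fall back to the raw name on any failure.
        try:
            out = subprocess.run(["c++filt", sym], capture_output=True, text=True)
            return out.stdout.strip() or sym
        except OSError:
            return sym

    for frame in frames:
        print(f"mongod+0x{frame['o']}: {demangle(frame.get('s', '<no symbol>'))}")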
2019-09-04T06:32:41.598+0000 D1 COMMAND [conn319] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578721, 1), signature: { hash: BinData(0, F2F103EFE4A77E8DC9C0F29C8D39A027024FF7E0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:41.598+0000 D1 - [conn319] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:32:41.598+0000 W - [conn319] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:41.618+0000 I - [conn319] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
[backtrace JSON, shared-library map, and symbolized frames identical to the backtrace above]
----- END BACKTRACE -----
2019-09-04T06:32:41.618+0000 W COMMAND [conn319] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:32:41.618+0000 I COMMAND [conn319] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578721, 1), signature: { hash: BinData(0, F2F103EFE4A77E8DC9C0F29C8D39A027024FF7E0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30073ms
2019-09-04T06:32:41.618+0000 D2 NETWORK [conn319] Session from 10.108.2.61:37986 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:32:41.618+0000 I NETWORK [conn319] end connection 10.108.2.61:37986 (87 connections now open)
2019-09-04T06:32:41.639+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:41.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:41.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:41.667+0000 D2 COMMAND [conn344] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, ABD195D4AC4ED4988535B74DC3F4B42B5EC26ED1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:32:41.667+0000 D1 REPL [conn344] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578758, 2), t: 1 }
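Every one of these system.keys reads is a router refreshing the cluster-time signing keys, and every one parks in waitUntilOpTime for the same reason: the routers still present a config opTime from term 92 ({ ts: Timestamp(1566459168, 1), t: 92 }) while this replica set is running in term 1 with a newest majority snapshot of { ts: Timestamp(1567578758, 2), t: 1 }. A snapshot satisfying term 92 can never appear, so each request burns its full 30-second maxTimeMS and fails with MaxTimeMSExpired; the pattern is consistent with the config replica set having been re-initialized while the mongos instances kept cached state. The client-visible half of such a read looks roughly like the sketch below (hypothetical URI; the afterOpTime predicate is injected by cluster internals, not by ordinary drivers, and errCode 50 surfaces in pymongo as ExecutionTimeout):

    from bson.timestamp import Timestamp
    from pymongo import MongoClient, ReadPreference
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")  # hypothetical URI
    keys = client.admin.get_collection(
        "system.keys",
        read_concern=ReadConcern("majority"),
        read_preference=ReadPreference.NEAREST,
    )

    try:
        # Same shape as the logged query: unexpired HMAC signing keys, sorted
        # by expiry, bounded server-side by maxTimeMS = 30000 as in the log.
        docs = list(
            keys.find({"purpose": "HMAC", "expiresAt": {"$gt": Timestamp(1579858365, 0)}})
            .sort("expiresAt", 1)
            .max_time_ms(30000)
        )
    except ExecutionTimeout:
        # Logged as errName:MaxTimeMSExpired errCode:50, "operation exceeded time limit".
        print("majority snapshot never reached the requested opTime within 30s")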
2019-09-04T06:32:41.667+0000 D3 REPL [conn344] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.677+0000 2019-09-04T06:32:41.670+0000 D2 COMMAND [conn338] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578752, 1), signature: { hash: BinData(0, 5FD87181FFEFAB4F27FB7C85B30A6D21FE89ECFB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:41.670+0000 D1 REPL [conn338] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578758, 2), t: 1 } 2019-09-04T06:32:41.670+0000 D3 REPL [conn338] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.680+0000 2019-09-04T06:32:41.678+0000 D2 COMMAND [conn323] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 83D99C38C0B1A9A64B0855E948984423AC47D765), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:41.678+0000 D1 REPL [conn323] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578758, 2), t: 1 } 2019-09-04T06:32:41.678+0000 D3 REPL [conn323] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.688+0000 2019-09-04T06:32:41.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.739+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:41.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.824+0000 I COMMAND 
[conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.840+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:41.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:41.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:41.940+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:42.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:42.040+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:42.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.140+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:42.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 2352D67DCB9B791421B9D3437701E2D80AAFA608), 
keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:42.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:42.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 2352D67DCB9B791421B9D3437701E2D80AAFA608), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:42.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 2352D67DCB9B791421B9D3437701E2D80AAFA608), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:42.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562) } 2019-09-04T06:32:42.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 2352D67DCB9B791421B9D3437701E2D80AAFA608), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:42.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:42.240+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:42.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.340+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:42.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.384+0000 D2 COMMAND [conn143] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:32:42.384+0000 I COMMAND [conn143] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.394+0000 D2 COMMAND [conn206] run command config.$cmd { ping: 1.0, lsid: { id: UUID("2fef7d2a-ea06-44d7-a315-b0e911b7f5bf") }, $clusterTime: { clusterTime: Timestamp(1567578700, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:32:42.394+0000 I COMMAND [conn206] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("2fef7d2a-ea06-44d7-a315-b0e911b7f5bf") }, $clusterTime: { clusterTime: Timestamp(1567578700, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.440+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:42.540+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:42.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.565+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:42.565+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:42.565+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578758, 2) 2019-09-04T06:32:42.565+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15392 2019-09-04T06:32:42.565+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:42.565+0000 D3 STORAGE 
[ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:42.565+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15392 2019-09-04T06:32:42.566+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15395 2019-09-04T06:32:42.566+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15395 2019-09-04T06:32:42.566+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578758, 2), t: 1 }({ ts: Timestamp(1567578758, 2), t: 1 }) 2019-09-04T06:32:42.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.641+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:42.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.741+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:42.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:42.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1042) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 
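In contrast to the stalled client reads, replication itself looks healthy here: heartbeats 1042 and 1043 below go out to cmodb804 and cmodb802 on a 2-second cadence, and both responses report matching durable and applied opTimes of { ts: Timestamp(1567578758, 2), t: 1 }. The same state and opTime fields the heartbeat responses carry can be read on demand with replSetGetStatus; a minimal sketch, again with a hypothetical URI:

    from pymongo import MongoClient

    # Connect to any configrs member; replSetGetStatus reports the same
    # state/opTime fields seen in the REPL_HB entries of this log.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    status = client.admin.command("replSetGetStatus")
    print(status["set"], "term:", status.get("term"))
    for member in status["members"]:
        print(member["name"], member["stateStr"], member.get("optime", {}).get("ts"))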
2019-09-04T06:32:42.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1042 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:52.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:42.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:10.839+0000 2019-09-04T06:32:42.838+0000 D2 ASIO [Replication] Request 1042 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } 2019-09-04T06:32:42.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:42.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:42.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1042) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 
0 } }, operationTime: Timestamp(1567578758, 2) } 2019-09-04T06:32:42.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:42.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:44.838Z 2019-09-04T06:32:42.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:10.839+0000 2019-09-04T06:32:42.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:42.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1043) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:42.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1043 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:52.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:42.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:10.839+0000 2019-09-04T06:32:42.839+0000 D2 ASIO [Replication] Request 1043 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } 2019-09-04T06:32:42.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:42.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:42.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1043) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: 
"configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } 2019-09-04T06:32:42.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:42.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:32:51.365+0000 2019-09-04T06:32:42.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:32:53.711+0000 2019-09-04T06:32:42.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:42.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:44.839Z 2019-09-04T06:32:42.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:42.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:12.839+0000 2019-09-04T06:32:42.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:12.839+0000 2019-09-04T06:32:42.841+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:42.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:42.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:42.941+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:43.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:43.041+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:43.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 2352D67DCB9B791421B9D3437701E2D80AAFA608), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:43.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:43.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 2352D67DCB9B791421B9D3437701E2D80AAFA608), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:43.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 2352D67DCB9B791421B9D3437701E2D80AAFA608), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:43.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562) } 2019-09-04T06:32:43.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 2352D67DCB9B791421B9D3437701E2D80AAFA608), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.084+0000 D2 COMMAND [conn341] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, ABD195D4AC4ED4988535B74DC3F4B42B5EC26ED1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:32:43.084+0000 D1 REPL [conn341] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578758, 2), t: 1 } 2019-09-04T06:32:43.084+0000 D3 REPL [conn341] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.094+0000 2019-09-04T06:32:43.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.113+0000 I COMMAND [conn321] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 1), signature: { hash: BinData(0, D40B466E4FDFBEAA54D0D3B68CF775A1034DDCB7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:43.113+0000 D1 - [conn321] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:43.113+0000 W - [conn321] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:43.129+0000 I - [conn321] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:43.129+0000 D1 COMMAND [conn321] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 1), signature: { hash: BinData(0, D40B466E4FDFBEAA54D0D3B68CF775A1034DDCB7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:43.129+0000 D1 - [conn321] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:43.129+0000 W - [conn321] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:43.141+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:43.149+0000 I - [conn321] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"5617
48F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : 
"/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:43.149+0000 W COMMAND [conn321] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:43.149+0000 I COMMAND [conn321] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 1), signature: { hash: BinData(0, D40B466E4FDFBEAA54D0D3B68CF775A1034DDCB7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:32:43.149+0000 D2 NETWORK [conn321] Session from 10.108.2.54:49250 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:43.150+0000 I NETWORK [conn321] end connection 10.108.2.54:49250 (86 connections now open) 2019-09-04T06:32:43.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" 
} numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:43.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:43.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:32:43.241+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:43.280+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47260 #352 (87 connections now open)
2019-09-04T06:32:43.280+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:32:43.280+0000 D2 COMMAND [conn352] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:32:43.280+0000 I NETWORK [conn352] received client metadata from 10.108.2.52:47260 conn352: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:32:43.280+0000 I COMMAND [conn352] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:32:43.281+0000 D2 COMMAND [conn352] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 70ADEB07C63950ADB1CBAACD1EFFE951853A061C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:32:43.281+0000 D1 REPL [conn352] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578758, 2), t: 1 }
2019-09-04T06:32:43.281+0000 D3 REPL [conn352] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.291+0000
2019-09-04T06:32:43.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:43.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:43.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:43.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:43.341+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:43.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
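Two interleaved flows in the entries above and below are worth annotating. First, conn321 and conn352 are evidently internal cluster clients (the client metadata above reports driver "NetworkInterfaceTL") refreshing the HMAC signing keys: a find on admin.system.keys with readConcern { level: "majority", afterOpTime: { ..., t: 92 } } and maxTimeMS: 30000. The requested opTime carries term 92, seemingly left over from an earlier incarnation of this config server replica set, while the current set is only at term 1; opTimes compare by term first, so waitUntilOpTime can never be satisfied and the command burns its full 30 s budget before failing with MaxTimeMSExpired (the 30030ms slow-op entry above). A rough driver-level approximation of that query shape, as a sketch only (PyMongo assumed installed, host taken from this log; afterOpTime is internal to cluster components and not exposed by drivers, so a plain majority read stands in for it):

    # Sketch: majority read with a 30 s server-side limit, mirroring the
    # logged find on admin.system.keys. Host/filter values come from the log.
    from bson.timestamp import Timestamp
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    keys = client.admin.get_collection("system.keys",
                                       read_concern=ReadConcern("majority"))
    try:
        docs = list(keys.find({"purpose": "HMAC",
                               "expiresAt": {"$gt": Timestamp(1579858365, 0)}})
                        .sort("expiresAt", 1)
                        .max_time_ms(30000))        # maxTimeMS: 30000
    except ExecutionTimeout:
        # The server logs this as errName:MaxTimeMSExpired errCode:50.
        print("operation exceeded time limit")

Second, requests 1044 and 1045 in the entries that follow below are steady-state replication on this secondary: the oplog fetcher issues getMore on a cursor over the sync source's local.oplog.rs with maxTimeMS: 5000 and lastKnownCommittedOpTime, while a reporter pushes replSetUpdatePosition upstream. The closest external analogue of that fetch is a tailable awaitData cursor on the oplog; again a sketch, assuming PyMongo and direct access to the sync source named in the log:

    # Sketch: tail the oplog the way the fetcher's getMore does, blocking
    # server-side up to 5 s for new entries instead of polling.
    import pymongo
    from pymongo import MongoClient
    from pymongo.cursor import CursorType

    source = MongoClient("mongodb://cmodb804.togewa.com:27019",
                         directConnection=True)
    oplog = source.local["oplog.rs"]

    last = oplog.find().sort("$natural", pymongo.DESCENDING).limit(1).next()
    tail = oplog.find({"ts": {"$gt": last["ts"]}},
                      cursor_type=CursorType.TAILABLE_AWAIT)
    tail.max_await_time_ms(5000)  # counterpart of maxTimeMS: 5000 on getMore
    for op in tail:
        print(op["ts"], op["op"], op.get("ns"))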
2019-09-04T06:32:43.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:43.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:43.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:43.404+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36622 #353 (88 connections now open)
2019-09-04T06:32:43.404+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:32:43.404+0000 D2 COMMAND [conn353] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:32:43.404+0000 I NETWORK [conn353] received client metadata from 10.108.2.45:36622 conn353: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:32:43.404+0000 I COMMAND [conn353] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:32:43.409+0000 D2 COMMAND [conn353] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 83D99C38C0B1A9A64B0855E948984423AC47D765), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:32:43.409+0000 D1 REPL [conn353] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578758, 2), t: 1 }
2019-09-04T06:32:43.409+0000 D3 REPL [conn353] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.419+0000
2019-09-04T06:32:43.442+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:43.542+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:43.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:43.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:43.565+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:43.565+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:43.565+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot
Timestamp(1567578758, 2) 2019-09-04T06:32:43.565+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15430 2019-09-04T06:32:43.565+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:43.565+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:43.565+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15430 2019-09-04T06:32:43.566+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15433 2019-09-04T06:32:43.566+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15433 2019-09-04T06:32:43.566+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578758, 2), t: 1 }({ ts: Timestamp(1567578758, 2), t: 1 }) 2019-09-04T06:32:43.566+0000 D2 ASIO [RS] Request 1036 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpApplied: { ts: Timestamp(1567578758, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } 2019-09-04T06:32:43.566+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpApplied: { ts: Timestamp(1567578758, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:43.566+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:43.566+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:43.566+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:53.711+0000 2019-09-04T06:32:43.566+0000 D4 ELECTION 
[replication-0] Scheduling election timeout callback at 2019-09-04T06:32:54.068+0000 2019-09-04T06:32:43.566+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:43.566+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:12.839+0000 2019-09-04T06:32:43.566+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1044 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:53.566+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578758, 2), t: 1 } } 2019-09-04T06:32:43.567+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.566+0000 2019-09-04T06:32:43.568+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:43.568+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:43.568+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1045 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:13.568+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:43.568+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.566+0000 2019-09-04T06:32:43.568+0000 D2 ASIO [RS] Request 1045 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), 
t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } 2019-09-04T06:32:43.568+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:43.568+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:43.568+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:08.566+0000 2019-09-04T06:32:43.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.642+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:43.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.742+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:32:43.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.842+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:43.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:43.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:43.942+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:43.979+0000 D2 COMMAND [conn322] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 70ADEB07C63950ADB1CBAACD1EFFE951853A061C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:43.979+0000 D1 REPL [conn322] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578758, 2), t: 1 } 2019-09-04T06:32:43.979+0000 D3 REPL [conn322] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.989+0000 2019-09-04T06:32:44.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:44.042+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:44.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:44.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:44.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:44.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:44.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:44.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:44.143+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:44.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:44.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:44.181+0000 
D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 73945534685194E864A6767A09FE9E0239F31DD5), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:44.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:32:44.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 73945534685194E864A6767A09FE9E0239F31DD5), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:44.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 73945534685194E864A6767A09FE9E0239F31DD5), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:44.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562) }
2019-09-04T06:32:44.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 73945534685194E864A6767A09FE9E0239F31DD5), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:44.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:32:44.243+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:44.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.324+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.343+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:44.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.443+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:44.543+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:44.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.565+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:44.565+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:44.565+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578758, 2)
2019-09-04T06:32:44.565+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15464
2019-09-04T06:32:44.565+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:44.565+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:44.565+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15464
2019-09-04T06:32:44.566+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15467
2019-09-04T06:32:44.566+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15467
2019-09-04T06:32:44.566+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578758, 2), t: 1 }({ ts: Timestamp(1567578758, 2), t: 1 })
2019-09-04T06:32:44.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.643+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:44.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.743+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:44.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.825+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:44.838+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:32:43.063+0000
2019-09-04T06:32:44.838+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:32:44.232+0000
2019-09-04T06:32:44.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:44.838+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:32:43.063+0000
2019-09-04T06:32:44.838+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:32:53.063+0000
2019-09-04T06:32:44.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:14.838+0000
2019-09-04T06:32:44.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1046) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:44.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1046 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:54.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:44.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:14.838+0000
2019-09-04T06:32:44.838+0000 D2 ASIO [Replication] Request 1046 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) }
2019-09-04T06:32:44.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:44.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:44.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1046) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) }
2019-09-04T06:32:44.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:32:44.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:46.838Z
2019-09-04T06:32:44.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:14.838+0000
2019-09-04T06:32:44.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:44.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1047) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:44.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1047 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:54.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:44.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:14.838+0000
2019-09-04T06:32:44.839+0000 D2 ASIO [Replication] Request 1047 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) }
2019-09-04T06:32:44.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:32:44.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:44.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1047) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578758, 2) }
2019-09-04T06:32:44.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:32:44.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:32:54.068+0000
2019-09-04T06:32:44.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:32:55.207+0000
2019-09-04T06:32:44.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:32:44.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:46.839Z
2019-09-04T06:32:44.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:44.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:14.839+0000
2019-09-04T06:32:44.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:14.839+0000
2019-09-04T06:32:44.843+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:44.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:44.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:44.943+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:45.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:45.044+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:45.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:45.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:45.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 73945534685194E864A6767A09FE9E0239F31DD5), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:45.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:32:45.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 73945534685194E864A6767A09FE9E0239F31DD5), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:45.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 73945534685194E864A6767A09FE9E0239F31DD5), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:45.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), opTime: { ts: Timestamp(1567578758, 2), t: 1 }, wallTime: new Date(1567578758562) }
2019-09-04T06:32:45.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 73945534685194E864A6767A09FE9E0239F31DD5), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:45.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:45.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:45.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:45.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:45.144+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:45.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:45.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:45.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:45.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:45.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:45.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:45.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:45.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:45.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:45.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:45.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:45.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:32:45.244+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:45.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:45.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:45.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:45.325+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:45.344+0000 D2 ASIO [RS] Request 1044 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578765, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578765328), o: { $v: 1, $set: { ping: new Date(1567578765325), up: 2665 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpApplied: { ts: Timestamp(1567578765, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 1) }
2019-09-04T06:32:45.344+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578765, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578765328), o: { $v: 1, $set: { ping: new Date(1567578765325), up: 2665 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpApplied: { ts: Timestamp(1567578765, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:45.344+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:45.344+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578765, 1) and ending at ts: Timestamp(1567578765, 1)
2019-09-04T06:32:45.344+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:55.207+0000
2019-09-04T06:32:45.344+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:56.690+0000
2019-09-04T06:32:45.344+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:45.344+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:14.839+0000
2019-09-04T06:32:45.344+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578765, 1), t: 1 }
2019-09-04T06:32:45.344+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:45.344+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:45.344+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578758, 2)
2019-09-04T06:32:45.344+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:45.344+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15494
2019-09-04T06:32:45.344+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:45.344+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:45.344+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15494
2019-09-04T06:32:45.344+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:45.344+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:32:45.344+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:45.344+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578758, 2)
2019-09-04T06:32:45.344+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15497
2019-09-04T06:32:45.344+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:45.344+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578765, 1) }
2019-09-04T06:32:45.344+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:45.344+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15497
2019-09-04T06:32:45.344+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15468
2019-09-04T06:32:45.344+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15468
2019-09-04T06:32:45.344+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15500
2019-09-04T06:32:45.344+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15500
2019-09-04T06:32:45.344+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:45.344+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 15502
2019-09-04T06:32:45.344+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578765, 1)
2019-09-04T06:32:45.344+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578765, 1)
2019-09-04T06:32:45.344+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 15502
2019-09-04T06:32:45.344+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:45.344+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:45.344+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15501
2019-09-04T06:32:45.344+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15501
2019-09-04T06:32:45.344+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15504
2019-09-04T06:32:45.344+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15504
2019-09-04T06:32:45.344+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578765, 1), t: 1 }({ ts: Timestamp(1567578765, 1), t: 1 })
2019-09-04T06:32:45.345+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578765, 1)
2019-09-04T06:32:45.345+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15505
2019-09-04T06:32:45.345+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578765, 1) } } ] } sort: {} projection: {}
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578765, 1) Sort: {} Proj: {} =============================
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578765, 1) || First: notFirst: full path: ts
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578765, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578765, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578765, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578765, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:45.345+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15505
2019-09-04T06:32:45.345+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:45.345+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:45.345+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578765, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578765328), o: { $v: 1, $set: { ping: new Date(1567578765325), up: 2665 } } }, oplog application mode: Secondary
2019-09-04T06:32:45.345+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578765, 1)
2019-09-04T06:32:45.345+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 15507
2019-09-04T06:32:45.345+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:32:45.345+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:45.345+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 15507
2019-09-04T06:32:45.345+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:45.345+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578765, 1), t: 1 }({ ts: Timestamp(1567578765, 1), t: 1 })
2019-09-04T06:32:45.345+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578765, 1)
2019-09-04T06:32:45.345+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15506
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:45.345+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:32:45.345+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:45.345+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15506
2019-09-04T06:32:45.345+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578765, 1)
2019-09-04T06:32:45.345+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:45.345+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15510
2019-09-04T06:32:45.345+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:45.345+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1048 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:15.345+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:45.345+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:15.345+0000
2019-09-04T06:32:45.345+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15510
2019-09-04T06:32:45.346+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578765, 1), t: 1 }({ ts: Timestamp(1567578765, 1), t: 1 })
2019-09-04T06:32:45.346+0000 D2 ASIO [RS] Request 1048 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 1) }
2019-09-04T06:32:45.346+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:45.346+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:45.346+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:15.346+0000
2019-09-04T06:32:45.346+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578765, 1), t: 1 }
2019-09-04T06:32:45.346+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1049 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:55.346+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578758, 2), t: 1 } }
2019-09-04T06:32:45.346+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:15.346+0000
2019-09-04T06:32:45.346+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:32:45.346+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:45.346+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:45.346+0000 D2 COMMAND [conn61] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578758, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578759, 1), signature: { hash: BinData(0, DA429FF5DAC86212BA02883E42383BC314B3672F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578758, 2), t: 1 } }, $db: "config" }
2019-09-04T06:32:45.346+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1050 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:15.346+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, durableWallTime: new Date(1567578758562), appliedOpTime: { ts: Timestamp(1567578758, 2), t: 1 }, appliedWallTime: new Date(1567578758562), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:45.346+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578758, 2), t: 1 } } }
2019-09-04T06:32:45.346+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:32:45.346+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:15.346+0000
2019-09-04T06:32:45.346+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578758, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578759, 1), signature: { hash: BinData(0, DA429FF5DAC86212BA02883E42383BC314B3672F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578758, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578758, 2)
2019-09-04T06:32:45.347+0000 D2 QUERY [conn61] Using idhack: query: { _id: "config.system.sessions" } sort: {} projection: {} limit: 1
2019-09-04T06:32:45.347+0000 D3 STORAGE [conn61] WT begin_transaction for snapshot id 15513
2019-09-04T06:32:45.347+0000 D3 STORAGE [conn61] WT rollback_transaction for snapshot id 15513
2019-09-04T06:32:45.347+0000 I COMMAND [conn61] command config.collections command: find { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578758, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578759, 1), signature: { hash: BinData(0, DA429FF5DAC86212BA02883E42383BC314B3672F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578758, 2), t: 1 } }, $db: "config" } planSummary: IDHACK keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:702 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:32:45.347+0000 D2 ASIO [RS] Request 1050 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 1) }
2019-09-04T06:32:45.347+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578758, 2), t: 1 }, lastCommittedWall: new Date(1567578758562), lastOpVisible: { ts: Timestamp(1567578758, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578758, 2), $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:45.347+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:45.347+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:15.346+0000
2019-09-04T06:32:45.347+0000 D2 ASIO [RS] Request 1049 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpVisible: { ts: Timestamp(1567578765, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpApplied: { ts: Timestamp(1567578765, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578765, 1), $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 1) }
2019-09-04T06:32:45.347+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpVisible: { ts: Timestamp(1567578765, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpApplied: { ts: Timestamp(1567578765, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578765, 1), $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:45.347+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:45.347+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:32:45.347+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.347+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.347+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578760, 1)
2019-09-04T06:32:45.347+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:56.690+0000
2019-09-04T06:32:45.347+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:55.615+0000
2019-09-04T06:32:45.347+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:45.347+0000 D3 REPL [conn342] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.347+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1051 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:55.347+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578765, 1), t: 1 } }
2019-09-04T06:32:45.347+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:15.346+0000
2019-09-04T06:32:45.347+0000 D3 REPL [conn342] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:52.054+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn318] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn318] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:03.478+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn323] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn323] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.688+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn285] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn285] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.640+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn324] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn324] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.622+0000
2019-09-04T06:32:45.347+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:14.839+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn339] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn339] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.661+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn351] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn351] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.962+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn304] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn304] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.637+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn340] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn340] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.662+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn320] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn320] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.753+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn344] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn344] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.677+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn338] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn338] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.680+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn325] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn325] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.644+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn328] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn328] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.625+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn350] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn350] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.763+0000
2019-09-04T06:32:45.348+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578765, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a8d02d1a496712d7252'), operName: "", parentOperId: "5d6f5a8d02d1a496712d724f" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 383EA545B58B4E2FA14FF5806CCA8B42F82AF743), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578765, 1), t: 1 } }, $db: "config" }
2019-09-04T06:32:45.348+0000 D3 REPL [conn347] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn347] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:58.760+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn305] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn305] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.638+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn326] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn326] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:49.840+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn336] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn336] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.130+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn346] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn346] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:57.574+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn345] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn345] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:56.312+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn316] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn316] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.623+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn349] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn349] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.261+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn348] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn348] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:59.750+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn341] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn341] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.094+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn322] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn322] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.989+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn352] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn352] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.291+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn353] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn353] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.419+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn306] Got notified of new snapshot: { ts: Timestamp(1567578765, 1), t: 1 }, 2019-09-04T06:32:45.328+0000
2019-09-04T06:32:45.348+0000 D3 REPL [conn306] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:46.639+0000
2019-09-04T06:32:45.348+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f5a8d02d1a496712d724f|5d6f5a8d02d1a496712d7252 2019-09-04T06:32:45.348+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578765, 1), t: 1 } } } 2019-09-04T06:32:45.348+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:32:45.348+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578765, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a8d02d1a496712d7252'), operName: "", parentOperId: "5d6f5a8d02d1a496712d724f" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 383EA545B58B4E2FA14FF5806CCA8B42F82AF743), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578765, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578765, 1) 2019-09-04T06:32:45.348+0000 D2 QUERY [conn21] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:32:45.349+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578765, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a8d02d1a496712d7252'), operName: "", parentOperId: "5d6f5a8d02d1a496712d724f" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 383EA545B58B4E2FA14FF5806CCA8B42F82AF743), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578765, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:32:45.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:45.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:45.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:45.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:45.444+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578765, 1) 2019-09-04T06:32:45.444+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:45.544+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:45.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:45.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:45.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:45.569+0000 I COMMAND [conn52] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:45.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:45.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:45.644+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:45.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:45.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:45.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:45.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:45.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:45.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:45.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:45.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:45.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:45.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:45.744+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:45.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:45.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:45.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:45.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:45.845+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:45.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:45.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:45.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:45.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:45.945+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:46.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:46.045+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:46.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
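[Editor's note] The isMaster traffic above, roughly one request every 500 ms per monitoring connection (e.g. conn22 at 45.358, 45.858, 46.358), is routine topology monitoring by other cluster nodes; reslen:907 and locks:{} show these are cheap, lock-free responses. The same command can be issued by hand, sketched here with pymongo (isMaster is the pre-4.4 spelling of what is now hello):

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")

    # Equivalent of the logged: run command admin.$cmd { isMaster: 1 }
    reply = client.admin.command("isMaster")
    print(reply["ismaster"], reply.get("secondary"), reply.get("setName"))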
2019-09-04T06:32:46.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.093+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.093+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.145+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:46.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 383EA545B58B4E2FA14FF5806CCA8B42F82AF743), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:46.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:46.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 383EA545B58B4E2FA14FF5806CCA8B42F82AF743), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:46.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 383EA545B58B4E2FA14FF5806CCA8B42F82AF743), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:46.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), opTime: { ts: 
Timestamp(1567578765, 1), t: 1 }, wallTime: new Date(1567578765328) } 2019-09-04T06:32:46.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 383EA545B58B4E2FA14FF5806CCA8B42F82AF743), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:46.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:32:46.245+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:46.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.324+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.325+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.344+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:46.344+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:46.344+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578765, 1) 2019-09-04T06:32:46.344+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15543 2019-09-04T06:32:46.344+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:46.344+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:46.344+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15543 2019-09-04T06:32:46.345+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:46.346+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15546 2019-09-04T06:32:46.346+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15546 2019-09-04T06:32:46.346+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578765, 1), t: 1 }({ ts: Timestamp(1567578765, 1), t: 1 }) 2019-09-04T06:32:46.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.400+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34332 #354 (89 connections now open) 2019-09-04T06:32:46.400+0000 D3 EXECUTOR 
[listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:46.400+0000 D2 COMMAND [conn354] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:46.400+0000 I NETWORK [conn354] received client metadata from 10.108.2.57:34332 conn354: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:46.400+0000 I COMMAND [conn354] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:46.403+0000 D2 COMMAND [conn354] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 83D99C38C0B1A9A64B0855E948984423AC47D765), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:46.403+0000 D1 REPL [conn354] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578765, 1), t: 1 } 2019-09-04T06:32:46.403+0000 D3 REPL [conn354] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.413+0000 2019-09-04T06:32:46.446+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:46.546+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:46.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.593+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.593+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.615+0000 I NETWORK [listener] connection accepted from 10.108.2.46:41076 #355 (90 connections now open) 2019-09-04T06:32:46.615+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:46.615+0000 D2 COMMAND [conn355] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" 
}, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:46.615+0000 I NETWORK [conn355] received client metadata from 10.108.2.46:41076 conn355: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:46.615+0000 I COMMAND [conn355] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:46.623+0000 I COMMAND [conn324] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578733, 1), signature: { hash: BinData(0, 45E6BA95D4A46C041D1AB6A238AC46A21C109958), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:46.623+0000 D1 - [conn324] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:46.623+0000 W - [conn324] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:46.625+0000 I COMMAND [conn316] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578729, 1), signature: { hash: BinData(0, 0C0420C4E529BE8AB49DB9C1B2EBD0DB69E0B59C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:46.625+0000 D1 - [conn316] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:46.625+0000 W - [conn316] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:46.626+0000 I COMMAND [conn328] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 58E67B65554B3CCD2C041972A16DBE3FCED4CE23), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:32:46.626+0000 D1 - [conn328] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:46.626+0000 W - [conn328] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:46.632+0000 I NETWORK [listener] connection accepted from 10.108.2.53:50790 #356 (91 connections now open) 2019-09-04T06:32:46.632+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:46.632+0000 D2 COMMAND [conn356] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:46.632+0000 I NETWORK [conn356] received client metadata from 10.108.2.53:50790 conn356: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:46.632+0000 I COMMAND [conn356] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:46.638+0000 I COMMAND [conn304] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 58E67B65554B3CCD2C041972A16DBE3FCED4CE23), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:46.638+0000 D1 - [conn304] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:46.638+0000 W - [conn304] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:46.638+0000 I COMMAND [conn305] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 2AE057265C24734D9B4E4568BCA1F32820325A81), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:46.638+0000 D1 - [conn305] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:46.639+0000 W - [conn305] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:46.640+0000 I - [conn324] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:46.640+0000 D1 COMMAND [conn324] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578733, 1), signature: { hash: BinData(0, 45E6BA95D4A46C041D1AB6A238AC46A21C109958), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:46.640+0000 D1 - [conn324] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:46.640+0000 W - [conn324] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:46.640+0000 I COMMAND [conn306] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578730, 1), signature: { hash: BinData(0, B6D82C58D1F28CF1D765C9B40350BC7D309BEF8B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:46.640+0000 D1 - [conn306] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:46.641+0000 W - [conn306] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:46.641+0000 I COMMAND [conn285] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 1), signature: { hash: BinData(0, D40B466E4FDFBEAA54D0D3B68CF775A1034DDCB7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:46.641+0000 D1 - [conn285] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:46.641+0000 W - [conn285] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:46.646+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:46.660+0000 I - [conn305] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"}
,{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : 
"/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 
2019-09-04T06:32:46.660+0000 D1 COMMAND [conn305] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 2AE057265C24734D9B4E4568BCA1F32820325A81), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:46.660+0000 D1 - [conn305] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:46.660+0000 W - [conn305] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:46.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.676+0000 I - [conn285] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6St
atusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, 
"buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:32:46.676+0000 D1 COMMAND [conn285] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 1), signature: { hash: BinData(0, D40B466E4FDFBEAA54D0D3B68CF775A1034DDCB7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:46.676+0000 D1 - [conn285] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:32:46.676+0000 W - [conn285] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:46.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:46.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:46.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:46.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:46.700+0000 I - [conn324] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:46.700+0000 W COMMAND [conn324] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:46.700+0000 I COMMAND [conn324] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
2019-09-04T06:32:46.700+0000 D2 NETWORK [conn324] Session from 10.108.2.58:52206 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:32:46.700+0000 I NETWORK [conn324] end connection 10.108.2.58:52206 (90 connections now open)
2019-09-04T06:32:46.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:46.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:46.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:46.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:46.730+0000 I - [conn304] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE ----- [duplicate backtrace and processInfo omitted; identical to the waitForReadConcern backtrace printed above] ----- END BACKTRACE -----
2019-09-04T06:32:46.730+0000 D1 COMMAND [conn304] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 58E67B65554B3CCD2C041972A16DBE3FCED4CE23), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:46.730+0000 D1 - [conn304] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:32:46.730+0000 W - [conn304] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:46.746+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:46.749+0000 I - [conn316] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE ----- [duplicate backtrace and processInfo omitted; identical to the waitForReadConcern backtrace printed above] ----- END BACKTRACE -----
2019-09-04T06:32:46.749+0000 D1 COMMAND [conn316] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578729, 1), signature: { hash: BinData(0, 0C0420C4E529BE8AB49DB9C1B2EBD0DB69E0B59C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:46.749+0000 D1 - [conn316] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:32:46.749+0000 W - [conn316] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:46.769+0000 I - [conn305] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE ----- [duplicate backtrace and processInfo omitted; identical to the conn324 lock-acquisition backtrace printed above] ----- END BACKTRACE -----
2019-09-04T06:32:46.769+0000 W COMMAND [conn305] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:32:46.769+0000 I COMMAND [conn305] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 2AE057265C24734D9B4E4568BCA1F32820325A81), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms
2019-09-04T06:32:46.769+0000 D2 NETWORK [conn305] Session from 10.108.2.53:50750 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:32:46.769+0000 I NETWORK [conn305] end connection 10.108.2.53:50750 (89 connections now open)
2019-09-04T06:32:46.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:46.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:46.792+0000 I - [conn306] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE ----- [duplicate backtrace and processInfo omitted; identical to the waitForReadConcern backtrace printed above] ----- END BACKTRACE -----
2019-09-04T06:32:46.792+0000 D1 COMMAND [conn306] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578730, 1), signature: { hash: BinData(0, B6D82C58D1F28CF1D765C9B40350BC7D309BEF8B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:46.792+0000 D1 - [conn306] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:32:46.792+0000 W - [conn306] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:46.813+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38766 #357 (90 connections now open)
2019-09-04T06:32:46.813+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:32:46.813+0000 D2 COMMAND [conn357] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:32:46.813+0000 I NETWORK [conn357] received client metadata from 10.108.2.44:38766 conn357: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:32:46.813+0000 I COMMAND [conn357] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:32:46.813+0000 D2 COMMAND [conn357] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 83D99C38C0B1A9A64B0855E948984423AC47D765), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:32:46.813+0000 D1 REPL [conn357] waitUntilOpTime: waiting for optime:{ ts: 
Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578765, 1), t: 1 } 2019-09-04T06:32:46.813+0000 D3 REPL [conn357] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.823+0000 2019-09-04T06:32:46.815+0000 I - [conn285] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
2019-09-04T06:32:46.815+0000 W COMMAND [conn285] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:32:46.815+0000 I COMMAND [conn285] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578728, 1), signature: { hash: BinData(0, D40B466E4FDFBEAA54D0D3B68CF775A1034DDCB7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30046ms
2019-09-04T06:32:46.816+0000 D2 NETWORK [conn285] Session from 10.108.2.45:36570 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:32:46.816+0000 I NETWORK [conn285] end connection 10.108.2.45:36570 (89 connections now open)
2019-09-04T06:32:46.818+0000 I NETWORK [listener] connection accepted from 10.108.2.51:59236 #358 (90 connections now open)
2019-09-04T06:32:46.819+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:32:46.819+0000 D2 COMMAND [conn358] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:32:46.819+0000 I NETWORK [conn358] received client metadata from 10.108.2.51:59236 conn358: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:32:46.819+0000 I COMMAND [conn358] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:32:46.819+0000 I - [conn328] ----- BEGIN BACKTRACE -----
[ backtrace identical to the preceding waitForReadConcern backtrace (same frames, processInfo and somap); omitted ]
----- END BACKTRACE -----
2019-09-04T06:32:46.819+0000 D1 COMMAND [conn328] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 58E67B65554B3CCD2C041972A16DBE3FCED4CE23), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:46.819+0000 D1 - [conn328] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:32:46.820+0000 W - [conn328] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:46.823+0000 D2 COMMAND [conn358] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode:
"nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578759, 1), signature: { hash: BinData(0, 8EE15F3E3BC68F6992D473DC2636D9C138513069), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:46.823+0000 D1 REPL [conn358] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578765, 1), t: 1 } 2019-09-04T06:32:46.823+0000 D3 REPL [conn358] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.833+0000 2019-09-04T06:32:46.824+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.824+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.829+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:46.829+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:46.830+0000 D2 COMMAND [conn327] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, B25952C3221E28D91A8CF134FCDA55710368078E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:32:46.831+0000 D1 REPL [conn327] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578765, 1), t: 1 } 2019-09-04T06:32:46.831+0000 D3 REPL [conn327] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.840+0000 2019-09-04T06:32:46.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:46.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1052) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:46.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1052 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:56.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:46.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:14.839+0000 2019-09-04T06:32:46.838+0000 D2 ASIO [Replication] Request 1052 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), opTime: { ts: Timestamp(1567578765, 1), t: 1 }, wallTime: new Date(1567578765328), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpVisible: { ts: Timestamp(1567578765, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), 
electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578765, 1), $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 1) } 2019-09-04T06:32:46.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), opTime: { ts: Timestamp(1567578765, 1), t: 1 }, wallTime: new Date(1567578765328), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpVisible: { ts: Timestamp(1567578765, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578765, 1), $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:46.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:46.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1052) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), opTime: { ts: Timestamp(1567578765, 1), t: 1 }, wallTime: new Date(1567578765328), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpVisible: { ts: Timestamp(1567578765, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578765, 1), $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 1) } 2019-09-04T06:32:46.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:46.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:48.838Z 2019-09-04T06:32:46.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:14.839+0000 2019-09-04T06:32:46.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:46.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1053) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:46.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1053 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:56.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:46.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the 
earliest retirement date is 2019-09-04T06:33:14.839+0000 2019-09-04T06:32:46.839+0000 D2 ASIO [Replication] Request 1053 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), opTime: { ts: Timestamp(1567578765, 1), t: 1 }, wallTime: new Date(1567578765328), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpVisible: { ts: Timestamp(1567578765, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578765, 1), $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 1) } 2019-09-04T06:32:46.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), opTime: { ts: Timestamp(1567578765, 1), t: 1 }, wallTime: new Date(1567578765328), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpVisible: { ts: Timestamp(1567578765, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578765, 1), $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:46.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:46.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1053) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), opTime: { ts: Timestamp(1567578765, 1), t: 1 }, wallTime: new Date(1567578765328), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpVisible: { ts: Timestamp(1567578765, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578765, 1), $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 1) } 2019-09-04T06:32:46.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:46.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:32:55.615+0000 2019-09-04T06:32:46.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback 
at 2019-09-04T06:32:58.079+0000
2019-09-04T06:32:46.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:32:46.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:48.839Z
2019-09-04T06:32:46.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:46.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:16.839+0000
2019-09-04T06:32:46.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:16.839+0000
2019-09-04T06:32:46.843+0000 I - [conn316] ----- BEGIN BACKTRACE -----
[ backtrace identical to the conn285 lock-acquisition backtrace above (same frames, processInfo and somap); omitted ]
----- END BACKTRACE -----
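The heartbeat exchange above shows the config set healthy at term 1 (primary cmodb802.togewa.com:27019, replicaSetId ObjectId('5d5e459bac9313827bdd88e9')), while every timed-out find is waiting for afterOpTime { ts: Timestamp(1566459168, 1), t: 92 }. Replication opTimes compare by term before timestamp, so a wait for term 92 cannot be satisfied by a set running at term 1, which would explain why each wait runs the full 30000 ms and ends in MaxTimeMSExpired. A hedged pymongo sketch to check the live opTimes (connection string assembled from the host names in this log; assumes the deployment accepts an unauthenticated connection):

from pymongo import MongoClient

client = MongoClient(
    "mongodb://cmodb802.togewa.com:27019,cmodb803.togewa.com:27019,"
    "cmodb804.togewa.com:27019/?replicaSet=configrs")

status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    # Each member reports its applied opTime as { ts: Timestamp, t: term };
    # compare the "t" values here against the t: 92 the clients keep asking for.
    print(member["name"], member["stateStr"], member["optime"])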
2019-09-04T06:32:46.843+0000 W COMMAND [conn316] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:32:46.843+0000 I COMMAND [conn316] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578729, 1), signature: { hash: BinData(0, 0C0420C4E529BE8AB49DB9C1B2EBD0DB69E0B59C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30136ms
2019-09-04T06:32:46.843+0000 D2 NETWORK [conn316] Session from 10.108.2.50:50170 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:32:46.843+0000 I NETWORK [conn316] end connection 10.108.2.50:50170 (89 connections now open)
2019-09-04T06:32:46.846+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:46.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:46.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:46.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:46.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:46.863+0000 I - [conn328] ----- BEGIN BACKTRACE -----
[ backtrace identical to the conn285 lock-acquisition backtrace above (same frames, processInfo and somap); omitted ]
----- END BACKTRACE -----
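All of these expiring operations are the same internal query: cluster members refreshing the HMAC signing keys in admin.system.keys under a 30000 ms maxTimeMS budget. The query shape can be reproduced from an ordinary client; a sketch assuming pymongo is available (the fields $replData and $configServerState and the afterOpTime read concern appear to be internal driver metadata and are deliberately omitted):

from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout
from bson.timestamp import Timestamp

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")  # host from this log
keys = client.admin["system.keys"]

# Same filter, sort and server-side time limit as the logged find.
query = {"purpose": "HMAC", "expiresAt": {"$gt": Timestamp(1579858365, 0)}}
try:
    for doc in keys.find(query).sort("expiresAt", 1).max_time_ms(30000):
        print(doc["_id"], doc["expiresAt"])
except ExecutionTimeout:
    # pymongo surfaces the server's MaxTimeMSExpired (errCode:50) as ExecutionTimeout.
    print("operation exceeded time limit")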
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:46.863+0000 W COMMAND [conn328] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:46.863+0000 I COMMAND [conn328] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 58E67B65554B3CCD2C041972A16DBE3FCED4CE23), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30204ms 2019-09-04T06:32:46.863+0000 D2 NETWORK [conn328] Session from 10.108.2.46:41056 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:46.863+0000 I NETWORK [conn328] end connection 10.108.2.46:41056 (88 connections now open) 2019-09-04T06:32:46.884+0000 I - [conn306] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d 
2019-09-04T06:32:46.884+0000 W COMMAND [conn306] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:46.884+0000 I COMMAND [conn306] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578730, 1), signature: { hash: BinData(0, B6D82C58D1F28CF1D765C9B40350BC7D309BEF8B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30163ms 2019-09-04T06:32:46.884+0000 D2 NETWORK [conn306] Session from 10.108.2.49:53420 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:46.884+0000 I NETWORK [conn306] end connection 10.108.2.49:53420 (87 connections now open) 2019-09-04T06:32:46.899+0000 I - [conn304] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d 
2019-09-04T06:32:46.900+0000 W COMMAND [conn304] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:46.900+0000 I COMMAND [conn304] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 58E67B65554B3CCD2C041972A16DBE3FCED4CE23), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30103ms 2019-09-04T06:32:46.900+0000 D2 NETWORK [conn304] Session from 10.108.2.64:46668 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:46.900+0000 I NETWORK [conn304] end connection 10.108.2.64:46668 (86 connections now open) 2019-09-04T06:32:46.946+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:47.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:47.046+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:47.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 2), signature: { hash: BinData(0, 383EA545B58B4E2FA14FF5806CCA8B42F82AF743), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:47.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:47.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 2), signature: { hash: BinData(0, 383EA545B58B4E2FA14FF5806CCA8B42F82AF743), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:47.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 2), signature: { hash: BinData(0, 383EA545B58B4E2FA14FF5806CCA8B42F82AF743), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:47.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), opTime: { ts: Timestamp(1567578765, 1), t: 1 }, wallTime: new Date(1567578765328) } 2019-09-04T06:32:47.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 2), signature: { hash: BinData(0, 383EA545B58B4E2FA14FF5806CCA8B42F82AF743), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.069+0000 I COMMAND 
[conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.147+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:47.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:47.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:32:47.247+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:47.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.329+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.329+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.345+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:47.345+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:47.345+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578765, 1) 2019-09-04T06:32:47.345+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15585 2019-09-04T06:32:47.345+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:47.345+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:47.345+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15585 2019-09-04T06:32:47.346+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 
15588 2019-09-04T06:32:47.346+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15588 2019-09-04T06:32:47.346+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578765, 1), t: 1 }({ ts: Timestamp(1567578765, 1), t: 1 }) 2019-09-04T06:32:47.347+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:47.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.447+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:47.547+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:47.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.596+0000 D2 ASIO [RS] Request 1051 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578767, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578767573), o: { $v: 1, $set: { ping: new Date(1567578767572) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpVisible: { ts: Timestamp(1567578765, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpApplied: { ts: Timestamp(1567578767, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578765, 1), $clusterTime: { clusterTime: Timestamp(1567578767, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 1) } 2019-09-04T06:32:47.596+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578767, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578767573), o: { $v: 1, $set: { ping: new Date(1567578767572) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpVisible: { ts: Timestamp(1567578765, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpApplied: { ts: Timestamp(1567578767, 1), t: 1 }, rbid: 1, primaryIndex: 0, 
syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578765, 1), $clusterTime: { clusterTime: Timestamp(1567578767, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:47.596+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:47.596+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578767, 1) and ending at ts: Timestamp(1567578767, 1) 2019-09-04T06:32:47.596+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:58.079+0000 2019-09-04T06:32:47.596+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:58.678+0000 2019-09-04T06:32:47.596+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:47.596+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:16.839+0000 2019-09-04T06:32:47.596+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578767, 1), t: 1 } 2019-09-04T06:32:47.596+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:47.596+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:47.596+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578765, 1) 2019-09-04T06:32:47.596+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15594 2019-09-04T06:32:47.596+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:47.596+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:47.596+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15594 2019-09-04T06:32:47.596+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:47.596+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:32:47.596+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:47.596+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578765, 1) 2019-09-04T06:32:47.596+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578767, 1) } 2019-09-04T06:32:47.596+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15597 2019-09-04T06:32:47.596+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:47.596+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: 
false }, indexes: [], prefix: -1 } 2019-09-04T06:32:47.596+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15597 2019-09-04T06:32:47.596+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15589 2019-09-04T06:32:47.596+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15589 2019-09-04T06:32:47.596+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15600 2019-09-04T06:32:47.596+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15600 2019-09-04T06:32:47.596+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:47.596+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 15602 2019-09-04T06:32:47.596+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578767, 1) 2019-09-04T06:32:47.596+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578767, 1) 2019-09-04T06:32:47.596+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 15602 2019-09-04T06:32:47.596+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:47.596+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:32:47.596+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15601 2019-09-04T06:32:47.596+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15601 2019-09-04T06:32:47.596+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15604 2019-09-04T06:32:47.596+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15604 2019-09-04T06:32:47.596+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578767, 1), t: 1 }({ ts: Timestamp(1567578767, 1), t: 1 }) 2019-09-04T06:32:47.597+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578767, 1) 2019-09-04T06:32:47.597+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15605 2019-09-04T06:32:47.597+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578767, 1) } } ] } sort: {} projection: {} 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578767, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578767, 1) || First: notFirst: full path: ts 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
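
The $or filter being planned above, { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578767, 1) } } ] }, is the expanded form of a lexicographic "optime < (t: 1, ts: Timestamp(1567578767, 1))" test: terms compare first, and the timestamp only breaks ties within the same term. A minimal Python sketch of the same comparison (an illustration only, not MongoDB code; the tuple layout is an assumption):

# Model an optime as (term, (ts_seconds, ts_increment)). Python compares
# tuples lexicographically, which matches the $or expansion in the log:
#   t < 1, OR (t == 1 AND ts < Timestamp(1567578767, 1))
def optime_lt(a, b):
    return a < b

bound = (1, (1567578767, 1))
print(optime_lt((0, (1567578999, 7)), bound))  # True:  lower term wins outright
print(optime_lt((1, (1567578766, 9)), bound))  # True:  same term, older timestamp
print(optime_lt((1, (1567578767, 1)), bound))  # False: equal optime is not less
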
2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578767, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578767, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578767, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
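
Each BSON Timestamp(seconds, increment) in these records travels alongside a wall-clock new Date(milliseconds) value; the first Timestamp component is plain Unix seconds, so the two can be cross-checked directly. A short sketch using the values from the config.lockpings oplog entry fetched above:

from datetime import datetime, timezone

ts_secs = 1567578767     # from ts: Timestamp(1567578767, 1)
wall_ms = 1567578767573  # from wall: new Date(1567578767573)

print(datetime.fromtimestamp(ts_secs, tz=timezone.utc))        # 2019-09-04 06:32:47+00:00
print(datetime.fromtimestamp(wall_ms / 1000, tz=timezone.utc)) # 2019-09-04 06:32:47.573000+00:00
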
2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578767, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:47.597+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15605 2019-09-04T06:32:47.597+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:47.597+0000 D3 STORAGE [repl-writer-worker-1] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:47.597+0000 D3 REPL [repl-writer-worker-1] applying op: { ts: Timestamp(1567578767, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578767573), o: { $v: 1, $set: { ping: new Date(1567578767572) } } }, oplog application mode: Secondary 2019-09-04T06:32:47.597+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578767, 1) 2019-09-04T06:32:47.597+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 15607 2019-09-04T06:32:47.597+0000 D2 QUERY [repl-writer-worker-1] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" } 2019-09-04T06:32:47.597+0000 D4 WRITE [repl-writer-worker-1] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:32:47.597+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 15607 2019-09-04T06:32:47.597+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:47.597+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578767, 1), t: 1 }({ ts: Timestamp(1567578767, 1), t: 1 }) 2019-09-04T06:32:47.597+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578767, 1) 2019-09-04T06:32:47.597+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15606 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:47.597+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:47.597+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:47.597+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15606 2019-09-04T06:32:47.597+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578767, 1) 2019-09-04T06:32:47.597+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:47.597+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15610 2019-09-04T06:32:47.597+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15610 2019-09-04T06:32:47.597+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578767, 1), t: 1 }({ ts: Timestamp(1567578767, 1), t: 1 }) 2019-09-04T06:32:47.597+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578767, 1), t: 1 }, appliedWallTime: new Date(1567578767573), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpVisible: { ts: Timestamp(1567578765, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:47.597+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1054 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:17.597+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578767, 1), t: 1 }, appliedWallTime: new Date(1567578767573), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpVisible: { ts: Timestamp(1567578765, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:47.597+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:17.597+0000 2019-09-04T06:32:47.598+0000 D2 ASIO [RS] Request 1054 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpVisible: { ts: Timestamp(1567578765, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578765, 1), $clusterTime: { clusterTime: Timestamp(1567578767, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 1) } 2019-09-04T06:32:47.598+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578765, 1), t: 1 }, lastCommittedWall: new Date(1567578765328), lastOpVisible: { ts: Timestamp(1567578765, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578765, 1), $clusterTime: { clusterTime: Timestamp(1567578767, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:47.598+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:47.598+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:17.598+0000 2019-09-04T06:32:47.598+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578767, 1), t: 1 } 2019-09-04T06:32:47.598+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1055 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:57.598+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578765, 1), t: 1 } } 2019-09-04T06:32:47.598+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:17.598+0000 2019-09-04T06:32:47.605+0000 D2 ASIO [RS] Request 1055 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 1), t: 1 }, lastCommittedWall: new Date(1567578767573), lastOpVisible: { ts: Timestamp(1567578767, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578767, 1), t: 1 }, lastCommittedWall: new Date(1567578767573), lastOpApplied: { ts: Timestamp(1567578767, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 1), $clusterTime: { clusterTime: Timestamp(1567578767, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 1) } 2019-09-04T06:32:47.605+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 1), t: 1 }, lastCommittedWall: new Date(1567578767573), lastOpVisible: { ts: Timestamp(1567578767, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578767, 1), t: 1 }, lastCommittedWall: new 
Date(1567578767573), lastOpApplied: { ts: Timestamp(1567578767, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 1), $clusterTime: { clusterTime: Timestamp(1567578767, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:47.605+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:47.605+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:47.605+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:47.605+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.605+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.605+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:47.605+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578762, 1) 2019-09-04T06:32:47.605+0000 D3 REPL [conn357] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn357] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.823+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn323] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn323] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.688+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn322] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn322] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.989+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn344] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn344] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.677+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn326] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn326] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:49.840+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn320] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn320] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.753+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn342] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn342] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:52.054+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn336] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn336] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.130+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn349] Got notified of 
new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn349] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.261+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn341] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.605+0000 D3 REPL [conn341] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.094+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn352] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn352] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.291+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn327] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn327] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.840+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn358] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn358] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.833+0000 2019-09-04T06:32:47.606+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:58.678+0000 2019-09-04T06:32:47.606+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:58.580+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn354] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn354] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.413+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn351] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn351] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.962+0000 2019-09-04T06:32:47.606+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:47.606+0000 D3 REPL [conn340] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn340] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.662+0000 2019-09-04T06:32:47.606+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:16.839+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn348] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn348] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:59.750+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn346] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn346] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:57.574+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn325] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn325] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.644+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn339] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 
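
Everything in this stretch logged at D3 REPL by a [connNNN] thread is a client operation parked in waitUntilOpTime: the read arrived with a minimum optime or cluster time (typical of mongos reads against a config server), and each newly committed snapshot wakes the waiters so they can re-check their target optime and deadline. A read of roughly the following shape exercises the same wait path; the namespace and timestamp are illustrative, not taken from this log:

    // Sketch (mongo shell, 4.2): a read that must wait until this node's
    // majority snapshot reaches the given cluster time. Until then the
    // serving thread sits in waitUntilOpTime, like the conn* threads above.
    db.getSiblingDB("config").runCommand({
        find: "lockpings",
        filter: {},
        readConcern: { level: "majority", afterClusterTime: Timestamp(1567578767, 2) }
    })
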
2019-09-04T06:32:47.606+0000 D3 REPL [conn339] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.661+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn338] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn338] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.680+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn350] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn350] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.763+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn347] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn347] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:58.760+0000 2019-09-04T06:32:47.606+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578767, 1), t: 1 }, durableWallTime: new Date(1567578767573), appliedOpTime: { ts: Timestamp(1567578767, 1), t: 1 }, appliedWallTime: new Date(1567578767573), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 1), t: 1 }, lastCommittedWall: new Date(1567578767573), lastOpVisible: { ts: Timestamp(1567578767, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:47.606+0000 D3 REPL [conn345] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn345] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:56.312+0000 2019-09-04T06:32:47.606+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1056 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:17.606+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578767, 1), t: 1 }, durableWallTime: new Date(1567578767573), appliedOpTime: { ts: Timestamp(1567578767, 1), t: 1 }, appliedWallTime: new Date(1567578767573), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 1), t: 1 }, lastCommittedWall: new Date(1567578767573), lastOpVisible: { ts: Timestamp(1567578767, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 
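
RemoteCommand 1056 is the progress-report half of the replication protocol: having applied up to (1567578767, 1), this node sends replSetUpdatePosition to its sync source cmodb804 carrying the durable and applied optimes it knows for every member (memberId 0-2), plus the $replData gossip (lastOpCommitted, configVersion, replicaSetId). syncSourceIndex: 2 confirms chained replication: this secondary syncs from the other secondary, which forwards the report toward the primary. The same optimes can be read interactively from any member; a sketch:

    // Sketch (mongo shell, 4.2): the per-member progress that
    // replSetUpdatePosition propagates upstream.
    rs.printSlaveReplicationInfo()   // 4.2 shell helper; prints lag per member
    db.adminCommand({ replSetGetStatus: 1 }).members.map(function (m) {
        return { name: m.name, state: m.stateStr, optime: m.optime };
    })
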
2019-09-04T06:32:47.606+0000 D3 REPL [conn318] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn318] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:03.478+0000 2019-09-04T06:32:47.606+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:17.605+0000 2019-09-04T06:32:47.606+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1057 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:57.606+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578767, 1), t: 1 } } 2019-09-04T06:32:47.606+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:17.605+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn353] Got notified of new snapshot: { ts: Timestamp(1567578767, 1), t: 1 }, 2019-09-04T06:32:47.573+0000 2019-09-04T06:32:47.606+0000 D3 REPL [conn353] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.419+0000 2019-09-04T06:32:47.606+0000 D2 ASIO [RS] Request 1056 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 1), t: 1 }, lastCommittedWall: new Date(1567578767573), lastOpVisible: { ts: Timestamp(1567578767, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 1), $clusterTime: { clusterTime: Timestamp(1567578767, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 1) } 2019-09-04T06:32:47.606+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 1), t: 1 }, lastCommittedWall: new Date(1567578767573), lastOpVisible: { ts: Timestamp(1567578767, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 1), $clusterTime: { clusterTime: Timestamp(1567578767, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:47.606+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:47.606+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:17.605+0000 2019-09-04T06:32:47.647+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:47.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.691+0000 D2 COMMAND [conn31] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.696+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578767, 1) 2019-09-04T06:32:47.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.747+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:47.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.846+0000 D2 ASIO [RS] Request 1057 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578767, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578767841), o: { $v: 1, $set: { ping: new Date(1567578767835) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 1), t: 1 }, lastCommittedWall: new Date(1567578767573), lastOpVisible: { ts: Timestamp(1567578767, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578767, 1), t: 1 }, lastCommittedWall: new Date(1567578767573), lastOpApplied: { ts: Timestamp(1567578767, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 1), $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 2) } 2019-09-04T06:32:47.847+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578767, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578767841), o: { $v: 1, $set: { ping: new Date(1567578767835) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 1), t: 1 }, lastCommittedWall: new Date(1567578767573), lastOpVisible: { ts: Timestamp(1567578767, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578767, 1), t: 1 }, lastCommittedWall: new Date(1567578767573), lastOpApplied: { ts: Timestamp(1567578767, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), 
electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 1), $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:47.847+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:47.847+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578767, 2) and ending at ts: Timestamp(1567578767, 2) 2019-09-04T06:32:47.847+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:58.580+0000 2019-09-04T06:32:47.847+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:32:58.375+0000 2019-09-04T06:32:47.847+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:47.847+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:16.839+0000 2019-09-04T06:32:47.847+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:47.847+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:47.847+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578767, 1) 2019-09-04T06:32:47.847+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15620 2019-09-04T06:32:47.847+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:47.847+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:47.847+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15620 2019-09-04T06:32:47.847+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:47.847+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:47.847+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:32:47.847+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578767, 1) 2019-09-04T06:32:47.847+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15623 2019-09-04T06:32:47.847+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578767, 2) } 2019-09-04T06:32:47.847+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:47.847+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:47.847+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15623 2019-09-04T06:32:47.847+0000 D3 REPL [replication-0] batch 
resetting _lastOpTimeFetched: { ts: Timestamp(1567578767, 2), t: 1 } 2019-09-04T06:32:47.847+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15611 2019-09-04T06:32:47.847+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15611 2019-09-04T06:32:47.847+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15626 2019-09-04T06:32:47.847+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15626 2019-09-04T06:32:47.847+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:47.847+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 15628 2019-09-04T06:32:47.847+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578767, 2) 2019-09-04T06:32:47.847+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578767, 2) 2019-09-04T06:32:47.847+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 15628 2019-09-04T06:32:47.847+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:47.847+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:32:47.847+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15627 2019-09-04T06:32:47.847+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15627 2019-09-04T06:32:47.847+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15630 2019-09-04T06:32:47.847+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15630 2019-09-04T06:32:47.847+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578767, 2), t: 1 }({ ts: Timestamp(1567578767, 2), t: 1 }) 2019-09-04T06:32:47.847+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578767, 2) 2019-09-04T06:32:47.847+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15631 2019-09-04T06:32:47.847+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578767, 2) } } ] } sort: {} projection: {} 2019-09-04T06:32:47.847+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:47.847+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:32:47.847+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578767, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:32:47.847+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:47.847+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:47.847+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:47.847+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578767, 2) || First: notFirst: full path: ts 2019-09-04T06:32:47.847+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
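
The entries just above are one complete batch application on the secondary, bracketed by its crash-recovery markers: the oplog truncate-after point is set to the batch's last timestamp (1567578767, 2) before the fetched op is written into the local oplog (the "inserting record with timestamp" line), then reset to Timestamp(0, 0) once the write is durable, and minvalid is advanced so that after an unclean restart the node knows how far replay must go before its data is consistent. Both markers are plain documents in the local database and can be inspected read-only; a sketch, assuming the 4.2 collection names:

    // Sketch (mongo shell, 4.2): replication consistency markers.
    var l = db.getSiblingDB("local");
    l.getCollection("replset.minvalid").findOne();                // { ts, t, ... }
    l.getCollection("replset.oplogTruncateAfterPoint").findOne(); // Timestamp(0, 0) when clean
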
2019-09-04T06:32:47.847+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578767, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:47.847+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578767, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578767, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
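
The D5 QUERY banners running through this stretch are the subplanner handling the minvalid read predicate { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578767, 2) } } ] } one branch at a time. local.replset.minvalid carries only the default _id index and neither branch constrains _id, so every round ends in "outputted 0 indexed solutions" and falls back to a collection scan; that is harmless here because the collection holds a single document. The decision can be reproduced with explain (timestamp illustrative):

    // Sketch (mongo shell): reproduce the planner fallback seen in this trace.
    db.getSiblingDB("local").getCollection("replset.minvalid").find({
        $or: [ { t: { $lt: 1 } },
               { t: 1, ts: { $lt: Timestamp(1567578767, 2) } } ]
    }).explain("queryPlanner").queryPlanner.winningPlan
    // expect a COLLSCAN winning plan, matching the D5 output above
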
2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578767, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:47.848+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15631 2019-09-04T06:32:47.848+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:47.848+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:47.848+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578767, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578767841), o: { $v: 1, $set: { ping: new Date(1567578767835) } } }, oplog application mode: Secondary 2019-09-04T06:32:47.848+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578767, 2) 2019-09-04T06:32:47.848+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 15633 2019-09-04T06:32:47.848+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" } 2019-09-04T06:32:47.848+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:32:47.848+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 15633 2019-09-04T06:32:47.848+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:32:47.848+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578767, 2), t: 1 }({ ts: Timestamp(1567578767, 2), t: 1 }) 2019-09-04T06:32:47.848+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578767, 2) 2019-09-04T06:32:47.848+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15632 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:47.848+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:47.848+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:47.848+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15632 2019-09-04T06:32:47.848+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578767, 2) 2019-09-04T06:32:47.848+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15637 2019-09-04T06:32:47.848+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15637 2019-09-04T06:32:47.848+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578767, 2), t: 1 }({ ts: Timestamp(1567578767, 2), t: 1 }) 2019-09-04T06:32:47.848+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:47.848+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578767, 1), t: 1 }, durableWallTime: new Date(1567578767573), appliedOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, appliedWallTime: new Date(1567578767841), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 1), t: 1 }, lastCommittedWall: new Date(1567578767573), lastOpVisible: { ts: Timestamp(1567578767, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:47.848+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1058 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:17.848+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578767, 1), t: 1 }, durableWallTime: new Date(1567578767573), appliedOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, appliedWallTime: new Date(1567578767841), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 1), t: 1 }, lastCommittedWall: new Date(1567578767573), lastOpVisible: { ts: Timestamp(1567578767, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:47.848+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:17.848+0000 2019-09-04T06:32:47.848+0000 D2 ASIO [RS] Request 1058 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 2), $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 2) } 2019-09-04T06:32:47.848+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 2), $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:47.848+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:47.848+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:17.848+0000 2019-09-04T06:32:47.849+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578767, 2), t: 1 } 2019-09-04T06:32:47.849+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1059 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:57.849+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578767, 1), t: 1 } } 2019-09-04T06:32:47.849+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:17.848+0000 2019-09-04T06:32:47.849+0000 D2 ASIO [RS] Request 1059 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpApplied: { ts: Timestamp(1567578767, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 2), $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 2) } 2019-09-04T06:32:47.849+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new 
Date(1567578767841), lastOpApplied: { ts: Timestamp(1567578767, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 2), $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:47.849+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:47.849+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:47.849+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.849+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.849+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578762, 2) 2019-09-04T06:32:47.849+0000 D3 REPL [conn349] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.849+0000 D3 REPL [conn349] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.261+0000 2019-09-04T06:32:47.849+0000 D3 REPL [conn351] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.849+0000 D3 REPL [conn351] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.962+0000 2019-09-04T06:32:47.849+0000 D3 REPL [conn357] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.849+0000 D3 REPL [conn357] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.823+0000 2019-09-04T06:32:47.849+0000 D3 REPL [conn336] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn336] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.130+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn342] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn342] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:52.054+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn358] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn358] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.833+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn340] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn340] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.662+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn348] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn348] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:59.750+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn325] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn325] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:32:51.644+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn338] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn338] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.680+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn354] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn354] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.413+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn353] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn353] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.419+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn352] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn352] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.291+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn318] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn318] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:03.478+0000 2019-09-04T06:32:47.850+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:32:58.375+0000 2019-09-04T06:32:47.850+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:58.486+0000 2019-09-04T06:32:47.850+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1060 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:57.850+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578767, 2), t: 1 } } 2019-09-04T06:32:47.850+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:17.848+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn323] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn323] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.688+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn322] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:47.850+0000 D3 REPL [conn322] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.989+0000 2019-09-04T06:32:47.850+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:16.839+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn345] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn345] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:56.312+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn347] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn347] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:58.760+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn339] Got notified of new snapshot: { ts: 
Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn339] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.661+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn346] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn346] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:57.574+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn320] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn320] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.753+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn326] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn326] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:49.840+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn327] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn327] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.840+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn341] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn341] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.094+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn350] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn350] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.763+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn344] Got notified of new snapshot: { ts: Timestamp(1567578767, 2), t: 1 }, 2019-09-04T06:32:47.841+0000 2019-09-04T06:32:47.850+0000 D3 REPL [conn344] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.677+0000 2019-09-04T06:32:47.854+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:47.854+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:47.854+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:47.854+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), appliedOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, appliedWallTime: new Date(1567578767841), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:47.854+0000 D3 EXECUTOR [replication-0] 
Scheduling remote command request: RemoteCommand 1061 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:17.854+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), appliedOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, appliedWallTime: new Date(1567578767841), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, durableWallTime: new Date(1567578765328), appliedOpTime: { ts: Timestamp(1567578765, 1), t: 1 }, appliedWallTime: new Date(1567578765328), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:47.854+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:17.848+0000 2019-09-04T06:32:47.854+0000 D2 ASIO [RS] Request 1061 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 2), $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 2) } 2019-09-04T06:32:47.854+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 2), $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:47.854+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:47.854+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:17.848+0000 2019-09-04T06:32:47.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:47.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:47.947+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578767, 2) 2019-09-04T06:32:47.954+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:48.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 
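
The steady isMaster drumbeat on conn20/22/31/33/51/52/58/59/60/75 is not application traffic; it is driver and mongos topology monitoring. Each monitor connection polls on its own roughly 500 ms cadence (conn59, for example, at 47.661, 48.161, 48.661), and the reslen:907 replies are the standard topology document:

    // Sketch (mongo shell): the document each monitoring connection polls.
    db.adminCommand({ isMaster: 1 })
    // On this node it reports ismaster: false, secondary: true,
    // setName: "configrs", the hosts list, and the current primary.
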
2019-09-04T06:32:48.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.054+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:48.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.130+0000 D2 COMMAND [conn20] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.130+0000 I COMMAND [conn20] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.154+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:48.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, F9A171B9CA4A2E86C2347783076F553836E4559B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:48.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:32:48.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, F9A171B9CA4A2E86C2347783076F553836E4559B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:48.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, F9A171B9CA4A2E86C2347783076F553836E4559B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:48.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), opTime: { ts: Timestamp(1567578767, 2), t: 1 }, wallTime: new Date(1567578767841) }
2019-09-04T06:32:48.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, F9A171B9CA4A2E86C2347783076F553836E4559B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:48.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:32:48.254+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:48.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.354+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:48.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.455+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:48.516+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:32:48.517+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:32:48.517+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:32:48.517+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:32:48.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.555+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:48.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.655+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:48.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.755+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:48.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:48.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1062) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:48.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1062 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:32:58.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:48.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:16.839+0000
2019-09-04T06:32:48.838+0000 D2 ASIO [Replication] Request 1062 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), opTime: { ts: Timestamp(1567578767, 2), t: 1 }, wallTime: new Date(1567578767841), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 2), $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 2) }
2019-09-04T06:32:48.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), opTime: { ts: Timestamp(1567578767, 2), t: 1 }, wallTime: new Date(1567578767841), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 2), $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:48.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:48.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1062) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), opTime: { ts: Timestamp(1567578767, 2), t: 1 }, wallTime: new Date(1567578767841), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 2), $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 2) }
2019-09-04T06:32:48.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:32:48.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:50.838Z
2019-09-04T06:32:48.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:16.839+0000
2019-09-04T06:32:48.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:48.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1063) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:48.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1063 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:32:58.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:48.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:16.839+0000
2019-09-04T06:32:48.839+0000 D2 ASIO [Replication] Request 1063 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), opTime: { ts: Timestamp(1567578767, 2), t: 1 }, wallTime: new Date(1567578767841), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578767, 2), $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 2) }
2019-09-04T06:32:48.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), opTime: { ts: Timestamp(1567578767, 2), t: 1 }, wallTime: new Date(1567578767841), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578767, 2), $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:32:48.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:48.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1063) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), opTime: { ts: Timestamp(1567578767, 2), t: 1 }, wallTime: new Date(1567578767841), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578767, 2), $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 2) }
2019-09-04T06:32:48.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:32:48.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:32:58.486+0000
2019-09-04T06:32:48.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:32:59.928+0000
2019-09-04T06:32:48.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:32:48.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:50.839Z
2019-09-04T06:32:48.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:48.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:18.839+0000
2019-09-04T06:32:48.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:18.839+0000
2019-09-04T06:32:48.847+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:48.847+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:48.847+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578767, 2)
2019-09-04T06:32:48.847+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15662
2019-09-04T06:32:48.847+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:48.847+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:48.847+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15662
2019-09-04T06:32:48.848+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15665
2019-09-04T06:32:48.848+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15665
2019-09-04T06:32:48.848+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578767, 2), t: 1 }({ ts: Timestamp(1567578767, 2), t: 1 })
2019-09-04T06:32:48.855+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:48.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:48.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:48.955+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:49.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:49.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:49.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:49.055+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:49.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, F9A171B9CA4A2E86C2347783076F553836E4559B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:49.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:32:49.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, F9A171B9CA4A2E86C2347783076F553836E4559B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:49.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, F9A171B9CA4A2E86C2347783076F553836E4559B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:49.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), opTime: { ts: Timestamp(1567578767, 2), t: 1 }, wallTime: new Date(1567578767841) }
2019-09-04T06:32:49.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, F9A171B9CA4A2E86C2347783076F553836E4559B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:49.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:49.068+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:49.155+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:49.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:49.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:49.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:49.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:49.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:49.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:49.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:49.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:49.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:49.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:49.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:49.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:32:49.256+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:49.271+0000 D2 ASIO [RS] Request 1060 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578769, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578769266), o: { $v: 1, $set: { ping: new Date(1567578769263) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpApplied: { ts: Timestamp(1567578769, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 2), $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) }
2019-09-04T06:32:49.271+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578769, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578769266), o: { $v: 1, $set: { ping: new Date(1567578769263) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpApplied: { ts: Timestamp(1567578769, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 2), $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:49.271+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:49.271+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578769, 1) and ending at ts: Timestamp(1567578769, 1)
2019-09-04T06:32:49.271+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:32:59.928+0000
2019-09-04T06:32:49.271+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:00.328+0000
2019-09-04T06:32:49.271+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:49.271+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578769, 1), t: 1 }
2019-09-04T06:32:49.271+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:49.271+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:49.271+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578767, 2)
2019-09-04T06:32:49.271+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15679
2019-09-04T06:32:49.271+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:49.271+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:49.271+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15679
2019-09-04T06:32:49.271+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:49.271+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:32:49.271+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:18.839+0000
2019-09-04T06:32:49.271+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:49.271+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578767, 2)
2019-09-04T06:32:49.271+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15682
2019-09-04T06:32:49.271+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578769, 1) }
2019-09-04T06:32:49.271+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:49.271+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:49.271+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15682
2019-09-04T06:32:49.271+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15666
2019-09-04T06:32:49.271+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15666
2019-09-04T06:32:49.271+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15685
2019-09-04T06:32:49.271+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15685
2019-09-04T06:32:49.271+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:49.271+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 15687
2019-09-04T06:32:49.271+0000 D4 STORAGE [repl-writer-worker-7] inserting record with timestamp Timestamp(1567578769, 1)
2019-09-04T06:32:49.271+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578769, 1)
2019-09-04T06:32:49.271+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 15687
2019-09-04T06:32:49.271+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:49.271+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:49.271+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15686
2019-09-04T06:32:49.271+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15686
2019-09-04T06:32:49.271+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15689
2019-09-04T06:32:49.271+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15689
2019-09-04T06:32:49.271+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578769, 1), t: 1 }({ ts: Timestamp(1567578769, 1), t: 1 })
2019-09-04T06:32:49.271+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578769, 1)
2019-09-04T06:32:49.271+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15690
2019-09-04T06:32:49.271+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578769, 1) } } ] } sort: {} projection: {}
2019-09-04T06:32:49.271+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:49.271+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:32:49.271+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578769, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:49.271+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:49.271+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:49.271+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:49.271+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578769, 1) || First: notFirst: full path: ts
2019-09-04T06:32:49.271+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578769, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578769, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578769, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578769, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:49.272+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15690
2019-09-04T06:32:49.272+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:49.272+0000 D3 STORAGE [repl-writer-worker-11] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:49.272+0000 D3 REPL [repl-writer-worker-11] applying op: { ts: Timestamp(1567578769, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578769266), o: { $v: 1, $set: { ping: new Date(1567578769263) } } }, oplog application mode: Secondary
2019-09-04T06:32:49.272+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578769, 1)
2019-09-04T06:32:49.272+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 15692
2019-09-04T06:32:49.272+0000 D2 QUERY [repl-writer-worker-11] Using idhack: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }
2019-09-04T06:32:49.272+0000 D4 WRITE [repl-writer-worker-11] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:49.272+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 15692
2019-09-04T06:32:49.272+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:49.272+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578769, 1), t: 1 }({ ts: Timestamp(1567578769, 1), t: 1 })
2019-09-04T06:32:49.272+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578769, 1)
2019-09-04T06:32:49.272+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15691
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:49.272+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:49.272+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:49.272+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15691
2019-09-04T06:32:49.272+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578769, 1)
2019-09-04T06:32:49.272+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15695
2019-09-04T06:32:49.272+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15695
2019-09-04T06:32:49.272+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578769, 1), t: 1 }({ ts: Timestamp(1567578769, 1), t: 1 })
2019-09-04T06:32:49.272+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:49.272+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), appliedOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, appliedWallTime: new Date(1567578767841), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), appliedOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, appliedWallTime: new Date(1567578767841), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:49.272+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1064 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:19.272+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), appliedOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, appliedWallTime: new Date(1567578767841), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), appliedOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, appliedWallTime: new Date(1567578767841), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:49.272+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:19.272+0000
2019-09-04T06:32:49.273+0000 D2 ASIO [RS] Request 1064 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 2), $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) }
2019-09-04T06:32:49.273+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578767, 2), t: 1 }, lastCommittedWall: new Date(1567578767841), lastOpVisible: { ts: Timestamp(1567578767, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 2), $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:49.273+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:49.273+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:19.273+0000
2019-09-04T06:32:49.273+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578769, 1), t: 1 }
2019-09-04T06:32:49.273+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1065 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:59.273+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578767, 2), t: 1 } }
2019-09-04T06:32:49.273+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:19.273+0000
2019-09-04T06:32:49.274+0000 D2 ASIO [RS] Request 1065 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpApplied: { ts: Timestamp(1567578769, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) }
2019-09-04T06:32:49.274+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpApplied: { ts: Timestamp(1567578769, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:49.274+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:49.274+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:32:49.274+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578764, 1)
2019-09-04T06:32:49.274+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:00.328+0000
2019-09-04T06:32:49.274+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:32:59.587+0000
2019-09-04T06:32:49.274+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1066 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:32:59.274+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578769, 1), t: 1 } }
2019-09-04T06:32:49.274+0000 D3 REPL [conn342] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:19.273+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn342] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:52.054+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn354] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn354] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.413+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn338] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn338] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.680+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn346] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn346] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:57.574+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn357] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn357] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.823+0000
2019-09-04T06:32:49.274+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:49.274+0000 D3 REPL [conn322] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:18.839+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn322] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.989+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn339] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn339] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.661+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn320] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn320] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.753+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn340] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn340] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.662+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn345] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn345] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:56.312+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn327] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn327] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.840+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn318] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn318] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:03.478+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn350] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn350] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.763+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn349] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn349] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.261+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn351] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn351] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.962+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn336] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn336] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.130+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn358] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn358] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.833+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn325] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn325] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:51.644+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn348] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn348] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:59.750+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn352] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn352] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.291+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn323] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn323] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.688+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn347] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn347] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:58.760+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn353] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn353] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.419+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn326] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn326] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:49.840+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn341] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.274+0000 D3 REPL [conn341] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.094+0000
2019-09-04T06:32:49.275+0000 D3 REPL [conn344] Got notified of new snapshot: { ts: Timestamp(1567578769, 1), t: 1 }, 2019-09-04T06:32:49.266+0000
2019-09-04T06:32:49.275+0000 D3 REPL [conn344] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.677+0000
2019-09-04T06:32:49.277+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:32:49.277+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:49.277+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), appliedOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, appliedWallTime: new Date(1567578767841), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), appliedOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, appliedWallTime: new Date(1567578767841), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:49.277+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1067 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:19.277+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), appliedOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, appliedWallTime: new Date(1567578767841), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, durableWallTime: new Date(1567578767841), appliedOpTime: { ts: Timestamp(1567578767, 2), t: 1 }, appliedWallTime: new Date(1567578767841), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:49.277+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:19.273+0000
2019-09-04T06:32:49.278+0000 D2 ASIO [RS] Request 1067 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) }
2019-09-04T06:32:49.278+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:49.278+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:32:49.278+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:19.273+0000
2019-09-04T06:32:49.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions.
2019-09-04T06:32:49.332+0000 D1 EXECUTOR [ConfigServerCatalogCacheLoader-1] Reaping this thread; next thread reaped no earlier than 2019-09-04T06:33:19.332+0000
2019-09-04T06:32:49.332+0000 D1 EXECUTOR [ConfigServerCatalogCacheLoader-1] shutting down thread in pool ConfigServerCatalogCacheLoader
2019-09-04T06:32:49.356+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:49.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:49.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:49.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry
2019-09-04T06:32:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:32:49.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" }
2019-09-04T06:32:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:49.370+0000 D5 QUERY [shard-registry-reload] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:49.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:32:49.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:32:49.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and
2019-09-04T06:32:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan: COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:49.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:49.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578769, 1)
2019-09-04T06:32:49.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 15701
2019-09-04T06:32:49.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 15701
2019-09-04T06:32:49.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:646 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:32:49.370+0000 D1 SHARDING [shard-registry-reload] found 3 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578769, 1), t: 1 }
2019-09-04T06:32:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018
2019-09-04T06:32:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018
2019-09-04T06:32:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018
2019-09-04T06:32:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018
2019-09-04T06:32:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018
2019-09-04T06:32:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018
2019-09-04T06:32:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS
2019-09-04T06:32:49.371+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578769, 1)
2019-09-04T06:32:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000
2019-09-04T06:32:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1068 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:32:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:32:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1069 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:32:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:32:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002
2019-09-04T06:32:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1070 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:32:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:32:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1071 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:32:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:32:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001
2019-09-04T06:32:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1072 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:32:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:32:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1073 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:32:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:32:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1069 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578765, 2), t: 1 }, lastWriteDate: new Date(1567578765000), majorityOpTime: { ts: Timestamp(1567578765, 2), t: 1 }, majorityWriteDate: new Date(1567578765000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578769385), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578765, 2), $configServerState: { opTime: { ts: Timestamp(1567578758, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578765, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 2) }
2019-09-04T06:32:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578765, 2), t: 1 }, lastWriteDate: new Date(1567578765000), majorityOpTime: { ts: Timestamp(1567578765, 2), t: 1 }, majorityWriteDate: new Date(1567578765000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578769385), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578765, 2), $configServerState: { opTime: { ts: Timestamp(1567578758, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578765, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 2) } target: cmodb807.togewa.com:27018
2019-09-04T06:32:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1068 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578765, 2), t: 1 }, lastWriteDate: new Date(1567578765000), majorityOpTime: { ts: Timestamp(1567578765, 2), t: 1 },
majorityWriteDate: new Date(1567578765000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578769385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578765, 2), $configServerState: { opTime: { ts: Timestamp(1567578765, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578765, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 2) } 2019-09-04T06:32:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578765, 2), t: 1 }, lastWriteDate: new Date(1567578765000), majorityOpTime: { ts: Timestamp(1567578765, 2), t: 1 }, majorityWriteDate: new Date(1567578765000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578769385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578765, 2), $configServerState: { opTime: { ts: Timestamp(1567578765, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578765, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578765, 2) } target: cmodb806.togewa.com:27018 2019-09-04T06:32:49.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms 2019-09-04T06:32:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1073 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578767, 1), t: 1 }, lastWriteDate: new Date(1567578767000), majorityOpTime: { ts: Timestamp(1567578767, 1), t: 1 }, majorityWriteDate: new Date(1567578767000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578769386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 1), $configServerState: { opTime: { ts: Timestamp(1567578750, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578767, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 1) } 2019-09-04T06:32:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: 
RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578767, 1), t: 1 }, lastWriteDate: new Date(1567578767000), majorityOpTime: { ts: Timestamp(1567578767, 1), t: 1 }, majorityWriteDate: new Date(1567578767000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578769386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578767, 1), $configServerState: { opTime: { ts: Timestamp(1567578750, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578767, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 1) } target: cmodb809.togewa.com:27018 2019-09-04T06:32:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1070 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578759, 1), t: 1 }, lastWriteDate: new Date(1567578759000), majorityOpTime: { ts: Timestamp(1567578759, 1), t: 1 }, majorityWriteDate: new Date(1567578759000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578769386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578759, 1), $configServerState: { opTime: { ts: Timestamp(1567578765, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578765, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578759, 1) } 2019-09-04T06:32:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578759, 1), t: 1 }, lastWriteDate: new Date(1567578759000), majorityOpTime: { ts: Timestamp(1567578759, 1), t: 1 }, majorityWriteDate: new Date(1567578759000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578769386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1567578759, 1), $configServerState: { opTime: { ts: Timestamp(1567578765, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578765, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578759, 1) } target: cmodb810.togewa.com:27018 2019-09-04T06:32:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1072 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578767, 1), t: 1 }, lastWriteDate: new Date(1567578767000), majorityOpTime: { ts: Timestamp(1567578767, 1), t: 1 }, majorityWriteDate: new Date(1567578767000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578769386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578767, 1), $configServerState: { opTime: { ts: Timestamp(1567578765, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578767, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 1) } 2019-09-04T06:32:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578767, 1), t: 1 }, lastWriteDate: new Date(1567578767000), majorityOpTime: { ts: Timestamp(1567578767, 1), t: 1 }, majorityWriteDate: new Date(1567578767000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578769386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578767, 1), $configServerState: { opTime: { ts: Timestamp(1567578765, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578767, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578767, 1) } target: cmodb808.togewa.com:27018 2019-09-04T06:32:49.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms 2019-09-04T06:32:49.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1071 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578759, 1), t: 1 }, lastWriteDate: new Date(1567578759000), 
majorityOpTime: { ts: Timestamp(1567578759, 1), t: 1 }, majorityWriteDate: new Date(1567578759000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578769386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578759, 1), $configServerState: { opTime: { ts: Timestamp(1567578765, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578765, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578759, 1) } 2019-09-04T06:32:49.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578759, 1), t: 1 }, lastWriteDate: new Date(1567578759000), majorityOpTime: { ts: Timestamp(1567578759, 1), t: 1 }, majorityWriteDate: new Date(1567578759000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578769386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578759, 1), $configServerState: { opTime: { ts: Timestamp(1567578765, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578765, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578759, 1) } target: cmodb811.togewa.com:27018 2019-09-04T06:32:49.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms 2019-09-04T06:32:49.456+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:49.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:49.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:49.556+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:49.560+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578769560) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } 2019-09-04T06:32:49.560+0000 D4 - [replSetDistLockPinger] Taking ticket. 
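The findAndModify just logged is the config server's distributed-lock pinger refreshing its own entry in config.lockpings with majority write concern. A minimal shell sketch of the equivalent command, with every field taken from the log entry (new Date() stands in for the logged ping time):

// Dist-lock ping as issued by replSetDistLockPinger (sketch; values from the log).
// This write must run on the primary: the entries below show this attempt fail
// with NotMaster (code 10107) because this member is currently a secondary.
db.getSiblingDB("config").runCommand({
    findAndModify: "lockpings",
    query: { _id: "ConfigServer" },
    update: { $set: { ping: new Date() } },
    upsert: true,
    writeConcern: { w: "majority", wtimeout: 15000 }
})

The pinger treats NotMaster as routine and retries on its next interval, so the backtrace that follows accompanies an ordinary, handled user assertion (printed because exception tracing is enabled), not a crash.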
Available: 1000000000 2019-09-04T06:32:49.560+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178 2019-09-04T06:32:49.560+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:32:49.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:49.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:49.581+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" 
: "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xADB043) [0x561749a63043] mongod(+0x13B2606) [0x56174a33a606] mongod(+0x13B3A55) [0x56174a33ba55] mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894] mongod(+0x10FA899) [0x56174a082899] mongod(+0x10FBF53) [0x56174a083f53] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(+0x1FBD2EE) [0x56174af452ee] mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa] mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2] mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b] mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e] mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc] mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1] mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:49.581+0000 D2 REPL 
[replSetDistLockPinger] Waiting for write concern. OpTime: { ts: Timestamp(1567578769, 1), t: 1 }, write concern: { w: "majority", wtimeout: 15000 } 2019-09-04T06:32:49.581+0000 D4 STORAGE [replSetDistLockPinger] flushed journal 2019-09-04T06:32:49.581+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578769560) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:32:49.581+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578769560) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 20ms 2019-09-04T06:32:49.656+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:49.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:49.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:49.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:49.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:49.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:49.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:49.722+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:49.722+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:49.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:49.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:49.756+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:49.841+0000 I COMMAND [conn326] Command on database config timed out waiting for read concern to be satisfied. 
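The timeout conn326 reports here is spent waiting on the read concern, not running the query: as the command dump below shows, the request pins a majority read to afterOpTime { ts: Timestamp(1566459168, 1), t: 92 }, but this replica set is in term 1, and optimes compare term-first, so the node's majority-committed optime can never reach an optime from term 92. The command therefore blocks until maxTimeMS (30000 ms) expires and fails with MaxTimeMSExpired. A shell sketch of the same shape of request; note that afterOpTime is normally injected by internal clients such as mongos rather than written by hand:

// Majority read pinned to an optime (values copied from the dump below)
db.getSiblingDB("config").runCommand({
    find: "shards",
    readConcern: {
        level: "majority",
        // Term 92 against a set now in term 1: this wait can never be satisfied,
        // so the server gives up at maxTimeMS (the log shows 30029ms total).
        afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 }
    },
    maxTimeMS: 30000
})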
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, FC302E736E3E0A04685B3CCCCAB671E279A74EEA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:49.841+0000 D1 - [conn326] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:49.841+0000 W - [conn326] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:49.857+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:49.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:49.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:49.859+0000 I - [conn326] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNex
tInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : 
"7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) 
[0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:49.859+0000 D1 COMMAND [conn326] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, FC302E736E3E0A04685B3CCCCAB671E279A74EEA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:49.859+0000 D1 - [conn326] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:49.859+0000 W - [conn326] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:49.882+0000 I - [conn326] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23Serv
iceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : 
"7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) 
[0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:49.882+0000 W COMMAND [conn326] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:49.883+0000 I COMMAND [conn326] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, FC302E736E3E0A04685B3CCCCAB671E279A74EEA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:32:49.883+0000 D2 NETWORK [conn326] Session from 10.108.2.51:59214 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:49.883+0000 I NETWORK [conn326] end connection 10.108.2.51:59214 (85 connections now open) 2019-09-04T06:32:49.957+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:50.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:50.004+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:32:50.004+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:32:50.004+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:32:50.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:32:50.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:32:50.012+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:32:50.012+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:32:50.012+0000 I ACCESS [conn90] Successfully authenticated as principal 
dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:32:50.012+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:32:50.013+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:50.013+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:50.015+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:50.015+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:32:50.015+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:32:50.016+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:32:50.016+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:50.016+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:32:50.016+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:32:50.016+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:32:50.016+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:32:50.016+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:32:50.016+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:32:50.016+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:32:50.016+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:50.016+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:50.016+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
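conn90 is a monitoring session: it authenticates as dba_root over SCRAM-SHA-1 and then walks through serverStatus, replSetGetStatus, and a count of jumbo chunks. A shell sketch of the same sequence (credentials and commands as visible in the log; the SASL payloads are redacted there):

// mongo --host <this config server> -u dba_root -p --authenticationDatabase admin
db.adminCommand({ serverStatus: 1 })                     // the reslen:35129 reply above
db.adminCommand({ replSetGetStatus: 1 })                 // replica-set health
db.getSiblingDB("config").chunks.count({ jumbo: true })  // jumbo-chunk check

None of the indexes on config.chunks (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1, _id_) covers { jumbo: 1 }, which is why the planner entries just above report 0 indexed solutions and fall back to the COLLSCAN summarized below; with docsExamined:1 the scan is harmless here.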
query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:50.016+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578769, 1) 2019-09-04T06:32:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15718 2019-09-04T06:32:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15718 2019-09-04T06:32:50.016+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:50.016+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:50.016+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:32:50.016+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:32:50.016+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:50.016+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:32:50.016+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:32:50.016+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:32:50.016+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578769, 1) 2019-09-04T06:32:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15721 2019-09-04T06:32:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15721 2019-09-04T06:32:50.016+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:50.016+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:32:50.016+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:50.016+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:32:50.016+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:32:50.016+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:32:50.016+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578769, 1) 2019-09-04T06:32:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15723 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15723 2019-09-04T06:32:50.017+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:50.017+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:32:50.017+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. 
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:32:50.017+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:32:50.017+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:50.017+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15726 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15726 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15727 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15727 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15728 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15728 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15729 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15729 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15730 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15730 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15731 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
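
The listDatabases run above shows how the server answers the command internally: for every collection it opens a short WiredTiger snapshot, reads the catalog (CCE) metadata out of _mdb_catalog.wt (namespace, UUID, index specs, table idents), and immediately rolls the transaction back. The same inventory is reachable from a client. A minimal pymongo sketch, assuming the host and user seen in this log; the password is a placeholder, not taken from the log:

    # Enumerate databases and collections the way the conn90 monitor does.
    # cmodb803.togewa.com:27019 and dba_root are from this log; PASSWORD is hypothetical.
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://dba_root:PASSWORD@cmodb803.togewa.com:27019/?authSource=admin",
        readPreference="secondaryPreferred",  # matches $readPreference in the log
    )

    # Same command the log records: { listDatabases: 1, $db: "admin" }
    for d in client.admin.command("listDatabases")["databases"]:
        # listCollections returns the same UUIDs the CCE metadata lines print
        for coll in client[d["name"]].list_collections():
            print(d["name"], coll["name"], coll.get("info", {}).get("uuid"))
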
2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15731 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15732 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15732 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15733 2019-09-04T06:32:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15733 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15734 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15734 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15735 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15735 
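
The catalog entries above for config.locks, config.version, and config.collections back the config server's distributed-lock and versioning machinery; for config.locks the CCE metadata lists exactly three indexes (ts_1, state_1_process_1, _id_). A quick cross-check of the catalog against what the server advertises to clients, reusing the client from the previous sketch:

    # Confirm the config.locks index set reported by the CCE metadata above.
    locks = client["config"]["locks"]
    for name, spec in locks.index_information().items():
        print(name, spec["key"])
    # Expected, per the log: [('ts', 1)], [('state', 1), ('process', 1)], [('_id', 1)]
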
2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15736 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
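
This config.chunks catalog entry confirms what the planner logged at 06:32:50.016: its four indexes (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1, _id_) all lead with ns or _id, so the predicate on jumbo alone yields "0 indexed solutions" and the count falls back to a collection scan. A hedged sketch that re-runs the monitor's jumbo count under explain, again reusing the client from the first sketch:

    # Re-run the monitor's jumbo-chunk count under explain; with no index
    # prefixed by { jumbo: 1 }, the winning plan should bottom out in a COLLSCAN.
    plan = client["config"].command(
        "explain", {"count": "chunks", "query": {"jumbo": True}}
    )
    print(plan["queryPlanner"]["winningPlan"])
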
2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15736 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15737 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15737 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15738 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15738 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15739 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15739 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15740 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
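
config.shards, whose catalog entry appears just above, is the same collection whose majority read timed out at 06:32:49 (MaxTimeMSExpired after 30029ms while honoring afterOpTime). As a sanity probe against this member, a sketch using readConcern "local", which does not block waiting for that afterOpTime to become majority-committed; again reusing the client from the first sketch:

    # Read the shard registry directly with readConcern "local"; unlike the
    # 06:32:49 majority read, this does not wait on the majority snapshot.
    from pymongo.read_concern import ReadConcern

    shards = client["config"].get_collection("shards", read_concern=ReadConcern("local"))
    for s in shards.find({}, max_time_ms=5000):
        print(s["_id"], s["host"])
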
2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15740 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15741 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15741 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15742 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15742 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15743 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15743 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15744 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 15744 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15745 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15745 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15746 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:32:50.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15746 2019-09-04T06:32:50.019+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:32:50.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15747 2019-09-04T06:32:50.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:32:50.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:32:50.019+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:32:50.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15747 2019-09-04T06:32:50.019+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:32:50.032+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15749 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15749 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15750 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15750 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15751 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15751 2019-09-04T06:32:50.032+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:50.032+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15753 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15753 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15754 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15754 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15755 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15755 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15756 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15756 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15757 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15757 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15758 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15758 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15759 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15759 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 15760 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15760 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15761 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15761 2019-09-04T06:32:50.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15762 2019-09-04T06:32:50.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15762 2019-09-04T06:32:50.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15763 2019-09-04T06:32:50.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15763 2019-09-04T06:32:50.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15764 2019-09-04T06:32:50.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15764 2019-09-04T06:32:50.033+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:50.033+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:32:50.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15766 2019-09-04T06:32:50.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15766 2019-09-04T06:32:50.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15767 2019-09-04T06:32:50.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15767 2019-09-04T06:32:50.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15768 2019-09-04T06:32:50.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15768 2019-09-04T06:32:50.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15769 2019-09-04T06:32:50.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15769 2019-09-04T06:32:50.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15770 2019-09-04T06:32:50.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15770 2019-09-04T06:32:50.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 15771 2019-09-04T06:32:50.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 15771 2019-09-04T06:32:50.033+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:32:50.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:50.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:50.057+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:50.068+0000 D2 COMMAND [conn52] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:50.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:50.157+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:50.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:50.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:50.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:50.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:50.191+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:50.191+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:50.222+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:50.222+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:50.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:50.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:50.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 43A728076280BFE09FCADEE23A145338B1DB2D4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:50.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:50.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 43A728076280BFE09FCADEE23A145338B1DB2D4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:50.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 43A728076280BFE09FCADEE23A145338B1DB2D4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:50.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266) } 2019-09-04T06:32:50.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: 
Timestamp(1567578769, 1), signature: { hash: BinData(0, 43A728076280BFE09FCADEE23A145338B1DB2D4C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:50.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:50.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999999 Now: 1000000000 2019-09-04T06:32:50.257+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:50.271+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:50.271+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:50.271+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578769, 1) 2019-09-04T06:32:50.271+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15782 2019-09-04T06:32:50.271+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:50.271+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:50.271+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15782 2019-09-04T06:32:50.272+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15785 2019-09-04T06:32:50.272+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15785 2019-09-04T06:32:50.272+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578769, 1), t: 1 }({ ts: Timestamp(1567578769, 1), t: 1 }) 2019-09-04T06:32:50.357+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:50.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:50.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:50.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:50.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:50.458+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:50.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:50.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:50.558+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:50.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:50.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:50.658+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:50.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:50.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:32:50.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:50.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:50.691+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:50.691+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:50.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:50.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:50.758+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:50.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:50.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1074) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:50.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1074 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:00.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:50.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:18.839+0000 2019-09-04T06:32:50.838+0000 D2 ASIO [Replication] Request 1074 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } 2019-09-04T06:32:50.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: 
{ hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:50.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:50.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1074) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } 2019-09-04T06:32:50.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:50.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:52.838Z 2019-09-04T06:32:50.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:18.839+0000 2019-09-04T06:32:50.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:50.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1075) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:50.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1075 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:00.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:50.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:18.839+0000 2019-09-04T06:32:50.839+0000 D2 ASIO [Replication] Request 1075 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } 2019-09-04T06:32:50.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, 
electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:50.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:50.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1075) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } 2019-09-04T06:32:50.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:50.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:32:59.587+0000 2019-09-04T06:32:50.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:33:02.142+0000 2019-09-04T06:32:50.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:50.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:52.839Z 2019-09-04T06:32:50.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:50.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:20.839+0000 2019-09-04T06:32:50.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:20.839+0000 2019-09-04T06:32:50.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:50.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:50.858+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:50.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:50.948+0000 I COMMAND [conn13] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:50.958+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:51.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:51.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:51.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:51.058+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:51.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 43A728076280BFE09FCADEE23A145338B1DB2D4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:51.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:51.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 43A728076280BFE09FCADEE23A145338B1DB2D4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:51.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 43A728076280BFE09FCADEE23A145338B1DB2D4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:51.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266) } 2019-09-04T06:32:51.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 43A728076280BFE09FCADEE23A145338B1DB2D4C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:51.116+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35782 #359 (86 connections now open) 2019-09-04T06:32:51.116+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:51.116+0000 D2 COMMAND [conn359] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:51.116+0000 I NETWORK [conn359] received client metadata from 10.108.2.56:35782 conn359: { driver: { 
name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:51.116+0000 I COMMAND [conn359] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:51.132+0000 I COMMAND [conn336] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578731, 1), signature: { hash: BinData(0, 403A7CDEC6D36A1AA08331185731CC5F9C84C762), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:51.132+0000 D1 - [conn336] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:51.132+0000 W - [conn336] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:51.149+0000 I - [conn336] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:51.149+0000 D1 COMMAND [conn336] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578731, 1), signature: { hash: BinData(0, 403A7CDEC6D36A1AA08331185731CC5F9C84C762), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:51.149+0000 D1 - [conn336] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:51.149+0000 W - [conn336] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:51.158+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:51.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:51.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:51.169+0000 I - [conn336] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 
0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : 
"7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : 
"B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 
2019-09-04T06:32:51.169+0000 W COMMAND [conn336] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:32:51.169+0000 I COMMAND [conn336] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578731, 1), signature: { hash: BinData(0, 403A7CDEC6D36A1AA08331185731CC5F9C84C762), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:32:51.169+0000 D2 NETWORK [conn336] Session from 10.108.2.56:35762 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:51.169+0000 I NETWORK [conn336] end connection 10.108.2.56:35762 (85 connections now open) 2019-09-04T06:32:51.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:51.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:51.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:51.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:51.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:51.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:32:51.259+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:51.271+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:51.272+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:51.272+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578769, 1) 2019-09-04T06:32:51.272+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15805 2019-09-04T06:32:51.272+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:51.272+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:51.272+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15805 2019-09-04T06:32:51.272+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15808 2019-09-04T06:32:51.273+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15808 2019-09-04T06:32:51.273+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578769, 1), t: 1 }({ ts: Timestamp(1567578769, 1), t: 1 }) 2019-09-04T06:32:51.358+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:51.358+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0
reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:51.359+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:51.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:51.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:51.459+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:51.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:51.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:51.559+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:51.634+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:51.634+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:51.647+0000 I COMMAND [conn325] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578738, 1), signature: { hash: BinData(0, 0D0BB0ED5B61061781383AFC683690CFB9762D5E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:51.647+0000 D1 - [conn325] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:51.647+0000 W - [conn325] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:51.650+0000 D2 COMMAND [conn337] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 3430969E3181A35FCE5BAEFADC4CD97195C5A07D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:51.650+0000 D1 REPL [conn337] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578769, 1), t: 1 } 2019-09-04T06:32:51.650+0000 D3 REPL [conn337] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.660+0000 2019-09-04T06:32:51.650+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:51.650+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45832 #360 (86 connections now open) 2019-09-04T06:32:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:51.650+0000 D2 COMMAND [conn360] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", 
architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:51.650+0000 I NETWORK [conn360] received client metadata from 10.108.2.72:45832 conn360: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:51.650+0000 I COMMAND [conn360] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:51.651+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49280 #361 (87 connections now open) 2019-09-04T06:32:51.651+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:51.651+0000 D2 COMMAND [conn360] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 70ADEB07C63950ADB1CBAACD1EFFE951853A061C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:51.651+0000 D1 REPL [conn360] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578769, 1), t: 1 } 2019-09-04T06:32:51.651+0000 D3 REPL [conn360] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:32:51.651+0000 D2 COMMAND [conn361] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:51.651+0000 I NETWORK [conn361] received client metadata from 10.108.2.54:49280 conn361: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:51.651+0000 I COMMAND [conn361] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 
2019-09-04T06:32:51.651+0000 D2 COMMAND [conn361] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578768, 1), signature: { hash: BinData(0, 5E1D4D6DA330158C7AAB078164AD146D6E619AFC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:51.651+0000 D1 REPL [conn361] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578769, 1), t: 1 } 2019-09-04T06:32:51.651+0000 D3 REPL [conn361] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:32:51.652+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52238 #362 (88 connections now open) 2019-09-04T06:32:51.652+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:51.652+0000 D2 COMMAND [conn362] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:51.652+0000 I NETWORK [conn362] received client metadata from 10.108.2.73:52238 conn362: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:51.652+0000 I COMMAND [conn362] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:51.656+0000 I NETWORK [listener] connection accepted from 10.108.2.47:56632 #363 (89 connections now open) 2019-09-04T06:32:51.656+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:51.656+0000 D2 COMMAND [conn363] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:51.656+0000 I NETWORK [conn363] received client metadata from 10.108.2.47:56632 conn363: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:51.656+0000 I COMMAND [conn363] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { 
name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:51.659+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:51.660+0000 D2 COMMAND [conn363] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, B25952C3221E28D91A8CF134FCDA55710368078E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:51.660+0000 D1 REPL [conn363] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578769, 1), t: 1 } 2019-09-04T06:32:51.660+0000 D3 REPL [conn363] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.670+0000 2019-09-04T06:32:51.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:51.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:51.663+0000 I COMMAND [conn339] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578733, 1), signature: { hash: BinData(0, 45E6BA95D4A46C041D1AB6A238AC46A21C109958), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:51.664+0000 D1 - [conn339] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:51.664+0000 W - [conn339] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:51.664+0000 I COMMAND [conn340] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 2AE057265C24734D9B4E4568BCA1F32820325A81), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:32:51.664+0000 D1 - [conn340] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:51.664+0000 W - [conn340] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:51.664+0000 I - [conn325] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:51.664+0000 D1 COMMAND [conn325] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578738, 1), signature: { hash: BinData(0, 0D0BB0ED5B61061781383AFC683690CFB9762D5E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:51.664+0000 D1 - [conn325] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:51.664+0000 W - [conn325] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:51.680+0000 I - [conn340] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : 
"4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:51.680+0000 D1 COMMAND [conn340] 
assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 2AE057265C24734D9B4E4568BCA1F32820325A81), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:51.680+0000 D1 - [conn340] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:51.680+0000 W - [conn340] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:51.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:51.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:51.697+0000 I - [conn339] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuard
E"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", 
"elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:51.697+0000 D1 COMMAND [conn339] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578733, 1), signature: { hash: BinData(0, 45E6BA95D4A46C041D1AB6A238AC46A21C109958), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:51.697+0000 D1 - [conn339] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:51.697+0000 W - [conn339] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:51.717+0000 I - [conn340] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"
b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" 
: "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:51.717+0000 W COMMAND [conn340] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:32:51.717+0000 I COMMAND [conn340] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 2AE057265C24734D9B4E4568BCA1F32820325A81), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:32:51.717+0000 D2 NETWORK [conn340] Session from 10.108.2.73:52222 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:51.717+0000 I NETWORK [conn340] end connection 10.108.2.73:52222 (88 connections now open) 2019-09-04T06:32:51.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:51.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:51.737+0000 I - [conn325] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:51.737+0000 W COMMAND [conn325] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:51.737+0000 I COMMAND [conn325] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578738, 1), signature: { hash: BinData(0, 0D0BB0ED5B61061781383AFC683690CFB9762D5E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:32:51.737+0000 D2 NETWORK [conn325] Session from 10.108.2.44:38742 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:51.737+0000 I NETWORK [conn325] end connection 10.108.2.44:38742 (87 connections now open) 2019-09-04T06:32:51.742+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:51.742+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:51.756+0000 I COMMAND [conn320] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 2AE057265C24734D9B4E4568BCA1F32820325A81), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:51.756+0000 D1 - [conn320] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:51.756+0000 W - [conn320] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:51.756+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48436 #364 (88 connections now open) 2019-09-04T06:32:51.756+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:51.756+0000 D2 COMMAND [conn364] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:51.756+0000 I NETWORK [conn364] received client metadata from 10.108.2.59:48436 conn364: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:51.757+0000 I COMMAND [conn364] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:51.757+0000 D2 COMMAND [conn364] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, 
readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, ABD195D4AC4ED4988535B74DC3F4B42B5EC26ED1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:32:51.757+0000 D1 REPL [conn364] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578769, 1), t: 1 } 2019-09-04T06:32:51.757+0000 D3 REPL [conn364] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.767+0000 2019-09-04T06:32:51.757+0000 I - [conn339] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMach
ine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:51.757+0000 W COMMAND [conn339] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:32:51.757+0000 I COMMAND [conn339] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578733, 1), signature: { hash: BinData(0, 45E6BA95D4A46C041D1AB6A238AC46A21C109958), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30045ms 2019-09-04T06:32:51.757+0000 D2 NETWORK [conn339] Session from 10.108.2.58:52212 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:51.757+0000 I NETWORK [conn339] end connection 10.108.2.58:52212 (87 connections now open) 2019-09-04T06:32:51.759+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:51.774+0000 I - [conn320] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:51.774+0000 D1 COMMAND [conn320] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 2AE057265C24734D9B4E4568BCA1F32820325A81), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:51.774+0000 D1 - [conn320] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:51.774+0000 W - [conn320] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:51.793+0000 I - [conn320] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:51.794+0000 W COMMAND [conn320] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:51.794+0000 I COMMAND [conn320] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578732, 1), signature: { hash: BinData(0, 2AE057265C24734D9B4E4568BCA1F32820325A81), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:32:51.794+0000 D2 NETWORK [conn320] Session from 10.108.2.52:47240 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:51.794+0000 I NETWORK [conn320] end connection 10.108.2.52:47240 (86 connections now open) 2019-09-04T06:32:51.858+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:51.858+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:51.859+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:51.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:51.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:51.959+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:52.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:52.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:52.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:52.056+0000 I COMMAND [conn342] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, FC302E736E3E0A04685B3CCCCAB671E279A74EEA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:52.057+0000 D1 - [conn342] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:52.057+0000 W - [conn342] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:52.059+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:52.073+0000 I - [conn342] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{
 "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:52.073+0000 D1 COMMAND [conn342] 
assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, FC302E736E3E0A04685B3CCCCAB671E279A74EEA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:52.073+0000 D1 - [conn342] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:52.073+0000 W - [conn342] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:52.093+0000 I - [conn342] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13Sc
heduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", 
"elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:52.093+0000 W COMMAND [conn342] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:32:52.093+0000 I COMMAND [conn342] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578739, 1), signature: { hash: BinData(0, FC302E736E3E0A04685B3CCCCAB671E279A74EEA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:32:52.093+0000 D2 NETWORK [conn342] Session from 10.108.2.50:50186 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:52.093+0000 I NETWORK [conn342] end connection 10.108.2.50:50186 (85 connections now open) 2019-09-04T06:32:52.134+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:52.134+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:52.150+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:52.150+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:52.159+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:52.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:52.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:52.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:52.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:52.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:52.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:52.232+0000 D2 
COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 43A728076280BFE09FCADEE23A145338B1DB2D4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:52.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:52.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 43A728076280BFE09FCADEE23A145338B1DB2D4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:52.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 43A728076280BFE09FCADEE23A145338B1DB2D4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:52.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266) } 2019-09-04T06:32:52.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578769, 1), signature: { hash: BinData(0, 43A728076280BFE09FCADEE23A145338B1DB2D4C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:52.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:52.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:52.242+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:52.242+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:52.260+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:52.272+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:52.272+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:52.272+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578769, 1) 2019-09-04T06:32:52.272+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15842 2019-09-04T06:32:52.272+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:52.272+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:52.272+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15842 2019-09-04T06:32:52.273+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15845 2019-09-04T06:32:52.273+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15845 2019-09-04T06:32:52.273+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578769, 1), t: 1 }({ ts: Timestamp(1567578769, 1), t: 1 }) 2019-09-04T06:32:52.360+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:52.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:52.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:52.460+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:52.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:52.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:52.560+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:52.584+0000 D2 COMMAND [conn343] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, B25952C3221E28D91A8CF134FCDA55710368078E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:32:52.584+0000 D1 REPL [conn343] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578769, 1), t: 1 } 2019-09-04T06:32:52.584+0000 D3 REPL [conn343] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:22.594+0000 2019-09-04T06:32:52.660+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
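
The conn342 timeout at the top of this excerpt and the conn343 waitUntilOpTime wait immediately above are the same pattern: a mongos reads config.settings with readConcern level "majority" and an afterOpTime that this member's majority snapshot has not yet reached, so the read can only succeed once the snapshot advances. A minimal pymongo sketch of that read follows; the host and the "configrs" set name come from this log, while the mongos-internal fields (afterOpTime, $replData, $configServerState) are injected server-side and have no driver-level equivalent.

# Sketch of the logged balancer-settings read; host/port and the
# "configrs" set name come from this log, the rest is illustrative.
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern

client = MongoClient(
    "mongodb://cmodb803.togewa.com:27019/"
    "?replicaSet=configrs&readPreference=nearest"
)
settings = client.config.get_collection(
    "settings", read_concern=ReadConcern("majority")
)

# maxTimeMS: 30000 in the logged command maps to max_time_ms() here; when
# the majority snapshot cannot satisfy the read in time, the server fails
# the operation with MaxTimeMSExpired (errCode 50), as conn342 shows above.
cursor = settings.find({"_id": "balancer"}).limit(1).max_time_ms(30000)
print(list(cursor))
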
2019-09-04T06:32:52.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:52.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:52.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:52.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:52.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:52.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:52.760+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:52.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:52.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1076) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:52.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1076 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:02.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:52.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:20.839+0000 2019-09-04T06:32:52.838+0000 D2 ASIO [Replication] Request 1076 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } 2019-09-04T06:32:52.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:52.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:52.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1076) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } 2019-09-04T06:32:52.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:52.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:54.838Z 2019-09-04T06:32:52.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:20.839+0000 2019-09-04T06:32:52.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:52.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1077) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:52.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1077 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:02.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:52.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:20.839+0000 2019-09-04T06:32:52.839+0000 D2 ASIO [Replication] Request 1077 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } 2019-09-04T06:32:52.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new 
Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:52.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:52.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1077) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } 2019-09-04T06:32:52.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:52.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:33:02.142+0000 2019-09-04T06:32:52.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:33:03.397+0000 2019-09-04T06:32:52.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:52.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:54.839Z 2019-09-04T06:32:52.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:52.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:22.839+0000 2019-09-04T06:32:52.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:22.839+0000 2019-09-04T06:32:52.860+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:52.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:52.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:52.961+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:53.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 
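
The replSetHeartbeat round trips above (conn28/conn34 inbound, requests 1076 and 1077 outbound) are internal to the replica set, and each good response from the primary postpones the election timeout, as the ELECTION lines show. Their net effect is visible from any client through replSetGetStatus; a rough sketch, assuming direct access to this member:

# Observing the member states carried by the heartbeat responses above
# (state: 1 = PRIMARY, 2 = SECONDARY); hostname taken from this log.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    # syncingTo is the 4.2-era name for the sync source field; it is
    # empty on the primary, hence the "-" fallback.
    print(member["name"], member["stateStr"], member.get("syncingTo") or "-")
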
2019-09-04T06:32:53.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:53.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:53.061+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:53.063+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:53.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:32:52.839+0000 2019-09-04T06:32:53.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:32:52.838+0000 2019-09-04T06:32:53.063+0000 D3 REPL [replexec-3] stalest member MemberId(2) date: 2019-09-04T06:32:52.838+0000 2019-09-04T06:32:53.063+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:33:02.838+0000 2019-09-04T06:32:53.063+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:22.839+0000 2019-09-04T06:32:53.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 9DE88AEF2A3E997A8D1DBB1DFBF0F1A392D14C4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:53.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:53.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 9DE88AEF2A3E997A8D1DBB1DFBF0F1A392D14C4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:53.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 9DE88AEF2A3E997A8D1DBB1DFBF0F1A392D14C4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:53.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266) } 2019-09-04T06:32:53.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 9DE88AEF2A3E997A8D1DBB1DFBF0F1A392D14C4C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:53.074+0000 D2 COMMAND [conn101] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578767, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, F9A171B9CA4A2E86C2347783076F553836E4559B), keyId: 6727891476899954718 } }, $configServerState: { 
opTime: { ts: Timestamp(1567578767, 2), t: 1 } }, $db: "config" } 2019-09-04T06:32:53.074+0000 D1 COMMAND [conn101] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578767, 2), t: 1 } } } 2019-09-04T06:32:53.074+0000 D3 STORAGE [conn101] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:32:53.074+0000 D1 COMMAND [conn101] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578767, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, F9A171B9CA4A2E86C2347783076F553836E4559B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578767, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578769, 1) 2019-09-04T06:32:53.074+0000 D5 QUERY [conn101] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shards Tree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:32:53.074+0000 D5 QUERY [conn101] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:32:53.074+0000 D5 QUERY [conn101] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:32:53.074+0000 D5 QUERY [conn101] Rated tree: $and 2019-09-04T06:32:53.074+0000 D5 QUERY [conn101] Planner: outputted 0 indexed solutions. 2019-09-04T06:32:53.074+0000 D5 QUERY [conn101] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:32:53.074+0000 D2 QUERY [conn101] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:53.074+0000 D3 STORAGE [conn101] WT begin_transaction for snapshot id 15857 2019-09-04T06:32:53.074+0000 D3 STORAGE [conn101] WT rollback_transaction for snapshot id 15857 2019-09-04T06:32:53.074+0000 I COMMAND [conn101] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578767, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578767, 2), signature: { hash: BinData(0, F9A171B9CA4A2E86C2347783076F553836E4559B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578767, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:32:53.161+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:53.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:53.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:53.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:53.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:53.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:53.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:53.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:53.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:53.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
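
The D5 QUERY trace above explains conn101's plan: with an empty filter neither the host_1 index nor _id_ yields an indexed solution, so the planner emits a collection scan over the three documents in config.shards. The same conclusion can be checked from a client with explain; a sketch under the same connection assumptions as the earlier snippets:

# Expect winningPlan.stage == "COLLSCAN", matching planSummary: COLLSCAN
# in the slow-query line above.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")
plan = client.config.command(
    "explain", {"find": "shards", "filter": {}}, verbosity="queryPlanner"
)
print(plan["queryPlanner"]["winningPlan"]["stage"])
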
2019-09-04T06:32:53.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:32:53.261+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:53.272+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:53.272+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:53.272+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578769, 1) 2019-09-04T06:32:53.272+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15864 2019-09-04T06:32:53.272+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:53.272+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:53.272+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15864 2019-09-04T06:32:53.273+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15867 2019-09-04T06:32:53.273+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15867 2019-09-04T06:32:53.273+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578769, 1), t: 1 }({ ts: Timestamp(1567578769, 1), t: 1 }) 2019-09-04T06:32:53.289+0000 D2 COMMAND [conn157] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:32:53.289+0000 I COMMAND [conn157] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:53.300+0000 D2 COMMAND [conn218] run command admin.$cmd { ping: 1.0, lsid: { id: UUID("ac8e303f-4e60-4a79-b9a4-f7cba7354076") }, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } 2019-09-04T06:32:53.300+0000 I COMMAND [conn218] command admin.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("ac8e303f-4e60-4a79-b9a4-f7cba7354076") }, $clusterTime: { clusterTime: Timestamp(1567578710, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:53.361+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:53.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:53.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:53.461+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:53.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:53.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:53.561+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:53.661+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:53.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:53.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { 
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:53.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:53.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:53.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:53.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:53.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:53.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:53.762+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:53.862+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:53.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:53.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:53.962+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:54.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:54.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:54.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:54.062+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:54.141+0000 D2 COMMAND [conn355] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, B25952C3221E28D91A8CF134FCDA55710368078E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:32:54.141+0000 D1 REPL [conn355] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578769, 1), t: 1 } 2019-09-04T06:32:54.141+0000 D3 REPL [conn355] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:24.151+0000 2019-09-04T06:32:54.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:54.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:54.162+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:54.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:54.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:54.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:54.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms 2019-09-04T06:32:54.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:54.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:54.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 9DE88AEF2A3E997A8D1DBB1DFBF0F1A392D14C4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:54.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:54.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 9DE88AEF2A3E997A8D1DBB1DFBF0F1A392D14C4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:54.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 9DE88AEF2A3E997A8D1DBB1DFBF0F1A392D14C4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:54.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266) } 2019-09-04T06:32:54.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 9DE88AEF2A3E997A8D1DBB1DFBF0F1A392D14C4C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:54.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:54.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:54.262+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:54.272+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:54.272+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:54.272+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578769, 1) 2019-09-04T06:32:54.272+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15887 2019-09-04T06:32:54.272+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:54.272+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:54.272+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15887 2019-09-04T06:32:54.273+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15890 2019-09-04T06:32:54.273+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15890 2019-09-04T06:32:54.273+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578769, 1), t: 1 }({ ts: Timestamp(1567578769, 1), t: 1 }) 2019-09-04T06:32:54.273+0000 D2 ASIO [RS] Request 1066 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpApplied: { ts: Timestamp(1567578769, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) }
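
Request 1066 above is the oplog fetcher's awaitData cursor on local.oplog.rs returning an empty nextBatch (no new writes to replicate); the fetcher immediately schedules the next getMore (RemoteCommand 1078, just below) with maxTimeMS: 5000. The fetcher passes internal parameters (term, lastKnownCommittedOpTime) that no driver exposes, but a read-only approximation of the tailing loop, assuming access to the sync source cmodb804, looks like:

# Tailing the sync source's oplog the way the fetcher does, minus the
# replication-internal fields; max_await_time_ms mirrors maxTimeMS: 5000
# on the getMore scheduled below.
from pymongo import CursorType, MongoClient

client = MongoClient("mongodb://cmodb804.togewa.com:27019/?directConnection=true")
cursor = client.local["oplog.rs"].find(
    cursor_type=CursorType.TAILABLE_AWAIT,
    max_await_time_ms=5000,
)
for entry in cursor:
    # ts is the Timestamp optime seen throughout this log.
    print(entry["ts"], entry["op"])
    break
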
2019-09-04T06:32:54.273+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpApplied: { ts: Timestamp(1567578769, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:54.273+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:54.273+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:54.273+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:03.397+0000 2019-09-04T06:32:54.273+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:04.940+0000 2019-09-04T06:32:54.273+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1078 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:04.273+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578769, 1), t: 1 } } 2019-09-04T06:32:54.273+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:54.274+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:22.839+0000 2019-09-04T06:32:54.273+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:19.273+0000 2019-09-04T06:32:54.278+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:54.278+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:54.278+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1079 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:24.278+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: 
Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:54.278+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:19.273+0000 2019-09-04T06:32:54.278+0000 D2 ASIO [RS] Request 1079 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } 2019-09-04T06:32:54.278+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:54.278+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:54.278+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:19.273+0000 2019-09-04T06:32:54.362+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:54.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:54.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:54.462+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:54.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:54.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:54.563+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:54.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:54.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:54.663+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:54.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:54.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:54.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:54.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { 
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:54.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:54.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:54.763+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:54.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:54.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1080) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:54.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1080 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:04.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:54.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:22.839+0000 2019-09-04T06:32:54.838+0000 D2 ASIO [Replication] Request 1080 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } 2019-09-04T06:32:54.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:54.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:54.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1080) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, 
primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } 2019-09-04T06:32:54.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:54.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:56.838Z 2019-09-04T06:32:54.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:22.839+0000 2019-09-04T06:32:54.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:54.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1081) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:54.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1081 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:04.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:54.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:22.839+0000 2019-09-04T06:32:54.839+0000 D2 ASIO [Replication] Request 1081 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } 2019-09-04T06:32:54.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, 
replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:54.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:54.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1081) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578769, 1) } 2019-09-04T06:32:54.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:54.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:33:04.940+0000 2019-09-04T06:32:54.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:33:05.093+0000 2019-09-04T06:32:54.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:54.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:56.839Z 2019-09-04T06:32:54.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:54.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:24.839+0000 2019-09-04T06:32:54.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:24.839+0000 2019-09-04T06:32:54.863+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:54.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:54.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:54.963+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:55.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:55.049+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36750 #365 (86 connections now open) 2019-09-04T06:32:55.049+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:55.049+0000 D2 COMMAND [conn365] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", 
version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:55.049+0000 I NETWORK [conn365] received client metadata from 10.108.2.55:36750 conn365: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:55.049+0000 I COMMAND [conn365] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:55.050+0000 D2 COMMAND [conn365] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578771, 1), signature: { hash: BinData(0, BDE1D6A72B3EA3CE1E4DDB5EC4BEB336B05E9F0D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:55.050+0000 D1 REPL [conn365] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578769, 1), t: 1 } 2019-09-04T06:32:55.050+0000 D3 REPL [conn365] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:25.060+0000 2019-09-04T06:32:55.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:55.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:55.057+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:32:55.057+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:32:55.057+0000 D2 COMMAND [conn90] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:32:55.057+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:32:55.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 9DE88AEF2A3E997A8D1DBB1DFBF0F1A392D14C4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:55.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:55.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { 
clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 9DE88AEF2A3E997A8D1DBB1DFBF0F1A392D14C4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:55.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 9DE88AEF2A3E997A8D1DBB1DFBF0F1A392D14C4C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:55.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), opTime: { ts: Timestamp(1567578769, 1), t: 1 }, wallTime: new Date(1567578769266) } 2019-09-04T06:32:55.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 9DE88AEF2A3E997A8D1DBB1DFBF0F1A392D14C4C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:55.063+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:55.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:55.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:55.163+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:55.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:55.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:55.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:55.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:55.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:55.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:55.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:55.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:55.264+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:55.272+0000 D2 COMMAND [conn49] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578765, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578767, 1), signature: { hash: BinData(0, F9A171B9CA4A2E86C2347783076F553836E4559B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578765, 1), t: 1 } }, $db: "config" } 2019-09-04T06:32:55.272+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578765, 1), t: 1 } } } 2019-09-04T06:32:55.272+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:32:55.272+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578765, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578767, 1), signature: { hash: BinData(0, F9A171B9CA4A2E86C2347783076F553836E4559B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578765, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578769, 1) 2019-09-04T06:32:55.272+0000 D2 QUERY [conn49] Using idhack: query: { _id: "config.system.sessions" } sort: {} projection: {} limit: 1 2019-09-04T06:32:55.272+0000 D3 STORAGE [conn49] WT begin_transaction for snapshot id 15912 2019-09-04T06:32:55.272+0000 D3 STORAGE [conn49] WT rollback_transaction for snapshot id 15912 2019-09-04T06:32:55.272+0000 I COMMAND [conn49] command config.collections command: find { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578765, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578767, 1), signature: { hash: BinData(0, F9A171B9CA4A2E86C2347783076F553836E4559B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578765, 1), t: 1 } }, $db: "config" } planSummary: IDHACK keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:702 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:32:55.272+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:55.272+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:55.273+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578769, 1) 2019-09-04T06:32:55.273+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15914 2019-09-04T06:32:55.273+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, 
indexes: [], prefix: -1 } } 2019-09-04T06:32:55.273+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:55.273+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15914 2019-09-04T06:32:55.273+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15917 2019-09-04T06:32:55.273+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15917 2019-09-04T06:32:55.273+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578769, 1), t: 1 }({ ts: Timestamp(1567578769, 1), t: 1 }) 2019-09-04T06:32:55.352+0000 D2 ASIO [RS] Request 1078 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578775, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578775351), o: { $v: 1, $set: { ping: new Date(1567578775347), up: 2675 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpApplied: { ts: Timestamp(1567578775, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578775, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 1) } 2019-09-04T06:32:55.352+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578775, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578775351), o: { $v: 1, $set: { ping: new Date(1567578775347), up: 2675 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpApplied: { ts: Timestamp(1567578775, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578769, 1), $clusterTime: { clusterTime: Timestamp(1567578775, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:55.352+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:55.352+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578775, 1) 
and ending at ts: Timestamp(1567578775, 1) 2019-09-04T06:32:55.352+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:05.093+0000 2019-09-04T06:32:55.352+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:05.742+0000 2019-09-04T06:32:55.352+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:55.352+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:24.839+0000 2019-09-04T06:32:55.352+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578775, 1), t: 1 } 2019-09-04T06:32:55.352+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:55.352+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:55.352+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578769, 1) 2019-09-04T06:32:55.352+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15921 2019-09-04T06:32:55.352+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:55.352+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:55.352+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15921 2019-09-04T06:32:55.352+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:55.352+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:32:55.352+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:55.352+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578769, 1) 2019-09-04T06:32:55.352+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578775, 1) } 2019-09-04T06:32:55.352+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15924 2019-09-04T06:32:55.352+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:55.352+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:55.352+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15924 2019-09-04T06:32:55.352+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15918 2019-09-04T06:32:55.352+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15918 2019-09-04T06:32:55.352+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15927 2019-09-04T06:32:55.352+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15927 2019-09-04T06:32:55.352+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer 
worker Pool
2019-09-04T06:32:55.353+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 15929
2019-09-04T06:32:55.353+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578775, 1)
2019-09-04T06:32:55.353+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578775, 1)
2019-09-04T06:32:55.353+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 15929
2019-09-04T06:32:55.353+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:55.353+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:55.353+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15928
2019-09-04T06:32:55.353+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15928
2019-09-04T06:32:55.353+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15931
2019-09-04T06:32:55.353+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15931
2019-09-04T06:32:55.353+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578775, 1), t: 1 }({ ts: Timestamp(1567578775, 1), t: 1 })
2019-09-04T06:32:55.353+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578775, 1)
2019-09-04T06:32:55.353+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15932
2019-09-04T06:32:55.353+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578775, 1) } } ] } sort: {} projection: {}
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578775, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578775, 1) || First: notFirst: full path: ts
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578775, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578775, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578775, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578775, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:55.353+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15932
2019-09-04T06:32:55.353+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:55.353+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:55.353+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578775, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578775351), o: { $v: 1, $set: { ping: new Date(1567578775347), up: 2675 } } }, oplog application mode: Secondary
2019-09-04T06:32:55.353+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578775, 1)
2019-09-04T06:32:55.353+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 15934
2019-09-04T06:32:55.353+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:32:55.353+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:55.353+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 15934
2019-09-04T06:32:55.353+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:55.353+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578775, 1), t: 1 }({ ts: Timestamp(1567578775, 1), t: 1 })
2019-09-04T06:32:55.353+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578775, 1)
2019-09-04T06:32:55.353+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15933
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:55.353+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:55.353+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:32:55.353+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15933 2019-09-04T06:32:55.353+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578775, 1) 2019-09-04T06:32:55.353+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15937 2019-09-04T06:32:55.353+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15937 2019-09-04T06:32:55.353+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578775, 1), t: 1 }({ ts: Timestamp(1567578775, 1), t: 1 }) 2019-09-04T06:32:55.353+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:55.354+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578775, 1), t: 1 }, appliedWallTime: new Date(1567578775351), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:55.354+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1082 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:25.354+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578775, 1), t: 1 }, appliedWallTime: new Date(1567578775351), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578769, 1), t: 1 }, lastCommittedWall: new Date(1567578769266), lastOpVisible: { ts: Timestamp(1567578769, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:55.354+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:25.353+0000 2019-09-04T06:32:55.354+0000 D2 ASIO [RS] Request 1082 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 1), t: 1 }, lastCommittedWall: new Date(1567578775351), lastOpVisible: { ts: Timestamp(1567578775, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 1), $clusterTime: { clusterTime: Timestamp(1567578775, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 1) } 2019-09-04T06:32:55.354+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 1), t: 1 }, lastCommittedWall: new Date(1567578775351), lastOpVisible: { ts: Timestamp(1567578775, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 1), $clusterTime: { clusterTime: Timestamp(1567578775, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:55.354+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:55.354+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:25.354+0000 2019-09-04T06:32:55.354+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578775, 1), t: 1 } 2019-09-04T06:32:55.354+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1083 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:05.354+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578769, 1), t: 1 } } 2019-09-04T06:32:55.354+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:25.354+0000 2019-09-04T06:32:55.354+0000 D2 ASIO [RS] Request 1083 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 1), t: 1 }, lastCommittedWall: new Date(1567578775351), lastOpVisible: { ts: Timestamp(1567578775, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578775, 1), t: 1 }, lastCommittedWall: new Date(1567578775351), lastOpApplied: { ts: Timestamp(1567578775, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 1), $clusterTime: { clusterTime: Timestamp(1567578775, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 1) } 2019-09-04T06:32:55.354+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 1), t: 1 }, lastCommittedWall: new Date(1567578775351), lastOpVisible: { ts: Timestamp(1567578775, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578775, 1), t: 1 }, lastCommittedWall: new 
Date(1567578775351), lastOpApplied: { ts: Timestamp(1567578775, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 1), $clusterTime: { clusterTime: Timestamp(1567578775, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:55.354+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:55.355+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:55.355+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578775, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a9702d1a496712d725d'), operName: "", parentOperId: "5d6f5a9702d1a496712d725a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 1), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578775, 1), t: 1 } }, $db: "config" } 2019-09-04T06:32:55.355+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578770, 1) 2019-09-04T06:32:55.355+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:05.742+0000 2019-09-04T06:32:55.355+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:06.519+0000 2019-09-04T06:32:55.355+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1084 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:05.355+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578775, 1), t: 1 } } 2019-09-04T06:32:55.355+0000 D3 REPL [conn338] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn338] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.680+0000 2019-09-04T06:32:55.355+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:25.354+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn357] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn357] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.823+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn322] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn322] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.989+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn345] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn345] 
waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:56.312+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn349] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn349] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.261+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn337] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn337] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.660+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn360] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn360] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn344] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn344] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.677+0000 2019-09-04T06:32:55.355+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:55.355+0000 D3 REPL [conn364] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn364] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.767+0000 2019-09-04T06:32:55.355+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:55.355+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:24.839+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn354] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn354] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.413+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn346] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn346] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:57.574+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn327] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn327] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.840+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn318] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn318] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:03.478+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn350] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn350] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.763+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn351] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn351] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.962+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn358] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 
2019-09-04T06:32:55.355+0000 D3 REPL [conn358] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.833+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn348] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn348] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:59.750+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn323] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn323] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.688+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn353] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn353] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.419+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn341] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn341] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.094+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn352] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn352] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.291+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn347] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f5a9702d1a496712d725a|5d6f5a9702d1a496712d725d 2019-09-04T06:32:55.355+0000 D3 REPL [conn347] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:58.760+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn361] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn361] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn363] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn363] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.670+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn343] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn343] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:22.594+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn355] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn355] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:24.151+0000 2019-09-04T06:32:55.355+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:55.355+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 1), t: 1 }, durableWallTime: new 
Date(1567578775351), appliedOpTime: { ts: Timestamp(1567578775, 1), t: 1 }, appliedWallTime: new Date(1567578775351), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 1), t: 1 }, lastCommittedWall: new Date(1567578775351), lastOpVisible: { ts: Timestamp(1567578775, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:55.355+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1085 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:25.355+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 1), t: 1 }, durableWallTime: new Date(1567578775351), appliedOpTime: { ts: Timestamp(1567578775, 1), t: 1 }, appliedWallTime: new Date(1567578775351), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 1), t: 1 }, lastCommittedWall: new Date(1567578775351), lastOpVisible: { ts: Timestamp(1567578775, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:55.355+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:25.354+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn365] Got notified of new snapshot: { ts: Timestamp(1567578775, 1), t: 1 }, 2019-09-04T06:32:55.351+0000 2019-09-04T06:32:55.355+0000 D3 REPL [conn365] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:25.060+0000 2019-09-04T06:32:55.355+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578775, 1), t: 1 } } } 2019-09-04T06:32:55.355+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:32:55.355+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578775, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a9702d1a496712d725d'), operName: "", parentOperId: "5d6f5a9702d1a496712d725a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 1), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578775, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578775, 1) 2019-09-04T06:32:55.355+0000 D2 QUERY [conn21] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:32:55.356+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578775, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a9702d1a496712d725d'), operName: "", parentOperId: "5d6f5a9702d1a496712d725a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 1), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578775, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:32:55.356+0000 D2 ASIO [RS] Request 1085 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 1), t: 1 }, lastCommittedWall: new Date(1567578775351), lastOpVisible: { ts: Timestamp(1567578775, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 1), $clusterTime: { clusterTime: Timestamp(1567578775, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 1) } 2019-09-04T06:32:55.356+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 1), t: 1 }, lastCommittedWall: new Date(1567578775351), lastOpVisible: { ts: Timestamp(1567578775, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 1), $clusterTime: { clusterTime: Timestamp(1567578775, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:55.356+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:55.356+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:25.354+0000 2019-09-04T06:32:55.356+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578775, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a9702d1a496712d725e'), operName: "", parentOperId: "5d6f5a9702d1a496712d725a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 1), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578775, 1), t: 1 } }, $db: "config" } 2019-09-04T06:32:55.356+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 
5d6f5a9702d1a496712d725a|5d6f5a9702d1a496712d725e 2019-09-04T06:32:55.356+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578775, 1), t: 1 } } } 2019-09-04T06:32:55.356+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:32:55.356+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578775, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a9702d1a496712d725e'), operName: "", parentOperId: "5d6f5a9702d1a496712d725a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 1), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578775, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578775, 1) 2019-09-04T06:32:55.356+0000 D2 QUERY [conn21] Collection config.settings does not exist. Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:32:55.356+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578775, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5a9702d1a496712d725e'), operName: "", parentOperId: "5d6f5a9702d1a496712d725a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 1), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578775, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:32:55.364+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:55.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:55.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:55.452+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578775, 1) 2019-09-04T06:32:55.464+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:55.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:55.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:55.564+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:55.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:55.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:55.664+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:55.681+0000 D2 COMMAND [conn75] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:32:55.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:55.721+0000 D2 ASIO [RS] Request 1084 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578775, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578775720), o: { $v: 1, $set: { ping: new Date(1567578775719) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 1), t: 1 }, lastCommittedWall: new Date(1567578775351), lastOpVisible: { ts: Timestamp(1567578775, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578775, 1), t: 1 }, lastCommittedWall: new Date(1567578775351), lastOpApplied: { ts: Timestamp(1567578775, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 1), $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) } 2019-09-04T06:32:55.721+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578775, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578775720), o: { $v: 1, $set: { ping: new Date(1567578775719) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 1), t: 1 }, lastCommittedWall: new Date(1567578775351), lastOpVisible: { ts: Timestamp(1567578775, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578775, 1), t: 1 }, lastCommittedWall: new Date(1567578775351), lastOpApplied: { ts: Timestamp(1567578775, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 1), $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:55.721+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:55.721+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578775, 2) and ending at ts: Timestamp(1567578775, 2) 2019-09-04T06:32:55.721+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:06.519+0000 2019-09-04T06:32:55.721+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:06.220+0000 2019-09-04T06:32:55.721+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:55.721+0000 D3 EXECUTOR [replexec-0] Not reaping because the 
earliest retirement date is 2019-09-04T06:33:24.839+0000 2019-09-04T06:32:55.721+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578775, 2), t: 1 } 2019-09-04T06:32:55.721+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:55.721+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:55.721+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578775, 1) 2019-09-04T06:32:55.721+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15947 2019-09-04T06:32:55.721+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:55.721+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:55.722+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15947 2019-09-04T06:32:55.722+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:55.722+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:55.722+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:32:55.722+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578775, 1) 2019-09-04T06:32:55.722+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15950 2019-09-04T06:32:55.722+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578775, 2) } 2019-09-04T06:32:55.722+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:55.722+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:55.722+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15950 2019-09-04T06:32:55.722+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15938 2019-09-04T06:32:55.722+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15938 2019-09-04T06:32:55.722+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15953 2019-09-04T06:32:55.722+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15953 2019-09-04T06:32:55.722+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:32:55.722+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 15955 2019-09-04T06:32:55.722+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578775, 2) 2019-09-04T06:32:55.722+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578775, 2) 2019-09-04T06:32:55.722+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot 
id 15955
2019-09-04T06:32:55.722+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:55.722+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:32:55.722+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15954
2019-09-04T06:32:55.722+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15954
2019-09-04T06:32:55.722+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15957
2019-09-04T06:32:55.722+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15957
2019-09-04T06:32:55.722+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578775, 2), t: 1 }({ ts: Timestamp(1567578775, 2), t: 1 })
2019-09-04T06:32:55.722+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578775, 2)
2019-09-04T06:32:55.722+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15958
2019-09-04T06:32:55.722+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578775, 2) } } ] } sort: {} projection: {}
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578775, 2)
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578775, 2) || First: notFirst: full path: ts
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578775, 2)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578775, 2)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578775, 2) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:55.722+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578775, 2)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:55.722+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15958
2019-09-04T06:32:55.722+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:32:55.722+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:55.722+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578775, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578775720), o: { $v: 1, $set: { ping: new Date(1567578775719) } } }, oplog application mode: Secondary
2019-09-04T06:32:55.722+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578775, 2)
2019-09-04T06:32:55.722+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 15960
2019-09-04T06:32:55.722+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }
2019-09-04T06:32:55.722+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:32:55.722+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 15960
2019-09-04T06:32:55.723+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:32:55.723+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578775, 2), t: 1 }({ ts: Timestamp(1567578775, 2), t: 1 })
2019-09-04T06:32:55.723+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578775, 2)
2019-09-04T06:32:55.723+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15959
2019-09-04T06:32:55.723+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:32:55.723+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:32:55.723+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:32:55.723+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:32:55.723+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:32:55.723+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:32:55.723+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 15959
2019-09-04T06:32:55.723+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578775, 2)
2019-09-04T06:32:55.723+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15964
2019-09-04T06:32:55.723+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15964
2019-09-04T06:32:55.723+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578775, 2), t: 1 }({ ts: Timestamp(1567578775, 2), t: 1 })
2019-09-04T06:32:55.723+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:32:55.723+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 1), t: 1 }, durableWallTime: new Date(1567578775351), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 1), t: 1 }, lastCommittedWall: new Date(1567578775351), lastOpVisible: { ts: Timestamp(1567578775, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:32:55.723+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1086 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:25.723+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 1), t: 1 }, durableWallTime: new Date(1567578775351), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 2, cfgver: 2 }
], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 1), t: 1 }, lastCommittedWall: new Date(1567578775351), lastOpVisible: { ts: Timestamp(1567578775, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:55.723+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:25.723+0000 2019-09-04T06:32:55.723+0000 D2 ASIO [RS] Request 1086 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) } 2019-09-04T06:32:55.723+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:55.723+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:55.723+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:25.723+0000 2019-09-04T06:32:55.723+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578775, 2), t: 1 } 2019-09-04T06:32:55.723+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1087 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:05.723+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578775, 1), t: 1 } } 2019-09-04T06:32:55.724+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:25.723+0000 2019-09-04T06:32:55.724+0000 D2 ASIO [RS] Request 1087 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpApplied: { ts: Timestamp(1567578775, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') 
}, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) } 2019-09-04T06:32:55.724+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpApplied: { ts: Timestamp(1567578775, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:55.724+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:55.724+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:32:55.724+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578770, 2) 2019-09-04T06:32:55.724+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:06.220+0000 2019-09-04T06:32:55.724+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:06.085+0000 2019-09-04T06:32:55.724+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:55.724+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:24.839+0000 2019-09-04T06:32:55.724+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1088 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:05.724+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578775, 2), t: 1 } } 2019-09-04T06:32:55.724+0000 D3 REPL [conn337] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn337] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.660+0000 2019-09-04T06:32:55.724+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:25.723+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn360] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn360] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn323] Got notified of new snapshot: { ts: 
Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn323] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.688+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn346] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn346] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:57.574+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn358] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn358] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.833+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn351] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn351] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.962+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn354] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn354] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.413+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn353] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn353] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.419+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn352] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn352] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.291+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn347] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn347] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:58.760+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn363] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn363] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.670+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn365] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn365] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:25.060+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn338] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn338] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.680+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn345] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn345] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:56.312+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn364] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn364] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.767+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn357] Got notified of new snapshot: { ts: 
Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn357] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.823+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn322] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn322] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.989+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn327] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn327] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.840+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn349] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.724+0000 D3 REPL [conn349] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.261+0000 2019-09-04T06:32:55.725+0000 D3 REPL [conn341] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.725+0000 D3 REPL [conn341] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.094+0000 2019-09-04T06:32:55.725+0000 D3 REPL [conn350] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.725+0000 D3 REPL [conn350] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.763+0000 2019-09-04T06:32:55.725+0000 D3 REPL [conn318] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.725+0000 D3 REPL [conn318] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:03.478+0000 2019-09-04T06:32:55.725+0000 D3 REPL [conn344] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.725+0000 D3 REPL [conn344] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.677+0000 2019-09-04T06:32:55.725+0000 D3 REPL [conn361] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.725+0000 D3 REPL [conn361] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:32:55.725+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:32:55.725+0000 D3 REPL [conn348] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.725+0000 D3 REPL [conn348] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:32:59.750+0000 2019-09-04T06:32:55.725+0000 D3 REPL [conn343] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.725+0000 D3 REPL [conn343] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:22.594+0000 2019-09-04T06:32:55.725+0000 D3 REPL [conn355] Got notified of new snapshot: { ts: Timestamp(1567578775, 2), t: 1 }, 2019-09-04T06:32:55.720+0000 2019-09-04T06:32:55.725+0000 D3 REPL [conn355] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:24.151+0000 2019-09-04T06:32:55.725+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:32:55.725+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { 
durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:55.725+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1089 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:25.725+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, durableWallTime: new Date(1567578769266), appliedOpTime: { ts: Timestamp(1567578769, 1), t: 1 }, appliedWallTime: new Date(1567578769266), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:32:55.725+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:25.723+0000 2019-09-04T06:32:55.725+0000 D2 ASIO [RS] Request 1089 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) } 2019-09-04T06:32:55.725+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: 
Timestamp(1567578775, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:55.725+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:32:55.725+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:25.723+0000 2019-09-04T06:32:55.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:55.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:55.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:55.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:55.764+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:55.822+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578775, 2) 2019-09-04T06:32:55.864+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:55.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:55.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:55.964+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:56.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:56.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:56.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:56.065+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:56.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:56.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:56.165+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:56.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:56.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:56.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:56.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:56.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:56.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:56.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $db: 
"admin" } 2019-09-04T06:32:56.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:56.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:56.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:56.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), opTime: { ts: Timestamp(1567578775, 2), t: 1 }, wallTime: new Date(1567578775720) } 2019-09-04T06:32:56.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:56.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:56.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:56.265+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:56.298+0000 I NETWORK [listener] connection accepted from 10.108.2.60:44942 #366 (87 connections now open) 2019-09-04T06:32:56.298+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:56.298+0000 D2 COMMAND [conn366] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:56.298+0000 I NETWORK [conn366] received client metadata from 10.108.2.60:44942 conn366: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:56.298+0000 I COMMAND [conn366] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:56.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:56.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:56.313+0000 I COMMAND [conn345] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578743, 1), signature: { hash: BinData(0, 7EB74EF80679BE75277EB9A98AFF21A12F4B2DD4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:56.313+0000 D1 - [conn345] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:56.313+0000 W - [conn345] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:56.330+0000 I - [conn345] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:56.330+0000 D1 COMMAND [conn345] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578743, 1), signature: { hash: BinData(0, 7EB74EF80679BE75277EB9A98AFF21A12F4B2DD4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:56.330+0000 D1 - [conn345] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:56.330+0000 W - [conn345] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:56.350+0000 I - [conn345] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"5617
48F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : 
"BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:56.350+0000 W COMMAND [conn345] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:56.350+0000 I COMMAND [conn345] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578743, 1), signature: { hash: BinData(0, 7EB74EF80679BE75277EB9A98AFF21A12F4B2DD4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:32:56.350+0000 D2 NETWORK [conn345] Session from 10.108.2.60:44926 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:56.350+0000 I NETWORK [conn345] end connection 10.108.2.60:44926 (86 connections now open) 2019-09-04T06:32:56.365+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:56.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:56.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:56.465+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:56.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:56.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:56.565+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:56.601+0000 D2 COMMAND [conn49] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578769, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 1), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578769, 1), t: 1 } }, $db: "config" } 2019-09-04T06:32:56.601+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578769, 1), t: 1 } } } 2019-09-04T06:32:56.601+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 
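The two backtraces above are diagnostics, not crashes (traceAllExceptions is enabled): conn345's find on config.shards requested readConcern majority with afterOpTime { ts: Timestamp(1566459168, 1), t: 92 }, an optime apparently from a previous incarnation of this config server replica set (the current term is 1), so waitForReadConcern blocked until the 30000ms maxTimeMS budget expired (30027ms total) and the command failed with errName:MaxTimeMSExpired, errCode:50. The second trace is only the slow-operation logger timing out on a lock acquisition while gathering storage statistics for the same command. A sketch of how a driver-side caller sees this class of failure, assuming pymongo; the logged request actually came from another cluster node, and afterOpTime is internal, not settable from a normal client:

from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout
from pymongo.read_concern import ReadConcern

# Host/port from this log; pymongo usage is an assumption.
client = MongoClient("cmodb803.togewa.com", 27019)
config_db = client.get_database("config", read_concern=ReadConcern("majority"))

try:
    # max_time_ms mirrors the maxTimeMS: 30000 in the logged command. In
    # the logged failure the 30s were spent in waitForReadConcern on a
    # stale afterOpTime; a plain majority read like this one would
    # normally return promptly.
    docs = list(config_db.shards.find({}, max_time_ms=30000))
except ExecutionTimeout as exc:
    # pymongo maps errCode:50 (MaxTimeMSExpired) to ExecutionTimeout.
    print("operation exceeded time limit:", exc)

The HostUnreachable / "end connection" entries that follow show the requester hanging up after receiving the error, which is why conn345 disappears from the connection count.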
2019-09-04T06:32:56.601+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578769, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 1), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578769, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578775, 2) 2019-09-04T06:32:56.601+0000 D2 QUERY [conn49] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:32:56.601+0000 I COMMAND [conn49] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578769, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 1), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578769, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:32:56.601+0000 D2 COMMAND [conn49] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578775, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578775, 2), t: 1 } }, $db: "config" } 2019-09-04T06:32:56.601+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578775, 2), t: 1 } } } 2019-09-04T06:32:56.601+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:32:56.601+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578775, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578775, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578775, 2) 2019-09-04T06:32:56.601+0000 D2 QUERY [conn49] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:32:56.601+0000 I COMMAND [conn49] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578775, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578775, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:32:56.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:56.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:56.665+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:56.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:56.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:56.722+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:56.722+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:56.722+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578775, 2) 2019-09-04T06:32:56.722+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 15985 2019-09-04T06:32:56.722+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:56.722+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:56.722+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 15985 2019-09-04T06:32:56.723+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 15988 2019-09-04T06:32:56.723+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 15988 2019-09-04T06:32:56.723+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578775, 2), t: 1 }({ ts: Timestamp(1567578775, 2), t: 1 }) 2019-09-04T06:32:56.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:56.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:56.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:56.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 
0ms 2019-09-04T06:32:56.766+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:56.811+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:56.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:56.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:56.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1090) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:56.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1090 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:06.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:56.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:24.839+0000 2019-09-04T06:32:56.838+0000 D2 ASIO [Replication] Request 1090 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), opTime: { ts: Timestamp(1567578775, 2), t: 1 }, wallTime: new Date(1567578775720), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) } 2019-09-04T06:32:56.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), opTime: { ts: Timestamp(1567578775, 2), t: 1 }, wallTime: new Date(1567578775720), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:32:56.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:56.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1090) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, 
durableWallTime: new Date(1567578775720), opTime: { ts: Timestamp(1567578775, 2), t: 1 }, wallTime: new Date(1567578775720), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) } 2019-09-04T06:32:56.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:32:56.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:32:58.838Z 2019-09-04T06:32:56.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:24.839+0000 2019-09-04T06:32:56.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:32:56.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1091) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:56.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1091 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:06.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:32:56.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:24.839+0000 2019-09-04T06:32:56.839+0000 D2 ASIO [Replication] Request 1091 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), opTime: { ts: Timestamp(1567578775, 2), t: 1 }, wallTime: new Date(1567578775720), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) } 2019-09-04T06:32:56.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), opTime: { ts: Timestamp(1567578775, 2), t: 1 }, wallTime: new Date(1567578775720), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, 
syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:32:56.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:56.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1091) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), opTime: { ts: Timestamp(1567578775, 2), t: 1 }, wallTime: new Date(1567578775720), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) } 2019-09-04T06:32:56.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:32:56.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:33:06.085+0000 2019-09-04T06:32:56.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:33:07.789+0000 2019-09-04T06:32:56.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:32:56.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:32:58.839Z 2019-09-04T06:32:56.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:32:56.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:26.839+0000 2019-09-04T06:32:56.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:26.839+0000 2019-09-04T06:32:56.866+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:56.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:56.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:56.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:56.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:56.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:56.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:56.966+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:57.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 
2019-09-04T06:32:57.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:57.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:57.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:32:57.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:57.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:57.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), opTime: { ts: Timestamp(1567578775, 2), t: 1 }, wallTime: new Date(1567578775720) } 2019-09-04T06:32:57.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, CC669B4A70EC29AFAF2AFD95D332479DCDE79462), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.066+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:57.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:57.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.166+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:57.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:57.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:57.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:57.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:57.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:32:57.250+0000 D2 COMMAND [conn160] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:32:57.250+0000 I COMMAND [conn160] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.260+0000 D2 COMMAND [conn220] run command admin.$cmd { ping: 1.0, lsid: { id: UUID("23af97f8-66f0-4a27-b5f1-59167651ca5f") }, $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } 2019-09-04T06:32:57.260+0000 I COMMAND [conn220] command admin.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("23af97f8-66f0-4a27-b5f1-59167651ca5f") }, $clusterTime: { clusterTime: Timestamp(1567578715, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.266+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:57.311+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:57.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.356+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:57.356+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.366+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:57.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:57.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.467+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:57.561+0000 I NETWORK [listener] connection accepted from 10.108.2.61:38018 #367 (87 connections now open) 2019-09-04T06:32:57.561+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:57.561+0000 D2 COMMAND [conn367] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:57.561+0000 I NETWORK [conn367] received client metadata from 10.108.2.61:38018 conn367: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:57.561+0000 I COMMAND [conn367] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 
3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:57.567+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:57.576+0000 I COMMAND [conn346] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578741, 1), signature: { hash: BinData(0, 6623B4D362DDEA79EDD3F88245C2B01A5792EA1E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:57.576+0000 D1 - [conn346] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:32:57.576+0000 W - [conn346] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:57.593+0000 I - [conn346] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceSta
teMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : 
"7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:57.593+0000 D1 COMMAND [conn346] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578741, 1), signature: { hash: BinData(0, 6623B4D362DDEA79EDD3F88245C2B01A5792EA1E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:57.593+0000 D1 - [conn346] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:32:57.593+0000 W - [conn346] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:32:57.613+0000 I - [conn346] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"56
1748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : 
"E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:32:57.613+0000 W COMMAND [conn346] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:32:57.613+0000 I COMMAND [conn346] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578741, 1), signature: { hash: BinData(0, 6623B4D362DDEA79EDD3F88245C2B01A5792EA1E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:32:57.613+0000 D2 NETWORK [conn346] Session from 10.108.2.61:37996 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:32:57.613+0000 I NETWORK [conn346] end connection 10.108.2.61:37996 (86 connections now open) 2019-09-04T06:32:57.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:57.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.667+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:57.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:57.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.722+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:57.722+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:57.722+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578775, 2) 2019-09-04T06:32:57.722+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16012 2019-09-04T06:32:57.722+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: 
"local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:57.722+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:57.722+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16012 2019-09-04T06:32:57.723+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16015 2019-09-04T06:32:57.723+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16015 2019-09-04T06:32:57.723+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578775, 2), t: 1 }({ ts: Timestamp(1567578775, 2), t: 1 }) 2019-09-04T06:32:57.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:57.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:57.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.767+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:57.811+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:57.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:57.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.867+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:57.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:57.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:57.967+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:58.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:32:58.067+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:58.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:58.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:58.168+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:58.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:58.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:58.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:58.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 
0ms 2019-09-04T06:32:58.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:58.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:58.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 40BA65C1F2276A67506599BA6E09AFFA9EF703A0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:58.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:32:58.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 40BA65C1F2276A67506599BA6E09AFFA9EF703A0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:58.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 40BA65C1F2276A67506599BA6E09AFFA9EF703A0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:32:58.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), opTime: { ts: Timestamp(1567578775, 2), t: 1 }, wallTime: new Date(1567578775720) } 2019-09-04T06:32:58.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 40BA65C1F2276A67506599BA6E09AFFA9EF703A0), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:58.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:32:58.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:32:58.268+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:58.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:58.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:58.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:58.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:58.368+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:58.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:58.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:58.468+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:58.568+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:58.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:58.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:58.668+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:58.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:58.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:58.722+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:32:58.722+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:32:58.722+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578775, 2) 2019-09-04T06:32:58.722+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16034 2019-09-04T06:32:58.722+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:32:58.722+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:32:58.722+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16034 2019-09-04T06:32:58.723+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16037 2019-09-04T06:32:58.723+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16037 2019-09-04T06:32:58.723+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578775, 2), t: 1 }({ ts: Timestamp(1567578775, 2), t: 1 }) 2019-09-04T06:32:58.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:58.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:58.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, 
$db: "admin" } 2019-09-04T06:32:58.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:58.746+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46710 #368 (87 connections now open) 2019-09-04T06:32:58.746+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:32:58.747+0000 D2 COMMAND [conn368] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:32:58.747+0000 I NETWORK [conn368] received client metadata from 10.108.2.64:46710 conn368: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:32:58.747+0000 I COMMAND [conn368] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:58.762+0000 I COMMAND [conn347] Command on database config timed out waiting for read concern to be satisfied. 
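NOTE: this is the first failure of interest in the capture. A mongos is querying config.shards with readConcern level "majority" and afterOpTime { ts: Timestamp(1566459168, 1), t: 92 }, an optime in term 92, while every heartbeat in this log shows the set in term 1. The replicaSetId 5d5e459bac9313827bdd88e9 embeds a creation time of about 1566459291 (2019-08-22T07:34:51Z), roughly two minutes after that stale optime, which is consistent with the config server replica set having been re-initiated while this mongos kept its old $configServerState; the majority wait then apparently can never be satisfied, and maxTimeMS: 30000 expires. A hedged sketch of the same command shape (afterOpTime and the $-prefixed fields are internal, so a stock client may not be permitted to send them, and the exact server response can differ):

from bson.son import SON
from bson.timestamp import Timestamp
from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout

client = MongoClient("cmodb803.togewa.com", 27019)
cmd = SON([
    ("find", "shards"),
    # afterOpTime asks the node to wait until an optime from term 92 is
    # majority-committed; this set is in term 1, so the wait cannot complete.
    ("readConcern", {"level": "majority",
                     "afterOpTime": {"ts": Timestamp(1566459168, 1), "t": 92}}),
    ("maxTimeMS", 30000),
])
try:
    client.config.command(cmd)
except ExecutionTimeout as exc:
    # PyMongo surfaces errCode 50 (MaxTimeMSExpired) as ExecutionTimeout.
    print("timed out:", exc)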
2019-09-04T06:32:58.762+0000 D1 - [conn347] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:32:58.762+0000 W - [conn347] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:58.768+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:58.780+0000 I - [conn347] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
 mongod(+0x10FBF24) [0x56174a083f24]
 mongod(+0x10FCE0E) [0x56174a084e0e]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:32:58.780+0000 D1 COMMAND [conn347] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 1), signature: { hash: BinData(0, 4BC8AEE07FF90D8B732E450D3B6393D8C5E79E39), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:58.780+0000 D1 - [conn347] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:32:58.780+0000 W - [conn347] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:58.800+0000 I - [conn347] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(+0xC90D34) [0x561749c18d34]
 mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
 mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
 mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
 mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
 mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:32:58.800+0000 W COMMAND [conn347] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:32:58.800+0000 I COMMAND [conn347] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578745, 1), signature: { hash: BinData(0, 4BC8AEE07FF90D8B732E450D3B6393D8C5E79E39), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms
2019-09-04T06:32:58.801+0000 D2 NETWORK [conn347] Session from 10.108.2.64:46696 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:32:58.801+0000 I NETWORK [conn347] end connection 10.108.2.64:46696 (86 connections now open)
2019-09-04T06:32:58.811+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:58.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:58.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:58.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1092) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:58.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1092 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:08.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:58.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:26.839+0000
2019-09-04T06:32:58.838+0000 D2 ASIO [Replication] Request 1092 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), opTime: { ts: Timestamp(1567578775, 2), t: 1 }, wallTime: new Date(1567578775720), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) }
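NOTE: each MaxTimeMSExpired failure on conn347 produces two stack dumps: one thrown from waitForReadConcern (service_entry_point_mongod.cpp 89) for the read itself, and a second from LockerImpl::lock via CurOp::completeAndLogOperation (lock_state.cpp 884), where the slow-operation logger also ran out of its time budget acquiring the global lock, hence the "Unable to gather storage statistics" warning before the 30029ms slow-op line. Each trace is duplicated as JSON between the BEGIN/END BACKTRACE markers, which is convenient for scripting; a stdlib sketch that pulls the mangled symbols out for offline demangling (c++filt is assumed to be installed):

import json
import re
import subprocess

def backtrace_symbols(log_text):
    """Return mangled symbol names from the first BACKTRACE JSON blob in log_text."""
    m = re.search(r"BEGIN BACKTRACE -----\s*(\{.*?\})\s*mongod\(", log_text, re.S)
    if m is None:
        return []
    return [f["s"] for f in json.loads(m.group(1))["backtrace"] if "s" in f]

symbols = backtrace_symbols(open("mongod.log").read())
# c++filt turns _ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcern...
# into mongo::ServiceEntryPointMongod::Hooks::waitForReadConcern(...) const.
print(subprocess.run(["c++filt"], input="\n".join(symbols),
                     capture_output=True, text=True).stdout)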
2019-09-04T06:32:58.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), opTime: { ts: Timestamp(1567578775, 2), t: 1 }, wallTime: new Date(1567578775720), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:32:58.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:58.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1092) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), opTime: { ts: Timestamp(1567578775, 2), t: 1 }, wallTime: new Date(1567578775720), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) }
2019-09-04T06:32:58.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:32:58.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:00.838Z
2019-09-04T06:32:58.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:26.839+0000
2019-09-04T06:32:58.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:32:58.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1093) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:58.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1093 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:08.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:32:58.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:26.839+0000
2019-09-04T06:32:58.839+0000 D2 ASIO [Replication] Request 1093 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), opTime: { ts: Timestamp(1567578775, 2), t: 1 }, wallTime: new Date(1567578775720), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) }
2019-09-04T06:32:58.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), opTime: { ts: Timestamp(1567578775, 2), t: 1 }, wallTime: new Date(1567578775720), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:32:58.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:58.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1093) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), opTime: { ts: Timestamp(1567578775, 2), t: 1 }, wallTime: new Date(1567578775720), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578775, 2) }
2019-09-04T06:32:58.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:32:58.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:33:07.789+0000
2019-09-04T06:32:58.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:33:09.904+0000
2019-09-04T06:32:58.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:32:58.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:00.839Z
2019-09-04T06:32:58.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:32:58.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:28.839+0000
2019-09-04T06:32:58.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:28.839+0000
2019-09-04T06:32:58.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:58.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:58.869+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:58.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:58.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:58.969+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:59.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:32:59.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 40BA65C1F2276A67506599BA6E09AFFA9EF703A0), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:59.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:32:59.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 40BA65C1F2276A67506599BA6E09AFFA9EF703A0), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:59.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 40BA65C1F2276A67506599BA6E09AFFA9EF703A0), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:32:59.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), opTime: { ts: Timestamp(1567578775, 2), t: 1 }, wallTime: new Date(1567578775720) }
2019-09-04T06:32:59.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 40BA65C1F2276A67506599BA6E09AFFA9EF703A0), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
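NOTE: replication itself is healthy throughout this window. Heartbeats 1092/1093 return ok: 1.0 with identical opTimes on all three members; cmodb802 answers as state: 1 (PRIMARY), so the ELECTION lines simply keep pushing the election timeout out (canceled at 06:33:07.789, rescheduled to 06:33:09.904, roughly the default electionTimeoutMillis of 10000 plus jitter, on the 2-second heartbeat cadence visible above). The same picture can be read back in two commands; a sketch under the earlier connection assumption:

from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019)
# Per-member state and sync source, the data the REPL_HB entries exchange
# (state 1 = PRIMARY, 2 = SECONDARY; compare syncingTo in the responses above).
for m in client.admin.command("replSetGetStatus")["members"]:
    print(m["name"], m["stateStr"], m.get("syncingTo", "-"))
# The timers behind the ELECTION lines: electionTimeoutMillis (default 10000)
# and heartbeatIntervalMillis (default 2000).
settings = client.admin.command("replSetGetConfig")["config"]["settings"]
print(settings["electionTimeoutMillis"], settings["heartbeatIntervalMillis"])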
2019-09-04T06:32:59.069+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:59.169+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:59.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:59.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:59.228+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:59.228+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:59.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:59.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:59.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:32:59.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:32:59.269+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:59.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:59.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:59.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:59.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:59.369+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:59.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:59.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:59.469+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:59.569+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:59.670+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:59.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:59.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:59.723+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:32:59.723+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:32:59.723+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578775, 2)
2019-09-04T06:32:59.723+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16057
2019-09-04T06:32:59.723+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:32:59.723+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:32:59.723+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16057
2019-09-04T06:32:59.724+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16060
2019-09-04T06:32:59.724+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16060
2019-09-04T06:32:59.724+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578775, 2), t: 1 }({ ts: Timestamp(1567578775, 2), t: 1 })
2019-09-04T06:32:59.728+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:59.728+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:59.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:59.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:59.734+0000 I NETWORK [listener] connection accepted from 10.108.2.49:53466 #369 (87 connections now open)
2019-09-04T06:32:59.734+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:32:59.734+0000 D2 COMMAND [conn369] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:32:59.734+0000 I NETWORK [conn369] received client metadata from 10.108.2.49:53466 conn369: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:32:59.734+0000 I COMMAND [conn369] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:32:59.754+0000 I COMMAND [conn348] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 436A2CA78160F4792AF24C20BF715D9579B6F362), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
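NOTE: conn348 now fails exactly as conn347 did a second earlier, with the same stale afterOpTime, so this is a retry loop from the mongos rather than an isolated event. Separately, the once-per-second FlowControlRefresher lines (Trimmed samples. Num: 0 / Refreshing tickets. Before: 1000000000 Now: 1000000000) show flow control idling: with no majority-commit lag on this set, the ticket pool stays pinned at its 1000000000 ceiling instead of throttling writers. Its knobs and live state are queryable; a sketch:

from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019)
# The parameters behind the FlowControlRefresher job (MongoDB 4.2+).
params = client.admin.command("getParameter", 1,
                              enableFlowControl=1,
                              flowControlTargetLagSeconds=1)
# serverStatus.flowControl reports whether the set is lagged and how the
# refresher is budgeting tickets.
flow = client.admin.command("serverStatus")["flowControl"]
print(params["enableFlowControl"],
      params["flowControlTargetLagSeconds"],
      flow["isLagged"])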
2019-09-04T06:32:59.754+0000 D1 - [conn348] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:32:59.754+0000 W - [conn348] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:59.770+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:32:59.771+0000 I - [conn348] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
 mongod(+0x10FBF24) [0x56174a083f24]
 mongod(+0x10FCE0E) [0x56174a084e0e]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:32:59.771+0000 D1 COMMAND [conn348] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 436A2CA78160F4792AF24C20BF715D9579B6F362), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:59.771+0000 D1 - [conn348] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:32:59.771+0000 W - [conn348] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:32:59.791+0000 I - [conn348] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(+0xC90D34) [0x561749c18d34]
 mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
 mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
 mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
 mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
 mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:32:59.791+0000 W COMMAND [conn348] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:32:59.791+0000 I COMMAND [conn348] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578740, 1), signature: { hash: BinData(0, 436A2CA78160F4792AF24C20BF715D9579B6F362), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms
2019-09-04T06:32:59.791+0000 D2 NETWORK [conn348] Session from 10.108.2.49:53444 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:32:59.791+0000 I NETWORK [conn348] end connection 10.108.2.49:53444 (86 connections now open)
2019-09-04T06:32:59.811+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:59.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:59.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:32:59.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:32:59.864+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45836 #370 (87 connections now open)
2019-09-04T06:32:59.864+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:32:59.864+0000 D2 COMMAND [conn370] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:32:59.864+0000 I NETWORK [conn370] received client metadata from 10.108.2.72:45836 conn370: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:32:59.864+0000 I COMMAND [conn370] command admin.$cmd command:
isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:32:59.864+0000 D2 COMMAND [conn370] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578772, 1), signature: { hash: BinData(0, A61A1531E35D372770CFA852083A638C0C2C3E5B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:32:59.864+0000 D1 REPL [conn370] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578775, 2), t: 1 } 2019-09-04T06:32:59.864+0000 D3 REPL [conn370] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:29.874+0000 2019-09-04T06:32:59.870+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:32:59.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:32:59.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:32:59.970+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:00.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:00.004+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:33:00.004+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:33:00.004+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:33:00.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:33:00.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:33:00.012+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:33:00.012+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:33:00.012+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:33:00.012+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:33:00.012+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 
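
The find on config.shards that opens this section ran 30030ms against its maxTimeMS: 30000 budget, so the server killed it and returned errName:MaxTimeMSExpired (errCode:50). The backtrace above is not a crash: the frames show CurOp::completeAndLogOperation taking a GlobalLock via LockerImpl::lock to gather storage statistics for the slow-operation log line, the already-expired deadline made that lock acquisition uassert, and the exception tracer printed the stack, followed by the "Unable to gather storage statistics" warning. A minimal pymongo sketch of how a client observes such a timeout (the URI is an assumption for illustration):

    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout

    client = MongoClient("mongodb://localhost:27019")  # URI assumed for illustration

    try:
        # Same shape as the logged command: a find on config.shards with a
        # 30000 ms server-side budget (maxTimeMS: 30000).
        shards = list(client.config.shards.find().max_time_ms(30000))
    except ExecutionTimeout:
        # Raised when the server replies errName:MaxTimeMSExpired errCode:50,
        # as conn348 did above after 30030ms.
        print("operation exceeded time limit")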
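conn370's find on config.collections above carries readConcern { level: "majority", afterOpTime: ... }, which is why the REPL lines show waitUntilOpTime blocking until a majority-committed snapshot at or past the requested optime appears (with a deadline of 06:33:29, matching the 30-second maxTimeMS). The afterOpTime, $replData, and $configServerState fields are internal sharding machinery; the nearest client-visible equivalent, sketched here under the same assumed URI, is a majority read in a causally consistent session:

    from pymongo import MongoClient, ReadPreference
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://localhost:27019")  # URI assumed
    coll = client.config.get_collection(
        "collections",
        read_concern=ReadConcern("majority"),
        read_preference=ReadPreference.NEAREST,  # matches $readPreference: nearest
    )

    # In a causally consistent session the server waits, as in the
    # waitUntilOpTime lines above, until a majority-committed snapshot at or
    # after the session's last-seen operationTime is available.
    with client.start_session(causal_consistency=True) as s:
        doc = coll.find_one({"_id": "config.system.sessions"}, session=s)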
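The dba_root authentication above shows SCRAM-SHA-1's shape on the wire: one saslStart plus two saslContinue round trips before "Successfully authenticated", with the payloads redacted as "xxx". Drivers run the same exchange inside a single connect; a pymongo sketch with placeholder credentials:

    from pymongo import MongoClient

    # Placeholder credentials; the log redacts the SASL payloads as "xxx".
    client = MongoClient(
        "mongodb://localhost:27019",  # URI assumed
        username="dba_root",
        password="<password>",
        authSource="admin",
        authMechanism="SCRAM-SHA-1",
    )
    client.admin.command("ping")  # the first command drives the SASL exchange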
2019-09-04T06:33:00.012+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:00.014+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:00.014+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:33:00.015+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:33:00.016+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:33:00.016+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:00.016+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunksTree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:00.016+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:33:00.016+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:33:00.016+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:33:00.016+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:33:00.016+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:33:00.016+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:33:00.016+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:00.016+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:00.016+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:33:00.016+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578775, 2)
2019-09-04T06:33:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16076
2019-09-04T06:33:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16076
2019-09-04T06:33:00.016+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:00.016+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:00.016+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:33:00.016+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:33:00.016+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:00.016+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:33:00.016+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:33:00.016+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:33:00.016+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578775, 2)
2019-09-04T06:33:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16079
2019-09-04T06:33:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16079
2019-09-04T06:33:00.016+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:00.017+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:33:00.017+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:00.017+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:33:00.017+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:33:00.017+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:33:00.017+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578775, 2)
2019-09-04T06:33:00.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16081
2019-09-04T06:33:00.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16081
2019-09-04T06:33:00.017+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:00.017+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:33:00.017+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist.
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:33:00.017+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:33:00.018+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:33:00.018+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16084 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16084 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16085 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16085 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16086 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16086 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16087 2019-09-04T06:33:00.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16087 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16088 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16088 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16089 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
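
The config.chunks count a few records back (query: { jumbo: true }) is the standard jumbo-chunk health check. The planner had only the ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1, and _id_ indexes to choose from, none of which covers the jumbo field, hence "outputted 0 indexed solutions" and the COLLSCAN. A client-side sketch of the same check (count_documents issues an aggregate rather than the logged count command, but evaluates the same predicate):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")  # URI assumed
    # No index on config.chunks covers { jumbo: 1 }, so this scans the
    # collection, exactly as the planner decided above.
    n_jumbo = client.config.chunks.count_documents({"jumbo": True})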
2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16089 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16090 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16090 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16091 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16091 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16092 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16092 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16093 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16093 
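
The paired limit-1 finds on local.oplog.rs earlier in this burst, one sorted { $natural: 1 } and one { $natural: -1 }, fetch the oldest and newest oplog entries; monitoring tools subtract the two ts values to report the replication window. (The follow-up probe of local.oplog.$main, the pre-replica-set oplog name, predictably hit "Collection local.oplog.$main does not exist" and an EOF plan.) A pymongo sketch of the same probe:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")  # URI assumed
    oplog = client.local["oplog.rs"]

    # Oldest and newest oplog entries via forward and reverse natural order,
    # mirroring the two limit-1 finds in the log.
    first = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
    last = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])

    # bson Timestamp.time is seconds since the epoch; the difference is the
    # span of operations the oplog currently retains.
    window_secs = last["ts"].time - first["ts"].time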
2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16094 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16094 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16095 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:33:00.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16095 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16096 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16096 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16097 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16097 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16098 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
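
The long run of "looking up metadata for" records through here is a single listDatabases from conn90 (with per-database dbStats following): every collection's catalog entry is fetched under a short-lived WiredTiger transaction that is immediately rolled back, which is the begin_transaction/rollback_transaction pairing visible on each namespace. The client side is just two calls, sketched here under the same assumed URI:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")  # URI assumed
    for name in client.list_database_names():     # listDatabases
        stats = client[name].command("dbStats")   # one dbStats per database
        print(name, stats["objects"], stats["dataSize"])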
2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16098 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16099 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16099 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16100 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16100 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16101 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16101 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16102 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 16102 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16103 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16103 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16104 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:33:00.020+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16104 2019-09-04T06:33:00.021+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:33:00.021+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16105 2019-09-04T06:33:00.021+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:33:00.021+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:00.021+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:33:00.021+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16105 2019-09-04T06:33:00.021+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 2ms 2019-09-04T06:33:00.032+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16107 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16107 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16108 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16108 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16109 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16109 2019-09-04T06:33:00.032+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:00.032+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16111 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16111 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16112 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16112 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16113 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16113 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16114 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16114 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16115 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16115 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16116 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16116 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16117 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16117 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 16118 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16118 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16119 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16119 2019-09-04T06:33:00.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16120 2019-09-04T06:33:00.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16120 2019-09-04T06:33:00.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16121 2019-09-04T06:33:00.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16121 2019-09-04T06:33:00.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16122 2019-09-04T06:33:00.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16122 2019-09-04T06:33:00.033+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:00.033+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:33:00.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16124 2019-09-04T06:33:00.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16124 2019-09-04T06:33:00.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16125 2019-09-04T06:33:00.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16125 2019-09-04T06:33:00.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16126 2019-09-04T06:33:00.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16126 2019-09-04T06:33:00.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16127 2019-09-04T06:33:00.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16127 2019-09-04T06:33:00.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16128 2019-09-04T06:33:00.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16128 2019-09-04T06:33:00.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16129 2019-09-04T06:33:00.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16129 2019-09-04T06:33:00.033+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:00.070+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:00.170+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:00.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:00.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:33:00.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:00.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:00.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 40BA65C1F2276A67506599BA6E09AFFA9EF703A0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:00.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:00.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 40BA65C1F2276A67506599BA6E09AFFA9EF703A0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:00.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 40BA65C1F2276A67506599BA6E09AFFA9EF703A0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:00.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), opTime: { ts: Timestamp(1567578775, 2), t: 1 }, wallTime: new Date(1567578775720) } 2019-09-04T06:33:00.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578777, 1), signature: { hash: BinData(0, 40BA65C1F2276A67506599BA6E09AFFA9EF703A0), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:00.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:00.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:00.265+0000 I COMMAND [conn349] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578748, 1), signature: { hash: BinData(0, 4288F60C47116EFABC53087AB62874B09C6AEA93), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:00.265+0000 D1 - [conn349] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:00.265+0000 W - [conn349] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:00.270+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:00.285+0000 I - [conn349] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", 
"gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { 
"b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:00.285+0000 D1 COMMAND [conn349] assertion while executing command 'find' on database 
'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578748, 1), signature: { hash: BinData(0, 4288F60C47116EFABC53087AB62874B09C6AEA93), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:00.285+0000 D1 - [conn349] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:00.285+0000 W - [conn349] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:00.309+0000 I - [conn349] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3",
"s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", 
"path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:00.309+0000 W COMMAND [conn349] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:00.309+0000 I COMMAND [conn349] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578748, 1), signature: { hash: BinData(0, 4288F60C47116EFABC53087AB62874B09C6AEA93), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30034ms 2019-09-04T06:33:00.309+0000 D2 NETWORK [conn349] Session from 10.108.2.54:49260 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:00.309+0000 I NETWORK [conn349] end connection 10.108.2.54:49260 (86 connections now open) 2019-09-04T06:33:00.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:00.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:00.327+0000 D2 ASIO [RS] Request 1088 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578780, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578780322), o: { $v: 1, $set: { ping: new Date(1567578780321) } } }, { ts: Timestamp(1567578780, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578780322), o: { $v: 1, $set: { ping: new Date(1567578780322) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpApplied: { ts: Timestamp(1567578780, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 2) } 2019-09-04T06:33:00.327+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578780, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578780322), o: { $v: 1, $set: { ping: new Date(1567578780321) } } }, { ts: Timestamp(1567578780, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578780322), o: { $v: 1, $set: { ping: new Date(1567578780322) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpApplied: { ts: Timestamp(1567578780, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:00.327+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:00.327+0000 D2 REPL [replication-1] oplog fetcher read 2 operations from remote oplog starting at ts: Timestamp(1567578780, 1) and ending at ts: Timestamp(1567578780, 2) 2019-09-04T06:33:00.327+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:09.904+0000 2019-09-04T06:33:00.328+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:11.652+0000 2019-09-04T06:33:00.328+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:00.328+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:28.839+0000 2019-09-04T06:33:00.328+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:00.328+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:00.328+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578775, 2) 2019-09-04T06:33:00.328+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16137 2019-09-04T06:33:00.328+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:00.328+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: 
UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:00.328+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16137 2019-09-04T06:33:00.328+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:00.328+0000 D2 REPL [rsSync-0] replication batch size is 2 2019-09-04T06:33:00.328+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:00.328+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578775, 2) 2019-09-04T06:33:00.328+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16140 2019-09-04T06:33:00.328+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:00.328+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578780, 1) } 2019-09-04T06:33:00.328+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:00.328+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16140 2019-09-04T06:33:00.328+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578780, 2), t: 1 } 2019-09-04T06:33:00.328+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16061 2019-09-04T06:33:00.328+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16061 2019-09-04T06:33:00.328+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16143 2019-09-04T06:33:00.328+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16143 2019-09-04T06:33:00.328+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:00.328+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 16145 2019-09-04T06:33:00.328+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578780, 1) 2019-09-04T06:33:00.328+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578780, 1) 2019-09-04T06:33:00.328+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578780, 2) 2019-09-04T06:33:00.328+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578780, 2) 2019-09-04T06:33:00.328+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 16145 2019-09-04T06:33:00.328+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:00.328+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:33:00.328+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16144 2019-09-04T06:33:00.328+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16144 2019-09-04T06:33:00.328+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16147 2019-09-04T06:33:00.328+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16147 2019-09-04T06:33:00.328+0000 
D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578780, 2), t: 1 }({ ts: Timestamp(1567578780, 2), t: 1 }) 2019-09-04T06:33:00.328+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578780, 2) 2019-09-04T06:33:00.328+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16148 2019-09-04T06:33:00.328+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578780, 2) } } ] } sort: {} projection: {} 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578780, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578780, 2) || First: notFirst: full path: ts 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578780, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:00.328+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578780, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:33:00.329+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:00.329+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:00.329+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:00.329+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578780, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:33:00.329+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:00.329+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578780, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:00.329+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16148 2019-09-04T06:33:00.329+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:00.329+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:00.329+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:00.329+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578780, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578780322), o: { $v: 1, $set: { ping: new Date(1567578780321) } } }, oplog application mode: Secondary 2019-09-04T06:33:00.329+0000 D3 STORAGE [repl-writer-worker-6] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:00.329+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578780, 1) 2019-09-04T06:33:00.329+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 16150 2019-09-04T06:33:00.329+0000 D3 REPL [repl-writer-worker-6] applying op: { ts: Timestamp(1567578780, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578780322), o: { $v: 1, $set: { ping: new Date(1567578780322) } } }, oplog application mode: Secondary 2019-09-04T06:33:00.329+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" } 2019-09-04T06:33:00.329+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578780, 2) 2019-09-04T06:33:00.329+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 16151 2019-09-04T06:33:00.329+0000 D2 QUERY [repl-writer-worker-6] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" } 2019-09-04T06:33:00.329+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:33:00.329+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 16150 2019-09-04T06:33:00.329+0000 D3 EXECUTOR 
[repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:00.329+0000 D4 WRITE [repl-writer-worker-6] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:33:00.329+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 16151 2019-09-04T06:33:00.329+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:00.329+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578780, 2), t: 1 }({ ts: Timestamp(1567578780, 2), t: 1 }) 2019-09-04T06:33:00.329+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578780, 2) 2019-09-04T06:33:00.329+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16149 2019-09-04T06:33:00.329+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:33:00.329+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:00.329+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:33:00.329+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:00.329+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:00.329+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:00.329+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16149 2019-09-04T06:33:00.329+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578780, 2) 2019-09-04T06:33:00.329+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:00.329+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16155 2019-09-04T06:33:00.329+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16155 2019-09-04T06:33:00.329+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578780, 2), t: 1 }, appliedWallTime: new Date(1567578780322), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:00.329+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 
1094 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:30.329+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578780, 2), t: 1 }, appliedWallTime: new Date(1567578780322), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:00.329+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:30.329+0000 2019-09-04T06:33:00.329+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578780, 2), t: 1 }({ ts: Timestamp(1567578780, 2), t: 1 }) 2019-09-04T06:33:00.329+0000 D2 ASIO [RS] Request 1094 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } 2019-09-04T06:33:00.330+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:00.330+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:00.330+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:30.330+0000 2019-09-04T06:33:00.330+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578780, 2), t: 1 } 2019-09-04T06:33:00.330+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1095 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:10.330+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: 
{ ts: Timestamp(1567578775, 2), t: 1 } } 2019-09-04T06:33:00.330+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:30.330+0000 2019-09-04T06:33:00.330+0000 D2 ASIO [RS] Request 1095 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578780, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578780323), o: { $v: 1, $set: { ping: new Date(1567578780322) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpApplied: { ts: Timestamp(1567578780, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } 2019-09-04T06:33:00.330+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578780, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578780323), o: { $v: 1, $set: { ping: new Date(1567578780322) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpApplied: { ts: Timestamp(1567578780, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:00.330+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:00.330+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578780, 3) and ending at ts: Timestamp(1567578780, 3) 2019-09-04T06:33:00.330+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:11.652+0000 2019-09-04T06:33:00.330+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:11.391+0000 2019-09-04T06:33:00.330+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578780, 3), t: 1 } 2019-09-04T06:33:00.330+0000 D3 EXECUTOR [replexec-3] Executing a 
task on behalf of pool replexec 2019-09-04T06:33:00.330+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:28.839+0000 2019-09-04T06:33:00.330+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:00.330+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:00.330+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578780, 2) 2019-09-04T06:33:00.330+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16159 2019-09-04T06:33:00.330+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:00.330+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:00.330+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16159 2019-09-04T06:33:00.330+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:33:00.330+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:00.330+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578780, 3) } 2019-09-04T06:33:00.330+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:00.330+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578780, 2) 2019-09-04T06:33:00.331+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16156 2019-09-04T06:33:00.331+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16162 2019-09-04T06:33:00.331+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16156 2019-09-04T06:33:00.331+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:00.331+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16163 2019-09-04T06:33:00.331+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:00.331+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16163 2019-09-04T06:33:00.331+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16162 2019-09-04T06:33:00.331+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:00.331+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 16167 2019-09-04T06:33:00.331+0000 D4 STORAGE [repl-writer-worker-4] inserting record with timestamp Timestamp(1567578780, 3) 2019-09-04T06:33:00.331+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578780, 3) 2019-09-04T06:33:00.331+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 16167 
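The entries above end with a repl-writer worker committing the fetched oplog record at Timestamp(1567578780, 3). A minimal sketch of reading that same entry back through a driver, assuming pymongo and direct access to the node; the connection string is hypothetical and not taken from this log:

```python
# Illustrative only -- not part of the log. Reads back the oplog entry the
# repl-writer worker above committed at Timestamp(1567578780, 3).
from bson.timestamp import Timestamp
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")  # hypothetical host/port
entry = client["local"]["oplog.rs"].find_one({"ts": Timestamp(1567578780, 3)})
if entry is not None:
    # The log shows this op as an update ("u") on config.lockpings.
    print(entry["op"], entry["ns"], entry["o"])
```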
2019-09-04T06:33:00.331+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:00.331+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:33:00.331+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16164 2019-09-04T06:33:00.331+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16164 2019-09-04T06:33:00.331+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16169 2019-09-04T06:33:00.331+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16169 2019-09-04T06:33:00.331+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578780, 3), t: 1 }({ ts: Timestamp(1567578780, 3), t: 1 }) 2019-09-04T06:33:00.331+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578780, 3) 2019-09-04T06:33:00.331+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16170 2019-09-04T06:33:00.331+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578780, 3) } } ] } sort: {} projection: {} 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578780, 3) Sort: {} Proj: {} ============================= 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578780, 3) || First: notFirst: full path: ts 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578780, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
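The D5 QUERY entries above walk the subplanner through the minvalid read: each $or branch is planned separately, and with only the default _id index available the planner reports zero indexed solutions for every branch and falls back to a collection scan. For comparison, the same filter expressed as a driver document (a sketch, not part of the log):

```python
# The $or filter from the rsSync-0 planner entries above, as a driver document.
# Neither branch is covered by the lone _id index on local.replset.minvalid,
# which is why the planner logs "outputted 0 indexed solutions" -> COLLSCAN.
from bson.timestamp import Timestamp

minvalid_filter = {
    "$or": [
        {"t": {"$lt": 1}},
        {"t": 1, "ts": {"$lt": Timestamp(1567578780, 3)}},
    ]
}
```

Running find(minvalid_filter).explain() against local.replset.minvalid would be expected to confirm the same COLLSCAN plan the planner prints here.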
2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578780, 3) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578780, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578780, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:00.331+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16170 2019-09-04T06:33:00.331+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:00.331+0000 D3 STORAGE [repl-writer-worker-2] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:00.331+0000 D3 REPL [repl-writer-worker-2] applying op: { ts: Timestamp(1567578780, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578780323), o: { $v: 1, $set: { ping: new Date(1567578780322) } } }, oplog application mode: Secondary 2019-09-04T06:33:00.331+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578780, 3) 2019-09-04T06:33:00.331+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 16172 2019-09-04T06:33:00.331+0000 D2 QUERY [repl-writer-worker-2] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" } 2019-09-04T06:33:00.331+0000 D4 WRITE [repl-writer-worker-2] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:33:00.331+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 16172 2019-09-04T06:33:00.331+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:00.331+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578780, 3), t: 1 }({ ts: Timestamp(1567578780, 3), t: 1 }) 2019-09-04T06:33:00.331+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578780, 3) 2019-09-04T06:33:00.331+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16171 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:00.331+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:00.331+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:00.331+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16171 2019-09-04T06:33:00.331+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578780, 3) 2019-09-04T06:33:00.332+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16175 2019-09-04T06:33:00.332+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16175 2019-09-04T06:33:00.332+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578780, 3), t: 1 }({ ts: Timestamp(1567578780, 3), t: 1 }) 2019-09-04T06:33:00.332+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:00.332+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, appliedWallTime: new Date(1567578780323), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:00.332+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1096 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:30.332+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, appliedWallTime: new Date(1567578780323), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 2, cfgver: 2 } 
], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:00.332+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:30.331+0000 2019-09-04T06:33:00.332+0000 D2 ASIO [RS] Request 1096 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } 2019-09-04T06:33:00.332+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:00.332+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:00.332+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:30.332+0000 2019-09-04T06:33:00.332+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578780, 3), t: 1 } 2019-09-04T06:33:00.332+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1097 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:10.332+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578775, 2), t: 1 } } 2019-09-04T06:33:00.332+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:30.332+0000 2019-09-04T06:33:00.337+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:33:00.337+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:00.337+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578780, 2), t: 1 }, durableWallTime: new Date(1567578780322), appliedOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, appliedWallTime: new 
Date(1567578780323), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:00.337+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1098 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:30.337+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578780, 2), t: 1 }, durableWallTime: new Date(1567578780322), appliedOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, appliedWallTime: new Date(1567578780323), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:00.337+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:30.332+0000 2019-09-04T06:33:00.337+0000 D2 ASIO [RS] Request 1098 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } 2019-09-04T06:33:00.337+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578775, 2), t: 1 }, lastCommittedWall: new Date(1567578775720), lastOpVisible: { ts: Timestamp(1567578775, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578775, 2), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:00.337+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:00.337+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement 
date is 2019-09-04T06:33:30.332+0000 2019-09-04T06:33:00.338+0000 D2 ASIO [RS] Request 1097 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 2), t: 1 }, lastCommittedWall: new Date(1567578780322), lastOpVisible: { ts: Timestamp(1567578780, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578780, 2), t: 1 }, lastCommittedWall: new Date(1567578780322), lastOpApplied: { ts: Timestamp(1567578780, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578780, 2), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } 2019-09-04T06:33:00.338+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 2), t: 1 }, lastCommittedWall: new Date(1567578780322), lastOpVisible: { ts: Timestamp(1567578780, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578780, 2), t: 1 }, lastCommittedWall: new Date(1567578780322), lastOpApplied: { ts: Timestamp(1567578780, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578780, 2), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:00.338+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:00.338+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:00.338+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578775, 2) 2019-09-04T06:33:00.338+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:11.391+0000 2019-09-04T06:33:00.338+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:10.716+0000 2019-09-04T06:33:00.338+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:00.338+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1099 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:10.338+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578780, 2), t: 1 } } 2019-09-04T06:33:00.338+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 
2019-09-04T06:33:28.839+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn323] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn323] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.688+0000 2019-09-04T06:33:00.338+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:30.332+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn353] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn353] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.419+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn361] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn361] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn355] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn355] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:24.151+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn363] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn363] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.670+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn338] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn338] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.680+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn364] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn364] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.767+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn322] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn322] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.989+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn350] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn350] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.763+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn344] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn344] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.677+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn343] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn343] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:22.594+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn337] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn337] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.660+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn360] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 
2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn360] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn351] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn351] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.962+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn358] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn358] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.833+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn365] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn365] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:25.060+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn357] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn357] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.823+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn327] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn327] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.840+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn341] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn341] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.094+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn318] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn318] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:03.478+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn354] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn354] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.413+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn352] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn352] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.291+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn370] Got notified of new snapshot: { ts: Timestamp(1567578780, 2), t: 1 }, 2019-09-04T06:33:00.322+0000 2019-09-04T06:33:00.338+0000 D3 REPL [conn370] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:29.874+0000 2019-09-04T06:33:00.339+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:33:00.339+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:00.339+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new 
Date(1567578780323), appliedOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, appliedWallTime: new Date(1567578780323), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 2), t: 1 }, lastCommittedWall: new Date(1567578780322), lastOpVisible: { ts: Timestamp(1567578780, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:00.339+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1100 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:30.339+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), appliedOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, appliedWallTime: new Date(1567578780323), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, durableWallTime: new Date(1567578775720), appliedOpTime: { ts: Timestamp(1567578775, 2), t: 1 }, appliedWallTime: new Date(1567578775720), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 2), t: 1 }, lastCommittedWall: new Date(1567578780322), lastOpVisible: { ts: Timestamp(1567578780, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:00.339+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:30.332+0000 2019-09-04T06:33:00.339+0000 D2 ASIO [RS] Request 1100 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 2), t: 1 }, lastCommittedWall: new Date(1567578780322), lastOpVisible: { ts: Timestamp(1567578780, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578780, 2), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } 2019-09-04T06:33:00.339+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 2), t: 1 }, lastCommittedWall: new Date(1567578780322), lastOpVisible: { ts: Timestamp(1567578780, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578780, 2), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:00.339+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 
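The replSetUpdatePosition commands above carry one durable/applied optime pair per member. A minimal sketch of inspecting the same values from a client via replSetGetStatus, assuming pymongo; the connection string is hypothetical:

```python
# Illustrative only. The per-member durable/applied optimes carried in the
# replSetUpdatePosition commands above are the same values replSetGetStatus
# reports, which is the usual way to inspect them from a client.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")  # hypothetical host/port
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    print(member["_id"], member.get("optime"), member.get("optimeDurable"))
```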
2019-09-04T06:33:00.339+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:30.332+0000 2019-09-04T06:33:00.339+0000 D2 ASIO [RS] Request 1099 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpApplied: { ts: Timestamp(1567578780, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } 2019-09-04T06:33:00.339+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpApplied: { ts: Timestamp(1567578780, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:00.339+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:00.339+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:00.339+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.339+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.339+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578775, 3) 2019-09-04T06:33:00.339+0000 D3 REPL [conn354] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.339+0000 D3 REPL [conn354] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.413+0000 2019-09-04T06:33:00.339+0000 D3 REPL [conn351] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.339+0000 D3 REPL [conn351] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.962+0000 2019-09-04T06:33:00.339+0000 D3 REPL [conn323] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.339+0000 D3 
REPL [conn323] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.688+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn350] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn350] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:00.763+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn341] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn341] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.094+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn370] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn370] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:29.874+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn360] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn360] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn357] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn357] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.823+0000 2019-09-04T06:33:00.340+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:10.716+0000 2019-09-04T06:33:00.340+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:11.706+0000 2019-09-04T06:33:00.340+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:00.340+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:28.839+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn355] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1101 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:10.340+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578780, 3), t: 1 } } 2019-09-04T06:33:00.340+0000 D3 REPL [conn355] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:24.151+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn365] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:30.332+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn365] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:25.060+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn327] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn327] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.840+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn318] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn318] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:03.478+0000 
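Request 1101 above is another awaitData getMore on the oplog cursor (batchSize 13981010, maxTimeMS 5000). A rough client-side analogue, sketched with a tailable awaitable cursor; the host and the single-batch loop are illustrative assumptions, not the fetcher's actual code path:

```python
# Illustrative only. Approximates the oplog fetcher's awaitData getMore above
# with a tailable, awaitable client cursor; the 5000 ms wait mirrors the
# maxTimeMS the fetcher attaches to each getMore.
from pymongo import CursorType, MongoClient

client = MongoClient("mongodb://cmodb804.togewa.com:27019/")  # hypothetical host/port
cursor = client["local"]["oplog.rs"].find(
    cursor_type=CursorType.TAILABLE_AWAIT,
    max_await_time_ms=5000,
)
for doc in cursor:
    print(doc["ts"], doc["op"], doc["ns"])
    break  # one batch is enough for the sketch; the real fetcher loops indefinitely
```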
2019-09-04T06:33:00.340+0000 D3 REPL [conn364] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn364] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.767+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn363] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn363] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.670+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn344] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn344] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.677+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn322] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn322] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.989+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn338] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn338] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.680+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn352] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn352] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.291+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn337] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn337] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.660+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn358] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn358] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.833+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn353] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn353] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.419+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn343] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn343] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:22.594+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn361] Got notified of new snapshot: { ts: Timestamp(1567578780, 3), t: 1 }, 2019-09-04T06:33:00.323+0000 2019-09-04T06:33:00.340+0000 D3 REPL [conn361] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:33:00.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:00.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:00.371+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:00.428+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578780, 3) 2019-09-04T06:33:00.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: 
"admin" } 2019-09-04T06:33:00.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:00.471+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:00.571+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:00.671+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:00.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:00.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:00.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:00.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:00.753+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50214 #371 (87 connections now open) 2019-09-04T06:33:00.753+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:00.753+0000 D2 COMMAND [conn371] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:00.753+0000 I NETWORK [conn371] received client metadata from 10.108.2.50:50214 conn371: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:00.753+0000 I COMMAND [conn371] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:00.763+0000 I COMMAND [conn350] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578749, 1), signature: { hash: BinData(0, 0BA381EC49423BB6D573BE72099CAF4D3E399D41), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:00.763+0000 D1 - [conn350] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:00.763+0000 W - [conn350] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:00.771+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:00.780+0000 I - [conn350] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", 
"gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { 
"b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:00.780+0000 D1 COMMAND [conn350] assertion while executing command 'find' on database 
'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578749, 1), signature: { hash: BinData(0, 0BA381EC49423BB6D573BE72099CAF4D3E399D41), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:00.780+0000 D1 - [conn350] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:00.780+0000 W - [conn350] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:00.801+0000 I - [conn350] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3",
"s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", 
"path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:00.801+0000 W COMMAND [conn350] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:00.801+0000 I COMMAND [conn350] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578749, 1), signature: { hash: BinData(0, 0BA381EC49423BB6D573BE72099CAF4D3E399D41), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:33:00.801+0000 D2 NETWORK [conn350] Session from 10.108.2.50:50190 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:00.801+0000 I NETWORK [conn350] end connection 10.108.2.50:50190 (86 connections now open) 2019-09-04T06:33:00.811+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:00.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:00.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:00.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1102) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:00.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1102 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:10.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:00.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:28.839+0000 2019-09-04T06:33:00.838+0000 D2 ASIO [Replication] Request 1102 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), opTime: { ts: Timestamp(1567578780, 3), t: 1 }, wallTime: new Date(1567578780323), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, 
replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } 2019-09-04T06:33:00.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), opTime: { ts: Timestamp(1567578780, 3), t: 1 }, wallTime: new Date(1567578780323), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:00.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:00.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1102) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), opTime: { ts: Timestamp(1567578780, 3), t: 1 }, wallTime: new Date(1567578780323), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } 2019-09-04T06:33:00.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:33:00.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:02.838Z 2019-09-04T06:33:00.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:28.839+0000 2019-09-04T06:33:00.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:00.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1103) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:00.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1103 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:10.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, 
from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:00.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:28.839+0000 2019-09-04T06:33:00.839+0000 D2 ASIO [Replication] Request 1103 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), opTime: { ts: Timestamp(1567578780, 3), t: 1 }, wallTime: new Date(1567578780323), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } 2019-09-04T06:33:00.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), opTime: { ts: Timestamp(1567578780, 3), t: 1 }, wallTime: new Date(1567578780323), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:00.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:00.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1103) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), opTime: { ts: Timestamp(1567578780, 3), t: 1 }, wallTime: new Date(1567578780323), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } 2019-09-04T06:33:00.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:00.839+0000 D4 REPL [replexec-3] Canceling election 
timeout callback at 2019-09-04T06:33:11.706+0000 2019-09-04T06:33:00.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:33:11.722+0000 2019-09-04T06:33:00.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:00.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:02.839Z 2019-09-04T06:33:00.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:00.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:30.839+0000 2019-09-04T06:33:00.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:30.839+0000 2019-09-04T06:33:00.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:00.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:00.871+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:00.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:00.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:00.952+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52240 #372 (87 connections now open) 2019-09-04T06:33:00.952+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:00.952+0000 D2 COMMAND [conn372] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:00.952+0000 I NETWORK [conn372] received client metadata from 10.108.2.58:52240 conn372: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:00.952+0000 I COMMAND [conn372] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:00.963+0000 I COMMAND [conn351] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578743, 1), signature: { hash: BinData(0, 7EB74EF80679BE75277EB9A98AFF21A12F4B2DD4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:00.963+0000 D1 - [conn351] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:00.963+0000 W - [conn351] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:00.971+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:00.980+0000 I - [conn351] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", 
"gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { 
"b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:00.980+0000 D1 COMMAND [conn351] assertion while executing command 'find' on database 
'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578743, 1), signature: { hash: BinData(0, 7EB74EF80679BE75277EB9A98AFF21A12F4B2DD4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:00.980+0000 D1 - [conn351] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:00.980+0000 W - [conn351] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:01.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:01.000+0000 I - [conn351] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15Service
Executor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : 
"/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:01.000+0000 W COMMAND [conn351] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:01.000+0000 I COMMAND [conn351] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578743, 1), signature: { hash: BinData(0, 7EB74EF80679BE75277EB9A98AFF21A12F4B2DD4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:33:01.000+0000 D2 NETWORK [conn351] Session from 10.108.2.58:52218 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:01.000+0000 I NETWORK [conn351] end connection 10.108.2.58:52218 (86 connections now open) 2019-09-04T06:33:01.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, DCB15D6F8E4568E942A28723ED13B9415632D79F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:01.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:01.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, DCB15D6F8E4568E942A28723ED13B9415632D79F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:01.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, DCB15D6F8E4568E942A28723ED13B9415632D79F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:01.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, 
durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), opTime: { ts: Timestamp(1567578780, 3), t: 1 }, wallTime: new Date(1567578780323) } 2019-09-04T06:33:01.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, DCB15D6F8E4568E942A28723ED13B9415632D79F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:01.072+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:01.172+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:01.181+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:01.181+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:01.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:01.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:01.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:01.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:01.272+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:01.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:01.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:01.331+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:01.331+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:01.331+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578780, 3) 2019-09-04T06:33:01.331+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16193 2019-09-04T06:33:01.331+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:01.331+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:01.331+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16193 2019-09-04T06:33:01.332+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16196 2019-09-04T06:33:01.332+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16196 2019-09-04T06:33:01.332+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578780, 3), t: 1 }({ ts: Timestamp(1567578780, 3), t: 1 }) 2019-09-04T06:33:01.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:01.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:33:01.372+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:01.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:01.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:01.472+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:01.572+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:01.673+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:01.681+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:01.681+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:01.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:01.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:01.773+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:01.811+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:01.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:01.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:01.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:01.873+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:01.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:01.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:01.973+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:02.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:02.073+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:02.173+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:02.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:02.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:02.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, DCB15D6F8E4568E942A28723ED13B9415632D79F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:02.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:02.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, DCB15D6F8E4568E942A28723ED13B9415632D79F), keyId: 6727891476899954718 } }, $db: "admin" } 
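Interleaved with those failures, replication itself looks healthy: heartbeat round trips to cmodb802.togewa.com:27019 (the primary, state: 1) and cmodb804.togewa.com:27019 (a secondary, state: 2) complete within a millisecond, each response from the primary pushes the election timeout out by roughly ten seconds, and the next round is scheduled two seconds later (requests 1102/1103 at 06:33:00.838, request 1104 at 06:33:02.838). A small sketch, again under the assumption of a direct connection to one configrs member, that surfaces the same membership view these REPL_HB entries summarize:

    # Poll the replica-set status that the heartbeats above are built from.
    from pymongo import MongoClient

    # URI and directConnection are assumptions, not taken from the log.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019/", directConnection=True)
    status = client.admin.command("replSetGetStatus")
    print(status["set"], "term", status["term"])
    for m in status["members"]:
        # stateStr is PRIMARY/SECONDARY; a lagging optimeDate marks a stale member.
        print(m["name"], m["stateStr"], m.get("optimeDate"))

With the set in term 1 and every member at opTime Timestamp(1567578780, 3), the MaxTimeMSExpired failures above are not a replication-lag problem; only the term-92 afterOpTime that the routers keep presenting is unsatisfiable.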
2019-09-04T06:33:02.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, DCB15D6F8E4568E942A28723ED13B9415632D79F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:02.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), opTime: { ts: Timestamp(1567578780, 3), t: 1 }, wallTime: new Date(1567578780323) }
2019-09-04T06:33:02.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, DCB15D6F8E4568E942A28723ED13B9415632D79F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:02.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:02.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:02.274+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:02.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:02.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:02.331+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:02.331+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:02.331+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578780, 3)
2019-09-04T06:33:02.331+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16210
2019-09-04T06:33:02.331+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:02.331+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:02.331+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16210
2019-09-04T06:33:02.332+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16213
2019-09-04T06:33:02.332+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16213
2019-09-04T06:33:02.332+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578780, 3), t: 1 }({ ts: Timestamp(1567578780, 3), t: 1 })
2019-09-04T06:33:02.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:02.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:02.374+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:02.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:02.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:02.474+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:02.574+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:02.674+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:02.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:02.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:02.774+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:02.811+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:02.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:02.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:02.838+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:33:01.063+0000
2019-09-04T06:33:02.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:02.838+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:33:02.232+0000
2019-09-04T06:33:02.838+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:33:01.063+0000
2019-09-04T06:33:02.838+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:33:11.063+0000
2019-09-04T06:33:02.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:32.838+0000
2019-09-04T06:33:02.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1104) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:02.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1104 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:12.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:02.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:32.838+0000
2019-09-04T06:33:02.838+0000 D2 ASIO [Replication] Request 1104 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), opTime: { ts: Timestamp(1567578780, 3), t: 1 }, wallTime: new Date(1567578780323), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) }
2019-09-04T06:33:02.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), opTime: { ts: Timestamp(1567578780, 3), t: 1 }, wallTime: new Date(1567578780323), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:02.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:02.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1104) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), opTime: { ts: Timestamp(1567578780, 3), t: 1 }, wallTime: new Date(1567578780323), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) }
2019-09-04T06:33:02.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:33:02.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:04.838Z
2019-09-04T06:33:02.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:32.838+0000
2019-09-04T06:33:02.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:02.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1105) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:02.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1105 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:12.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:02.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:32.838+0000
2019-09-04T06:33:02.839+0000 D2 ASIO [Replication] Request 1105 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), opTime: { ts: Timestamp(1567578780, 3), t: 1 }, wallTime: new Date(1567578780323), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) }
2019-09-04T06:33:02.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), opTime: { ts: Timestamp(1567578780, 3), t: 1 }, wallTime: new Date(1567578780323), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) } target: cmodb802.togewa.com:27019
2019-09-04T06:33:02.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:02.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1105) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), opTime: { ts: Timestamp(1567578780, 3), t: 1 }, wallTime: new Date(1567578780323), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578780, 3) }
2019-09-04T06:33:02.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:33:02.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:33:11.722+0000
2019-09-04T06:33:02.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:33:13.937+0000
2019-09-04T06:33:02.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:33:02.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:04.839Z
2019-09-04T06:33:02.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:02.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:32.839+0000
2019-09-04T06:33:02.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:32.839+0000
2019-09-04T06:33:02.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:02.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:02.875+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:02.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:02.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:02.975+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:03.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:03.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, DCB15D6F8E4568E942A28723ED13B9415632D79F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:03.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:33:03.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, DCB15D6F8E4568E942A28723ED13B9415632D79F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:03.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, DCB15D6F8E4568E942A28723ED13B9415632D79F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:03.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), opTime: { ts: Timestamp(1567578780, 3), t: 1 }, wallTime: new Date(1567578780323) }
2019-09-04T06:33:03.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578780, 3), signature: { hash: BinData(0, DCB15D6F8E4568E942A28723ED13B9415632D79F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:03.075+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:03.175+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:03.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:03.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:03.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:03.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:03.275+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:03.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:03.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:03.331+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:03.331+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:03.331+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578780, 3)
2019-09-04T06:33:03.331+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16227
2019-09-04T06:33:03.331+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:03.331+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:03.331+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16227
2019-09-04T06:33:03.332+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16230
2019-09-04T06:33:03.332+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16230
2019-09-04T06:33:03.332+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578780, 3), t: 1 }({ ts: Timestamp(1567578780, 3), t: 1 })
2019-09-04T06:33:03.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:03.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:03.375+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:03.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:03.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:03.472+0000 I NETWORK [listener] connection accepted from 10.108.2.63:36390 #373 (87 connections now open)
2019-09-04T06:33:03.472+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:03.473+0000 D2 COMMAND [conn373] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:03.473+0000 I NETWORK [conn373] received client metadata from 10.108.2.63:36390 conn373: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:03.473+0000 I COMMAND [conn373] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:03.476+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:03.481+0000 I COMMAND [conn318] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578752, 1), signature: { hash: BinData(0, 5FD87181FFEFAB4F27FB7C85B30A6D21FE89ECFB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:03.481+0000 D1 - [conn318] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:33:03.481+0000 W - [conn318] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:03.497+0000 I - [conn318] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:03.497+0000 D1 COMMAND [conn318] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578752, 1), signature: { hash: BinData(0, 5FD87181FFEFAB4F27FB7C85B30A6D21FE89ECFB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:03.497+0000 D1 - [conn318] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:03.497+0000 W - [conn318] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:03.517+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:33:03.517+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:33:03.517+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 
2019-09-04T06:33:03.517+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:33:03.517+0000 I - [conn318] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
mongod(+0xC90D34) [0x561749c18d34]
mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:33:03.517+0000 W COMMAND [conn318] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:33:03.517+0000 I COMMAND [conn318] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578752, 1), signature: { hash: BinData(0, 5FD87181FFEFAB4F27FB7C85B30A6D21FE89ECFB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms
2019-09-04T06:33:03.517+0000 D2 NETWORK [conn318] Session from 10.108.2.63:36356 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:03.517+0000 I NETWORK [conn318] end connection 10.108.2.63:36356 (86 connections now open)
2019-09-04T06:33:03.531+0000 D2 ASIO [RS] Request 1101 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578783, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578783520), o: { $v: 1, $set: { ping: new Date(1567578783515) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpApplied: { ts: Timestamp(1567578783, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578783, 1) }
2019-09-04T06:33:03.531+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578783, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578783520), o: { $v: 1, $set: { ping: new Date(1567578783515) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpApplied: { ts: Timestamp(1567578783, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578783, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:03.532+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:03.532+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578783, 1) and ending at ts: Timestamp(1567578783, 1)
2019-09-04T06:33:03.532+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:13.937+0000
2019-09-04T06:33:03.532+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:14.760+0000
2019-09-04T06:33:03.532+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:03.532+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:32.839+0000
2019-09-04T06:33:03.532+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578783, 1), t: 1 }
2019-09-04T06:33:03.532+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:03.532+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:03.532+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578780, 3)
2019-09-04T06:33:03.532+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16238
2019-09-04T06:33:03.532+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:03.532+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:03.532+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16238
2019-09-04T06:33:03.532+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:03.532+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:33:03.532+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:03.532+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578780, 3)
2019-09-04T06:33:03.532+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578783, 1) }
2019-09-04T06:33:03.532+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16241
2019-09-04T06:33:03.532+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:03.532+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:03.532+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16241
2019-09-04T06:33:03.532+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16231
2019-09-04T06:33:03.532+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16231
2019-09-04T06:33:03.532+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16244
2019-09-04T06:33:03.532+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16244
2019-09-04T06:33:03.532+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:03.532+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 16246
2019-09-04T06:33:03.532+0000 D4 STORAGE [repl-writer-worker-0] inserting record with timestamp Timestamp(1567578783, 1)
2019-09-04T06:33:03.532+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578783, 1)
2019-09-04T06:33:03.532+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 16246
2019-09-04T06:33:03.532+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:03.532+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:33:03.532+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16245
2019-09-04T06:33:03.532+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16245
2019-09-04T06:33:03.532+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16248
2019-09-04T06:33:03.532+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16248
2019-09-04T06:33:03.532+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578783, 1), t: 1 }({ ts: Timestamp(1567578783, 1), t: 1 })
2019-09-04T06:33:03.532+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578783, 1)
2019-09-04T06:33:03.532+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16249
2019-09-04T06:33:03.532+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578783, 1) } } ] } sort: {} projection: {}
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578783, 1) Sort: {} Proj: {} =============================
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578783, 1) || First: notFirst: full path: ts
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578783, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578783, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578783, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:03.532+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578783, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:03.532+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16249 2019-09-04T06:33:03.532+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:03.533+0000 D3 STORAGE [repl-writer-worker-15] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:03.533+0000 D3 REPL [repl-writer-worker-15] applying op: { ts: Timestamp(1567578783, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578783520), o: { $v: 1, $set: { ping: new Date(1567578783515) } } }, oplog application mode: Secondary 2019-09-04T06:33:03.533+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578783, 1) 2019-09-04T06:33:03.533+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 16251 2019-09-04T06:33:03.533+0000 D2 QUERY [repl-writer-worker-15] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" } 2019-09-04T06:33:03.533+0000 D4 WRITE [repl-writer-worker-15] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:33:03.533+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 16251 2019-09-04T06:33:03.533+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:03.533+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578783, 1), t: 1 }({ ts: Timestamp(1567578783, 1), t: 1 }) 2019-09-04T06:33:03.533+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578783, 1) 2019-09-04T06:33:03.533+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16250 2019-09-04T06:33:03.533+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:33:03.533+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:03.533+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:33:03.533+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:03.533+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:03.533+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:03.533+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16250 2019-09-04T06:33:03.533+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578783, 1) 2019-09-04T06:33:03.533+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:03.533+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16254 2019-09-04T06:33:03.533+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16254 2019-09-04T06:33:03.533+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), appliedOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, appliedWallTime: new Date(1567578780323), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), appliedOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, appliedWallTime: new Date(1567578783520), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), appliedOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, appliedWallTime: new Date(1567578780323), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:03.533+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1106 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:33.533+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), appliedOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, appliedWallTime: new Date(1567578780323), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), appliedOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, appliedWallTime: new Date(1567578783520), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), appliedOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, appliedWallTime: new Date(1567578780323), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:03.533+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:33.533+0000 2019-09-04T06:33:03.533+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578783, 1), t: 1 }({ ts: Timestamp(1567578783, 1), t: 1 }) 2019-09-04T06:33:03.533+0000 D2 ASIO [RS] Request 1106 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578783, 1) } 2019-09-04T06:33:03.533+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578780, 3), t: 1 }, lastCommittedWall: new Date(1567578780323), lastOpVisible: { ts: Timestamp(1567578780, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578780, 3), $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578783, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:03.533+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:03.533+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:33.533+0000 2019-09-04T06:33:03.534+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578783, 1), t: 1 } 2019-09-04T06:33:03.534+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1107 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:13.534+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578780, 3), t: 1 } } 2019-09-04T06:33:03.534+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:33.533+0000 2019-09-04T06:33:03.535+0000 D2 ASIO [RS] Request 1107 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpVisible: { ts: Timestamp(1567578783, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpApplied: { ts: Timestamp(1567578783, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578783, 1), $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578783, 1) } 2019-09-04T06:33:03.535+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpVisible: { ts: Timestamp(1567578783, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new 
Date(1567578783520), lastOpApplied: { ts: Timestamp(1567578783, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578783, 1), $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578783, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:03.535+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:03.535+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:03.536+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578778, 1) 2019-09-04T06:33:03.536+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:14.760+0000 2019-09-04T06:33:03.536+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:14.381+0000 2019-09-04T06:33:03.536+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:03.536+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:32.839+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn370] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn370] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:29.874+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn322] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn322] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.989+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn360] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn360] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:33:03.536+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1108 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:13.536+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578783, 1), t: 1 } } 2019-09-04T06:33:03.536+0000 D3 REPL [conn364] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn364] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.767+0000 2019-09-04T06:33:03.536+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:33.533+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn344] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn344] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.677+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn338] Got notified of new snapshot: { ts: 
Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn338] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.680+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn337] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn337] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.660+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn353] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn353] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.419+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn361] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn361] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn354] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn354] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.413+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn357] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn357] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.823+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn323] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn323] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.688+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn355] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn355] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:24.151+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn365] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn365] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:25.060+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn327] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn327] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.840+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn363] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn363] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.670+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn341] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn341] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.094+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn352] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn352] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.291+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn358] Got notified of new snapshot: { ts: 
Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn358] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.833+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn343] Got notified of new snapshot: { ts: Timestamp(1567578783, 1), t: 1 }, 2019-09-04T06:33:03.520+0000 2019-09-04T06:33:03.536+0000 D3 REPL [conn343] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:22.594+0000 2019-09-04T06:33:03.541+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:33:03.541+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:03.541+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), appliedOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, appliedWallTime: new Date(1567578780323), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), appliedOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, appliedWallTime: new Date(1567578783520), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), appliedOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, appliedWallTime: new Date(1567578780323), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpVisible: { ts: Timestamp(1567578783, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:03.541+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1109 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:33.541+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), appliedOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, appliedWallTime: new Date(1567578780323), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), appliedOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, appliedWallTime: new Date(1567578783520), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, durableWallTime: new Date(1567578780323), appliedOpTime: { ts: Timestamp(1567578780, 3), t: 1 }, appliedWallTime: new Date(1567578780323), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpVisible: { ts: Timestamp(1567578783, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:03.541+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:33.533+0000 2019-09-04T06:33:03.542+0000 D2 ASIO [RS] Request 1109 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpVisible: { ts: Timestamp(1567578783, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578783, 1), $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578783, 1) } 2019-09-04T06:33:03.542+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpVisible: { ts: Timestamp(1567578783, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578783, 1), $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578783, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:03.542+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:03.542+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:33.533+0000 2019-09-04T06:33:03.576+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:03.632+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578783, 1) 2019-09-04T06:33:03.676+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:03.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:03.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:03.776+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:03.811+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:03.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:03.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:03.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:03.876+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:03.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:03.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:03.976+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:04.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:04.077+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:04.171+0000 D2 COMMAND [conn71] run command config.$cmd { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(1, 0) } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578783, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, AF46E30394552F417A91CC7D90F43954AA686709), keyId: 6727891476899954718 } }, $configServerState: 
{ opTime: { ts: Timestamp(1567578783, 1), t: 1 } }, $db: "config" } 2019-09-04T06:33:04.171+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578783, 1), t: 1 } } } 2019-09-04T06:33:04.171+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:33:04.171+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(1, 0) } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578783, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, AF46E30394552F417A91CC7D90F43954AA686709), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578783, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578783, 1) 2019-09-04T06:33:04.171+0000 D5 QUERY [conn71] Tagging the match expression according to cache data: Filter: $and ns $eq "config.system.sessions" lastmod $gte Timestamp(1, 0) Cache data: (index-tagged expression tree: tree=Node ---Leaf (ns_1_lastmod_1, ), pos: 0, can combine? 1 ---Leaf (ns_1_lastmod_1, ), pos: 1, can combine? 1 ) 2019-09-04T06:33:04.171+0000 D5 QUERY [conn71] Index 0: (ns_1_min_1, ) 2019-09-04T06:33:04.171+0000 D5 QUERY [conn71] Index 1: (ns_1_shard_1_min_1, ) 2019-09-04T06:33:04.171+0000 D5 QUERY [conn71] Index 2: (ns_1_lastmod_1, ) 2019-09-04T06:33:04.171+0000 D5 QUERY [conn71] Index 3: (_id_, ) 2019-09-04T06:33:04.171+0000 D5 QUERY [conn71] Tagged tree: $and ns $eq "config.system.sessions" || Selected Index #2 pos 0 combine 1 lastmod $gte Timestamp(1, 0) || Selected Index #2 pos 1 combine 1 2019-09-04T06:33:04.171+0000 D5 QUERY [conn71] Planner: solution constructed from the cache: FETCH ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ] ---Child: ------IXSCAN ---------indexName = ns_1_lastmod_1 keyPattern = { ns: 1, lastmod: 1 } ---------direction = 1 ---------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['lastmod']: [Timestamp(1, 0), Timestamp(4294967295, 4294967295)] ---------fetched = 0 ---------sortedByDiskLoc = 0 ---------getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ] 2019-09-04T06:33:04.171+0000 D3 STORAGE [conn71] WT begin_transaction for snapshot id 16262 2019-09-04T06:33:04.171+0000 D2 QUERY [conn71] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) 2019-09-04T06:33:04.171+0000 D3 STORAGE [conn71] WT rollback_transaction for snapshot id 16262 2019-09-04T06:33:04.171+0000 I COMMAND [conn71] command config.chunks command: find { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(1, 0) } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578783, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, AF46E30394552F417A91CC7D90F43954AA686709), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578783, 1), t: 1 } }, $db: "config" } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 
nreturned:1 queryHash:1DDA71BE planCacheKey:167D77D5 reslen:788 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:33:04.177+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:04.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:04.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:04.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, AF46E30394552F417A91CC7D90F43954AA686709), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:04.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:04.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, AF46E30394552F417A91CC7D90F43954AA686709), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:04.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, AF46E30394552F417A91CC7D90F43954AA686709), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:04.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), opTime: { ts: Timestamp(1567578783, 1), t: 1 }, wallTime: new Date(1567578783520) } 2019-09-04T06:33:04.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, AF46E30394552F417A91CC7D90F43954AA686709), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:04.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:04.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:33:04.277+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:04.311+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:04.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:04.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:04.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:04.377+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:04.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:04.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:04.477+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:04.532+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:04.532+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:04.532+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578783, 1) 2019-09-04T06:33:04.532+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16271 2019-09-04T06:33:04.532+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:04.532+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:04.532+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16271 2019-09-04T06:33:04.533+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16274 2019-09-04T06:33:04.533+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16274 2019-09-04T06:33:04.533+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578783, 1), t: 1 }({ ts: Timestamp(1567578783, 1), t: 1 }) 2019-09-04T06:33:04.577+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:04.677+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:04.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:04.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:04.778+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:04.811+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:04.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:04.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:04.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1110) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:04.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1110 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:14.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:04.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:32.839+0000 2019-09-04T06:33:04.838+0000 D2 ASIO [Replication] Request 1110 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), opTime: { ts: Timestamp(1567578783, 1), t: 1 }, wallTime: new Date(1567578783520), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpVisible: { ts: Timestamp(1567578783, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578783, 1), $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578783, 1) } 2019-09-04T06:33:04.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), opTime: { ts: Timestamp(1567578783, 1), t: 1 }, wallTime: new Date(1567578783520), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpVisible: { ts: Timestamp(1567578783, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578783, 1), $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578783, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:04.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:04.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1110) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), opTime: { ts: Timestamp(1567578783, 1), t: 1 }, wallTime: new Date(1567578783520), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpVisible: { ts: Timestamp(1567578783, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578783, 1), $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578783, 1) } 2019-09-04T06:33:04.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:33:04.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:06.838Z 2019-09-04T06:33:04.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:32.839+0000 2019-09-04T06:33:04.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:04.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1111) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:04.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1111 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:14.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:04.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:32.839+0000 2019-09-04T06:33:04.839+0000 D2 ASIO [Replication] Request 1111 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), opTime: { ts: Timestamp(1567578783, 1), t: 1 }, wallTime: new Date(1567578783520), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpVisible: { ts: Timestamp(1567578783, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578783, 1), $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578783, 1) } 2019-09-04T06:33:04.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), opTime: { ts: Timestamp(1567578783, 1), t: 1 }, wallTime: new Date(1567578783520), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpVisible: { ts: Timestamp(1567578783, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578783, 1), $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578783, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:04.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:04.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1111) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: 
new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), opTime: { ts: Timestamp(1567578783, 1), t: 1 }, wallTime: new Date(1567578783520), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpVisible: { ts: Timestamp(1567578783, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578783, 1), $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578783, 1) } 2019-09-04T06:33:04.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:04.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:33:14.381+0000 2019-09-04T06:33:04.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:33:15.527+0000 2019-09-04T06:33:04.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:04.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:06.839Z 2019-09-04T06:33:04.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:04.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:34.839+0000 2019-09-04T06:33:04.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:34.839+0000 2019-09-04T06:33:04.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:04.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:04.878+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:04.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:04.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:04.978+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:05.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:05.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:05.059+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:05.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, AF46E30394552F417A91CC7D90F43954AA686709), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:05.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:05.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, AF46E30394552F417A91CC7D90F43954AA686709), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:05.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, AF46E30394552F417A91CC7D90F43954AA686709), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:05.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), opTime: { ts: Timestamp(1567578783, 1), t: 1 }, wallTime: new Date(1567578783520) } 2019-09-04T06:33:05.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, AF46E30394552F417A91CC7D90F43954AA686709), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:05.078+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:05.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:05.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:05.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:05.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:05.178+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:05.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:05.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:05.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:05.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:33:05.278+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:05.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:05.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:05.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:05.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:05.360+0000 D2 ASIO [RS] Request 1108 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578785, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578785358), o: { $v: 1, $set: { ping: new Date(1567578785354), up: 2685 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpVisible: { ts: Timestamp(1567578783, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpApplied: { ts: Timestamp(1567578785, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578783, 1), $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578785, 1) } 2019-09-04T06:33:05.360+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578785, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578785358), o: { $v: 1, $set: { ping: new Date(1567578785354), up: 2685 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpVisible: { ts: Timestamp(1567578783, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpApplied: { ts: Timestamp(1567578785, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578783, 1), $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578785, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:05.360+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:05.360+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578785, 1) and ending at ts: Timestamp(1567578785, 1) 
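
The batch just fetched holds a single update to config.mongos (a mongos ping from cmodb801). Roughly, the fetcher's getMore on the sync source behaves like the tailing read sketched below; this is an approximation for illustration only (it ignores the term/lastKnownCommittedOpTime arguments and the awaitData semantics visible in the RemoteCommand entries above) and would be run on cmodb804, the sync source:

    // next oplog entry after the last fetched optime, in insertion order
    db.getSiblingDB("local").oplog.rs.find(
      { ts: { $gt: Timestamp(1567578783, 1) } }
    ).sort({ $natural: 1 }).limit(1)
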
2019-09-04T06:33:05.360+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:15.527+0000 2019-09-04T06:33:05.360+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:16.844+0000 2019-09-04T06:33:05.360+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:05.360+0000 D2 REPL [replication-1] oplog buffer has 0 bytes 2019-09-04T06:33:05.360+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578785, 1), t: 1 } 2019-09-04T06:33:05.360+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:05.360+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:05.360+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578783, 1) 2019-09-04T06:33:05.361+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16290 2019-09-04T06:33:05.361+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:05.361+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:05.361+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16290 2019-09-04T06:33:05.361+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:05.361+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:33:05.361+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578785, 1) } 2019-09-04T06:33:05.361+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:05.361+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578783, 1) 2019-09-04T06:33:05.361+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16275 2019-09-04T06:33:05.361+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16293 2019-09-04T06:33:05.361+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:05.361+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:05.361+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16293 2019-09-04T06:33:05.360+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:34.839+0000 2019-09-04T06:33:05.361+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16275 2019-09-04T06:33:05.361+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16296 2019-09-04T06:33:05.361+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16296 2019-09-04T06:33:05.361+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task 
on behalf of pool repl writer worker Pool 2019-09-04T06:33:05.361+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 16298 2019-09-04T06:33:05.361+0000 D4 STORAGE [repl-writer-worker-1] inserting record with timestamp Timestamp(1567578785, 1) 2019-09-04T06:33:05.361+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578785, 1) 2019-09-04T06:33:05.361+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 16298 2019-09-04T06:33:05.361+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:05.361+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:33:05.361+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16297 2019-09-04T06:33:05.361+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16297 2019-09-04T06:33:05.361+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16300 2019-09-04T06:33:05.361+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16300 2019-09-04T06:33:05.361+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578785, 1), t: 1 }({ ts: Timestamp(1567578785, 1), t: 1 }) 2019-09-04T06:33:05.361+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578785, 1) 2019-09-04T06:33:05.361+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16301 2019-09-04T06:33:05.361+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578785, 1) } } ] } sort: {} projection: {} 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578785, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578785, 1) || First: notFirst: full path: ts 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578785, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578785, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578785, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
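
The D5 QUERY trace above shows the subplanner handling the $or predicate on local.replset.minvalid: each clause is planned independently against the only available index (_id_), neither t nor ts is indexed, so every branch yields zero indexed solutions, and the entries that follow show the planner settling on a COLLSCAN (harmless here, since replset.minvalid holds a single document). A minimal pymongo sketch that reproduces the same predicate and surfaces the chosen plan; the host and port are assumptions based on this deployment's config:

```python
# Sketch: reproduce the minvalid $or predicate seen in the D5 QUERY trace
# and inspect the winning plan. Host/port are assumptions for this cluster.
from pymongo import MongoClient
from bson.timestamp import Timestamp

client = MongoClient("mongodb://cmodb803.togewa.com:27019")
minvalid = client.local["replset.minvalid"]

# Same shape as the logged query:
# { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578785, 1) } } ] }
predicate = {
    "$or": [
        {"t": {"$lt": 1}},
        {"t": 1, "ts": {"$lt": Timestamp(1567578785, 1)}},
    ]
}

# explain() returns the plan; with only the _id index available,
# queryPlanner.winningPlan should come back as a COLLSCAN stage.
plan = minvalid.find(predicate).explain()
print(plan["queryPlanner"]["winningPlan"])
```
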
2019-09-04T06:33:05.361+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578785, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:05.361+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16301 2019-09-04T06:33:05.361+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:05.361+0000 D3 STORAGE [repl-writer-worker-5] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:05.362+0000 D3 REPL [repl-writer-worker-5] applying op: { ts: Timestamp(1567578785, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578785358), o: { $v: 1, $set: { ping: new Date(1567578785354), up: 2685 } } }, oplog application mode: Secondary 2019-09-04T06:33:05.362+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578785, 1) 2019-09-04T06:33:05.362+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 16303 2019-09-04T06:33:05.362+0000 D2 QUERY [repl-writer-worker-5] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:33:05.362+0000 D4 WRITE [repl-writer-worker-5] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:33:05.362+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 16303 2019-09-04T06:33:05.362+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:05.362+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578785, 1), t: 1 }({ ts: Timestamp(1567578785, 1), t: 1 }) 2019-09-04T06:33:05.362+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578785, 1) 2019-09-04T06:33:05.362+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16302 2019-09-04T06:33:05.362+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:33:05.362+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:05.362+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:33:05.362+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:05.362+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:05.362+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:05.362+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16302 2019-09-04T06:33:05.362+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578785, 1) 2019-09-04T06:33:05.362+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:05.362+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16306 2019-09-04T06:33:05.362+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16306 2019-09-04T06:33:05.362+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578785, 1), t: 1 }({ ts: Timestamp(1567578785, 1), t: 1 }) 2019-09-04T06:33:05.362+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), appliedOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, appliedWallTime: new Date(1567578783520), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), appliedOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, appliedWallTime: new Date(1567578783520), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpVisible: { ts: Timestamp(1567578783, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:05.362+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1112 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:35.362+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), appliedOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, appliedWallTime: new Date(1567578783520), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), appliedOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, appliedWallTime: new Date(1567578783520), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578783, 1), t: 1 }, lastCommittedWall: new Date(1567578783520), lastOpVisible: { ts: Timestamp(1567578783, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:05.362+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:35.362+0000 2019-09-04T06:33:05.362+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578785, 1), t: 1 } 2019-09-04T06:33:05.363+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1113 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:15.363+0000 cmd:{ getMore: 2779728788818727477, 
collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578783, 1), t: 1 } } 2019-09-04T06:33:05.363+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:35.362+0000 2019-09-04T06:33:05.363+0000 D2 ASIO [RS] Request 1112 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578785, 1) } 2019-09-04T06:33:05.363+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578785, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:05.363+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:05.363+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:35.362+0000 2019-09-04T06:33:05.363+0000 D2 ASIO [RS] Request 1113 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpApplied: { ts: Timestamp(1567578785, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578785, 1) } 2019-09-04T06:33:05.363+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new 
Date(1567578785358), lastOpApplied: { ts: Timestamp(1567578785, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578785, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:05.363+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:05.363+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:05.363+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.363+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.363+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578780, 1) 2019-09-04T06:33:05.363+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:16.844+0000 2019-09-04T06:33:05.363+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:15.746+0000 2019-09-04T06:33:05.363+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1114 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:15.363+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578785, 1), t: 1 } } 2019-09-04T06:33:05.363+0000 D3 REPL [conn360] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn360] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:33:05.363+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:35.362+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn338] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn338] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.680+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn361] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn361] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn355] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn355] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:24.151+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn323] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn323] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.688+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn341] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn341] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.094+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn363] 
Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn363] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.670+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn352] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn352] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.291+0000 2019-09-04T06:33:05.363+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:05.363+0000 D3 REPL [conn343] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.363+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:34.839+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn343] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:22.594+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn370] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn370] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:29.874+0000 2019-09-04T06:33:05.363+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:33:05.363+0000 D3 REPL [conn344] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn344] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.677+0000 2019-09-04T06:33:05.363+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:05.363+0000 D3 REPL [conn357] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn357] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.823+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn353] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.363+0000 D3 REPL [conn353] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.419+0000 2019-09-04T06:33:05.364+0000 D3 REPL [conn354] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.364+0000 D3 REPL [conn354] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.413+0000 2019-09-04T06:33:05.364+0000 D3 REPL [conn337] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.364+0000 D3 REPL [conn337] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.660+0000 2019-09-04T06:33:05.364+0000 D3 REPL [conn358] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.364+0000 D3 REPL [conn358] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.833+0000 2019-09-04T06:33:05.364+0000 D3 REPL [conn322] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.364+0000 D3 REPL [conn322] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.989+0000 2019-09-04T06:33:05.364+0000 D3 REPL [conn365] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.364+0000 D3 REPL 
[conn365] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:25.060+0000 2019-09-04T06:33:05.364+0000 D3 REPL [conn364] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.364+0000 D3 REPL [conn364] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.767+0000 2019-09-04T06:33:05.364+0000 D3 REPL [conn327] Got notified of new snapshot: { ts: Timestamp(1567578785, 1), t: 1 }, 2019-09-04T06:33:05.358+0000 2019-09-04T06:33:05.364+0000 D3 REPL [conn327] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.840+0000 2019-09-04T06:33:05.364+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), appliedOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, appliedWallTime: new Date(1567578783520), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), appliedOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, appliedWallTime: new Date(1567578783520), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:05.364+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1115 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:35.364+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), appliedOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, appliedWallTime: new Date(1567578783520), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, durableWallTime: new Date(1567578783520), appliedOpTime: { ts: Timestamp(1567578783, 1), t: 1 }, appliedWallTime: new Date(1567578783520), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:05.364+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:35.362+0000 2019-09-04T06:33:05.364+0000 D2 ASIO [RS] Request 1115 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578785, 1) } 2019-09-04T06:33:05.364+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578785, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:05.364+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:05.364+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:35.362+0000 2019-09-04T06:33:05.379+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:05.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:05.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:05.461+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578785, 1) 2019-09-04T06:33:05.479+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:05.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:05.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:05.579+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:05.647+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:05.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:05.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:05.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:05.679+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:05.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:05.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:05.779+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:05.812+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:05.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:05.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:05.855+0000 I COMMAND [conn6] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:05.879+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:05.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:05.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:05.980+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:06.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:06.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:06.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:06.080+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:06.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:06.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:06.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:06.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:06.180+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:06.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:06.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:06.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 573B7ED4893E413844A1D9B196D129CB2E8ED7F4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:06.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:06.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 573B7ED4893E413844A1D9B196D129CB2E8ED7F4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:06.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 573B7ED4893E413844A1D9B196D129CB2E8ED7F4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:06.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), opTime: { ts: Timestamp(1567578785, 1), t: 1 }, wallTime: new Date(1567578785358) } 2019-09-04T06:33:06.232+0000 I COMMAND 
[conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 573B7ED4893E413844A1D9B196D129CB2E8ED7F4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:06.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:06.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:06.280+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:06.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:06.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:06.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:06.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:06.361+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:06.361+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:06.361+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578785, 1) 2019-09-04T06:33:06.361+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16326 2019-09-04T06:33:06.361+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:06.361+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:06.361+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16326 2019-09-04T06:33:06.362+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16329 2019-09-04T06:33:06.362+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16329 2019-09-04T06:33:06.362+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578785, 1), t: 1 }({ ts: Timestamp(1567578785, 1), t: 1 }) 2019-09-04T06:33:06.380+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:06.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:06.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:06.480+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:06.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:06.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:06.580+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:06.647+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 
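
The REPL_HB records above trace one full heartbeat round trip: conn28 receives replSetHeartbeat from cmodb804, processes it, and generates a response carrying this member's state (state: 2, i.e. SECONDARY), its sync source, and its durable/applied optimes. The same per-member data is what replSetGetStatus aggregates. A small sketch, assuming the same host names, that polls the equivalent summary:

```python
# Sketch: observe the member states that the replSetHeartbeat traffic above
# is maintaining. Host/port are assumptions for this config server replica set.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019")
status = client.admin.command("replSetGetStatus")

for m in status["members"]:
    # stateStr is e.g. PRIMARY/SECONDARY; the sync source field matches the
    # "syncingTo" value seen in the generated heartbeat responses.
    print(m["name"], m["stateStr"], m.get("syncSourceHost") or m.get("syncingTo", ""))
```
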
2019-09-04T06:33:06.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:06.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:06.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:06.681+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:06.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:06.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:06.781+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:06.811+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:06.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:06.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:06.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1116) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:06.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1116 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:16.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:06.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:34.839+0000 2019-09-04T06:33:06.838+0000 D2 ASIO [Replication] Request 1116 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), opTime: { ts: Timestamp(1567578785, 1), t: 1 }, wallTime: new Date(1567578785358), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578785, 1) } 2019-09-04T06:33:06.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), opTime: { ts: Timestamp(1567578785, 1), t: 1 }, wallTime: new Date(1567578785358), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, 
syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578785, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:06.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:06.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1116) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), opTime: { ts: Timestamp(1567578785, 1), t: 1 }, wallTime: new Date(1567578785358), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578785, 1) } 2019-09-04T06:33:06.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:33:06.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:08.838Z 2019-09-04T06:33:06.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:34.839+0000 2019-09-04T06:33:06.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:06.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1117) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:06.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1117 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:16.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:06.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:34.839+0000 2019-09-04T06:33:06.839+0000 D2 ASIO [Replication] Request 1117 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), opTime: { ts: Timestamp(1567578785, 1), t: 1 }, wallTime: new Date(1567578785358), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: 
BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578785, 1) } 2019-09-04T06:33:06.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), opTime: { ts: Timestamp(1567578785, 1), t: 1 }, wallTime: new Date(1567578785358), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578785, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:06.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:06.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1117) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), opTime: { ts: Timestamp(1567578785, 1), t: 1 }, wallTime: new Date(1567578785358), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578785, 1) } 2019-09-04T06:33:06.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:06.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:33:15.746+0000 2019-09-04T06:33:06.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:33:17.278+0000 2019-09-04T06:33:06.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:06.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:08.839Z 2019-09-04T06:33:06.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:06.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:36.839+0000 2019-09-04T06:33:06.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:36.839+0000 2019-09-04T06:33:06.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:06.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
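
The ELECTION entries above show the liveness mechanism at work: each heartbeat from the primary cancels the pending election timeout callback and reschedules it roughly 10 to 11.5 seconds out. The spread comes from the configured electionTimeoutMillis (10000 by default) plus a small randomized offset, so that secondaries do not all call for an election at the same instant. A sketch, under the assumption that default settings are in play, reading the configured value back:

```python
# Sketch: read the election timeout that drives the "Scheduling election
# timeout callback" entries above. Host/port are assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019")
conf = client.admin.command("replSetGetConfig")

timeout_ms = conf["config"]["settings"]["electionTimeoutMillis"]  # 10000 by default
print("electionTimeoutMillis:", timeout_ms)

# mongod schedules the callback at now + timeout + a random offset
# (up to ~15% of the timeout by default), which is why successive
# "Scheduling election timeout callback" targets vary from beat to beat.
```
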
2019-09-04T06:33:06.881+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:06.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:06.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:06.981+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:07.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:07.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:07.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:07.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 573B7ED4893E413844A1D9B196D129CB2E8ED7F4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:07.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:07.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 573B7ED4893E413844A1D9B196D129CB2E8ED7F4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:07.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 573B7ED4893E413844A1D9B196D129CB2E8ED7F4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:07.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), opTime: { ts: Timestamp(1567578785, 1), t: 1 }, wallTime: new Date(1567578785358) } 2019-09-04T06:33:07.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 573B7ED4893E413844A1D9B196D129CB2E8ED7F4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:07.081+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:07.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:07.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:07.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:07.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:07.162+0000 D2 
COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:07.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:07.181+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:07.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:07.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:07.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:07.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:07.281+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:07.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:07.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:07.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:07.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:07.361+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:07.361+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:07.361+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578785, 1) 2019-09-04T06:33:07.361+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16350 2019-09-04T06:33:07.361+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:07.361+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:07.361+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16350 2019-09-04T06:33:07.363+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16353 2019-09-04T06:33:07.363+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16353 2019-09-04T06:33:07.363+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578785, 1), t: 1 }({ ts: Timestamp(1567578785, 1), t: 1 }) 2019-09-04T06:33:07.381+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:07.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:07.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:07.482+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:07.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:07.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:07.582+0000 D4 
STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:07.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:07.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:07.647+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:07.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:07.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:07.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:07.682+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:07.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:07.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:07.782+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:07.812+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:07.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:07.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:07.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:07.882+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:07.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:07.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:07.982+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:08.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:08.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:08.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:08.083+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:08.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:08.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:08.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:08.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:08.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:08.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:08.183+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
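
Between the mongos isMaster polls (one per connection roughly every second), the WTJournalFlusher entries show the journal being flushed on its ~100 ms cycle; that cycle is what a write issued with j: true write concern waits on before acknowledging. A minimal sketch of such a journaled write, assuming the primary (cmodb802 at this point in the log) is reachable; illustrative only, since one would not normally write application data directly to a config server:

```python
# Sketch: a journaled write. The "flushed journal" cadence above (~100 ms)
# bounds how long a j:true acknowledgement can wait. Host name and the
# demo namespace are assumptions for illustration.
from pymongo import MongoClient, WriteConcern

client = MongoClient("mongodb://cmodb802.togewa.com:27019")
coll = client.test.get_collection(
    "journaled_demo", write_concern=WriteConcern(j=True)
)

# insert_one does not return until the entry is durable in the journal.
result = coll.insert_one({"probe": 1})
print("acknowledged:", result.acknowledged)
```
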
2019-09-04T06:33:08.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:08.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:08.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 573B7ED4893E413844A1D9B196D129CB2E8ED7F4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:08.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:08.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 573B7ED4893E413844A1D9B196D129CB2E8ED7F4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:08.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 573B7ED4893E413844A1D9B196D129CB2E8ED7F4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:08.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), opTime: { ts: Timestamp(1567578785, 1), t: 1 }, wallTime: new Date(1567578785358) } 2019-09-04T06:33:08.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 573B7ED4893E413844A1D9B196D129CB2E8ED7F4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:08.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:08.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:33:08.283+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:08.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:08.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:08.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:08.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:08.359+0000 D2 ASIO [RS] Request 1114 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578788, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578788355), o: { $v: 1, $set: { ping: new Date(1567578788355) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpApplied: { ts: Timestamp(1567578788, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 1) } 2019-09-04T06:33:08.359+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578788, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578788355), o: { $v: 1, $set: { ping: new Date(1567578788355) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpApplied: { ts: Timestamp(1567578788, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:08.359+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:08.359+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578788, 1) and ending at ts: Timestamp(1567578788, 1) 2019-09-04T06:33:08.359+0000 D4 REPL [replication-0] Canceling 
election timeout callback at 2019-09-04T06:33:17.278+0000 2019-09-04T06:33:08.359+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:19.841+0000 2019-09-04T06:33:08.359+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:08.359+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578788, 1), t: 1 } 2019-09-04T06:33:08.359+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:08.359+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:08.359+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578785, 1) 2019-09-04T06:33:08.359+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16375 2019-09-04T06:33:08.359+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:08.359+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:08.359+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16375 2019-09-04T06:33:08.359+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:33:08.359+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:08.359+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:08.359+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578788, 1) } 2019-09-04T06:33:08.359+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578785, 1) 2019-09-04T06:33:08.359+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16378 2019-09-04T06:33:08.359+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:08.359+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16354 2019-09-04T06:33:08.359+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:08.359+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16378 2019-09-04T06:33:08.359+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:36.839+0000 2019-09-04T06:33:08.359+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16354 2019-09-04T06:33:08.359+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16381 2019-09-04T06:33:08.359+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16381 2019-09-04T06:33:08.359+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:08.359+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot 
id 16383 2019-09-04T06:33:08.359+0000 D4 STORAGE [repl-writer-worker-9] inserting record with timestamp Timestamp(1567578788, 1) 2019-09-04T06:33:08.359+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578788, 1) 2019-09-04T06:33:08.359+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 16383 2019-09-04T06:33:08.359+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:08.359+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:33:08.359+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16382 2019-09-04T06:33:08.360+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16382 2019-09-04T06:33:08.360+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16385 2019-09-04T06:33:08.360+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16385 2019-09-04T06:33:08.360+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578788, 1), t: 1 }({ ts: Timestamp(1567578788, 1), t: 1 }) 2019-09-04T06:33:08.360+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578788, 1) 2019-09-04T06:33:08.360+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16386 2019-09-04T06:33:08.360+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578788, 1) } } ] } sort: {} projection: {} 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578788, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578788, 1) || First: notFirst: full path: ts 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578788, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578788, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578788, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
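
The D5 QUERY trace around this point is the subplanner working through the $or predicate on local.replset.minvalid: only the _id index exists, each branch rates zero indexed solutions, and every branch degenerates to a COLLSCAN. The same conclusion can be pulled from a live node with the explain command. A pymongo sketch, assuming direct access to this member (host and port taken from the log, the filter copied from the D2 QUERY entry, everything else illustrative):

```python
from pymongo import MongoClient
from bson.timestamp import Timestamp

# Host/port from this log; direct access to the member is an assumption.
client = MongoClient("cmodb803.togewa.com", 27019)

# Filter copied from the "Running query as sub-queries" D2 QUERY entry.
filt = {"$or": [
    {"t": {"$lt": 1}},
    {"t": 1, "ts": {"$lt": Timestamp(1567578788, 1)}},
]}

plan = client.local.command(
    {"explain": {"find": "replset.minvalid", "filter": filt},
     "verbosity": "queryPlanner"}
)
# With only the _id index available, the winning plan should be the same
# COLLSCAN the planner logs at D5 above.
print(plan["queryPlanner"]["winningPlan"])
```
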
2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578788, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:08.360+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16386 2019-09-04T06:33:08.360+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:08.360+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:08.360+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578788, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578788355), o: { $v: 1, $set: { ping: new Date(1567578788355) } } }, oplog application mode: Secondary 2019-09-04T06:33:08.360+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578788, 1) 2019-09-04T06:33:08.360+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 16388 2019-09-04T06:33:08.360+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "ConfigServer" } 2019-09-04T06:33:08.360+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:33:08.360+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 16388 2019-09-04T06:33:08.360+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:08.360+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578788, 1), t: 1 }({ ts: Timestamp(1567578788, 1), t: 1 }) 2019-09-04T06:33:08.360+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578788, 1) 2019-09-04T06:33:08.360+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16387 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:08.360+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:08.360+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:08.360+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16387 2019-09-04T06:33:08.360+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578788, 1) 2019-09-04T06:33:08.360+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16391 2019-09-04T06:33:08.360+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16391 2019-09-04T06:33:08.360+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578788, 1), t: 1 }({ ts: Timestamp(1567578788, 1), t: 1 }) 2019-09-04T06:33:08.360+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:08.360+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578788, 1), t: 1 }, appliedWallTime: new Date(1567578788355), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:08.360+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1118 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:38.360+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578788, 1), t: 1 }, appliedWallTime: new Date(1567578788355), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:08.360+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.360+0000 2019-09-04T06:33:08.361+0000 D2 ASIO [RS] Request 1118 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 1) } 2019-09-04T06:33:08.361+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:08.361+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:08.361+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.361+0000 2019-09-04T06:33:08.361+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578788, 1), t: 1 } 2019-09-04T06:33:08.361+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1119 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:18.361+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578785, 1), t: 1 } } 2019-09-04T06:33:08.361+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.361+0000 2019-09-04T06:33:08.362+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:33:08.362+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:08.362+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578788, 1), t: 1 }, durableWallTime: new Date(1567578788355), appliedOpTime: { ts: Timestamp(1567578788, 1), t: 1 }, appliedWallTime: new Date(1567578788355), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:08.362+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1120 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:38.362+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { 
ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578788, 1), t: 1 }, durableWallTime: new Date(1567578788355), appliedOpTime: { ts: Timestamp(1567578788, 1), t: 1 }, appliedWallTime: new Date(1567578788355), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:08.362+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.361+0000 2019-09-04T06:33:08.363+0000 D2 ASIO [RS] Request 1120 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 1) } 2019-09-04T06:33:08.363+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578785, 1), t: 1 }, lastCommittedWall: new Date(1567578785358), lastOpVisible: { ts: Timestamp(1567578785, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578785, 1), $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:08.363+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:08.363+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.361+0000 2019-09-04T06:33:08.363+0000 D2 ASIO [RS] Request 1119 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 1), t: 1 }, lastCommittedWall: new Date(1567578788355), lastOpVisible: { ts: Timestamp(1567578788, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578788, 1), t: 1 }, lastCommittedWall: new Date(1567578788355), lastOpApplied: { ts: Timestamp(1567578788, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578788, 1), $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 1) } 2019-09-04T06:33:08.363+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 1), t: 1 }, lastCommittedWall: new Date(1567578788355), lastOpVisible: { ts: Timestamp(1567578788, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578788, 1), t: 1 }, lastCommittedWall: new Date(1567578788355), lastOpApplied: { ts: Timestamp(1567578788, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 1), $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:08.363+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:08.363+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:08.363+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.363+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.363+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578783, 1) 2019-09-04T06:33:08.363+0000 D3 REPL [conn365] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.363+0000 D3 REPL [conn365] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:25.060+0000 2019-09-04T06:33:08.363+0000 D3 REPL [conn370] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.363+0000 D3 REPL [conn370] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:29.874+0000 2019-09-04T06:33:08.363+0000 D3 REPL [conn360] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.363+0000 D3 REPL [conn360] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:33:08.363+0000 D3 REPL [conn355] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.363+0000 D3 REPL [conn355] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:24.151+0000 2019-09-04T06:33:08.363+0000 D3 REPL [conn344] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.363+0000 D3 REPL [conn344] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.677+0000 2019-09-04T06:33:08.363+0000 D3 REPL [conn353] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.363+0000 D3 REPL [conn353] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.419+0000 
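
Requests 1114, 1119 and 1121 in this stretch are the oplog fetcher's find/getMore loop against the sync source's local.oplog.rs (batchSize: 13981010, maxTimeMS: 5000); each non-empty nextBatch is handed to the ReplBatcher and the repl-writer workers, and the waitUntilOpTime entries are readers being woken as the new snapshot becomes visible. A tailable-await cursor reproduces the fetcher's access pattern from an ordinary driver. A rough pymongo sketch, with the sync-source host and starting timestamp taken from the log and everything else assumed:

```python
from pymongo import MongoClient, CursorType
from bson.timestamp import Timestamp

# Sync source seen in the RemoteCommand targets above; access is assumed.
client = MongoClient("cmodb804.togewa.com", 27019)
oplog = client.local["oplog.rs"]

last_seen = Timestamp(1567578788, 1)  # last fetched optime from the log
cursor = oplog.find(
    {"ts": {"$gt": last_seen}},
    cursor_type=CursorType.TAILABLE_AWAIT,  # server blocks on getMore,
    max_await_time_ms=5000,                 # akin to maxTimeMS: 5000 above
)
# A production fetcher would re-create the cursor when it dies; this
# sketch just drains whatever the tailable cursor yields.
for op in cursor:
    # Each document mirrors the nextBatch entries above: ts, t, op, ns, o...
    print(op["ts"], op.get("op"), op.get("ns"))
    last_seen = op["ts"]
```
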
2019-09-04T06:33:08.363+0000 D3 REPL [conn337] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.363+0000 D3 REPL [conn337] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.660+0000 2019-09-04T06:33:08.363+0000 D3 REPL [conn358] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.363+0000 D3 REPL [conn358] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.833+0000 2019-09-04T06:33:08.363+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:19.841+0000 2019-09-04T06:33:08.364+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:18.792+0000 2019-09-04T06:33:08.364+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:08.364+0000 D3 REPL [conn323] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn323] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.688+0000 2019-09-04T06:33:08.364+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:36.839+0000 2019-09-04T06:33:08.364+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1121 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:18.364+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578788, 1), t: 1 } } 2019-09-04T06:33:08.364+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.361+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn363] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn363] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.670+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn352] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn352] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.291+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn343] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn343] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:22.594+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn327] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn327] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.840+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn361] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn361] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn357] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn357] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.823+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn354] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 
2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn354] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.413+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn322] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn322] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.989+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn364] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn364] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.767+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn341] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn341] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.094+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn338] Got notified of new snapshot: { ts: Timestamp(1567578788, 1), t: 1 }, 2019-09-04T06:33:08.355+0000 2019-09-04T06:33:08.364+0000 D3 REPL [conn338] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.680+0000 2019-09-04T06:33:08.383+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:08.439+0000 D2 COMMAND [conn61] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578785, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 573B7ED4893E413844A1D9B196D129CB2E8ED7F4), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578785, 1), t: 1 } }, $db: "config" } 2019-09-04T06:33:08.439+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578785, 1), t: 1 } } } 2019-09-04T06:33:08.439+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:33:08.439+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578785, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 573B7ED4893E413844A1D9B196D129CB2E8ED7F4), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578785, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578788, 1) 2019-09-04T06:33:08.439+0000 D2 QUERY [conn61] Using idhack: query: { _id: "config.system.sessions" } sort: {} projection: {} limit: 1 2019-09-04T06:33:08.439+0000 D3 STORAGE [conn61] WT begin_transaction for snapshot id 16394 2019-09-04T06:33:08.439+0000 D3 STORAGE [conn61] WT rollback_transaction for snapshot id 16394 2019-09-04T06:33:08.439+0000 I COMMAND [conn61] command config.collections command: find { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578785, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, 
573B7ED4893E413844A1D9B196D129CB2E8ED7F4), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578785, 1), t: 1 } }, $db: "config" } planSummary: IDHACK keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:702 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:33:08.439+0000 D2 COMMAND [conn61] run command config.$cmd { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(1, 0) } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578788, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 40928C9CC15D33463F50BC3A3B27F2B53603603D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578788, 1), t: 1 } }, $db: "config" } 2019-09-04T06:33:08.439+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578788, 1), t: 1 } } } 2019-09-04T06:33:08.439+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:33:08.439+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(1, 0) } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578788, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 40928C9CC15D33463F50BC3A3B27F2B53603603D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578788, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578788, 1) 2019-09-04T06:33:08.440+0000 D5 QUERY [conn61] Tagging the match expression according to cache data: Filter: $and ns $eq "config.system.sessions" lastmod $gte Timestamp(1, 0) Cache data: (index-tagged expression tree: tree=Node ---Leaf (ns_1_lastmod_1, ), pos: 0, can combine? 1 ---Leaf (ns_1_lastmod_1, ), pos: 1, can combine? 
1 ) 2019-09-04T06:33:08.440+0000 D5 QUERY [conn61] Index 0: (ns_1_min_1, ) 2019-09-04T06:33:08.440+0000 D5 QUERY [conn61] Index 1: (ns_1_shard_1_min_1, ) 2019-09-04T06:33:08.440+0000 D5 QUERY [conn61] Index 2: (ns_1_lastmod_1, ) 2019-09-04T06:33:08.440+0000 D5 QUERY [conn61] Index 3: (_id_, ) 2019-09-04T06:33:08.440+0000 D5 QUERY [conn61] Tagged tree: $and ns $eq "config.system.sessions" || Selected Index #2 pos 0 combine 1 lastmod $gte Timestamp(1, 0) || Selected Index #2 pos 1 combine 1 2019-09-04T06:33:08.440+0000 D5 QUERY [conn61] Planner: solution constructed from the cache: FETCH ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ] ---Child: ------IXSCAN ---------indexName = ns_1_lastmod_1 keyPattern = { ns: 1, lastmod: 1 } ---------direction = 1 ---------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['lastmod']: [Timestamp(1, 0), Timestamp(4294967295, 4294967295)] ---------fetched = 0 ---------sortedByDiskLoc = 0 ---------getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ] 2019-09-04T06:33:08.440+0000 D3 STORAGE [conn61] WT begin_transaction for snapshot id 16396 2019-09-04T06:33:08.440+0000 D2 QUERY [conn61] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) 2019-09-04T06:33:08.440+0000 D3 STORAGE [conn61] WT rollback_transaction for snapshot id 16396 2019-09-04T06:33:08.440+0000 I COMMAND [conn61] command config.chunks command: find { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(1, 0) } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578788, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 40928C9CC15D33463F50BC3A3B27F2B53603603D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578788, 1), t: 1 } }, $db: "config" } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:1DDA71BE planCacheKey:167D77D5 reslen:788 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:33:08.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:08.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:08.459+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578788, 1) 2019-09-04T06:33:08.483+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:08.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:08.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:08.583+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:08.604+0000 D2 ASIO [RS] Request 1121 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578788, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: 
UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578788567), o: { $v: 1, $set: { ping: new Date(1567578788567) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 1), t: 1 }, lastCommittedWall: new Date(1567578788355), lastOpVisible: { ts: Timestamp(1567578788, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578788, 1), t: 1 }, lastCommittedWall: new Date(1567578788355), lastOpApplied: { ts: Timestamp(1567578788, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 1), $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } 2019-09-04T06:33:08.604+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578788, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578788567), o: { $v: 1, $set: { ping: new Date(1567578788567) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 1), t: 1 }, lastCommittedWall: new Date(1567578788355), lastOpVisible: { ts: Timestamp(1567578788, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578788, 1), t: 1 }, lastCommittedWall: new Date(1567578788355), lastOpApplied: { ts: Timestamp(1567578788, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 1), $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:08.604+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:08.604+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578788, 3) and ending at ts: Timestamp(1567578788, 3) 2019-09-04T06:33:08.604+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:18.792+0000 2019-09-04T06:33:08.604+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:18.724+0000 2019-09-04T06:33:08.605+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:08.605+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578788, 3), t: 1 } 2019-09-04T06:33:08.605+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:36.839+0000 2019-09-04T06:33:08.605+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:08.605+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs 
@ RecordId(10) 2019-09-04T06:33:08.605+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578788, 1) 2019-09-04T06:33:08.605+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16401 2019-09-04T06:33:08.605+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:08.605+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:08.605+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16401 2019-09-04T06:33:08.605+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:08.605+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:08.605+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578788, 1) 2019-09-04T06:33:08.605+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:33:08.605+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16404 2019-09-04T06:33:08.605+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578788, 3) } 2019-09-04T06:33:08.605+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:08.605+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:08.605+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16404 2019-09-04T06:33:08.605+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16393 2019-09-04T06:33:08.605+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16393 2019-09-04T06:33:08.605+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16407 2019-09-04T06:33:08.605+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16407 2019-09-04T06:33:08.605+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:08.605+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 16409 2019-09-04T06:33:08.605+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578788, 3) 2019-09-04T06:33:08.605+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578788, 3) 2019-09-04T06:33:08.605+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 16409 2019-09-04T06:33:08.605+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:08.605+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:33:08.605+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16408 2019-09-04T06:33:08.605+0000 
D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16408 2019-09-04T06:33:08.605+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16411 2019-09-04T06:33:08.605+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16411 2019-09-04T06:33:08.605+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578788, 3), t: 1 }({ ts: Timestamp(1567578788, 3), t: 1 }) 2019-09-04T06:33:08.605+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578788, 3) 2019-09-04T06:33:08.605+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16412 2019-09-04T06:33:08.605+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578788, 3) } } ] } sort: {} projection: {} 2019-09-04T06:33:08.605+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:08.605+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:33:08.605+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578788, 3) Sort: {} Proj: {} ============================= 2019-09-04T06:33:08.605+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:08.605+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:08.605+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:08.605+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578788, 3) || First: notFirst: full path: ts 2019-09-04T06:33:08.605+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:08.605+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578788, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:08.605+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:08.605+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:33:08.605+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:33:08.605+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:08.605+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:08.605+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:33:08.605+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:08.606+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:08.606+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:08.606+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578788, 3) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:33:08.606+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:08.606+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:08.606+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:08.606+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578788, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:08.606+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:08.606+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578788, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:08.606+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16412
2019-09-04T06:33:08.606+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:08.606+0000 D3 STORAGE [repl-writer-worker-13] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:08.606+0000 D3 REPL [repl-writer-worker-13] applying op: { ts: Timestamp(1567578788, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578788567), o: { $v: 1, $set: { ping: new Date(1567578788567) } } }, oplog application mode: Secondary
2019-09-04T06:33:08.606+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578788, 3)
2019-09-04T06:33:08.606+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 16414
2019-09-04T06:33:08.606+0000 D2 QUERY [repl-writer-worker-13] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }
2019-09-04T06:33:08.606+0000 D4 WRITE [repl-writer-worker-13] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:33:08.606+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 16414
2019-09-04T06:33:08.606+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:08.606+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578788, 3), t: 1 }({ ts: Timestamp(1567578788, 3), t: 1 })
2019-09-04T06:33:08.606+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578788, 3)
2019-09-04T06:33:08.606+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16413
2019-09-04T06:33:08.606+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:33:08.606+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:08.606+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:33:08.606+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:08.606+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:08.606+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:33:08.606+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16413
2019-09-04T06:33:08.606+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578788, 3)
2019-09-04T06:33:08.606+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:08.606+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16417
2019-09-04T06:33:08.606+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16417
2019-09-04T06:33:08.606+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578788, 1), t: 1 }, durableWallTime: new Date(1567578788355), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 1), t: 1 }, lastCommittedWall: new Date(1567578788355), lastOpVisible: { ts: Timestamp(1567578788, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:08.606+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1122 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:38.606+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578788, 1), t: 1 }, durableWallTime: new Date(1567578788355), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 1), t: 1 }, lastCommittedWall: new Date(1567578788355), lastOpVisible: { ts: Timestamp(1567578788, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:08.606+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.606+0000
2019-09-04T06:33:08.606+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578788, 3), t: 1 }({ ts: Timestamp(1567578788, 3), t: 1 })
2019-09-04T06:33:08.607+0000 D2 ASIO [RS] Request 1122 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 1), t: 1 }, lastCommittedWall: new Date(1567578788355), lastOpVisible: { ts: Timestamp(1567578788, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 1), $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) }
2019-09-04T06:33:08.607+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 1), t: 1 }, lastCommittedWall: new Date(1567578788355), lastOpVisible: { ts: Timestamp(1567578788, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 1), $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:08.607+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:08.607+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578788, 3), t: 1 }
2019-09-04T06:33:08.607+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1123 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:18.607+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578788, 1), t: 1 } }
2019-09-04T06:33:08.607+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.607+0000
2019-09-04T06:33:08.607+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.607+0000
2019-09-04T06:33:08.609+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:33:08.609+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:08.609+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 1), t: 1 }, lastCommittedWall: new Date(1567578788355), lastOpVisible: { ts: Timestamp(1567578788, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:08.609+0000 D2 ASIO [RS] Request 1123 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpApplied: { ts: Timestamp(1567578788, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) }
2019-09-04T06:33:08.609+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1124 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:38.609+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, durableWallTime: new Date(1567578785358), appliedOpTime: { ts: Timestamp(1567578785, 1), t: 1 }, appliedWallTime: new Date(1567578785358), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 1), t: 1 }, lastCommittedWall: new Date(1567578788355), lastOpVisible: { ts: Timestamp(1567578788, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:08.609+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpApplied: { ts: Timestamp(1567578788, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:08.609+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:08.609+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:33:08.609+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.609+0000
2019-09-04T06:33:08.609+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.609+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.609+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578783, 3)
2019-09-04T06:33:08.609+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:18.724+0000
2019-09-04T06:33:08.609+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:19.415+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn370] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn370] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:29.874+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn344] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn344] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.677+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn327] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn327] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.840+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn337] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn337] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.660+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn322] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn322] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.989+0000
2019-09-04T06:33:08.609+0000 D2 ASIO [RS] Request 1124 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) }
2019-09-04T06:33:08.609+0000 D3 REPL [conn341] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.609+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1125 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:18.609+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578788, 3), t: 1 } }
2019-09-04T06:33:08.609+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:08.609+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:36.839+0000
2019-09-04T06:33:08.609+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.609+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn341] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.094+0000
2019-09-04T06:33:08.609+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:08.609+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:08.609+0000 D3 REPL [conn352] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.609+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.609+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn352] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.291+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn343] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn343] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:22.594+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn361] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn361] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn354] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn354] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.413+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn364] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn364] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.767+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn338] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn338] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.680+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn365] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn365] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:25.060+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn360] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.609+0000 D3 REPL [conn360] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000
2019-09-04T06:33:08.610+0000 D3 REPL [conn355] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.610+0000 D3 REPL [conn355] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:24.151+0000
2019-09-04T06:33:08.610+0000 D3 REPL [conn353] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.610+0000 D3 REPL [conn353] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:13.419+0000
2019-09-04T06:33:08.610+0000 D3 REPL [conn358] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.610+0000 D3 REPL [conn358] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.833+0000
2019-09-04T06:33:08.610+0000 D3 REPL [conn357] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.610+0000 D3 REPL [conn357] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.823+0000
2019-09-04T06:33:08.610+0000 D3 REPL [conn323] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.610+0000 D3 REPL [conn323] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:11.688+0000
2019-09-04T06:33:08.610+0000 D3 REPL [conn363] Got notified of new snapshot: { ts: Timestamp(1567578788, 3), t: 1 }, 2019-09-04T06:33:08.567+0000
2019-09-04T06:33:08.610+0000 D3 REPL [conn363] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.670+0000
2019-09-04T06:33:08.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:08.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:08.647+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:08.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:08.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:08.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:08.683+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:08.705+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578788, 3)
2019-09-04T06:33:08.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:08.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:08.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:08.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:08.783+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:08.812+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:08.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:08.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:08.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1126) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:08.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1126 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:18.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:08.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:36.839+0000
2019-09-04T06:33:08.838+0000 D2 ASIO [Replication] Request 1126 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) }
2019-09-04T06:33:08.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:08.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:08.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1126) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) }
2019-09-04T06:33:08.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:33:08.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:10.838Z
2019-09-04T06:33:08.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:36.839+0000
2019-09-04T06:33:08.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:08.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1127) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:08.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1127 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:18.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:08.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:36.839+0000
2019-09-04T06:33:08.839+0000 D2 ASIO [Replication] Request 1127 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) }
2019-09-04T06:33:08.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } target: cmodb802.togewa.com:27019
2019-09-04T06:33:08.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:08.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1127) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) }
2019-09-04T06:33:08.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:33:08.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:33:19.415+0000
2019-09-04T06:33:08.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:33:20.231+0000
2019-09-04T06:33:08.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:33:08.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:10.839Z
2019-09-04T06:33:08.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:08.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.839+0000
2019-09-04T06:33:08.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.839+0000
2019-09-04T06:33:08.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:08.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:08.884+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:08.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:08.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:08.984+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:09.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:09.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 40928C9CC15D33463F50BC3A3B27F2B53603603D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:09.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:33:09.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 40928C9CC15D33463F50BC3A3B27F2B53603603D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:09.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 40928C9CC15D33463F50BC3A3B27F2B53603603D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:09.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567) }
2019-09-04T06:33:09.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 40928C9CC15D33463F50BC3A3B27F2B53603603D), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.084+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:09.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.153+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.184+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:09.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:09.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:09.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.284+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:09.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.384+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:09.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.484+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:09.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.537+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578788, 3), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 40928C9CC15D33463F50BC3A3B27F2B53603603D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578788, 3), t: 1 } }, $db: "config" }
2019-09-04T06:33:09.537+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578788, 3), t: 1 } } }
2019-09-04T06:33:09.537+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:33:09.537+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578788, 3), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 40928C9CC15D33463F50BC3A3B27F2B53603603D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578788, 3), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578788, 3)
2019-09-04T06:33:09.537+0000 D2 QUERY [conn61] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1
2019-09-04T06:33:09.537+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578788, 3), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 40928C9CC15D33463F50BC3A3B27F2B53603603D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578788, 3), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:33:09.538+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578788, 3), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 40928C9CC15D33463F50BC3A3B27F2B53603603D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578788, 3), t: 1 } }, $db: "config" }
2019-09-04T06:33:09.538+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578788, 3), t: 1 } } }
2019-09-04T06:33:09.538+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:33:09.538+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578788, 3), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 40928C9CC15D33463F50BC3A3B27F2B53603603D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578788, 3), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578788, 3)
2019-09-04T06:33:09.538+0000 D2 QUERY [conn61] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1
2019-09-04T06:33:09.538+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578788, 3), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 40928C9CC15D33463F50BC3A3B27F2B53603603D), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578788, 3), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:33:09.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.585+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:09.605+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:09.605+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:09.605+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578788, 3)
2019-09-04T06:33:09.605+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16446
2019-09-04T06:33:09.605+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:09.605+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:09.605+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16446
2019-09-04T06:33:09.606+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16449
2019-09-04T06:33:09.606+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16449
2019-09-04T06:33:09.606+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578788, 3), t: 1 }({ ts: Timestamp(1567578788, 3), t: 1 })
2019-09-04T06:33:09.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.647+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.685+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:09.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.785+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:09.811+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.885+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:09.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:09.985+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:09.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:09.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:10.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:10.004+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:33:10.004+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:33:10.004+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms
2019-09-04T06:33:10.012+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:33:10.012+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:33:10.012+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:33:10.012+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:33:10.012+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456
2019-09-04T06:33:10.012+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:33:10.014+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:10.015+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:10.016+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:10.016+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:33:10.016+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:33:10.016+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:33:10.016+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:10.016+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} =============================
2019-09-04T06:33:10.016+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:33:10.016+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:33:10.016+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:33:10.016+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:33:10.016+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:33:10.016+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:33:10.016+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:10.016+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:10.016+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:33:10.016+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578788, 3)
2019-09-04T06:33:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16468
2019-09-04T06:33:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16468
2019-09-04T06:33:10.016+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:10.016+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:10.017+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:33:10.017+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:33:10.017+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:10.017+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} =============================
2019-09-04T06:33:10.017+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:33:10.017+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:33:10.017+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578788, 3)
2019-09-04T06:33:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16471
2019-09-04T06:33:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16471
2019-09-04T06:33:10.017+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:10.018+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:33:10.018+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:10.018+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} =============================
2019-09-04T06:33:10.018+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:33:10.018+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:33:10.018+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578788, 3)
2019-09-04T06:33:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16473
2019-09-04T06:33:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16473
2019-09-04T06:33:10.018+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:10.018+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:33:10.018+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:33:10.018+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2019-09-04T06:33:10.031+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:10.031+0000 D2 COMMAND [conn90] command: listDatabases
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16476
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16476
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16477
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16477
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16478
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16478
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16479
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" }
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16479
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16480
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" }
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16480
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16481
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16481 2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16482 2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16482 2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16483 2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:33:10.031+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16483 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16484 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16484 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16485 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16485 
2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16486 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
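The two $natural-hinted finds on local.oplog.rs above (earliest and latest entry, one document each, COLLSCAN by design), the probe for the long-gone master/slave namespace local.oplog.$main (EOF plan), and the listDatabases call that drives this per-collection catalog walk are a typical polling round from a monitoring or backup agent reading with secondaryPreferred. A minimal client-side sketch of the same round, assuming pymongo; only the host, port, and command shapes come from the log, everything else is illustrative:

    from pymongo import MongoClient, ReadPreference

    client = MongoClient("cmodb803.togewa.com", 27019)
    pref = ReadPreference.SECONDARY_PREFERRED  # matches $readPreference in the log

    local = client["local"]
    # Oldest and newest oplog entries; sorting by {"$natural": ...} is what makes
    # the planner log "Forcing a table scan due to hinted $natural".
    first = local.command({"find": "oplog.rs", "filter": {"ts": {"$exists": True}},
                           "sort": {"$natural": 1}, "limit": 1, "singleBatch": True},
                          read_preference=pref)
    last = local.command({"find": "oplog.rs", "filter": {"ts": {"$exists": True}},
                          "sort": {"$natural": -1}, "limit": 1, "singleBatch": True},
                         read_preference=pref)

    # One listDatabases call is the client-side cost of the whole
    # "looking up metadata for: ..." walk recorded above.
    dbs = client.admin.command({"listDatabases": 1}, read_preference=pref)

    # The per-database dbStats calls that follow a little later in the log:
    for name in ("admin", "config", "local"):
        client[name].command({"dbStats": 1}, read_preference=pref)
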
2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16486 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16487 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16487 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16488 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16488 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16489 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16489 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16490 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
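Each "fetched CCE metadata" document above is one durable-catalog (_mdb_catalog) entry: md describes the namespace and its indexes, ident names the WiredTiger table backing the collection, and idxIdent maps each index to its own table, which is why the paths split into config/collection/... and config/index/... (directoryPerDB plus directoryForIndexes, per the startup options). The same idents are visible from a client; a sketch assuming pymongo, with config.tags picked because its entry appears just above:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)

    # collStats exposes the backing table as a WiredTiger URI, e.g.
    # "statistics:table:config/collection/89--6194257481163143499" for config.tags.
    stats = client["config"].command({"collStats": "tags"})
    print(stats["wiredTiger"]["uri"])

    # Index names here line up with the idxIdent keys in the catalog entry
    # (ns_1_min_1, ns_1_tag_1, _id_).
    print(list(client["config"]["tags"].index_information()))
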
2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16490 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16491 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16491 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16492 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16492 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16493 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16493 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16494 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 16494 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16495 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16495 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16496 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16496 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16497 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:10.032+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16497 2019-09-04T06:33:10.033+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:33:10.033+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16499 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16499 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16500 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16500 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16501 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16501 2019-09-04T06:33:10.033+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:10.033+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16503 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16503 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16504 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16504 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16505 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16505 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16506 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16506 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16507 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16507 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16508 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16508 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16509 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16509 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 16510 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16510 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16511 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16511 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16512 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16512 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16513 2019-09-04T06:33:10.033+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16513 2019-09-04T06:33:10.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16514 2019-09-04T06:33:10.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16514 2019-09-04T06:33:10.034+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:10.034+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:33:10.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16516 2019-09-04T06:33:10.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16516 2019-09-04T06:33:10.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16517 2019-09-04T06:33:10.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16517 2019-09-04T06:33:10.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16518 2019-09-04T06:33:10.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16518 2019-09-04T06:33:10.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16519 2019-09-04T06:33:10.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16519 2019-09-04T06:33:10.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16520 2019-09-04T06:33:10.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16520 2019-09-04T06:33:10.034+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16521 2019-09-04T06:33:10.034+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16521 2019-09-04T06:33:10.034+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:10.038+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.038+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.055+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.056+0000 I 
COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.085+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:10.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.185+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:10.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 40928C9CC15D33463F50BC3A3B27F2B53603603D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:10.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:10.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 40928C9CC15D33463F50BC3A3B27F2B53603603D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:10.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 40928C9CC15D33463F50BC3A3B27F2B53603603D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:10.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new 
Date(1567578788567) } 2019-09-04T06:33:10.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 3), signature: { hash: BinData(0, 40928C9CC15D33463F50BC3A3B27F2B53603603D), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:10.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:10.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.286+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:10.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.386+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:10.448+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.448+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.486+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:10.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.586+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:10.605+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:10.605+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:10.605+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578788, 3) 2019-09-04T06:33:10.605+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16540 2019-09-04T06:33:10.605+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 
1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:10.605+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:10.605+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16540 2019-09-04T06:33:10.607+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16543 2019-09-04T06:33:10.607+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16543 2019-09-04T06:33:10.607+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578788, 3), t: 1 }({ ts: Timestamp(1567578788, 3), t: 1 }) 2019-09-04T06:33:10.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.647+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.686+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:10.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.786+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:10.811+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:10.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1128) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:10.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1128 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:20.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:10.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 
2019-09-04T06:33:38.839+0000 2019-09-04T06:33:10.838+0000 D2 ASIO [Replication] Request 1128 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578789, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } 2019-09-04T06:33:10.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578789, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:10.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:10.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1128) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578789, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } 2019-09-04T06:33:10.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:33:10.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:12.838Z 2019-09-04T06:33:10.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 
2019-09-04T06:33:38.839+0000 2019-09-04T06:33:10.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:10.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1129) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:10.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1129 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:20.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:10.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.839+0000 2019-09-04T06:33:10.839+0000 D2 ASIO [Replication] Request 1129 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578789, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } 2019-09-04T06:33:10.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578789, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:10.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:10.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1129) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, 
configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578789, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } 2019-09-04T06:33:10.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:10.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:33:20.231+0000 2019-09-04T06:33:10.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:33:21.876+0000 2019-09-04T06:33:10.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:10.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:12.839Z 2019-09-04T06:33:10.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:10.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:40.839+0000 2019-09-04T06:33:10.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:40.839+0000 2019-09-04T06:33:10.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.887+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:10.944+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.945+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.948+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.948+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:10.987+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:10.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:10.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:11.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
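Note: the heartbeat traffic above is the replica-set liveness protocol at work: cmodb803 (fromId: 1) polls cmodb802 (state: 1, PRIMARY) and cmodb804 (state: 2, SECONDARY) every two seconds, and each response from the primary cancels and reschedules the election timeout (here to 06:33:21.876). The member states carried in these responses can be read from any member with replSetGetStatus; a minimal sketch in Python, assuming pymongo is installed, the host is reachable, and no authentication is required (host and port come from the log, everything else is illustrative):

    from pymongo import MongoClient

    # Connect to the config server member that wrote this log.
    client = MongoClient("cmodb803.togewa.com", 27019)

    # replSetGetStatus reports the same states the heartbeats carry:
    # state 1 = PRIMARY, state 2 = SECONDARY.
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        print(member["name"], member["state"], member["stateStr"])
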
2019-09-04T06:33:11.063+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:11.063+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:33:10.839+0000 2019-09-04T06:33:11.063+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:33:10.838+0000 2019-09-04T06:33:11.063+0000 D3 REPL [replexec-0] stalest member MemberId(2) date: 2019-09-04T06:33:10.838+0000 2019-09-04T06:33:11.063+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:33:20.838+0000 2019-09-04T06:33:11.063+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:40.839+0000 2019-09-04T06:33:11.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578789, 2), signature: { hash: BinData(0, AF3E5F6D729C21BC3F3C9F2E05C578CE3E4C4CDB), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:11.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:11.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578789, 2), signature: { hash: BinData(0, AF3E5F6D729C21BC3F3C9F2E05C578CE3E4C4CDB), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:11.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578789, 2), signature: { hash: BinData(0, AF3E5F6D729C21BC3F3C9F2E05C578CE3E4C4CDB), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:11.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567) } 2019-09-04T06:33:11.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578789, 2), signature: { hash: BinData(0, AF3E5F6D729C21BC3F3C9F2E05C578CE3E4C4CDB), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.087+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:11.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.152+0000 I COMMAND [conn23] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.187+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:11.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:11.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:11.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.273+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.287+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:11.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.387+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:11.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.488+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:11.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.588+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:11.606+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:11.606+0000 D3 STORAGE [ReplBatcher] looking up metadata for: 
local.oplog.rs @ RecordId(10) 2019-09-04T06:33:11.606+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578788, 3) 2019-09-04T06:33:11.606+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16576 2019-09-04T06:33:11.606+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:11.606+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:11.606+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16576 2019-09-04T06:33:11.607+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16579 2019-09-04T06:33:11.607+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16579 2019-09-04T06:33:11.607+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578788, 3), t: 1 }({ ts: Timestamp(1567578788, 3), t: 1 }) 2019-09-04T06:33:11.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.647+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.678+0000 I COMMAND [conn344] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, ABD195D4AC4ED4988535B74DC3F4B42B5EC26ED1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:11.678+0000 D1 - [conn344] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:11.678+0000 W - [conn344] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:11.681+0000 I COMMAND [conn338] Command on database admin timed out waiting for read concern to be satisfied. 
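Note: conn344, conn338, and conn323 appear to be other nodes of this sharded cluster (their requests carry $configServerState and a read preference) fetching the HMAC signing keys from admin.system.keys. Their readConcern asks for majority commitment after opTime { ts: Timestamp(1566459168, 1), t: 92 }, but the heartbeats above show this set running in term 1 with lastCommittedOpTime Timestamp(1567578788, 3). Optime ordering treats the term as most significant, so a term-92 optime cannot become majority-committed while the set is in term 1; the wait never completes and each find fails with MaxTimeMSExpired (code 50) once its 30000 ms maxTimeMS elapses, which is the roughly 30 s latency visible further down. A term that dropped from 92 to 1 usually means the config server replica set was re-initialized while the requesters still cached an optime from the old incarnation. A reconstruction of the failing read in Python with pymongo, every value copied from the logged payload (the client code itself is illustrative, not how these requests were actually issued):

    from bson import Timestamp
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout

    client = MongoClient("cmodb803.togewa.com", 27019)

    # Command document reconstructed from the log entry below. The
    # afterOpTime term (92) exceeds the set's current term (1), so the
    # majority wait never finishes and the server answers with
    # MaxTimeMSExpired (code 50) after maxTimeMS.
    cmd = {
        "find": "system.keys",
        "filter": {"purpose": "HMAC",
                   "expiresAt": {"$gt": Timestamp(1579858365, 0)}},
        "sort": {"expiresAt": 1},
        "readConcern": {"level": "majority",
                        "afterOpTime": {"ts": Timestamp(1566459168, 1), "t": 92}},
        "maxTimeMS": 30000,
    }
    try:
        client.admin.command(cmd)
    except ExecutionTimeout as exc:  # pymongo's exception for MaxTimeMSExpired
        print("timed out as in the log:", exc)

The payload logged for conn338 follows: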
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578752, 1), signature: { hash: BinData(0, 5FD87181FFEFAB4F27FB7C85B30A6D21FE89ECFB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:11.681+0000 D1 - [conn338] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:11.681+0000 W - [conn338] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:11.688+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:11.691+0000 I COMMAND [conn323] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 83D99C38C0B1A9A64B0855E948984423AC47D765), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:11.691+0000 D1 - [conn323] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:11.691+0000 W - [conn323] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:11.695+0000 I - [conn344] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:11.695+0000 D1 COMMAND [conn344] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, ABD195D4AC4ED4988535B74DC3F4B42B5EC26ED1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:11.695+0000 D1 - [conn344] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:11.695+0000 W - [conn344] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:11.713+0000 I - [conn323] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 
0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : 
"/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, 
"buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:11.713+0000 D1 COMMAND [conn323] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 83D99C38C0B1A9A64B0855E948984423AC47D765), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:11.713+0000 D1 - [conn323] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:11.713+0000 W - [conn323] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:11.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.743+0000 I - [conn323] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" 
: [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : 
"F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) 
[0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:11.743+0000 W COMMAND [conn323] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:33:11.743+0000 I COMMAND [conn323] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 83D99C38C0B1A9A64B0855E948984423AC47D765), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30035ms 2019-09-04T06:33:11.743+0000 D2 NETWORK [conn323] Session from 10.108.2.57:34314 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:11.743+0000 I NETWORK [conn323] end connection 10.108.2.57:34314 (85 connections now open) 2019-09-04T06:33:11.753+0000 I - [conn338] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceSta
teMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:11.753+0000 D1 COMMAND [conn338] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578752, 1), signature: { hash: BinData(0, 5FD87181FFEFAB4F27FB7C85B30A6D21FE89ECFB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:11.753+0000 D1 - [conn338] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:11.753+0000 W - [conn338] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:11.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.784+0000 I - [conn344] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:11.784+0000 W COMMAND [conn344] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:11.784+0000 I COMMAND [conn344] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, ABD195D4AC4ED4988535B74DC3F4B42B5EC26ED1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:33:11.784+0000 D2 NETWORK [conn344] Session from 10.108.2.55:36732 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:11.785+0000 I NETWORK [conn344] end connection 10.108.2.55:36732 (84 connections now open) 2019-09-04T06:33:11.788+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:11.793+0000 I - [conn338] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceE
xecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : 
"DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) 
[0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:11.794+0000 W COMMAND [conn338] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:11.794+0000 I COMMAND [conn338] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578752, 1), signature: { hash: BinData(0, 5FD87181FFEFAB4F27FB7C85B30A6D21FE89ECFB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30083ms 2019-09-04T06:33:11.794+0000 D2 NETWORK [conn338] Session from 10.108.2.72:45814 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:11.794+0000 I NETWORK [conn338] end connection 10.108.2.72:45814 (83 connections now open) 2019-09-04T06:33:11.795+0000 D2 COMMAND [conn81] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.795+0000 I COMMAND [conn81] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.796+0000 D2 COMMAND [conn81] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578788, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578789, 2), signature: { hash: BinData(0, AF3E5F6D729C21BC3F3C9F2E05C578CE3E4C4CDB), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578788, 3), t: 1 } }, $db: "config" } 2019-09-04T06:33:11.796+0000 D1 COMMAND [conn81] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578788, 3), t: 1 } } } 2019-09-04T06:33:11.796+0000 D3 STORAGE [conn81] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:33:11.796+0000 D1 COMMAND [conn81] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578788, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578789, 2), signature: { hash: BinData(0, AF3E5F6D729C21BC3F3C9F2E05C578CE3E4C4CDB), keyId: 
6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578788, 3), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578788, 3) 2019-09-04T06:33:11.796+0000 D5 QUERY [conn81] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:33:11.796+0000 D5 QUERY [conn81] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:33:11.796+0000 D5 QUERY [conn81] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:33:11.796+0000 D5 QUERY [conn81] Rated tree: $and 2019-09-04T06:33:11.796+0000 D5 QUERY [conn81] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:11.796+0000 D5 QUERY [conn81] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:11.796+0000 D2 QUERY [conn81] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:11.796+0000 D3 STORAGE [conn81] WT begin_transaction for snapshot id 16588 2019-09-04T06:33:11.797+0000 D3 STORAGE [conn81] WT rollback_transaction for snapshot id 16588 2019-09-04T06:33:11.797+0000 I COMMAND [conn81] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578788, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578789, 2), signature: { hash: BinData(0, AF3E5F6D729C21BC3F3C9F2E05C578CE3E4C4CDB), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578788, 3), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:33:11.811+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.862+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.862+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.865+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42202 #374 (84 connections now open) 2019-09-04T06:33:11.865+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:11.866+0000 D2 COMMAND [conn374] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: 
"x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:11.866+0000 I NETWORK [conn374] received client metadata from 10.108.2.48:42202 conn374: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:11.866+0000 I COMMAND [conn374] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:11.866+0000 D2 COMMAND [conn374] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578790, 1), signature: { hash: BinData(0, 48B30A814E9398F4B8D261A3888218A13169B002), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:11.866+0000 D1 REPL [conn374] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578788, 3), t: 1 } 2019-09-04T06:33:11.866+0000 D3 REPL [conn374] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.876+0000 2019-09-04T06:33:11.867+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51892 #375 (85 connections now open) 2019-09-04T06:33:11.867+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:11.867+0000 D2 COMMAND [conn375] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:11.867+0000 I NETWORK [conn375] received client metadata from 10.108.2.74:51892 conn375: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:11.867+0000 I COMMAND [conn375] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} 
protocol:op_query 0ms 2019-09-04T06:33:11.867+0000 D2 COMMAND [conn375] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, D1ABB9AD25439133DC00295B9D506CD1C692B624), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:33:11.867+0000 D1 REPL [conn375] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578788, 3), t: 1 } 2019-09-04T06:33:11.867+0000 D3 REPL [conn375] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.877+0000 2019-09-04T06:33:11.870+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.870+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.872+0000 D2 COMMAND [conn359] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 7C0CF3FE10B568392F8A27C39D14989CCAE09485), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:11.872+0000 D1 REPL [conn359] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578788, 3), t: 1 } 2019-09-04T06:33:11.872+0000 D3 REPL [conn359] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.882+0000 2019-09-04T06:33:11.883+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34346 #376 (86 connections now open) 2019-09-04T06:33:11.883+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:11.883+0000 D2 COMMAND [conn376] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:11.883+0000 I NETWORK [conn376] received client metadata from 10.108.2.57:34346 conn376: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:11.883+0000 I COMMAND [conn376] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, 
maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:11.887+0000 D2 COMMAND [conn376] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 4F311D3E6BCB85B8F76274CF8447E974E8C1C19D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:11.888+0000 D1 REPL [conn376] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578788, 3), t: 1 } 2019-09-04T06:33:11.888+0000 D3 REPL [conn376] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.897+0000 2019-09-04T06:33:11.888+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:11.898+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.898+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.903+0000 D2 COMMAND [conn366] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 17FA8619F0EE1C4A6A3FDB688C10EE020E3FDDEE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:11.903+0000 D1 REPL [conn366] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578788, 3), t: 1 } 2019-09-04T06:33:11.903+0000 D3 REPL [conn366] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.913+0000 2019-09-04T06:33:11.920+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.920+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.924+0000 D2 COMMAND [conn373] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578782, 1), signature: { hash: BinData(0, F83DDE0D03ABE82D40CAD12EB3845998BB8EFADA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:11.924+0000 D1 REPL [conn373] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578788, 3), t: 1 } 2019-09-04T06:33:11.924+0000 D3 REPL [conn373] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:33:41.934+0000 2019-09-04T06:33:11.924+0000 D2 COMMAND [conn367] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 7C0CF3FE10B568392F8A27C39D14989CCAE09485), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:11.924+0000 D1 REPL [conn367] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578788, 3), t: 1 } 2019-09-04T06:33:11.924+0000 D3 REPL [conn367] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000 2019-09-04T06:33:11.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:11.988+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:11.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:11.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:12.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.088+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:12.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.189+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:12.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", 
configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 693362354561395B808EC481711EA45668A1207C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:12.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:12.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 693362354561395B808EC481711EA45668A1207C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:12.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 693362354561395B808EC481711EA45668A1207C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:12.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567) } 2019-09-04T06:33:12.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 693362354561395B808EC481711EA45668A1207C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:12.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:33:12.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.289+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:12.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.361+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.361+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.369+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.370+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.389+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:12.398+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.398+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.419+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.419+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.489+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:12.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.589+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:12.606+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:12.606+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:12.606+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578788, 3) 2019-09-04T06:33:12.606+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16627 2019-09-04T06:33:12.606+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: 
"local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:12.606+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:12.606+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16627 2019-09-04T06:33:12.607+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16630 2019-09-04T06:33:12.607+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16630 2019-09-04T06:33:12.607+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578788, 3), t: 1 }({ ts: Timestamp(1567578788, 3), t: 1 }) 2019-09-04T06:33:12.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.647+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.689+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:12.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.790+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:12.812+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:12.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1130) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:12.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1130 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:22.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:12.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:40.839+0000 2019-09-04T06:33:12.838+0000 D2 ASIO [Replication] Request 1130 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } 2019-09-04T06:33:12.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:12.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:12.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1130) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } 2019-09-04T06:33:12.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:33:12.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to 
cmodb804.togewa.com:27019 at 2019-09-04T06:33:14.838Z 2019-09-04T06:33:12.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:40.839+0000 2019-09-04T06:33:12.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:12.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1131) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:12.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1131 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:22.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:12.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:40.839+0000 2019-09-04T06:33:12.839+0000 D2 ASIO [Replication] Request 1131 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } 2019-09-04T06:33:12.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:12.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:12.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1131) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, 
lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } 2019-09-04T06:33:12.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:12.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:33:21.876+0000 2019-09-04T06:33:12.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:33:23.740+0000 2019-09-04T06:33:12.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:12.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:14.839Z 2019-09-04T06:33:12.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:12.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:42.839+0000 2019-09-04T06:33:12.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:42.839+0000 2019-09-04T06:33:12.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.890+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:12.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:12.990+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:12.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:12.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:13.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 693362354561395B808EC481711EA45668A1207C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:13.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:13.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 
1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 693362354561395B808EC481711EA45668A1207C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:13.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 693362354561395B808EC481711EA45668A1207C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:13.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567) } 2019-09-04T06:33:13.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 693362354561395B808EC481711EA45668A1207C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.090+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:13.099+0000 I COMMAND [conn341] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, ABD195D4AC4ED4988535B74DC3F4B42B5EC26ED1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:33:13.099+0000 D1 - [conn341] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:13.099+0000 W - [conn341] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:13.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.121+0000 I - [conn341] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:13.121+0000 D1 COMMAND [conn341] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, ABD195D4AC4ED4988535B74DC3F4B42B5EC26ED1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:13.121+0000 D1 - [conn341] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:13.121+0000 W - [conn341] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:13.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.147+0000 I - [conn341] 0x56174b707c81 
0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : 
"E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : 
"/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) 
[0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:13.147+0000 W COMMAND [conn341] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:33:13.147+0000 I COMMAND [conn341] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, ABD195D4AC4ED4988535B74DC3F4B42B5EC26ED1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30036ms 2019-09-04T06:33:13.147+0000 D2 NETWORK [conn341] Session from 10.108.2.59:48418 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:13.147+0000 I NETWORK [conn341] end connection 10.108.2.59:48418 (85 connections now open) 2019-09-04T06:33:13.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.190+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:13.229+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.229+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:13.235+0000 D4 - [FlowControlRefresher] Refreshing tickets.
Before: 1000000000 Now: 1000000000 2019-09-04T06:33:13.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.280+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47284 #377 (86 connections now open) 2019-09-04T06:33:13.280+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:13.280+0000 D2 COMMAND [conn377] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:13.280+0000 I NETWORK [conn377] received client metadata from 10.108.2.52:47284 conn377: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:13.280+0000 I COMMAND [conn377] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:13.284+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.284+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.290+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:13.295+0000 I COMMAND [conn352] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 70ADEB07C63950ADB1CBAACD1EFFE951853A061C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:13.295+0000 D1 - [conn352] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:13.295+0000 W - [conn352] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:13.311+0000 I - [conn352] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:13.311+0000 D1 COMMAND [conn352] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 70ADEB07C63950ADB1CBAACD1EFFE951853A061C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:13.312+0000 D1 - [conn352] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:13.312+0000 W - [conn352] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:13.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.332+0000 I - [conn352] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"}
,{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : 
"E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:13.332+0000 W COMMAND [conn352] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:33:13.332+0000 I COMMAND [conn352] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 70ADEB07C63950ADB1CBAACD1EFFE951853A061C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:33:13.332+0000 D2 NETWORK [conn352] Session from 10.108.2.52:47260 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:13.332+0000 I NETWORK [conn352] end connection 10.108.2.52:47260 (85 connections now open) 2019-09-04T06:33:13.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.391+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:13.403+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36642 #378 (86 connections now open) 2019-09-04T06:33:13.403+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:13.403+0000 D2 COMMAND [conn378] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" 
} 2019-09-04T06:33:13.403+0000 I NETWORK [conn378] received client metadata from 10.108.2.45:36642 conn378: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:13.403+0000 I COMMAND [conn378] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:13.423+0000 I COMMAND [conn353] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 83D99C38C0B1A9A64B0855E948984423AC47D765), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:13.424+0000 D1 - [conn353] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:13.424+0000 W - [conn353] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:13.445+0000 I - [conn353] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:13.445+0000 D1 COMMAND [conn353] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 83D99C38C0B1A9A64B0855E948984423AC47D765), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:13.445+0000 D1 - [conn353] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:13.445+0000 W - [conn353] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:13.471+0000 I - [conn353] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 
0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { 
"b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : 
"/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:13.471+0000 W COMMAND [conn353] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:33:13.471+0000 I COMMAND [conn353] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 83D99C38C0B1A9A64B0855E948984423AC47D765), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30036ms 2019-09-04T06:33:13.471+0000 D2 NETWORK [conn353] Session from 10.108.2.45:36622 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:13.471+0000 I NETWORK [conn353] end connection 10.108.2.45:36622 (85 connections now open) 2019-09-04T06:33:13.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.491+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:13.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.498+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49294 #379 (86 connections now open) 2019-09-04T06:33:13.499+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:13.499+0000 D2 COMMAND [conn379] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:13.499+0000 I NETWORK [conn379] received client metadata from 10.108.2.54:49294 conn379: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:13.499+0000 I COMMAND [conn379] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:13.499+0000 D2 COMMAND [conn379] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 4F311D3E6BCB85B8F76274CF8447E974E8C1C19D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:13.499+0000 D1 REPL [conn379] waitUntilOpTime: 
waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578788, 3), t: 1 } 2019-09-04T06:33:13.499+0000 D3 REPL [conn379] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:43.509+0000 2019-09-04T06:33:13.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.591+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:13.606+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:13.606+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:13.606+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578788, 3) 2019-09-04T06:33:13.606+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16662 2019-09-04T06:33:13.606+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:13.606+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:13.606+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16662 2019-09-04T06:33:13.607+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16665 2019-09-04T06:33:13.607+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16665 2019-09-04T06:33:13.607+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578788, 3), t: 1 }({ ts: Timestamp(1567578788, 3), t: 1 }) 2019-09-04T06:33:13.609+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:13.609+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:13.609+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1132 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:43.609+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: 
Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:13.609+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.609+0000 2019-09-04T06:33:13.609+0000 D2 ASIO [RS] Request 1132 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } 2019-09-04T06:33:13.609+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:13.609+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:13.609+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.609+0000 2019-09-04T06:33:13.609+0000 D2 ASIO [RS] Request 1125 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpApplied: { ts: Timestamp(1567578788, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } 2019-09-04T06:33:13.609+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpApplied: { ts: Timestamp(1567578788, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:13.609+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:13.609+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:13.609+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:23.740+0000 2019-09-04T06:33:13.609+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:24.736+0000 2019-09-04T06:33:13.609+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:13.609+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:42.839+0000 2019-09-04T06:33:13.609+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1133 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:23.609+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578788, 3), t: 1 } } 2019-09-04T06:33:13.609+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:38.609+0000 2019-09-04T06:33:13.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.647+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:33:13.691+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:13.729+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.729+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.784+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.784+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.791+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:13.811+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.891+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:13.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:13.985+0000 I NETWORK [listener] connection accepted from 10.108.2.62:53534 #380 (87 connections now open) 2019-09-04T06:33:13.985+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:13.985+0000 D2 COMMAND [conn380] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:13.985+0000 I NETWORK [conn380] received client metadata from 10.108.2.62:53534 conn380: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:13.985+0000 I COMMAND [conn380] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:13.992+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:13.992+0000 I COMMAND [conn322] Command on database admin timed out 
waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 70ADEB07C63950ADB1CBAACD1EFFE951853A061C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:13.992+0000 D1 - [conn322] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:13.992+0000 W - [conn322] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:13.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:13.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:14.009+0000 I - [conn322] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561
748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : 
"BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:14.009+0000 D1 COMMAND [conn322] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 70ADEB07C63950ADB1CBAACD1EFFE951853A061C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:14.009+0000 D1 - [conn322] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:14.009+0000 W - [conn322] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:14.029+0000 I - [conn322] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:14.029+0000 W COMMAND [conn322] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:14.029+0000 I COMMAND [conn322] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 70ADEB07C63950ADB1CBAACD1EFFE951853A061C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:33:14.029+0000 D2 NETWORK [conn322] Session from 10.108.2.62:53500 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:14.029+0000 I NETWORK [conn322] end connection 10.108.2.62:53500 (86 connections now open) 2019-09-04T06:33:14.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.092+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:14.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.192+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:14.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 500F43EC8D60277005B9049962EFF23BF051BFCF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:14.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:14.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 500F43EC8D60277005B9049962EFF23BF051BFCF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:14.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 500F43EC8D60277005B9049962EFF23BF051BFCF), keyId: 6727891476899954718 } }, $db: "admin" } 
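The MaxTimeMSExpired failure logged above on conn322 appears to be the $clusterTime signing-key refresh that other nodes in a sharded cluster periodically run against the config servers: a find on admin.system.keys with readConcern { level: "majority", afterOpTime: ... } and maxTimeMS: 30000, which ran 30029ms before the limit (errCode:50) fired. A rough mongo-shell equivalent of that command, with the filter, sort, and time limit copied from the log (the Timestamp literal is just the logged value, kept for illustration), might look like:

    // sketch of the logged key-refresh find, run against the config server
    db.getSiblingDB("admin").system.keys
      .find({ purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } })
      .sort({ expiresAt: 1 })
      .readConcern("majority")
      .maxTimeMS(30000)

On a healthy config server this returns the active HMAC signing keys almost instantly; the backtraces above suggest the 30s budget was instead spent waiting (first in waitForReadConcern for the requested afterOpTime, then on the global lock while logging the slow operation), so the timeout points at replication lag or lock contention rather than at the query itself.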
2019-09-04T06:33:14.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567) } 2019-09-04T06:33:14.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 500F43EC8D60277005B9049962EFF23BF051BFCF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:14.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:14.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.292+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:14.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.392+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:14.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.492+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:14.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.593+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:14.607+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:14.607+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:14.607+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578788, 3) 2019-09-04T06:33:14.607+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16694 2019-09-04T06:33:14.607+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: 
"local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:14.607+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:14.607+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16694 2019-09-04T06:33:14.607+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16697 2019-09-04T06:33:14.607+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16697 2019-09-04T06:33:14.607+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578788, 3), t: 1 }({ ts: Timestamp(1567578788, 3), t: 1 }) 2019-09-04T06:33:14.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.647+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.693+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:14.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.793+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:14.811+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:14.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1134) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:14.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1134 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:24.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:14.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:42.839+0000 2019-09-04T06:33:14.838+0000 D2 ASIO [Replication] Request 1134 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: 
"cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } 2019-09-04T06:33:14.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:14.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:14.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1134) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } 2019-09-04T06:33:14.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:33:14.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:16.838Z 2019-09-04T06:33:14.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:42.839+0000 2019-09-04T06:33:14.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:14.839+0000 D2 REPL_HB [replexec-3] Sending 
heartbeat (requestId: 1135) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:14.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1135 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:24.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:14.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:42.839+0000 2019-09-04T06:33:14.839+0000 D2 ASIO [Replication] Request 1135 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } 2019-09-04T06:33:14.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:14.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:14.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1135) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578788, 3) } 2019-09-04T06:33:14.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:14.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:33:24.736+0000 2019-09-04T06:33:14.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:33:25.031+0000 2019-09-04T06:33:14.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:14.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:16.839Z 2019-09-04T06:33:14.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:14.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:44.839+0000 2019-09-04T06:33:14.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:44.839+0000 2019-09-04T06:33:14.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.893+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:14.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:14.993+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:14.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:14.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:15.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 500F43EC8D60277005B9049962EFF23BF051BFCF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:15.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:15.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 500F43EC8D60277005B9049962EFF23BF051BFCF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:15.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { 
replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 500F43EC8D60277005B9049962EFF23BF051BFCF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:15.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), opTime: { ts: Timestamp(1567578788, 3), t: 1 }, wallTime: new Date(1567578788567) } 2019-09-04T06:33:15.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 500F43EC8D60277005B9049962EFF23BF051BFCF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.093+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:15.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.193+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:15.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:15.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:33:15.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.294+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:15.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.372+0000 D2 ASIO [RS] Request 1133 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578795, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578795364), o: { $v: 1, $set: { ping: new Date(1567578795361), up: 2695 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpApplied: { ts: Timestamp(1567578795, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578795, 1) } 2019-09-04T06:33:15.372+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578795, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578795364), o: { $v: 1, $set: { ping: new Date(1567578795361), up: 2695 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpApplied: { ts: Timestamp(1567578795, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578795, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:15.372+0000 D3 EXECUTOR 
[replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:15.372+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578795, 1) and ending at ts: Timestamp(1567578795, 1) 2019-09-04T06:33:15.372+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:25.031+0000 2019-09-04T06:33:15.372+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:25.701+0000 2019-09-04T06:33:15.372+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:15.372+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:44.839+0000 2019-09-04T06:33:15.372+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578795, 1), t: 1 } 2019-09-04T06:33:15.372+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:15.372+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:15.372+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578788, 3) 2019-09-04T06:33:15.372+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16721 2019-09-04T06:33:15.372+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:15.372+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:15.373+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16721 2019-09-04T06:33:15.373+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:15.373+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:15.373+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:33:15.373+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578788, 3) 2019-09-04T06:33:15.373+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16724 2019-09-04T06:33:15.373+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:15.373+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:15.373+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578795, 1) } 2019-09-04T06:33:15.373+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16724 2019-09-04T06:33:15.373+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16698 2019-09-04T06:33:15.373+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16698 2019-09-04T06:33:15.373+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16727 
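The batch being applied here is a single oplog entry: an op: "u" update to config.mongos recording a mongos ping (visible in the Request 1133 response above). Once applied, the same document can be pulled from this node's own oplog in the shell; a minimal sketch, with the namespace and op type taken from the log:

    // fetch the most recent mongos-ping update from the local oplog
    db.getSiblingDB("local").oplog.rs
      .find({ ns: "config.mongos", op: "u" })
      .sort({ $natural: -1 })
      .limit(1)
      .pretty()

The surrounding ReplBatcher/rsSync-0 entries are the usual 4.2 batch-apply sequence: set the oplog truncate-after point to the batch's last optime, write the entry, reset the truncate point to Timestamp(0, 0), then advance minvalid and appliedThrough, which is exactly the progression the next few entries show.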
2019-09-04T06:33:15.373+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16727
2019-09-04T06:33:15.373+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:15.373+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 16729
2019-09-04T06:33:15.373+0000 D4 STORAGE [repl-writer-worker-14] inserting record with timestamp Timestamp(1567578795, 1)
2019-09-04T06:33:15.373+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578795, 1)
2019-09-04T06:33:15.373+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 16729
2019-09-04T06:33:15.373+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:15.373+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:33:15.373+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16728
2019-09-04T06:33:15.373+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16728
2019-09-04T06:33:15.373+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16731
2019-09-04T06:33:15.373+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16731
2019-09-04T06:33:15.373+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578795, 1), t: 1 }({ ts: Timestamp(1567578795, 1), t: 1 })
2019-09-04T06:33:15.373+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578795, 1)
2019-09-04T06:33:15.373+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16732
2019-09-04T06:33:15.373+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578795, 1) } } ] } sort: {} projection: {}
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578795, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578795, 1) || First: notFirst: full path: ts
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578795, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578795, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578795, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
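At this point the planner has rated both branches of the $or (and then the $or as a whole) against the only index on local.replset.minvalid, { _id: 1 }, and found no indexed solutions, so it falls back to the collection scan emitted just below. The same decision can be reproduced interactively with explain(), using the filter literals from the trace:

    // replay the minvalid read with the planner's own filter
    db.getSiblingDB("local").replset.minvalid.find({
      $or: [
        { t: { $lt: 1 } },
        { t: 1, ts: { $lt: Timestamp(1567578795, 1) } }
      ]
    }).explain("queryPlanner")

For a single-document internal collection like replset.minvalid a COLLSCAN is the expected and cheapest plan; the verbose dump is an artifact of running at query log level 5 (hence the D5 entries), not a sign of a missing index.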
2019-09-04T06:33:15.373+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578795, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:15.373+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16732
2019-09-04T06:33:15.374+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:15.374+0000 D3 STORAGE [repl-writer-worker-3] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:15.374+0000 D3 REPL [repl-writer-worker-3] applying op: { ts: Timestamp(1567578795, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578795364), o: { $v: 1, $set: { ping: new Date(1567578795361), up: 2695 } } }, oplog application mode: Secondary
2019-09-04T06:33:15.374+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578795, 1)
2019-09-04T06:33:15.374+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 16734
2019-09-04T06:33:15.374+0000 D2 QUERY [repl-writer-worker-3] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:33:15.374+0000 D4 WRITE [repl-writer-worker-3] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:33:15.374+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 16734
2019-09-04T06:33:15.374+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:15.374+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578795, 1), t: 1 }({ ts: Timestamp(1567578795, 1), t: 1 })
2019-09-04T06:33:15.374+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578795, 1)
2019-09-04T06:33:15.374+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16733
2019-09-04T06:33:15.374+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:15.374+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:15.374+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:33:15.374+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:15.374+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:15.374+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:15.374+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16733 2019-09-04T06:33:15.374+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578795, 1) 2019-09-04T06:33:15.374+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16737 2019-09-04T06:33:15.374+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16737 2019-09-04T06:33:15.374+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578795, 1), t: 1 }({ ts: Timestamp(1567578795, 1), t: 1 }) 2019-09-04T06:33:15.374+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:15.374+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:15.374+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1136 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:45.374+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:15.374+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:45.374+0000 2019-09-04T06:33:15.374+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578795, 1), t: 1 } 2019-09-04T06:33:15.375+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1137 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:25.375+0000 cmd:{ getMore: 2779728788818727477, 
collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578788, 3), t: 1 } } 2019-09-04T06:33:15.375+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:45.374+0000 2019-09-04T06:33:15.375+0000 D2 ASIO [RS] Request 1136 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578795, 1) } 2019-09-04T06:33:15.375+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578788, 3), t: 1 }, lastCommittedWall: new Date(1567578788567), lastOpVisible: { ts: Timestamp(1567578788, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578788, 3), $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578795, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:15.375+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:15.375+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:45.374+0000 2019-09-04T06:33:15.380+0000 D2 ASIO [RS] Request 1137 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpApplied: { ts: Timestamp(1567578795, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578795, 1), $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578795, 1) } 2019-09-04T06:33:15.380+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new 
Date(1567578795364), lastOpApplied: { ts: Timestamp(1567578795, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578795, 1), $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578795, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:15.380+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:15.380+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:15.380+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.380+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.380+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578790, 1) 2019-09-04T06:33:15.380+0000 D3 REPL [conn379] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn379] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:43.509+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn343] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn343] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:22.594+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn357] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn357] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.823+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn364] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn364] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.767+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn361] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn361] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn355] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn355] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:24.151+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn358] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn358] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.833+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn359] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn359] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.882+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn376] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn376] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:33:41.897+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn373] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn373] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn367] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn367] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn366] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn366] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.913+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn375] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn375] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.877+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn374] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn374] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.876+0000 2019-09-04T06:33:15.380+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:25.701+0000 2019-09-04T06:33:15.380+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:26.288+0000 2019-09-04T06:33:15.380+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:15.380+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:44.839+0000 2019-09-04T06:33:15.380+0000 D3 REPL [conn337] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.380+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1138 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:25.380+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578795, 1), t: 1 } } 2019-09-04T06:33:15.381+0000 D3 REPL [conn337] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.660+0000 2019-09-04T06:33:15.381+0000 D3 REPL [conn370] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.381+0000 D3 REPL [conn370] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:29.874+0000 2019-09-04T06:33:15.381+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:45.374+0000 2019-09-04T06:33:15.381+0000 D3 REPL [conn363] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.381+0000 D3 REPL [conn363] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.670+0000 2019-09-04T06:33:15.381+0000 D3 REPL [conn354] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.381+0000 D3 REPL [conn354] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.413+0000 2019-09-04T06:33:15.381+0000 D3 REPL [conn360] Got notified of new snapshot: { ts: 
Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.381+0000 D3 REPL [conn360] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:33:15.381+0000 D3 REPL [conn365] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.381+0000 D3 REPL [conn365] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:25.060+0000 2019-09-04T06:33:15.381+0000 D3 REPL [conn327] Got notified of new snapshot: { ts: Timestamp(1567578795, 1), t: 1 }, 2019-09-04T06:33:15.364+0000 2019-09-04T06:33:15.381+0000 D3 REPL [conn327] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:16.840+0000 2019-09-04T06:33:15.382+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:33:15.382+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:15.382+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:15.382+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1139 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:45.382+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, durableWallTime: new Date(1567578788567), appliedOpTime: { ts: Timestamp(1567578788, 3), t: 1 }, appliedWallTime: new Date(1567578788567), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:15.382+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:45.374+0000 2019-09-04T06:33:15.382+0000 D2 ASIO [RS] Request 1139 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 
1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578795, 1), $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578795, 1) } 2019-09-04T06:33:15.382+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578795, 1), $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578795, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:15.382+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:15.382+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:45.374+0000 2019-09-04T06:33:15.394+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:15.473+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578795, 1) 2019-09-04T06:33:15.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.494+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:15.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.594+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:15.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.646+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.662+0000 D2 COMMAND [conn18] 
run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.694+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:15.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.794+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:15.812+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.894+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:15.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:15.995+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:15.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:15.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:16.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.095+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:16.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.099+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.146+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.162+0000 I COMMAND [conn18] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.195+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:16.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 7EC08BA303BCF99177B2F6878748752B023CA683), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:16.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:16.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 7EC08BA303BCF99177B2F6878748752B023CA683), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:16.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 7EC08BA303BCF99177B2F6878748752B023CA683), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:16.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), opTime: { ts: Timestamp(1567578795, 1), t: 1 }, wallTime: new Date(1567578795364) } 2019-09-04T06:33:16.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 7EC08BA303BCF99177B2F6878748752B023CA683), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:16.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:33:16.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.295+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:16.312+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.312+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.325+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.360+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.361+0000 D2 COMMAND [conn125] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:33:16.361+0000 I COMMAND [conn125] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.371+0000 D2 COMMAND [conn178] run command config.$cmd { ping: 1.0, lsid: { id: UUID("4ca3bc30-0f16-4335-a15f-3e7d48b5566e") }, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:33:16.372+0000 I COMMAND [conn178] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("4ca3bc30-0f16-4335-a15f-3e7d48b5566e") }, $clusterTime: { clusterTime: Timestamp(1567578735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.373+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:16.373+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:16.373+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578795, 1) 2019-09-04T06:33:16.373+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16768 2019-09-04T06:33:16.373+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:16.373+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 
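The stretch above is the healthy steady state of this secondary: the oplog fetcher tails cmodb804.togewa.com:27019 with awaitData getMores (batchSize 13981010, maxTimeMS 5000), the reporter answers each applied batch with replSetUpdatePosition, every advance of the majority commit point to { ts: Timestamp(1567578795, 1), t: 1 } wakes the connections parked in waitUntilOpTime, and the ReplBatcher confirms local.oplog.rs is the expected 1 GB capped collection (size: 1073741824.0). A minimal sketch for checking the same optimes from a shell connected directly to this node; field paths follow 4.2 replSetGetStatus output, so treat exact names as an assumption:

    // Compare what this member has applied against the majority commit point.
    var s = rs.status();
    printjson(s.optimes.appliedOpTime);        // last op applied locally
    printjson(s.optimes.lastCommittedOpTime);  // majority commit point
    print(s.syncingTo);                        // expected: cmodb804.togewa.com:27019

If appliedOpTime tracks the sync source while lastCommittedOpTime advances with it, the replication loop seen in these entries is working as designed.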
2019-09-04T06:33:16.373+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16768
2019-09-04T06:33:16.374+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16771
2019-09-04T06:33:16.374+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16771
2019-09-04T06:33:16.374+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578795, 1), t: 1 }({ ts: Timestamp(1567578795, 1), t: 1 })
2019-09-04T06:33:16.395+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:16.414+0000 I COMMAND [conn354] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 83D99C38C0B1A9A64B0855E948984423AC47D765), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:16.414+0000 D1 - [conn354] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:33:16.414+0000 W - [conn354] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:16.433+0000 I - [conn354] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:16.433+0000 D1 COMMAND [conn354] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 83D99C38C0B1A9A64B0855E948984423AC47D765), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:16.433+0000 D1 - [conn354] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:16.433+0000 W - [conn354] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:16.457+0000 I - [conn354] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 
0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { 
"b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : 
"/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:16.458+0000 W COMMAND [conn354] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:16.458+0000 I COMMAND [conn354] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 83D99C38C0B1A9A64B0855E948984423AC47D765), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:33:16.458+0000 D2 NETWORK [conn354] Session from 10.108.2.57:34332 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:16.458+0000 I NETWORK [conn354] end connection 10.108.2.57:34332 (85 connections now open) 2019-09-04T06:33:16.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.495+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:16.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.595+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:16.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.647+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.696+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:16.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.796+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:16.812+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 
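The failure on conn354 above is self-diagnosing once the optimes are compared. The incoming find on config.shards carries readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, but this replica set (replicaSetId ObjectId('5d5e459bac9313827bdd88e9')) is in term 1; an optime from term 92 can never become majority-committed here, so waitForReadConcern blocks until the 30000 ms maxTimeMS expires (logged as 30030ms, errName MaxTimeMSExpired, errCode 50). The $configServerState field marks the caller as another cluster member holding a stale cached config optime, which is consistent with this config server replica set having been re-initialized while the rest of the cluster kept old state. The second backtrace is collateral rather than a separate bug: after the timeout, CurOp::completeAndLogOperation took the global-lock path to gather storage statistics for the slow-operation log line and ran into the same expired deadline, hence the warning "Unable to gather storage statistics for a slow operation due to lock aquire timeout" (the misspelling is mongod's own). A minimal sketch of the failing read reissued by hand from a shell on this node; whether a plain client may pass the internal afterOpTime field varies by version, so treat this as illustrative:

    // Ask for data majority-committed at an optime from term 92 on a set in
    // term 1: the wait cannot complete, so maxTimeMS fires.
    db.getSiblingDB("config").runCommand({
      find: "shards",
      readConcern: { level: "majority",
                     afterOpTime: { ts: Timestamp(1566459168, 1), t: NumberLong(92) } },
      maxTimeMS: 30000
    })
    // expected: { ok: 0, code: 50, codeName: "MaxTimeMSExpired", ... }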
2019-09-04T06:33:16.812+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.813+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38782 #381 (86 connections now open) 2019-09-04T06:33:16.813+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:16.813+0000 D2 COMMAND [conn381] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:16.813+0000 I NETWORK [conn381] received client metadata from 10.108.2.44:38782 conn381: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:16.813+0000 I COMMAND [conn381] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:16.817+0000 I NETWORK [listener] connection accepted from 10.108.2.51:59254 #382 (87 connections now open) 2019-09-04T06:33:16.817+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:16.817+0000 D2 COMMAND [conn382] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:16.817+0000 I NETWORK [conn382] received client metadata from 10.108.2.51:59254 conn382: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:16.817+0000 I COMMAND [conn382] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:16.825+0000 I COMMAND [conn357] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 83D99C38C0B1A9A64B0855E948984423AC47D765), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:16.825+0000 D1 - [conn357] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:16.825+0000 W - [conn357] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:16.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.825+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.835+0000 I COMMAND [conn358] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578759, 1), signature: { hash: BinData(0, 8EE15F3E3BC68F6992D473DC2636D9C138513069), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:16.835+0000 D1 - [conn358] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:16.835+0000 W - [conn358] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:16.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:16.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1140) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:16.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1140 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:26.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:16.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:44.839+0000 2019-09-04T06:33:16.838+0000 D2 ASIO [Replication] Request 1140 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), opTime: { ts: Timestamp(1567578795, 1), t: 1 }, wallTime: new Date(1567578795364), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
2019-09-04T06:33:16.838+0000 D2 ASIO [Replication] Request 1140 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), opTime: { ts: Timestamp(1567578795, 1), t: 1 }, wallTime: new Date(1567578795364), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578795, 1), $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578795, 1) }
2019-09-04T06:33:16.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ [response document identical to the request 1140 response above] } target: cmodb804.togewa.com:27019
2019-09-04T06:33:16.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:16.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1140) from cmodb804.togewa.com:27019, { [response document identical to the request 1140 response above] }
2019-09-04T06:33:16.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:33:16.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:18.838Z
2019-09-04T06:33:16.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:44.839+0000
2019-09-04T06:33:16.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:16.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1141) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:16.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1141 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:26.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:16.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:44.839+0000
2019-09-04T06:33:16.839+0000 D2 ASIO [Replication] Request 1141 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), opTime: { ts: Timestamp(1567578795, 1), t: 1 }, wallTime: new Date(1567578795364), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578795, 1), $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578795, 1) }
2019-09-04T06:33:16.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ [response document identical to the request 1141 response above] } target: cmodb802.togewa.com:27019
2019-09-04T06:33:16.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:16.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1141) from cmodb802.togewa.com:27019, { [response document identical to the request 1141 response above] }
2019-09-04T06:33:16.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:33:16.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:33:26.288+0000
2019-09-04T06:33:16.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:33:27.672+0000
2019-09-04T06:33:16.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:33:16.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:18.839Z
2019-09-04T06:33:16.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:16.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:46.839+0000
2019-09-04T06:33:16.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:46.839+0000
2019-09-04T06:33:16.841+0000 I COMMAND [conn327] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, B25952C3221E28D91A8CF134FCDA55710368078E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:33:16.841+0000 D1 - [conn327] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:33:16.841+0000 W - [conn327] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:16.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:16.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:16.855+0000 I - [conn358] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:16.855+0000 D1 COMMAND [conn358] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578759, 1), signature: { hash: BinData(0, 8EE15F3E3BC68F6992D473DC2636D9C138513069), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:16.855+0000 D1 - [conn358] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:16.855+0000 W - [conn358] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:16.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.861+0000 I - [conn357] 0x56174b707c81 
2019-09-04T06:33:16.861+0000 D1 COMMAND [conn357] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 83D99C38C0B1A9A64B0855E948984423AC47D765), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:16.861+0000 D1 - [conn357] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:33:16.861+0000 W - [conn357] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:16.881+0000 I - [conn327] [duplicate backtrace elided; raw addresses, frame list, and somap identical to the conn358 backtrace above]
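As captured, this log had many entries fused onto single physical lines; the surrounding entries have been re-broken one per line. A mongod 4.2 entry always starts with an ISO-8601 timestamp followed by a severity token (F, E, W, I, or D1-D5), which gives a split point that does not fire on payload-embedded dates such as "expDate:2019-09-04T06:33:26.838+0000". A sketch of that splitting, for Python 3.7+ (where re.split accepts zero-width patterns):

```python
import re

# Split ahead of "<ISO-8601 timestamp> <severity> " only; requiring the
# severity token avoids false splits on dates inside command payloads.
ENTRY_START = re.compile(
    r"(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}\+\d{4} (?:[FEWI]|D[1-5]) )"
)

def split_entries(blob):
    """Break a run of fused mongod log text into one entry per string."""
    return [chunk.strip() for chunk in ENTRY_START.split(blob) if chunk.strip()]

fused = (
    "2019-09-04T06:33:16.825+0000 D1 - [conn357] User Assertion: MaxTimeMSExpired "
    "2019-09-04T06:33:16.825+0000 W - [conn357] DBException thrown"
)
for entry in split_entries(fused):
    print(entry)
```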
: "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" 
}, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:16.881+0000 D1 COMMAND [conn327] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, B25952C3221E28D91A8CF134FCDA55710368078E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:16.881+0000 D1 - [conn327] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:16.882+0000 W - 
2019-09-04T06:33:16.882+0000 W - [conn327] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:16.896+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:16.903+0000 I - [conn357] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : {
"sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:33:16.903+0000 W COMMAND [conn357] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:33:16.903+0000 I COMMAND [conn357] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578758, 1), signature: { hash: BinData(0, 83D99C38C0B1A9A64B0855E948984423AC47D765), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30048ms
2019-09-04T06:33:16.903+0000 D2 NETWORK [conn357] Session from 10.108.2.44:38766 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:16.903+0000 I NETWORK [conn357] end connection 10.108.2.44:38766 (86 connections now open)
2019-09-04T06:33:16.928+0000 I - [conn327] [duplicate backtrace elided; raw addresses, frame list, and somap identical to the conn357 backtrace above]
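The slow-operation summary above is the machine-readable outcome for conn357: ok:0 with errName:MaxTimeMSExpired (errCode:50) after 30048ms against the requested maxTimeMS: 30000; the small overage and the "Unable to gather storage statistics" warning line up with the second backtrace, where CurOp::completeAndLogOperation itself times out acquiring the global lock for storage stats. For interactive diagnosis, the same key fetch can be replayed without the stale afterOpTime read concern. A hedged pymongo sketch, assuming direct network access to the config server named in this log (hostname/port copied from the entries above) and that this is permitted in the deployment:

```python
from pymongo import MongoClient
from bson.timestamp import Timestamp

# Hypothetical diagnostic replay; hostname/port are taken from this log.
client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

# Same key fetch as the timed-out operation, but without the stale
# readConcern.afterOpTime and with a short client-side budget. If this
# returns promptly, the 30 s timeouts above are the read-concern wait,
# not the query itself.
reply = client.admin.command({
    "find": "system.keys",
    "filter": {"purpose": "HMAC", "expiresAt": {"$gt": Timestamp(1579858365, 0)}},
    "sort": {"expiresAt": 1},
    "maxTimeMS": 2000,
})
for key_doc in reply["cursor"]["firstBatch"]:
    print(key_doc["_id"], key_doc["expiresAt"])
```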
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:16.928+0000 W COMMAND [conn327] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:16.928+0000 I COMMAND [conn327] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, B25952C3221E28D91A8CF134FCDA55710368078E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30050ms 2019-09-04T06:33:16.929+0000 D2 NETWORK [conn327] Session from 10.108.2.47:56608 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:16.929+0000 I NETWORK [conn327] end connection 10.108.2.47:56608 (85 connections now open) 2019-09-04T06:33:16.948+0000 I - [conn358] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_Z
N5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:16.949+0000 W COMMAND [conn358] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:33:16.949+0000 I COMMAND [conn358] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578759, 1), signature: { hash: BinData(0, 8EE15F3E3BC68F6992D473DC2636D9C138513069), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms 2019-09-04T06:33:16.949+0000 D2 NETWORK [conn358] Session from 10.108.2.51:59236 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:16.949+0000 I NETWORK [conn358] end connection 10.108.2.51:59236 (84 connections now open) 2019-09-04T06:33:16.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:16.996+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:16.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:16.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:17.013+0000 D2 COMMAND [conn381] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 4F311D3E6BCB85B8F76274CF8447E974E8C1C19D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:17.013+0000 D1 REPL [conn381] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578795, 1), t: 1 
} 2019-09-04T06:33:17.013+0000 D3 REPL [conn381] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000 2019-09-04T06:33:17.013+0000 D2 COMMAND [conn371] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578789, 1), signature: { hash: BinData(0, 0549CF4B3FA5453F46F43F76FB42633A9A8F3D20), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:17.013+0000 D1 REPL [conn371] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578795, 1), t: 1 } 2019-09-04T06:33:17.013+0000 D3 REPL [conn371] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000 2019-09-04T06:33:17.015+0000 I NETWORK [listener] connection accepted from 10.108.2.46:41096 #383 (85 connections now open) 2019-09-04T06:33:17.015+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:17.015+0000 D2 COMMAND [conn383] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:17.015+0000 I NETWORK [conn383] received client metadata from 10.108.2.46:41096 conn383: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:17.015+0000 I COMMAND [conn383] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:17.015+0000 D2 COMMAND [conn383] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 815C511F9B78AD09330F18D8946C64B00E8E0783), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:33:17.015+0000 D1 REPL [conn383] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578795, 1), t: 1 } 2019-09-04T06:33:17.015+0000 D3 REPL [conn383] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.025+0000 
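
Every stalled read in this stretch has the same shape: the client asks for readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, while this node's newest majority snapshot is { ts: Timestamp(1567578795, 1), t: 1 }. The snapshot's timestamp is roughly thirteen days newer on the wall clock, yet waitUntilOpTime keeps waiting, because optimes order by term before timestamp; a target from term 92 can never appear in a replica set currently in term 1 (the t: 92 presumably predates a re-initiation of this config replica set), so each find runs out its 30000ms maxTimeMS budget and fails with MaxTimeMSExpired. The backtraces are a side effect of that: the demangled frames show CurOp::completeAndLogOperation constructing a Lock::GlobalLock to gather storage statistics for the slow-operation log line, and that lock acquisition itself times out through uassertedWithLocation. A minimal sketch of the term-major ordering the log is demonstrating (plain Python; OpTime and reached are illustrative names, not mongod's API):

    from collections import namedtuple

    # An optime is (term, timestamp-seconds); term dominates the comparison,
    # which is why a term-92 target is unreachable in term 1.
    OpTime = namedtuple("OpTime", ["term", "ts"])

    def reached(target, snapshot):
        # True once `snapshot` is at or past `target` in term-major order.
        return (snapshot.term, snapshot.ts) >= (target.term, target.ts)

    requested = OpTime(term=92, ts=1566459168)  # afterOpTime sent by the client
    snapshot = OpTime(term=1, ts=1567578795)    # current majority snapshot

    assert snapshot.ts > requested.ts           # ~13 days newer on the clock...
    assert not reached(requested, snapshot)     # ...yet the wait never finishes
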
2019-09-04T06:33:17.028+0000 D2 COMMAND [conn368] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 815C511F9B78AD09330F18D8946C64B00E8E0783), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:17.028+0000 D1 REPL [conn368] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578795, 1), t: 1 } 2019-09-04T06:33:17.028+0000 D3 REPL [conn368] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.038+0000 2019-09-04T06:33:17.029+0000 D2 COMMAND [conn369] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578790, 1), signature: { hash: BinData(0, 48B30A814E9398F4B8D261A3888218A13169B002), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:17.029+0000 D1 REPL [conn369] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578795, 1), t: 1 } 2019-09-04T06:33:17.029+0000 D3 REPL [conn369] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.039+0000 2019-09-04T06:33:17.030+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.030+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.030+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.030+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.033+0000 D2 COMMAND [conn356] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 688717A47F9056E54E2A82934E3E688DB3858182), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:17.033+0000 D1 REPL [conn356] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578795, 1), t: 1 } 2019-09-04T06:33:17.033+0000 D3 REPL [conn356] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.043+0000 2019-09-04T06:33:17.036+0000 D2 COMMAND [conn378] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: 
"majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 4F311D3E6BCB85B8F76274CF8447E974E8C1C19D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:17.036+0000 D1 REPL [conn378] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578795, 1), t: 1 } 2019-09-04T06:33:17.036+0000 D3 REPL [conn378] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.046+0000 2019-09-04T06:33:17.040+0000 I NETWORK [listener] connection accepted from 10.108.2.47:56642 #384 (86 connections now open) 2019-09-04T06:33:17.040+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:17.040+0000 D2 COMMAND [conn384] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:17.040+0000 I NETWORK [conn384] received client metadata from 10.108.2.47:56642 conn384: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:17.040+0000 I COMMAND [conn384] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:17.044+0000 D2 COMMAND [conn384] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 815C511F9B78AD09330F18D8946C64B00E8E0783), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:17.044+0000 D1 REPL [conn384] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578795, 1), t: 1 } 2019-09-04T06:33:17.044+0000 D3 REPL [conn384] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.054+0000 2019-09-04T06:33:17.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.063+0000 D2 COMMAND [conn34] run command admin.$cmd { 
replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 7EC08BA303BCF99177B2F6878748752B023CA683), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:17.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:17.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 7EC08BA303BCF99177B2F6878748752B023CA683), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:17.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 7EC08BA303BCF99177B2F6878748752B023CA683), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:17.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), opTime: { ts: Timestamp(1567578795, 1), t: 1 }, wallTime: new Date(1567578795364) } 2019-09-04T06:33:17.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 7EC08BA303BCF99177B2F6878748752B023CA683), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.096+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:17.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.196+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
2019-09-04T06:33:17.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:17.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:17.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.297+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:17.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.355+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.355+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.373+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:17.373+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:17.373+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578795, 1) 2019-09-04T06:33:17.373+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16815 2019-09-04T06:33:17.373+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:17.373+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:17.373+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16815 2019-09-04T06:33:17.374+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16818 2019-09-04T06:33:17.374+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16818 2019-09-04T06:33:17.374+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578795, 1), t: 1 }({ ts: Timestamp(1567578795, 1), t: 1 }) 2019-09-04T06:33:17.397+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:17.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.497+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:17.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.529+0000 D2 COMMAND [conn31] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.529+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.530+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.530+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.597+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:17.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.630+0000 D2 ASIO [RS] Request 1138 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578797, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578797606), o: { $v: 1, $set: { ping: new Date(1567578797605) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpApplied: { ts: Timestamp(1567578797, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578795, 1), $clusterTime: { clusterTime: Timestamp(1567578797, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 1) } 2019-09-04T06:33:17.630+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578797, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578797606), o: { $v: 1, $set: { ping: new Date(1567578797605) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpApplied: { ts: 
Timestamp(1567578797, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578795, 1), $clusterTime: { clusterTime: Timestamp(1567578797, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:17.630+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:17.630+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578797, 1) and ending at ts: Timestamp(1567578797, 1) 2019-09-04T06:33:17.630+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:27.672+0000 2019-09-04T06:33:17.630+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:27.967+0000 2019-09-04T06:33:17.630+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:17.630+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:46.839+0000 2019-09-04T06:33:17.630+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578797, 1), t: 1 } 2019-09-04T06:33:17.630+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:17.630+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:17.630+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578795, 1) 2019-09-04T06:33:17.630+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16828 2019-09-04T06:33:17.630+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:17.630+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:17.630+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16828 2019-09-04T06:33:17.630+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:17.630+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:33:17.630+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:17.630+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578795, 1) 2019-09-04T06:33:17.630+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16831 2019-09-04T06:33:17.630+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578797, 1) } 2019-09-04T06:33:17.630+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:17.630+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: 
UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:17.630+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16831 2019-09-04T06:33:17.630+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16819 2019-09-04T06:33:17.630+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16819 2019-09-04T06:33:17.630+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16834 2019-09-04T06:33:17.630+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16834 2019-09-04T06:33:17.630+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:17.630+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 16836 2019-09-04T06:33:17.630+0000 D4 STORAGE [repl-writer-worker-12] inserting record with timestamp Timestamp(1567578797, 1) 2019-09-04T06:33:17.630+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578797, 1) 2019-09-04T06:33:17.630+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 16836 2019-09-04T06:33:17.630+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:17.630+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:33:17.630+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16835 2019-09-04T06:33:17.630+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16835 2019-09-04T06:33:17.630+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16838 2019-09-04T06:33:17.630+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16838 2019-09-04T06:33:17.630+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578797, 1), t: 1 }({ ts: Timestamp(1567578797, 1), t: 1 }) 2019-09-04T06:33:17.630+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578797, 1) 2019-09-04T06:33:17.630+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16839 2019-09-04T06:33:17.630+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578797, 1) } } ] } sort: {} projection: {} 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578797, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578797, 1) || First: notFirst: full path: ts 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578797, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578797, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578797, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
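
This verbosity-5 planner trace shows the subplanner handling the minvalid bookkeeping query branch by branch: the only index on local.replset.minvalid is { _id: 1 }, neither t nor ts is a prefix of it, so every branch rates to zero indexed solutions and the planner emits a collection scan, which is harmless on a collection holding a single document. The same conclusion can be read back with explain; a sketch assuming pymongo 3.11+ (for directConnection) pointed at this node:

    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    # Connect to this secondary directly and allow reads on it, rather than
    # being routed to the replica set primary.
    node = MongoClient("cmodb803.togewa.com", 27019,
                       directConnection=True,
                       readPreference="secondaryPreferred")

    minvalid = node.local["replset.minvalid"]

    # The same rooted $or the oplog applier planned above.
    plan = minvalid.find({"$or": [
        {"t": {"$lt": 1}},
        {"t": 1, "ts": {"$lt": Timestamp(1567578797, 1)}},
    ]}).explain()

    # Expect a COLLSCAN, wrapped in a SUBPLAN stage because of the rooted $or.
    print(plan["queryPlanner"]["winningPlan"])
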
2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578797, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:17.631+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16839 2019-09-04T06:33:17.631+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:17.631+0000 D3 STORAGE [repl-writer-worker-10] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:17.631+0000 D3 REPL [repl-writer-worker-10] applying op: { ts: Timestamp(1567578797, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578797606), o: { $v: 1, $set: { ping: new Date(1567578797605) } } }, oplog application mode: Secondary 2019-09-04T06:33:17.631+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578797, 1) 2019-09-04T06:33:17.631+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 16841 2019-09-04T06:33:17.631+0000 D2 QUERY [repl-writer-worker-10] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" } 2019-09-04T06:33:17.631+0000 D4 WRITE [repl-writer-worker-10] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:33:17.631+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 16841 2019-09-04T06:33:17.631+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:17.631+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578797, 1), t: 1 }({ ts: Timestamp(1567578797, 1), t: 1 }) 2019-09-04T06:33:17.631+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578797, 1) 2019-09-04T06:33:17.631+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16840 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:17.631+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:17.631+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:17.631+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16840 2019-09-04T06:33:17.631+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578797, 1) 2019-09-04T06:33:17.631+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16845 2019-09-04T06:33:17.631+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16845 2019-09-04T06:33:17.631+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578797, 1), t: 1 }({ ts: Timestamp(1567578797, 1), t: 1 }) 2019-09-04T06:33:17.631+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:17.631+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578797, 1), t: 1 }, appliedWallTime: new Date(1567578797606), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:17.631+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1142 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:47.631+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578797, 1), t: 1 }, appliedWallTime: new Date(1567578797606), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:17.631+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:47.631+0000 2019-09-04T06:33:17.631+0000 D2 ASIO [RS] Request 1142 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578795, 1), $clusterTime: { clusterTime: Timestamp(1567578797, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 1) } 2019-09-04T06:33:17.632+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578795, 1), $clusterTime: { clusterTime: Timestamp(1567578797, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:17.632+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:17.632+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:47.632+0000 2019-09-04T06:33:17.632+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578797, 1), t: 1 } 2019-09-04T06:33:17.632+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1143 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:27.632+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578795, 1), t: 1 } } 2019-09-04T06:33:17.632+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:47.632+0000 2019-09-04T06:33:17.633+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:33:17.633+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:17.633+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578797, 1), t: 1 }, durableWallTime: new Date(1567578797606), appliedOpTime: { ts: Timestamp(1567578797, 1), t: 1 }, appliedWallTime: new Date(1567578797606), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:17.633+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1144 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:47.633+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { 
ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578797, 1), t: 1 }, durableWallTime: new Date(1567578797606), appliedOpTime: { ts: Timestamp(1567578797, 1), t: 1 }, appliedWallTime: new Date(1567578797606), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:17.633+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:47.632+0000 2019-09-04T06:33:17.634+0000 D2 ASIO [RS] Request 1144 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578795, 1), $clusterTime: { clusterTime: Timestamp(1567578797, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 1) } 2019-09-04T06:33:17.634+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578795, 1), t: 1 }, lastCommittedWall: new Date(1567578795364), lastOpVisible: { ts: Timestamp(1567578795, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578795, 1), $clusterTime: { clusterTime: Timestamp(1567578797, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:17.634+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:17.634+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:47.632+0000 2019-09-04T06:33:17.634+0000 D2 ASIO [RS] Request 1143 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 1), t: 1 }, lastCommittedWall: new Date(1567578797606), lastOpVisible: { ts: Timestamp(1567578797, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578797, 1), t: 1 }, lastCommittedWall: new Date(1567578797606), lastOpApplied: { ts: Timestamp(1567578797, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578797, 1), $clusterTime: { clusterTime: Timestamp(1567578797, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 1) } 2019-09-04T06:33:17.634+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 1), t: 1 }, lastCommittedWall: new Date(1567578797606), lastOpVisible: { ts: Timestamp(1567578797, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578797, 1), t: 1 }, lastCommittedWall: new Date(1567578797606), lastOpApplied: { ts: Timestamp(1567578797, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 1), $clusterTime: { clusterTime: Timestamp(1567578797, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:17.634+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:17.634+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:17.634+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.634+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.634+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578792, 1) 2019-09-04T06:33:17.634+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:27.967+0000 2019-09-04T06:33:17.634+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:27.765+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn343] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn343] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:22.594+0000 2019-09-04T06:33:17.634+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1145 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:27.634+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578797, 1), t: 1 } } 2019-09-04T06:33:17.634+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:47.632+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn376] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn376] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.897+0000 2019-09-04T06:33:17.634+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:17.634+0000 D3 REPL [conn359] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn359] waitUntilOpTime: waiting for a new 
snapshot until 2019-09-04T06:33:41.882+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn356] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn356] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.043+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn366] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn366] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.913+0000 2019-09-04T06:33:17.634+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:46.839+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn374] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn374] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.876+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn365] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn365] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:25.060+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn383] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn383] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.025+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn368] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn368] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.038+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn369] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.634+0000 D3 REPL [conn369] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.039+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn379] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn379] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:43.509+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn364] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn364] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.767+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn361] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn361] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn355] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn355] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:24.151+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn367] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn367] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn375] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 
1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn375] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.877+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn337] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn337] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.660+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn370] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn370] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:29.874+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn363] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn363] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.670+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn360] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn360] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn381] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn381] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn371] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn371] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn378] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn378] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.046+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn373] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn373] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn384] Got notified of new snapshot: { ts: Timestamp(1567578797, 1), t: 1 }, 2019-09-04T06:33:17.606+0000 2019-09-04T06:33:17.635+0000 D3 REPL [conn384] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.054+0000 2019-09-04T06:33:17.646+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.697+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:17.730+0000 D2 STORAGE [WTOplogJournalThread] 
No new oplog entries were made visible: Timestamp(1567578797, 1) 2019-09-04T06:33:17.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.784+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.784+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.797+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:17.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.825+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.855+0000 D2 ASIO [RS] Request 1145 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578797, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578797853), o: { $v: 1, $set: { ping: new Date(1567578797847) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 1), t: 1 }, lastCommittedWall: new Date(1567578797606), lastOpVisible: { ts: Timestamp(1567578797, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578797, 1), t: 1 }, lastCommittedWall: new Date(1567578797606), lastOpApplied: { ts: Timestamp(1567578797, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 1), $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 2) } 2019-09-04T06:33:17.855+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578797, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578797853), o: { $v: 1, $set: { ping: new Date(1567578797847) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 1), t: 1 }, lastCommittedWall: new Date(1567578797606), lastOpVisible: { ts: Timestamp(1567578797, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578797, 1), t: 1 }, lastCommittedWall: new Date(1567578797606), lastOpApplied: { ts: Timestamp(1567578797, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 1), $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 2) } 
target: cmodb804.togewa.com:27019 2019-09-04T06:33:17.855+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:17.855+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.855+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578797, 2) and ending at ts: Timestamp(1567578797, 2) 2019-09-04T06:33:17.855+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:27.765+0000 2019-09-04T06:33:17.855+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:28.625+0000 2019-09-04T06:33:17.855+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:17.855+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:46.839+0000 2019-09-04T06:33:17.855+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578797, 2), t: 1 } 2019-09-04T06:33:17.855+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.855+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:17.855+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:17.855+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578797, 1) 2019-09-04T06:33:17.855+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16855 2019-09-04T06:33:17.855+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:17.855+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:17.855+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16855 2019-09-04T06:33:17.855+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:17.855+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:33:17.855+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:17.855+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578797, 1) 2019-09-04T06:33:17.855+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578797, 2) } 2019-09-04T06:33:17.855+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16858 2019-09-04T06:33:17.855+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:17.855+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:17.855+0000 D3 STORAGE 
[ReplBatcher] WT rollback_transaction for snapshot id 16858
2019-09-04T06:33:17.855+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16846
2019-09-04T06:33:17.855+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16846
2019-09-04T06:33:17.855+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16861
2019-09-04T06:33:17.855+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16861
2019-09-04T06:33:17.855+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:17.855+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 16863
2019-09-04T06:33:17.855+0000 D4 STORAGE [repl-writer-worker-8] inserting record with timestamp Timestamp(1567578797, 2)
2019-09-04T06:33:17.855+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578797, 2)
2019-09-04T06:33:17.855+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 16863
2019-09-04T06:33:17.855+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:17.856+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:33:17.856+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16862
2019-09-04T06:33:17.856+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16862
2019-09-04T06:33:17.856+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16865
2019-09-04T06:33:17.856+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16865
2019-09-04T06:33:17.856+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578797, 2), t: 1 }({ ts: Timestamp(1567578797, 2), t: 1 })
2019-09-04T06:33:17.856+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578797, 2)
2019-09-04T06:33:17.856+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16866
2019-09-04T06:33:17.856+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578797, 2) } } ] } sort: {} projection: {}
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578797, 2)
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Rated tree:
$and
    t $eq 1  || First: notFirst: full path: t
    ts $lt Timestamp(1567578797, 2)  || First: notFirst: full path: ts
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
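The rsSync-0 entries above show the batch being made durable: the oplog truncate after point is raised to the batch's last optime while the write is in flight, cleared back to Timestamp(0, 0) once the oplog record is inserted, and minvalid is advanced; the subplanner trace for the minvalid read continues just below. As a rough way to look at the same bookkeeping documents from outside the server, a minimal pymongo sketch (connection details inferred from this log; a sketch under the assumption that the node is reachable and auth is off, as configured here):

# Sketch only: inspect replication bookkeeping documents on this node.
# Assumes pymongo (3.11+ for directConnection) and network access to the host.
from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
local = client.local

# replset.minvalid holds the optime this member must replicate through before
# its data is consistent (the "returning minvalid" entries in this log).
print(local["replset.minvalid"].find_one())

# replset.oplogTruncateAfterPoint is set while an oplog batch is being written
# and reset to Timestamp(0, 0) afterwards, as the entries above show.
print(local["replset.oplogTruncateAfterPoint"].find_one())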
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578797, 2)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Rated tree:
t $lt 1  || First: notFirst: full path: t
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578797, 2)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Rated tree:
$or
    $and
        t $eq 1  || First: notFirst: full path: t
        ts $lt Timestamp(1567578797, 2)  || First: notFirst: full path: ts
    t $lt 1  || First: notFirst: full path: t
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
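Every candidate above rates zero indexed solutions because local.replset.minvalid carries only its _id index and neither t nor ts can use it, so the planner falls back to a collection scan for each $or branch and for the rolled-up query (the final COLLSCAN is output just below). The same decision can be reproduced with the explain command; a minimal sketch, with the filter copied from the "Running query as sub-queries" entry and the same hypothetical connection details as above:

# Sketch only: replan the minvalid read at "queryPlanner" verbosity
# (no documents are read at this verbosity).
from bson import Timestamp
from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)

plan = client.local.command({
    "explain": {
        "find": "replset.minvalid",
        "filter": {"$or": [{"t": {"$lt": 1}},
                           {"t": 1, "ts": {"$lt": Timestamp(1567578797, 2)}}]},
    },
    "verbosity": "queryPlanner",
})
print(plan["queryPlanner"]["winningPlan"]["stage"])  # expected: "COLLSCAN"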
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578797, 2)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:17.856+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16866
2019-09-04T06:33:17.856+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:17.856+0000 D3 STORAGE [repl-writer-worker-6] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:17.856+0000 D3 REPL [repl-writer-worker-6] applying op: { ts: Timestamp(1567578797, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578797853), o: { $v: 1, $set: { ping: new Date(1567578797847) } } }, oplog application mode: Secondary
2019-09-04T06:33:17.856+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578797, 2)
2019-09-04T06:33:17.856+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 16868
2019-09-04T06:33:17.856+0000 D2 QUERY [repl-writer-worker-6] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }
2019-09-04T06:33:17.856+0000 D4 WRITE [repl-writer-worker-6] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:33:17.856+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 16868
2019-09-04T06:33:17.856+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:17.856+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578797, 2), t: 1 }({ ts: Timestamp(1567578797, 2), t: 1 })
2019-09-04T06:33:17.856+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578797, 2)
2019-09-04T06:33:17.856+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16867
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:17.856+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:17.856+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:17.856+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16867 2019-09-04T06:33:17.856+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578797, 2) 2019-09-04T06:33:17.856+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16871 2019-09-04T06:33:17.856+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16871 2019-09-04T06:33:17.856+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578797, 2), t: 1 }({ ts: Timestamp(1567578797, 2), t: 1 }) 2019-09-04T06:33:17.856+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:17.856+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578797, 1), t: 1 }, durableWallTime: new Date(1567578797606), appliedOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, appliedWallTime: new Date(1567578797853), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 1), t: 1 }, lastCommittedWall: new Date(1567578797606), lastOpVisible: { ts: Timestamp(1567578797, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:17.856+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1146 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:47.856+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578797, 1), t: 1 }, durableWallTime: new Date(1567578797606), appliedOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, appliedWallTime: new Date(1567578797853), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 1), t: 1 }, lastCommittedWall: new Date(1567578797606), lastOpVisible: { ts: Timestamp(1567578797, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:17.856+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:47.856+0000 2019-09-04T06:33:17.857+0000 D2 ASIO [RS] Request 1146 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 1), t: 1 }, lastCommittedWall: new Date(1567578797606), lastOpVisible: { ts: Timestamp(1567578797, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 1), $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 2) } 2019-09-04T06:33:17.857+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 1), t: 1 }, lastCommittedWall: new Date(1567578797606), lastOpVisible: { ts: Timestamp(1567578797, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 1), $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:17.857+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:17.857+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:47.857+0000 2019-09-04T06:33:17.857+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578797, 2), t: 1 } 2019-09-04T06:33:17.857+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1147 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:27.857+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578797, 1), t: 1 } } 2019-09-04T06:33:17.857+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:47.857+0000 2019-09-04T06:33:17.857+0000 D2 ASIO [RS] Request 1147 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpApplied: { ts: Timestamp(1567578797, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 2), $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 2) } 2019-09-04T06:33:17.857+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new 
Date(1567578797853), lastOpApplied: { ts: Timestamp(1567578797, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 2), $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:17.858+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:17.858+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:17.858+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578792, 2) 2019-09-04T06:33:17.858+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:28.625+0000 2019-09-04T06:33:17.858+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:28.050+0000 2019-09-04T06:33:17.858+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:17.858+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1148 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:27.858+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578797, 2), t: 1 } } 2019-09-04T06:33:17.858+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:46.839+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn359] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn359] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.882+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn375] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn375] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.877+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn365] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn365] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:25.060+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn374] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn374] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.876+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn361] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn361] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn356] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn356] waitUntilOpTime: waiting for a 
new snapshot until 2019-09-04T06:33:47.043+0000 2019-09-04T06:33:17.858+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:47.857+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn337] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn337] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.660+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn363] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn363] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.670+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn381] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn381] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn378] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn378] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.046+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn379] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn379] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:43.509+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn384] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn384] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.054+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn343] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn343] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:22.594+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn376] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn376] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.897+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn366] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn366] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.913+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn360] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn360] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn371] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn371] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn364] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn364] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.767+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn370] Got notified of new snapshot: { ts: Timestamp(1567578797, 
2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn370] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:29.874+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn355] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn355] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:24.151+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn368] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn368] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.038+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn373] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn373] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn369] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn369] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.039+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn383] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn383] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.025+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn367] Got notified of new snapshot: { ts: Timestamp(1567578797, 2), t: 1 }, 2019-09-04T06:33:17.853+0000 2019-09-04T06:33:17.858+0000 D3 REPL [conn367] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000 2019-09-04T06:33:17.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.863+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:33:17.864+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:17.864+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), appliedOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, appliedWallTime: new Date(1567578797853), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:17.864+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1149 -- target:[cmodb804.togewa.com:27019] db:admin 
expDate:2019-09-04T06:33:47.864+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), appliedOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, appliedWallTime: new Date(1567578797853), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, durableWallTime: new Date(1567578795364), appliedOpTime: { ts: Timestamp(1567578795, 1), t: 1 }, appliedWallTime: new Date(1567578795364), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:17.864+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:47.857+0000 2019-09-04T06:33:17.864+0000 D2 ASIO [RS] Request 1149 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 2), $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 2) } 2019-09-04T06:33:17.864+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 2), $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:17.864+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:17.864+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:47.857+0000 2019-09-04T06:33:17.897+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:17.955+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578797, 2) 2019-09-04T06:33:17.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:17.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:17.997+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:17.998+0000 I 
COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:18.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:18.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:18.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:18.098+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:18.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:18.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:18.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:18.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:18.130+0000 D2 COMMAND [conn20] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:18.130+0000 I COMMAND [conn20] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:18.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:18.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:18.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:18.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:18.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:18.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:18.198+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:18.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 4AD273687A1577756E4C417D98E9C804B13C9BC9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:18.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:18.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 4AD273687A1577756E4C417D98E9C804B13C9BC9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:18.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 4AD273687A1577756E4C417D98E9C804B13C9BC9), keyId: 6727891476899954718 } }, $db: "admin" } 
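A heartbeat round trip like the one above runs roughly every two seconds between members, and the response generated just below carries this member's durable and applied optimes back to cmodb804. The rolled-up view that these heartbeats and the replSetUpdatePosition traffic produce is what replSetGetStatus reports; a minimal sketch for watching it from a client (same hypothetical connection assumptions as the earlier sketches):

# Sketch only: summarize member state as seen by this node.
from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)

status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    # stateStr is PRIMARY/SECONDARY/...; optime.ts mirrors the applied optimes
    # exchanged in the heartbeat and replSetUpdatePosition messages above.
    print(member["name"], member["stateStr"], member["optime"]["ts"])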
2019-09-04T06:33:18.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), opTime: { ts: Timestamp(1567578797, 2), t: 1 }, wallTime: new Date(1567578797853) }
2019-09-04T06:33:18.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 4AD273687A1577756E4C417D98E9C804B13C9BC9), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:18.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:18.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.298+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:18.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.398+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:18.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.498+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:18.518+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:33:18.518+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:33:18.518+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:18.518+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:33:18.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.598+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:18.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.647+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.698+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:18.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.799+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:18.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.825+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:18.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1150) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:18.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1150 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:28.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:18.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:46.839+0000
2019-09-04T06:33:18.838+0000 D2 ASIO [Replication] Request 1150 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), opTime: { ts: Timestamp(1567578797, 2), t: 1 }, wallTime: new Date(1567578797853), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 2), $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 2) }
2019-09-04T06:33:18.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), opTime: { ts: Timestamp(1567578797, 2), t: 1 }, wallTime: new Date(1567578797853), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 2), $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:18.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:18.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1150) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), opTime: { ts: Timestamp(1567578797, 2), t: 1 }, wallTime: new Date(1567578797853), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 2), $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 2) }
2019-09-04T06:33:18.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:33:18.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:20.838Z
2019-09-04T06:33:18.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:46.839+0000
2019-09-04T06:33:18.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:18.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1151) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:18.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1151 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:28.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:18.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:46.839+0000
2019-09-04T06:33:18.839+0000 D2 ASIO [Replication] Request 1151 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), opTime: { ts: Timestamp(1567578797, 2), t: 1 }, wallTime: new Date(1567578797853), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578797, 2), $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 2) }
2019-09-04T06:33:18.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), opTime: { ts: Timestamp(1567578797, 2), t: 1 }, wallTime: new Date(1567578797853), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578797, 2), $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:33:18.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:18.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1151) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), opTime: { ts: Timestamp(1567578797, 2), t: 1 }, wallTime: new Date(1567578797853), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578797, 2), $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 2) }
2019-09-04T06:33:18.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:33:18.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:33:28.050+0000
2019-09-04T06:33:18.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:33:29.581+0000
2019-09-04T06:33:18.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:33:18.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:20.839Z
2019-09-04T06:33:18.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:18.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:48.839+0000
2019-09-04T06:33:18.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:48.839+0000
2019-09-04T06:33:18.855+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:18.855+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:18.855+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578797, 2)
2019-09-04T06:33:18.855+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16904
2019-09-04T06:33:18.856+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:18.856+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:18.856+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16904
2019-09-04T06:33:18.856+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/42--6194257481163143499 -> { numRecords: 2, dataSize: 306 }
2019-09-04T06:33:18.856+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/28--6194257481163143499 -> { numRecords: 22, dataSize: 1828 }
2019-09-04T06:33:18.856+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:local/collection/16--6194257481163143499 -> { numRecords: 1527, dataSize: 344290 }
2019-09-04T06:33:18.856+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer flush took 56 µs
2019-09-04T06:33:18.856+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16907
2019-09-04T06:33:18.856+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16907
2019-09-04T06:33:18.856+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578797, 2), t: 1 }({ ts: Timestamp(1567578797, 2), t: 1 })
2019-09-04T06:33:18.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.920+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:18.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:18.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:18.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:19.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:19.020+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:19.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:19.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:19.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 4AD273687A1577756E4C417D98E9C804B13C9BC9), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:19.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:33:19.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 4AD273687A1577756E4C417D98E9C804B13C9BC9), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:19.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 4AD273687A1577756E4C417D98E9C804B13C9BC9), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:19.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), opTime: { ts: Timestamp(1567578797, 2), t: 1 }, wallTime: new Date(1567578797853) }
2019-09-04T06:33:19.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578797, 2), signature: { hash: BinData(0, 4AD273687A1577756E4C417D98E9C804B13C9BC9), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:19.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:19.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:19.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:19.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:19.121+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:19.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:19.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:19.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:19.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:19.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:19.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:19.221+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:19.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:19.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:19.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:19.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:19.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:19.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:19.284+0000 D2 ASIO [RS] Request 1148 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578799, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578799275), o: { $v: 1, $set: { ping: new Date(1567578799272) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpApplied: { ts: Timestamp(1567578799, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 2), $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) }
2019-09-04T06:33:19.284+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578799, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578799275), o: { $v: 1, $set: { ping: new Date(1567578799272) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpApplied: { ts: Timestamp(1567578799, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 2), $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:19.284+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:19.284+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578799, 1) and ending at ts: Timestamp(1567578799, 1)
2019-09-04T06:33:19.284+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:29.581+0000
2019-09-04T06:33:19.284+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:30.234+0000
2019-09-04T06:33:19.284+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:19.284+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:48.839+0000
2019-09-04T06:33:19.284+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578799, 1), t: 1 }
2019-09-04T06:33:19.284+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:19.284+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:19.284+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578797, 2)
2019-09-04T06:33:19.284+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16924
2019-09-04T06:33:19.284+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:19.284+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:19.284+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16924
2019-09-04T06:33:19.284+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:19.284+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:33:19.284+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:19.284+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578797, 2)
2019-09-04T06:33:19.284+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 16927
2019-09-04T06:33:19.284+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578799, 1) }
2019-09-04T06:33:19.284+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:19.284+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:19.284+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 16927
2019-09-04T06:33:19.284+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16908
2019-09-04T06:33:19.285+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16908
2019-09-04T06:33:19.285+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16930
2019-09-04T06:33:19.285+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16930
2019-09-04T06:33:19.285+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:19.285+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 16932
2019-09-04T06:33:19.285+0000 D4 STORAGE [repl-writer-worker-4] inserting record with timestamp Timestamp(1567578799, 1)
2019-09-04T06:33:19.285+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578799, 1)
2019-09-04T06:33:19.285+0000 D2 STORAGE [repl-writer-worker-4] WiredTigerSizeStorer::store Marking table:local/collection/16--6194257481163143499 dirty, numRecords: 1528, dataSize: 344526, use_count: 3
2019-09-04T06:33:19.285+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 16932
2019-09-04T06:33:19.285+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:19.285+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:33:19.285+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16931
2019-09-04T06:33:19.285+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16931
2019-09-04T06:33:19.285+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16934
2019-09-04T06:33:19.285+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16934
2019-09-04T06:33:19.285+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578799, 1), t: 1 }({ ts: Timestamp(1567578799, 1), t: 1 })
2019-09-04T06:33:19.285+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578799, 1)
2019-09-04T06:33:19.285+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16935
2019-09-04T06:33:19.285+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578799, 1) } } ] } sort: {} projection: {}
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578799, 1) Sort: {} Proj: {} =============================
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578799, 1) || First: notFirst: full path: ts
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578799, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578799, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578799, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578799, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:19.285+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16935
2019-09-04T06:33:19.285+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:19.285+0000 D3 STORAGE [repl-writer-worker-2] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:19.285+0000 D3 REPL [repl-writer-worker-2] applying op: { ts: Timestamp(1567578799, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }, wall: new Date(1567578799275), o: { $v: 1, $set: { ping: new Date(1567578799272) } } }, oplog application mode: Secondary
2019-09-04T06:33:19.285+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578799, 1)
2019-09-04T06:33:19.285+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 16937
2019-09-04T06:33:19.285+0000 D2 QUERY [repl-writer-worker-2] Using idhack: { _id: "cmodb801.togewa.com:27017:1567576097:5449009950928943792" }
2019-09-04T06:33:19.285+0000 D2 STORAGE [repl-writer-worker-2] WiredTigerSizeStorer::store Marking table:config/collection/28--6194257481163143499 dirty, numRecords: 22, dataSize: 1828, use_count: 3
2019-09-04T06:33:19.285+0000 D4 WRITE [repl-writer-worker-2] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:33:19.285+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 16937
2019-09-04T06:33:19.285+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:19.285+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578799, 1), t: 1 }({ ts: Timestamp(1567578799, 1), t: 1 })
2019-09-04T06:33:19.285+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578799, 1)
2019-09-04T06:33:19.285+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16936
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:19.285+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:19.285+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:33:19.285+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 16936
2019-09-04T06:33:19.285+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578799, 1)
2019-09-04T06:33:19.285+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 16940
2019-09-04T06:33:19.285+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 16940
2019-09-04T06:33:19.285+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578799, 1), t: 1 }({ ts: Timestamp(1567578799, 1), t: 1 })
2019-09-04T06:33:19.285+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:19.285+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), appliedOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, appliedWallTime: new Date(1567578797853), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), appliedOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, appliedWallTime: new Date(1567578797853), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:19.285+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1152 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:49.285+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), appliedOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, appliedWallTime: new Date(1567578797853), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), appliedOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, appliedWallTime: new Date(1567578797853), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:19.286+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:49.285+0000
2019-09-04T06:33:19.286+0000 D2 ASIO [RS] Request 1152 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 2), $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) }
2019-09-04T06:33:19.286+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578797, 2), t: 1 }, lastCommittedWall: new Date(1567578797853), lastOpVisible: { ts: Timestamp(1567578797, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 2), $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:19.286+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:19.286+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:49.286+0000
2019-09-04T06:33:19.286+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578799, 1), t: 1 }
2019-09-04T06:33:19.286+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1153 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:29.286+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578797, 2), t: 1 } }
2019-09-04T06:33:19.286+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:49.286+0000
2019-09-04T06:33:19.290+0000 D2 ASIO [RS] Request 1153 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpApplied: { ts: Timestamp(1567578799, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) }
2019-09-04T06:33:19.290+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpApplied: { ts: Timestamp(1567578799, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:19.291+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:19.291+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:33:19.291+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578794, 1)
2019-09-04T06:33:19.291+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:30.234+0000
2019-09-04T06:33:19.291+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:29.383+0000
2019-09-04T06:33:19.291+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1154 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:29.291+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578799, 1), t: 1 } }
2019-09-04T06:33:19.291+0000 D3 REPL [conn361] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn361] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000
2019-09-04T06:33:19.291+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:49.286+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn337] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn337] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.660+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn374] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn374] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.876+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn356] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn356] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.043+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn379] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn379] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:43.509+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn360] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn360] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.661+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn370] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn370] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:29.874+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn383] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn383] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.025+0000
2019-09-04T06:33:19.291+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:19.291+0000 D3 REPL [conn369] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:48.839+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn369] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.039+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn367] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn367] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn343] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn343] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:22.594+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn366] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn366] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.913+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn359] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn359] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.882+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn375] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn375] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.877+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn381] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn381] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn365] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn365] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:25.060+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn363] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn363] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.670+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn378] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn378] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.046+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn384] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn384] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.054+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn376] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn376] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.897+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn371] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn371] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn364] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn364] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:21.767+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn355] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn355] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:24.151+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn373] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn373] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn368] Got notified of new snapshot: { ts: Timestamp(1567578799, 1), t: 1 }, 2019-09-04T06:33:19.275+0000
2019-09-04T06:33:19.291+0000 D3 REPL [conn368] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.038+0000
2019-09-04T06:33:19.317+0000 D3 STORAGE [FreeMonProcessor] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:19.321+0000 D3 INDEX [TTLMonitor] thread awake
2019-09-04T06:33:19.322+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms
2019-09-04T06:33:19.322+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms
2019-09-04T06:33:19.322+0000 D2 - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager
2019-09-04T06:33:19.322+0000 D3 COMMAND [PeriodicTaskRunner] task: UnusedLockCleaner took: 0ms
2019-09-04T06:33:19.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:19.325+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:19.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions.
2019-09-04T06:33:19.333+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:33:19.333+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:19.333+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:19.333+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), appliedOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, appliedWallTime: new Date(1567578797853), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), appliedOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, appliedWallTime: new Date(1567578797853), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:19.333+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1156 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:49.333+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), appliedOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, appliedWallTime: new Date(1567578797853), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, durableWallTime: new Date(1567578797853), appliedOpTime: { ts: Timestamp(1567578797, 2), t: 1 }, appliedWallTime: new Date(1567578797853), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:19.333+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:49.286+0000 2019-09-04T06:33:19.333+0000 D2 ASIO [RS] Request 1156 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } 2019-09-04T06:33:19.333+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: 
2019-09-04T06:33:19.333+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:19.333+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:19.333+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:49.286+0000
2019-09-04T06:33:19.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:19.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:19.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry
2019-09-04T06:33:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:33:19.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" }
2019-09-04T06:33:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:19.370+0000 D5 QUERY [shard-registry-reload] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:19.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:33:19.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:33:19.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and
2019-09-04T06:33:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:19.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
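The shard-registry-reload pass above re-reads config.shards with readPreference nearest; since the find has no filter and the collection only carries the host_1 and _id_ indexes, the planner can only output a collection scan. A rough driver-side equivalent of that read (client as in the previous sketch; reading config.shards directly is an illustration, and listShards through mongos is the supported route):

    # Rough equivalent of the registry's read: list the shard documents
    # (shard0000..shard0002 with their replica-set connection strings).
    for shard in client.config.shards.find():
        print(shard["_id"], shard["host"])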
2019-09-04T06:33:19.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578799, 1)
2019-09-04T06:33:19.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 16950
2019-09-04T06:33:19.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 16950
2019-09-04T06:33:19.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:646 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:33:19.370+0000 D1 SHARDING [shard-registry-reload] found 3 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578799, 1), t: 1 }
2019-09-04T06:33:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018
2019-09-04T06:33:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018
2019-09-04T06:33:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018
2019-09-04T06:33:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018
2019-09-04T06:33:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018
2019-09-04T06:33:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018
2019-09-04T06:33:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS
2019-09-04T06:33:19.384+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578799, 1)
2019-09-04T06:33:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002
2019-09-04T06:33:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1158 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:33:24.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:33:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1159 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:33:24.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:33:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000
2019-09-04T06:33:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1160 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:33:24.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:33:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1161 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:33:24.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:33:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001
2019-09-04T06:33:19.385+0000
D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1162 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:33:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:33:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1163 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:33:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:33:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1158 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578789, 1), t: 1 }, lastWriteDate: new Date(1567578789000), majorityOpTime: { ts: Timestamp(1567578789, 1), t: 1 }, majorityWriteDate: new Date(1567578789000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578799386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578789, 1), $configServerState: { opTime: { ts: Timestamp(1567578795, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578789, 1) } 2019-09-04T06:33:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578789, 1), t: 1 }, lastWriteDate: new Date(1567578789000), majorityOpTime: { ts: Timestamp(1567578789, 1), t: 1 }, majorityWriteDate: new Date(1567578789000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578799386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578789, 1), $configServerState: { opTime: { ts: Timestamp(1567578795, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578789, 1) } target: cmodb810.togewa.com:27018 2019-09-04T06:33:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1161 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578793, 9), t: 1 }, lastWriteDate: new Date(1567578793000), 
majorityOpTime: { ts: Timestamp(1567578793, 9), t: 1 }, majorityWriteDate: new Date(1567578793000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578799385), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578793, 9), $configServerState: { opTime: { ts: Timestamp(1567578788, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578793, 9) } 2019-09-04T06:33:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578793, 9), t: 1 }, lastWriteDate: new Date(1567578793000), majorityOpTime: { ts: Timestamp(1567578793, 9), t: 1 }, majorityWriteDate: new Date(1567578793000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578799385), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578793, 9), $configServerState: { opTime: { ts: Timestamp(1567578788, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578793, 9) } target: cmodb807.togewa.com:27018 2019-09-04T06:33:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1160 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578793, 9), t: 1 }, lastWriteDate: new Date(1567578793000), majorityOpTime: { ts: Timestamp(1567578793, 9), t: 1 }, majorityWriteDate: new Date(1567578793000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578799385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578793, 9), $configServerState: { opTime: { ts: Timestamp(1567578795, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578793, 9) } 2019-09-04T06:33:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", 
"cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578793, 9), t: 1 }, lastWriteDate: new Date(1567578793000), majorityOpTime: { ts: Timestamp(1567578793, 9), t: 1 }, majorityWriteDate: new Date(1567578793000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578799385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578793, 9), $configServerState: { opTime: { ts: Timestamp(1567578795, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578793, 9) } target: cmodb806.togewa.com:27018 2019-09-04T06:33:19.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms 2019-09-04T06:33:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1162 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578797, 1), t: 1 }, lastWriteDate: new Date(1567578797000), majorityOpTime: { ts: Timestamp(1567578797, 1), t: 1 }, majorityWriteDate: new Date(1567578797000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578799386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578797, 1), $configServerState: { opTime: { ts: Timestamp(1567578795, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578797, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 1) } 2019-09-04T06:33:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578797, 1), t: 1 }, lastWriteDate: new Date(1567578797000), majorityOpTime: { ts: Timestamp(1567578797, 1), t: 1 }, majorityWriteDate: new Date(1567578797000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578799386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: 
Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578797, 1), $configServerState: { opTime: { ts: Timestamp(1567578795, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578797, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 1) } target: cmodb808.togewa.com:27018 2019-09-04T06:33:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1163 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578797, 1), t: 1 }, lastWriteDate: new Date(1567578797000), majorityOpTime: { ts: Timestamp(1567578797, 1), t: 1 }, majorityWriteDate: new Date(1567578797000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578799386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 1), $configServerState: { opTime: { ts: Timestamp(1567578780, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578797, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 1) } 2019-09-04T06:33:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578797, 1), t: 1 }, lastWriteDate: new Date(1567578797000), majorityOpTime: { ts: Timestamp(1567578797, 1), t: 1 }, majorityWriteDate: new Date(1567578797000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578799386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578797, 1), $configServerState: { opTime: { ts: Timestamp(1567578780, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578797, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578797, 1) } target: cmodb809.togewa.com:27018 2019-09-04T06:33:19.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms 2019-09-04T06:33:19.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1159 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578789, 1), t: 1 }, lastWriteDate: new Date(1567578789000), majorityOpTime: { 
ts: Timestamp(1567578789, 1), t: 1 }, majorityWriteDate: new Date(1567578789000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578799386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578789, 1), $configServerState: { opTime: { ts: Timestamp(1567578788, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578789, 1) }
2019-09-04T06:33:19.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578789, 1), t: 1 }, lastWriteDate: new Date(1567578789000), majorityOpTime: { ts: Timestamp(1567578789, 1), t: 1 }, majorityWriteDate: new Date(1567578789000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578799386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578789, 1), $configServerState: { opTime: { ts: Timestamp(1567578788, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578789, 1) } target: cmodb811.togewa.com:27018
2019-09-04T06:33:19.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms
2019-09-04T06:33:19.433+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:19.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:19.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:19.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:19.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:19.533+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:19.534+0000 D3 STORAGE [WTCheckpointThread] setting timestamp read source: 6, provided timestamp: Timestamp(1567578799, 1)
2019-09-04T06:33:19.534+0000 D3 STORAGE [WTCheckpointThread] WT begin_transaction for snapshot id 16954
2019-09-04T06:33:19.534+0000 D3 STORAGE [WTCheckpointThread] WT rollback_transaction for snapshot id 16954
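The refresh above is the ReplicaSetMonitor fanning isMaster probes out to every member of shard0000, shard0001 and shard0002 and rebuilding each set's view (primary, secondary, arbiter) from the replies; shard0002 "took 5ms" only because cmodb811's reply did not land until 06:33:19.390. From the outside, a single probe looks like this (hostname taken from the log; directConnection requires PyMongo 3.12+):

    # One monitor-style probe: ask a single member for its view of the set.
    from pymongo import MongoClient

    node = MongoClient("cmodb810.togewa.com", 27018, directConnection=True)
    reply = node.admin.command("isMaster")
    print(reply["setName"], reply["ismaster"], reply.get("primary"), reply["hosts"])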
2019-09-04T06:33:19.534+0000 D2 RECOVERY [WTCheckpointThread] Performing stable checkpoint. StableTimestamp: Timestamp(1567578799, 1), OplogNeededForRollback: Timestamp(1567578799, 1)
2019-09-04T06:33:19.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:19.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:19.581+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578799581) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }
2019-09-04T06:33:19.581+0000 D4 - [replSetDistLockPinger] Taking ticket. Available: 1000000000
2019-09-04T06:33:19.581+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178
2019-09-04T06:33:19.581+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:33:19.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:19.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:19.600+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : 
"EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : 
"9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xADB043) [0x561749a63043] mongod(+0x13B2606) [0x56174a33a606] mongod(+0x13B3A55) [0x56174a33ba55] mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894] mongod(+0x10FA899) [0x56174a082899] mongod(+0x10FBF53) [0x56174a083f53] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(+0x1FBD2EE) [0x56174af452ee] mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa] mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2] mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b] mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e] mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc] mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1] mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:19.600+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. 
2019-09-04T06:33:19.600+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. OpTime: { ts: Timestamp(1567578799, 1), t: 1 }, write concern: { w: "majority", wtimeout: 15000 }
2019-09-04T06:33:19.603+0000 D4 STORAGE [replSetDistLockPinger] flushed journal
2019-09-04T06:33:19.603+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578799581) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:33:19.603+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578799581) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 21ms
2019-09-04T06:33:19.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:19.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:19.633+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:19.635+0000 D2 COMMAND [conn21] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578799, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5aaf02d1a496712d727b'), operName: "", parentOperId: "5d6f5aaf02d1a496712d727a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 4EB02BC74ECC7B9CC24EAEC002EB47FACC430EA1), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578799, 1), t: 1 } }, $db: "config" }
2019-09-04T06:33:19.635+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f5aaf02d1a496712d727a|5d6f5aaf02d1a496712d727b
2019-09-04T06:33:19.635+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578799, 1), t: 1 } } }
2019-09-04T06:33:19.635+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:33:19.635+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578799, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5aaf02d1a496712d727b'), operName: "", parentOperId: "5d6f5aaf02d1a496712d727a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 4EB02BC74ECC7B9CC24EAEC002EB47FACC430EA1), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578799, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578799, 1)
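conn21 is a tracked, cluster-internal read (note the tracking_info and $configServerState fields): it asks for config.collections with readConcern { level: "majority", afterOpTime: ... }, so the server blocks until a committed snapshot at or beyond that optime exists, then pins readTs Timestamp(1567578799, 1). From a driver only the read-concern half is expressible; the afterOpTime gate is set internally and has no public knob. A minimal sketch (hypothetical host):

    # Minimal sketch: a majority read-concern find against config.collections.
    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    client = MongoClient("config-secondary.example", 27019, directConnection=True)
    coll = client.get_database("config", read_concern=ReadConcern("majority"))["collections"]
    print(coll.find_one({"_id": "config.system.sessions"}))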
2019-09-04T06:33:19.635+0000 D2 QUERY [conn21] Using idhack: query: { _id: "config.system.sessions" } sort: {} projection: {} limit: 1
2019-09-04T06:33:19.635+0000 D3 STORAGE [conn21] WT begin_transaction for snapshot id 16960
2019-09-04T06:33:19.635+0000 D3 STORAGE [conn21] WT rollback_transaction for snapshot id 16960
2019-09-04T06:33:19.635+0000 I COMMAND [conn21] command config.collections command: find { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578799, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5aaf02d1a496712d727b'), operName: "", parentOperId: "5d6f5aaf02d1a496712d727a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 4EB02BC74ECC7B9CC24EAEC002EB47FACC430EA1), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578799, 1), t: 1 } }, $db: "config" } planSummary: IDHACK keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:702 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:33:19.635+0000 D2 COMMAND [conn21] run command config.$cmd { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(1, 0) } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578799, 1), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5aaf02d1a496712d727c'), operName: "", parentOperId: "5d6f5aaf02d1a496712d727a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 4EB02BC74ECC7B9CC24EAEC002EB47FACC430EA1), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578799, 1), t: 1 } }, $db: "config" }
2019-09-04T06:33:19.635+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f5aaf02d1a496712d727a|5d6f5aaf02d1a496712d727c
2019-09-04T06:33:19.635+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578799, 1), t: 1 } } }
2019-09-04T06:33:19.635+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:33:19.635+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(1, 0) } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578799, 1), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5aaf02d1a496712d727c'), operName: "", parentOperId: "5d6f5aaf02d1a496712d727a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 4EB02BC74ECC7B9CC24EAEC002EB47FACC430EA1), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578799, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578799, 1)
2019-09-04T06:33:19.635+0000 D5 QUERY [conn21] Tagging the match expression according to cache data: Filter: $and ns $eq "config.system.sessions" lastmod $gte Timestamp(1, 0) Cache data: (index-tagged expression tree: tree=Node ---Leaf (ns_1_lastmod_1,
), pos: 0, can combine? 1 ---Leaf (ns_1_lastmod_1, ), pos: 1, can combine? 1 ) 2019-09-04T06:33:19.635+0000 D5 QUERY [conn21] Index 0: (ns_1_min_1, ) 2019-09-04T06:33:19.635+0000 D5 QUERY [conn21] Index 1: (ns_1_shard_1_min_1, ) 2019-09-04T06:33:19.635+0000 D5 QUERY [conn21] Index 2: (ns_1_lastmod_1, ) 2019-09-04T06:33:19.635+0000 D5 QUERY [conn21] Index 3: (_id_, ) 2019-09-04T06:33:19.635+0000 D5 QUERY [conn21] Tagged tree: $and ns $eq "config.system.sessions" || Selected Index #2 pos 0 combine 1 lastmod $gte Timestamp(1, 0) || Selected Index #2 pos 1 combine 1 2019-09-04T06:33:19.635+0000 D5 QUERY [conn21] Planner: solution constructed from the cache: FETCH ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ] ---Child: ------IXSCAN ---------indexName = ns_1_lastmod_1 keyPattern = { ns: 1, lastmod: 1 } ---------direction = 1 ---------bounds = field #0['ns']: ["config.system.sessions", "config.system.sessions"], field #1['lastmod']: [Timestamp(1, 0), Timestamp(4294967295, 4294967295)] ---------fetched = 0 ---------sortedByDiskLoc = 0 ---------getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ] 2019-09-04T06:33:19.636+0000 D3 STORAGE [conn21] WT begin_transaction for snapshot id 16962 2019-09-04T06:33:19.636+0000 D2 QUERY [conn21] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) 2019-09-04T06:33:19.636+0000 D3 STORAGE [conn21] WT rollback_transaction for snapshot id 16962 2019-09-04T06:33:19.636+0000 I COMMAND [conn21] command config.chunks command: find { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(1, 0) } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578799, 1), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5aaf02d1a496712d727c'), operName: "", parentOperId: "5d6f5aaf02d1a496712d727a" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 4EB02BC74ECC7B9CC24EAEC002EB47FACC430EA1), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578799, 1), t: 1 } }, $db: "config" } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:1DDA71BE planCacheKey:167D77D5 reslen:788 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:33:19.647+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:19.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:19.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:19.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:19.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:19.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:19.727+0000 D2 
COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:19.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:19.733+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:19.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:19.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:19.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:19.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:19.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:19.825+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:19.834+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:19.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:19.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:19.934+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:19.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:19.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:19.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:19.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:20.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:20.004+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:33:20.004+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:33:20.004+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:33:20.019+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:33:20.019+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:33:20.032+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:33:20.032+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:33:20.032+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:33:20.032+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, 
mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:33:20.043+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:20.049+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:33:20.050+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:20.051+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:33:20.051+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:33:20.051+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:33:20.052+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:33:20.052+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:20.052+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:33:20.052+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:33:20.052+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:33:20.052+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:33:20.052+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:33:20.052+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:33:20.052+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:33:20.052+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:20.052+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:20.052+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
2019-09-04T06:33:20.052+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578799, 1)
2019-09-04T06:33:20.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16980
2019-09-04T06:33:20.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16980
2019-09-04T06:33:20.052+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:20.052+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:20.052+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:33:20.052+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:33:20.052+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:20.052+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:33:20.052+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
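conn90's two oplog.rs finds (ascending above, descending just below) fetch the first and last oplog entries by hinting $natural in each direction, hence "Forcing a table scan due to hinted $natural" and the COLLSCAN plans. The difference between the two ts values is the replication window that monitoring tools report. A sketch reusing the client from the previous example:

    # Oplog window, computed the same way as the conn90 queries here.
    first = client.local["oplog.rs"].find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
    last = client.local["oplog.rs"].find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])
    print("oplog window (s):", last["ts"].time - first["ts"].time)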
2019-09-04T06:33:20.052+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:33:20.052+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578799, 1)
2019-09-04T06:33:20.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16983
2019-09-04T06:33:20.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16983
2019-09-04T06:33:20.052+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:20.052+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:20.053+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:33:20.053+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:33:20.053+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578799, 1)
2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16985
2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16985
2019-09-04T06:33:20.053+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:20.053+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:33:20.053+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist.
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:33:20.053+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:33:20.053+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:33:20.053+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16988 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16988 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16989 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16989 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16990 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:20.053+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16990 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16991 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16991 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16992 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16992 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16993 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
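
[Annotation] The conn90 activity above is one monitoring round trip: a COLLSCAN count of config.chunks documents flagged jumbo: true, a shardConnPoolStats call, and a pair of single-document finds on local.oplog.rs sorted by $natural 1 and -1 that fetch the oldest and newest oplog entries (the hinted $natural sort is why the planner logs "Forcing a table scan"). A minimal pymongo sketch of the same probe follows; the host and port come from this deployment, but the client code itself is illustrative, not something recorded in the log.

from pymongo import MongoClient

# Assumed connection details; this deployment runs with authorization
# disabled, so no credentials are passed.
client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                     readPreference="secondaryPreferred")

# count { count: "chunks", query: { jumbo: true } } against the config database.
jumbo_chunks = client.config.command("count", "chunks", query={"jumbo": True})["n"]

# Oldest and newest oplog entries via hinted $natural sorts (COLLSCAN by design).
oplog = client.local["oplog.rs"]
first = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
last = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])
print(jumbo_chunks, first["ts"], last["ts"])
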
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16993
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16994
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16994
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16995
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16995
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16996
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16996
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16997
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16997
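
[Annotation] Each collection the listDatabases scan visits produces a fetched/returning pair of catalog (CCE) metadata documents inside a WiredTiger snapshot that is opened with begin_transaction and, as the log shows for every read-only unit of work, ends in rollback_transaction rather than a commit. The idxIdent map in each document ties an index name to its WiredTiger ident, the table under dbPath (directoryPerDB and directoryForIndexes are enabled here, hence paths like config/index/55--6194257481163143499). A hedged sketch of reading the same ident from a live node via collStats; the wiredTiger.uri field is what WiredTiger-backed builds report, but treat the exact field path as an assumption.

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
stats = client.config.command("collstats", "collections")
# On a WiredTiger node this prints something like
# "statistics:table:config/collection/54--6194257481163143499".
print(stats["wiredTiger"]["uri"])
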
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16998
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
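
[Annotation] The config.chunks metadata just dumped shows the three unique indexes the config server keeps on the chunk catalog, ns_1_min_1, ns_1_shard_1_min_1 and ns_1_lastmod_1, which address a chunk by namespace plus min key, by owning shard, and by chunk version (lastmod), alongside the implicit _id_ index. The same specs can be listed from any client; a small pymongo sketch (illustrative, not from the log):

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
for name, spec in client.config.chunks.index_information().items():
    # spec["key"] is the key pattern; "unique" is present only when set.
    print(name, spec["key"], spec.get("unique", False))
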
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16998
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 16999
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:33:20.054+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 16999
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17000
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17000
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17001
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17001
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17002
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
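
[Annotation] The scan has now reached the sharding registry itself: config.shards carries a unique host_1 index (one entry per shard host string) and config.tags holds zone ranges keyed by ns_1_min_1 and ns_1_tag_1. The shard registry is an ordinary collection and can be read directly; a small illustrative pymongo sketch (not part of the log):

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
# Each document maps a shard _id to its replica-set connection string.
for shard in client.config.shards.find():
    print(shard["_id"], shard["host"])
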
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17002
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17003
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17003
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17004
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17004
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17005
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17005
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17006
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17006
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17007
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17007
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17008
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17008
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17009
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:33:20.055+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17009
2019-09-04T06:33:20.055+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 2ms
2019-09-04T06:33:20.056+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17011
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17011
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17012
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17012
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17013
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17013
2019-09-04T06:33:20.056+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:20.056+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17015
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17015
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17016
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17016
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17017
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17017
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17018
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17018
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17019
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17019
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17020
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17020
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17021
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17021
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17022
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17022
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17023
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17023
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17024
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17024
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17025
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17025
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17026
2019-09-04T06:33:20.056+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17026
2019-09-04T06:33:20.056+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:20.057+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:33:20.057+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17028
2019-09-04T06:33:20.057+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17028
2019-09-04T06:33:20.057+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17029
2019-09-04T06:33:20.057+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17029
2019-09-04T06:33:20.057+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17030
2019-09-04T06:33:20.057+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17030
2019-09-04T06:33:20.057+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17031
2019-09-04T06:33:20.057+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17031
2019-09-04T06:33:20.057+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17032
2019-09-04T06:33:20.057+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17032
2019-09-04T06:33:20.057+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17033
2019-09-04T06:33:20.057+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17033
2019-09-04T06:33:20.057+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:20.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.143+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:20.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.197+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.197+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 4EB02BC74ECC7B9CC24EAEC002EB47FACC430EA1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:20.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:33:20.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 4EB02BC74ECC7B9CC24EAEC002EB47FACC430EA1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:20.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 4EB02BC74ECC7B9CC24EAEC002EB47FACC430EA1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:20.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275) }
2019-09-04T06:33:20.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 4EB02BC74ECC7B9CC24EAEC002EB47FACC430EA1), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:20.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999999 Now: 1000000000
2019-09-04T06:33:20.243+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:20.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.285+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:20.285+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:20.285+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578799, 1)
2019-09-04T06:33:20.285+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17048
2019-09-04T06:33:20.285+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:20.285+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:20.285+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17048
2019-09-04T06:33:20.286+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17051
2019-09-04T06:33:20.286+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17051
2019-09-04T06:33:20.286+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578799, 1), t: 1 }({ ts: Timestamp(1567578799, 1), t: 1 })
2019-09-04T06:33:20.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.343+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:20.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.443+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:20.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.543+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:20.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.643+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:20.646+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.743+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:20.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
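
[Annotation] Two polling rhythms are visible in this stretch: each monitoring connection repeats isMaster about every 500 ms (conn17 at .058 and .558, conn52 at .069 and .569, and so on), consistent with the protocol's minimum heartbeat interval, while the replica-set level replSetHeartbeat seen earlier runs on its own two-second schedule. Both are observable from a client; the sketch below is illustrative pymongo code, where heartbeatFrequencyMS is that driver's knob for the isMaster cadence (10 s default, 500 ms floor), not something configured in this log.

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                     heartbeatFrequencyMS=10000)
# The same handshake the conn* threads above keep answering.
print(client.admin.command("isMaster")["ismaster"])
# Per-member replication state, including the opTimes the heartbeats carry.
for member in client.admin.command("replSetGetStatus")["members"]:
    print(member["name"], member["stateStr"], member["optime"]["ts"])
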
2019-09-04T06:33:20.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:20.838+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:33:19.063+0000
2019-09-04T06:33:20.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:20.838+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:33:20.233+0000
2019-09-04T06:33:20.838+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:33:19.063+0000
2019-09-04T06:33:20.838+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:33:29.063+0000
2019-09-04T06:33:20.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:50.838+0000
2019-09-04T06:33:20.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1168) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:20.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1168 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:30.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:20.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:50.838+0000
2019-09-04T06:33:20.838+0000 D2 ASIO [Replication] Request 1168 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) }
2019-09-04T06:33:20.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:20.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:20.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1168) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) }
2019-09-04T06:33:20.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:33:20.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:22.838Z
2019-09-04T06:33:20.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:50.838+0000
2019-09-04T06:33:20.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:20.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1169) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:20.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1169 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:30.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:20.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:50.838+0000
2019-09-04T06:33:20.839+0000 D2 ASIO [Replication] Request 1169 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) }
2019-09-04T06:33:20.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:33:20.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:20.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1169) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) }
2019-09-04T06:33:20.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:33:20.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:33:29.383+0000
2019-09-04T06:33:20.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:33:31.112+0000
2019-09-04T06:33:20.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:33:20.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:22.839Z
2019-09-04T06:33:20.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:20.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:50.839+0000
2019-09-04T06:33:20.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:50.839+0000
2019-09-04T06:33:20.844+0000 D4 STORAGE [WTJournalFlusher] flushed journal
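
The exchange above is one full heartbeat round: this node (cmodb803, fromId: 1) sends replSetHeartbeat to both peers, cmodb804 replies with state: 2 (SECONDARY, syncing from cmodb802), cmodb802 replies with state: 1 (PRIMARY), and the primary's reply postpones the election timeout by rescheduling it roughly ten seconds out. A minimal sketch of reading the same member state with PyMongo via replSetGetStatus; the connection string is hypothetical:

    # Operator-facing view of the state carried in the heartbeat replies
    # (stateStr is PRIMARY/SECONDARY, optimeDate mirrors the member opTime).
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        print(member["name"], member["stateStr"], member.get("optimeDate"))
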
2019-09-04T06:33:20.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.944+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:20.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:20.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:20.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:21.044+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:21.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 4EB02BC74ECC7B9CC24EAEC002EB47FACC430EA1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:21.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:33:21.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 4EB02BC74ECC7B9CC24EAEC002EB47FACC430EA1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:21.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 4EB02BC74ECC7B9CC24EAEC002EB47FACC430EA1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:21.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275) }
2019-09-04T06:33:21.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 4EB02BC74ECC7B9CC24EAEC002EB47FACC430EA1), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.117+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35802 #385 (87 connections now open)
2019-09-04T06:33:21.117+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:21.117+0000 D2 COMMAND [conn385] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:21.117+0000 I NETWORK [conn385] received client metadata from 10.108.2.56:35802 conn385: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:21.117+0000 I COMMAND [conn385] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:21.121+0000 D2 COMMAND [conn385] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 7C0CF3FE10B568392F8A27C39D14989CCAE09485), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:21.121+0000 D1 REPL [conn385] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578799, 1), t: 1 }
2019-09-04T06:33:21.121+0000 D3 REPL [conn385] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.131+0000
2019-09-04T06:33:21.144+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:21.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
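
Connection #385 above shows the handshake every client performs: the first isMaster carries a client metadata document (driver name and version, OS), which the server echoes as "received client metadata". Internal 4.2 nodes identify themselves as NetworkInterfaceTL and add internalClient wire-version bounds; ordinary applications can tag their entries the same way the appName fields "robo3t" and "MongoDB Shell" appear further down. A minimal sketch with PyMongo; the URI and the application name "balancer-debug" are hypothetical:

    # appname is sent in the handshake metadata and shows up in the
    # server log as appName on each of this client's command entries.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         appname="balancer-debug")
    client.admin.command("ping")  # first operation triggers the handshake
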
2019-09-04T06:33:21.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:21.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:21.244+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:21.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.285+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:21.285+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:21.285+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578799, 1)
2019-09-04T06:33:21.285+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17088
2019-09-04T06:33:21.285+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:21.285+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:21.285+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17088
2019-09-04T06:33:21.286+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17091
2019-09-04T06:33:21.286+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17091
2019-09-04T06:33:21.286+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578799, 1), t: 1 }({ ts: Timestamp(1567578799, 1), t: 1 })
2019-09-04T06:33:21.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.344+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:21.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.444+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:21.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.545+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:21.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.618+0000 D2 COMMAND [conn126] run command admin.$cmd { ping: 1, $db: "admin" }
2019-09-04T06:33:21.618+0000 I COMMAND [conn126] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.629+0000 D2 COMMAND [conn182] run command config.$cmd { ping: 1.0, lsid: { id: UUID("02492cc9-cb3a-4cd4-9c2e-0d7430e82ce2") }, $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:33:21.629+0000 I COMMAND [conn182] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("02492cc9-cb3a-4cd4-9c2e-0d7430e82ce2") }, $clusterTime: { clusterTime: Timestamp(1567578739, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.635+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38790 #386 (88 connections now open)
2019-09-04T06:33:21.635+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:21.635+0000 D2 COMMAND [conn386] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:21.635+0000 I NETWORK [conn386] received client metadata from 10.108.2.44:38790 conn386: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:21.635+0000 I COMMAND [conn386] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:21.635+0000 D2 COMMAND [conn386] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578798, 1), signature: { hash: BinData(0, F12D585EE0967CC5135F5BADB38B8673484018E2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:21.635+0000 D1 REPL [conn386] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578799, 1), t: 1 }
2019-09-04T06:33:21.635+0000 D3 REPL [conn386] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.645+0000
2019-09-04T06:33:21.645+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:21.646+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.650+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.650+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.650+0000 D2 COMMAND [conn372] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578793, 1), signature: { hash: BinData(0, EFC3E3CF57AD6C85EB1B9AA6CC52309780658F29), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:21.650+0000 D1 REPL [conn372] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578799, 1), t: 1 }
2019-09-04T06:33:21.650+0000 D3 REPL [conn372] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.660+0000
2019-09-04T06:33:21.651+0000 D2 COMMAND [conn362] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 688717A47F9056E54E2A82934E3E688DB3858182), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:33:21.652+0000 D1 REPL [conn362] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578799, 1), t: 1 }
2019-09-04T06:33:21.652+0000 D3 REPL [conn362] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.661+0000
2019-09-04T06:33:21.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.662+0000 I COMMAND [conn337] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 3430969E3181A35FCE5BAEFADC4CD97195C5A07D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:21.662+0000 D1 - [conn337] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:33:21.662+0000 W - [conn337] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:21.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.663+0000 I COMMAND [conn360] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 70ADEB07C63950ADB1CBAACD1EFFE951853A061C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:21.663+0000 D1 - [conn360] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:33:21.663+0000 W - [conn360] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
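
These MaxTimeMSExpired failures all follow the same pattern: a find on config.settings or config.shards arrives with readConcern { level: "majority", afterOpTime: { ts: ..., t: 92 } }, but this config server's snapshots are in term 1 (the waitUntilOpTime lines above show current snapshot { ts: Timestamp(1567578799, 1), t: 1 }). An opTime from term 92 cannot appear while the set remains in term 1, which suggests each of these reads simply waits out its full 30-second maxTimeMS and fails; the 30030ms find logged further down is the completed form of the same failure. A minimal sketch that reproduces the failing read with PyMongo, using the values copied from the log; the URI is hypothetical, and afterOpTime is normally set by internal sharding clients rather than applications:

    # Majority read gated on an opTime from term 92; on a node whose
    # newest opTime is in term 1 the wait cannot finish, so the server
    # answers MaxTimeMSExpired (errCode 50) once maxTimeMS elapses.
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)
    try:
        client["config"].command({
            "find": "settings",
            "filter": {"_id": "balancer"},
            "limit": 1,
            "maxTimeMS": 30000,
            "readConcern": {
                "level": "majority",
                "afterOpTime": {"ts": Timestamp(1566459168, 1), "t": 92},
            },
        })
    except ExecutionTimeout as exc:
        print("timed out waiting for read concern:", exc)
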
2019-09-04T06:33:21.663+0000 I COMMAND [conn361] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578768, 1), signature: { hash: BinData(0, 5E1D4D6DA330158C7AAB078164AD146D6E619AFC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:21.663+0000 D1 - [conn361] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:33:21.663+0000 W - [conn361] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:21.673+0000 I COMMAND [conn363] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, B25952C3221E28D91A8CF134FCDA55710368078E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:21.673+0000 D1 - [conn363] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:33:21.673+0000 W - [conn363] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:21.680+0000 I - [conn337] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:21.680+0000 D1 COMMAND [conn337] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 3430969E3181A35FCE5BAEFADC4CD97195C5A07D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:21.680+0000 D1 - [conn337] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:21.680+0000 W - [conn337] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:21.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:21.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:21.698+0000 I - [conn360] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 
0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : 
"AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : 
"D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:21.698+0000 D1 COMMAND [conn360] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 70ADEB07C63950ADB1CBAACD1EFFE951853A061C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:21.698+0000 D1 - [conn360] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:21.698+0000 W - [conn360] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:21.718+0000 I - [conn337] 0x56174b707c81 
0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : 
"E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : 
"/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) 
[0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:33:21.718+0000 W COMMAND [conn337] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:33:21.718+0000 I COMMAND [conn337] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 3430969E3181A35FCE5BAEFADC4CD97195C5A07D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms
2019-09-04T06:33:21.718+0000 D2 NETWORK [conn337] Session from 10.108.2.48:42172 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:21.718+0000 I NETWORK [conn337] end connection 10.108.2.48:42172 (87 connections now open)
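
The stack dumps in this stretch of the log are not crashes: every frame list runs through mongo::DBException::traceIfNeeded, which prints a backtrace for each thrown user assertion when exception tracing (systemLog.traceAllExceptions) is enabled. The assertion is always MaxTimeMSExpired (errCode:50). Internal clients (they identify themselves below as driver "NetworkInterfaceTL", i.e. other cluster nodes) keep issuing find commands against config.settings and config.shards with readConcern level "majority", afterOpTime { ts: Timestamp(1566459168, 1), t: 92 } and maxTimeMS: 30000, and this config server cannot serve such a read within the 30-second limit. The waitUntilOpTime lines below show why: the node is waiting for an optime from term 92 while its current majority snapshot is still at term 1, so the wait can never finish and every such read times out.

A minimal sketch of the failing read in Python with pymongo, assuming an illustrative connection string (the afterOpTime, $replData and $configServerState fields seen above are injected by cluster components and cannot be set from a driver):

    from pymongo import MongoClient, ReadPreference
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    # Illustrative URI; point it at the config server under diagnosis.
    client = MongoClient("mongodb://localhost:27019")
    settings = client.config.get_collection(
        "settings",
        read_concern=ReadConcern("majority"),    # readConcern: { level: "majority" }
        read_preference=ReadPreference.NEAREST,  # $readPreference: { mode: "nearest" }
    )
    try:
        print(settings.find_one({"_id": "balancer"}, max_time_ms=30000))
    except ExecutionTimeout as exc:
        # Raised when the server answers MaxTimeMSExpired (errCode:50).
        print("read timed out:", exc)

2019-09-04T06:33:21.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.736+0000 I - [conn361] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----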
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:21.736+0000 D1 COMMAND [conn361] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578768, 1), signature: { hash: BinData(0, 5E1D4D6DA330158C7AAB078164AD146D6E619AFC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:21.736+0000 D1 - [conn361] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:21.736+0000 W - [conn361] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:21.742+0000 D2 COMMAND [conn377] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 
688717A47F9056E54E2A82934E3E688DB3858182), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:21.743+0000 D1 REPL [conn377] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578799, 1), t: 1 } 2019-09-04T06:33:21.743+0000 D3 REPL [conn377] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.752+0000 2019-09-04T06:33:21.745+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:21.756+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48454 #387 (88 connections now open) 2019-09-04T06:33:21.756+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:21.757+0000 D2 COMMAND [conn387] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:21.757+0000 I NETWORK [conn387] received client metadata from 10.108.2.59:48454 conn387: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:21.757+0000 I COMMAND [conn387] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:21.760+0000 I - [conn363] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:21.760+0000 D1 COMMAND [conn363] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, B25952C3221E28D91A8CF134FCDA55710368078E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:21.760+0000 D1 - [conn363] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:21.760+0000 W - [conn363] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:21.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:21.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:21.770+0000 I COMMAND [conn364] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, ABD195D4AC4ED4988535B74DC3F4B42B5EC26ED1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:33:21.770+0000 D1 - [conn364] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:21.770+0000 W - [conn364] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:21.775+0000 I - [conn360] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19Servic
eStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:33:21.775+0000 W COMMAND [conn360] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:33:21.775+0000 I COMMAND [conn360] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578762, 1), signature: { hash: BinData(0, 70ADEB07C63950ADB1CBAACD1EFFE951853A061C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30047ms
2019-09-04T06:33:21.775+0000 D2 NETWORK [conn360] Session from 10.108.2.72:45832 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:21.775+0000 I NETWORK [conn360] end connection 10.108.2.72:45832 (87 connections now open)
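
Two variants of the trace alternate in this log. The one just above enters LockerImpl::lock by way of CurOp::completeAndLogOperation and Lock::GlobalLock: after the find has already failed with MaxTimeMSExpired, the slow-operation logger tries to take the global lock to gather storage statistics, hits the same expired deadline (the uassert at src/mongo/db/concurrency/lock_state.cpp 884 noted above), and gives up, producing the paired "Unable to gather storage statistics ... lock acquire timeout" warnings. The other variant throws earlier, inside ServiceEntryPointMongod::Hooks::waitForReadConcern (src/mongo/db/service_entry_point_mongod.cpp 89), before the command body ever runs. The mangled frame names decode with c++filt; a small sketch, assuming binutils' c++filt is on PATH:

    import subprocess

    # Frames copied from the trace above.
    frames = [
        "_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE",
        "_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE",
        "_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb",
    ]
    out = subprocess.run(["c++filt"], input="\n".join(frames),
                         capture_output=True, text=True, check=True)
    print(out.stdout)
    # -> mongo::LockerImpl::lock(mongo::OperationContext*, mongo::ResourceId, ...)
    #    mongo::Lock::GlobalLock::_enqueue(mongo::LockMode, mongo::Date_t)
    #    mongo::CurOp::completeAndLogOperation(...)

2019-09-04T06:33:21.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.796+0000 I - [conn361] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----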
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:21.797+0000 W COMMAND [conn361] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:21.797+0000 I COMMAND [conn361] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578768, 1), signature: { hash: BinData(0, 5E1D4D6DA330158C7AAB078164AD146D6E619AFC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30085ms 2019-09-04T06:33:21.797+0000 D2 NETWORK [conn361] Session from 10.108.2.54:49280 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:21.797+0000 I NETWORK [conn361] end connection 10.108.2.54:49280 (86 connections now open) 2019-09-04T06:33:21.816+0000 I - [conn363] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6Stat
usE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, 
"buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:33:21.816+0000 W COMMAND [conn363] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:33:21.816+0000 I COMMAND [conn363] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, B25952C3221E28D91A8CF134FCDA55710368078E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30100ms
2019-09-04T06:33:21.817+0000 D2 NETWORK [conn363] Session from 10.108.2.47:56632 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:21.817+0000 I NETWORK [conn363] end connection 10.108.2.47:56632 (85 connections now open)
2019-09-04T06:33:21.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:21.825+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:21.835+0000 I - [conn364] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:21.835+0000 D1 COMMAND [conn364] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, ABD195D4AC4ED4988535B74DC3F4B42B5EC26ED1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:21.835+0000 D1 - [conn364] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:21.835+0000 W - [conn364] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:21.845+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:21.856+0000 I - [conn364] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 
0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : 
"7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : 
"/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:21.857+0000 W COMMAND [conn364] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:21.857+0000 I COMMAND [conn364] command config.$cmd command: find { 
find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, ABD195D4AC4ED4988535B74DC3F4B42B5EC26ED1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30078ms 2019-09-04T06:33:21.857+0000 D2 NETWORK [conn364] Session from 10.108.2.59:48436 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:21.857+0000 I NETWORK [conn364] end connection 10.108.2.59:48436 (84 connections now open) 2019-09-04T06:33:21.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:21.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:21.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:21.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:21.945+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:21.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:21.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:21.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:21.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:22.043+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50230 #388 (85 connections now open) 2019-09-04T06:33:22.043+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:22.043+0000 D2 COMMAND [conn388] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:22.043+0000 I NETWORK [conn388] received client metadata from 10.108.2.50:50230 conn388: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:22.044+0000 I COMMAND [conn388] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, 
maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:22.044+0000 D2 COMMAND [conn388] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, F1B7200DB4115FB50539E5C981CB65CEC6CB7F66), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:22.044+0000 D1 REPL [conn388] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578799, 1), t: 1 } 2019-09-04T06:33:22.044+0000 D3 REPL [conn388] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.054+0000 2019-09-04T06:33:22.045+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:22.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.145+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:22.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.149+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.149+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 
reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 4C72F7C9CBCD8EF893847330ECCB4E989FA7BF87), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:22.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:22.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 4C72F7C9CBCD8EF893847330ECCB4E989FA7BF87), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:22.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 4C72F7C9CBCD8EF893847330ECCB4E989FA7BF87), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:22.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275) } 2019-09-04T06:33:22.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 4C72F7C9CBCD8EF893847330ECCB4E989FA7BF87), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:22.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:33:22.246+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:22.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.285+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:22.285+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:22.285+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578799, 1) 2019-09-04T06:33:22.285+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17140 2019-09-04T06:33:22.285+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:22.285+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:22.285+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17140 2019-09-04T06:33:22.286+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17143 2019-09-04T06:33:22.286+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17143 2019-09-04T06:33:22.286+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578799, 1), t: 1 }({ ts: Timestamp(1567578799, 1), t: 1 }) 2019-09-04T06:33:22.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.346+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:22.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.446+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:22.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 
reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.546+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:22.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.584+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.584+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.585+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51898 #389 (86 connections now open) 2019-09-04T06:33:22.585+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:22.585+0000 D2 COMMAND [conn389] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:22.585+0000 I NETWORK [conn389] received client metadata from 10.108.2.74:51898 conn389: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:22.585+0000 I COMMAND [conn389] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:22.586+0000 D2 COMMAND [conn389] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 815C511F9B78AD09330F18D8946C64B00E8E0783), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:33:22.586+0000 D1 REPL [conn389] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578799, 1), t: 1 } 2019-09-04T06:33:22.586+0000 D3 REPL [conn389] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.596+0000 2019-09-04T06:33:22.597+0000 I COMMAND [conn343] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, B25952C3221E28D91A8CF134FCDA55710368078E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:33:22.597+0000 D1 - [conn343] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:22.597+0000 W - [conn343] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:22.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.615+0000 I - [conn343] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19S
erviceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:33:22.615+0000 D1 COMMAND [conn343] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, B25952C3221E28D91A8CF134FCDA55710368078E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:22.615+0000 D1 - [conn343] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:33:22.615+0000 W - [conn343] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:22.635+0000 I - [conn343] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:22.636+0000 W COMMAND [conn343] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:22.636+0000 I COMMAND [conn343] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, B25952C3221E28D91A8CF134FCDA55710368078E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms
2019-09-04T06:33:22.636+0000 D2 NETWORK [conn343] Session from 10.108.2.74:51852 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:22.636+0000 I NETWORK [conn343] end connection 10.108.2.74:51852 (85 connections now open)
2019-09-04T06:33:22.646+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:22.646+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:22.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:22.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:22.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:22.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:22.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:22.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:22.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:22.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:22.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:22.746+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:22.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:22.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:22.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:22.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:22.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:22.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:22.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:22.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1170) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:22.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1170 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:32.838+0000 cmd:{
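The two stack traces around conn343 tell one story: a mongos-issued find on config.settings (the balancer settings document) overran its 30000ms maxTimeMS, and the same MaxTimeMSExpired assertion then fired at lock_state.cpp:884 when CurOp::completeAndLogOperation tried to take the global lock merely to gather storage statistics for the slow-query log entry, hence the "Unable to gather storage statistics" warning. The mangled frame names decode offline; below is a minimal Python sketch, assuming the binutils c++filt tool is on PATH, with two sample symbols copied from the backtrace above.

    import subprocess

    # Two frames copied verbatim from the conn343 backtrace above.
    frames = [
        "_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE",
        "_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb",
    ]
    # c++filt demangles Itanium-ABI names read from stdin, one per line.
    out = subprocess.run(["c++filt"], input="\n".join(frames),
                         capture_output=True, text=True, check=True)
    print(out.stdout)
    # First line prints: mongo::LockerImpl::lock(mongo::OperationContext*,
    #   mongo::ResourceId, mongo::LockMode, mongo::Date_t)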
replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:22.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:50.839+0000 2019-09-04T06:33:22.838+0000 D2 ASIO [Replication] Request 1170 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } 2019-09-04T06:33:22.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:22.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:22.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1170) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } 2019-09-04T06:33:22.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:33:22.838+0000 D2 
REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:24.838Z 2019-09-04T06:33:22.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:50.839+0000 2019-09-04T06:33:22.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:22.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1171) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:22.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1171 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:32.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:22.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:50.839+0000 2019-09-04T06:33:22.839+0000 D2 ASIO [Replication] Request 1171 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } 2019-09-04T06:33:22.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:22.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:22.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1171) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new 
Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } 2019-09-04T06:33:22.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:22.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:33:31.112+0000 2019-09-04T06:33:22.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:33:32.928+0000 2019-09-04T06:33:22.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:22.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:24.839Z 2019-09-04T06:33:22.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:22.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:52.839+0000 2019-09-04T06:33:22.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:52.839+0000 2019-09-04T06:33:22.847+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:22.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.947+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:22.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:22.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:22.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:23.047+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:23.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: 
BinData(0, 4C72F7C9CBCD8EF893847330ECCB4E989FA7BF87), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:23.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:23.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 4C72F7C9CBCD8EF893847330ECCB4E989FA7BF87), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:23.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 4C72F7C9CBCD8EF893847330ECCB4E989FA7BF87), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:23.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275) } 2019-09-04T06:33:23.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 4C72F7C9CBCD8EF893847330ECCB4E989FA7BF87), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.084+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.084+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.147+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:23.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
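Between the failures the log is dominated by routine traffic: isMaster probes on every client connection roughly twice per second, each answered in 0ms with a 907-byte reply, plus replSetHeartbeat exchanges such as conn34 above, where this node reports itself state: 2 (SECONDARY) with cmodb804 as its sync source. Both are ordinary admin commands; a hedged PyMongo sketch for issuing them by hand follows (host and port are taken from this log, and directConnection assumes PyMongo 3.11 or newer).

    from pymongo import MongoClient

    # Talk to this specific config server only; assumes network access to it.
    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)

    # The same probe the connected clients send every ~500ms per connection.
    print(client.admin.command("isMaster")["ismaster"])

    # Summarizes what the replSetHeartbeat traffic negotiates, member by member.
    for m in client.admin.command("replSetGetStatus")["members"]:
        print(m["name"], m["stateStr"])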
2019-09-04T06:33:23.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:23.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:23.247+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:23.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.285+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:23.285+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:23.285+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578799, 1) 2019-09-04T06:33:23.285+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17184 2019-09-04T06:33:23.285+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:23.285+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:23.285+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17184 2019-09-04T06:33:23.286+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17187 2019-09-04T06:33:23.286+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17187 2019-09-04T06:33:23.286+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578799, 1), t: 1 }({ ts: Timestamp(1567578799, 1), t: 1 }) 2019-09-04T06:33:23.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.347+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:23.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.359+0000 I COMMAND 
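The ReplBatcher and rsSync-0 entries above show the replication machinery idling: once per second it opens a short WiredTiger snapshot on local.oplog.rs, re-reads the collection metadata (a capped collection of 1073741824 bytes, matching the oplogSizeMB: 1024 startup option, with no indexes), and reports minvalid unchanged at Timestamp(1567578799, 1). The same oplog is readable from any driver; a minimal sketch, reusing the hypothetical connection from the previous example.

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
    oplog = client.local["oplog.rs"]

    # Capped-collection ceiling; should report 1073741824 per the log above.
    print(client.local.command("collStats", "oplog.rs")["maxSize"])

    # Newest oplog entry; its ts/t pair is the opTime the heartbeats carry.
    newest = oplog.find_one(sort=[("$natural", -1)])
    print(newest["ts"], newest.get("t"))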
[conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.447+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:23.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.548+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:23.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.647+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.648+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:23.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.748+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:23.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.848+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:23.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.948+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:23.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:23.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:23.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:24.048+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:24.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.147+0000 I COMMAND [conn19] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:24.149+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:24.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:24.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:24.156+0000 I COMMAND [conn355] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, B25952C3221E28D91A8CF134FCDA55710368078E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:33:24.156+0000 D1 - [conn355] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:33:24.156+0000 W - [conn355] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:24.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:24.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:24.173+0000 I - [conn355] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:24.173+0000 D1 COMMAND [conn355] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, B25952C3221E28D91A8CF134FCDA55710368078E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:24.173+0000 D1 - [conn355] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:24.173+0000 W - [conn355] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:24.193+0000 I - [conn355] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:24.193+0000 W COMMAND [conn355] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:24.193+0000 I COMMAND [conn355] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578765, 1), signature: { hash: BinData(0, B25952C3221E28D91A8CF134FCDA55710368078E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:33:24.193+0000 D2 NETWORK [conn355] Session from 10.108.2.46:41076 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:24.194+0000 I NETWORK [conn355] end connection 10.108.2.46:41076 (84 connections now open) 2019-09-04T06:33:24.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 4C72F7C9CBCD8EF893847330ECCB4E989FA7BF87), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:24.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:24.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 4C72F7C9CBCD8EF893847330ECCB4E989FA7BF87), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:24.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 4C72F7C9CBCD8EF893847330ECCB4E989FA7BF87), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:24.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275) } 2019-09-04T06:33:24.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 4C72F7C9CBCD8EF893847330ECCB4E989FA7BF87), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. 
Num: 0 2019-09-04T06:33:24.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:24.249+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:24.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.285+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:24.285+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:24.285+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578799, 1) 2019-09-04T06:33:24.286+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17225 2019-09-04T06:33:24.286+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:24.286+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:24.286+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17225 2019-09-04T06:33:24.286+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17228 2019-09-04T06:33:24.286+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17228 2019-09-04T06:33:24.286+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578799, 1), t: 1 }({ ts: Timestamp(1567578799, 1), t: 1 }) 2019-09-04T06:33:24.290+0000 D2 ASIO [RS] Request 1154 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpApplied: { ts: Timestamp(1567578799, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } 2019-09-04T06:33:24.290+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 
2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpApplied: { ts: Timestamp(1567578799, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:24.290+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:24.290+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:24.290+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:32.928+0000 2019-09-04T06:33:24.291+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:35.004+0000 2019-09-04T06:33:24.291+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:24.291+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:52.839+0000 2019-09-04T06:33:24.291+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1172 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:34.291+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578799, 1), t: 1 } } 2019-09-04T06:33:24.291+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:49.286+0000 2019-09-04T06:33:24.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.333+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:24.333+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:24.333+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1173 -- target:[cmodb804.togewa.com:27019] db:admin 
expDate:2019-09-04T06:33:54.333+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:24.333+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:49.286+0000 2019-09-04T06:33:24.333+0000 D2 ASIO [RS] Request 1173 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } 2019-09-04T06:33:24.333+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:24.333+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:24.333+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:49.286+0000 2019-09-04T06:33:24.349+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:24.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.449+0000 D4 STORAGE [WTJournalFlusher] flushed 
journal 2019-09-04T06:33:24.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.549+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:24.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.647+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.649+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:24.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.749+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:24.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: 
"admin" } 2019-09-04T06:33:24.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.825+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:24.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1174) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:24.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1174 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:34.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:24.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:52.839+0000 2019-09-04T06:33:24.838+0000 D2 ASIO [Replication] Request 1174 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } 2019-09-04T06:33:24.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:24.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:24.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1174) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: 
"cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } 2019-09-04T06:33:24.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:33:24.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:26.838Z 2019-09-04T06:33:24.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:52.839+0000 2019-09-04T06:33:24.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:24.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1175) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:24.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1175 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:34.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:24.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:52.839+0000 2019-09-04T06:33:24.839+0000 D2 ASIO [Replication] Request 1175 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } 2019-09-04T06:33:24.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: 
Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:24.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:24.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1175) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578799, 1) } 2019-09-04T06:33:24.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:24.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:33:35.004+0000 2019-09-04T06:33:24.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:33:35.279+0000 2019-09-04T06:33:24.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:24.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:26.839Z 2019-09-04T06:33:24.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:24.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:54.839+0000 2019-09-04T06:33:24.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:54.839+0000 2019-09-04T06:33:24.850+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:24.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:24.950+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:24.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:33:24.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:24.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:25.049+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36768 #390 (85 connections now open) 2019-09-04T06:33:25.049+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:25.049+0000 D2 COMMAND [conn390] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:25.049+0000 I NETWORK [conn390] received client metadata from 10.108.2.55:36768 conn390: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:25.049+0000 I COMMAND [conn390] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:25.050+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:25.058+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.058+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 4C72F7C9CBCD8EF893847330ECCB4E989FA7BF87), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:25.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:25.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 4C72F7C9CBCD8EF893847330ECCB4E989FA7BF87), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:25.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 4C72F7C9CBCD8EF893847330ECCB4E989FA7BF87), keyId: 6727891476899954718 } }, $db: 
"admin" } 2019-09-04T06:33:25.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), opTime: { ts: Timestamp(1567578799, 1), t: 1 }, wallTime: new Date(1567578799275) } 2019-09-04T06:33:25.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578800, 1), signature: { hash: BinData(0, 4C72F7C9CBCD8EF893847330ECCB4E989FA7BF87), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.066+0000 I COMMAND [conn365] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578771, 1), signature: { hash: BinData(0, BDE1D6A72B3EA3CE1E4DDB5EC4BEB336B05E9F0D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:25.066+0000 D1 - [conn365] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:25.066+0000 W - [conn365] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:25.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.083+0000 I - [conn365] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:25.083+0000 D1 COMMAND [conn365] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578771, 1), signature: { hash: BinData(0, BDE1D6A72B3EA3CE1E4DDB5EC4BEB336B05E9F0D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:25.083+0000 D1 - [conn365] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:25.083+0000 W - [conn365] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:25.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.103+0000 I - [conn365] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 
0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, 
"buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : 
"B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 
2019-09-04T06:33:25.103+0000 W COMMAND [conn365] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:33:25.103+0000 I COMMAND [conn365] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578771, 1), signature: { hash: BinData(0, BDE1D6A72B3EA3CE1E4DDB5EC4BEB336B05E9F0D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30033ms 2019-09-04T06:33:25.103+0000 D2 NETWORK [conn365] Session from 10.108.2.55:36750 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:25.103+0000 I NETWORK [conn365] end connection 10.108.2.55:36750 (84 connections now open) 2019-09-04T06:33:25.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.147+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.147+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.150+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:25.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.162+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.162+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:25.235+0000 D4 - [FlowControlRefresher] Refreshing tickets.
Before: 1000000000 Now: 1000000000 2019-09-04T06:33:25.250+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:25.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.286+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:25.286+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:25.286+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578799, 1) 2019-09-04T06:33:25.286+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17267 2019-09-04T06:33:25.286+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:25.286+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:25.286+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17267 2019-09-04T06:33:25.287+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17270 2019-09-04T06:33:25.287+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17270 2019-09-04T06:33:25.287+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578799, 1), t: 1 }({ ts: Timestamp(1567578799, 1), t: 1 }) 2019-09-04T06:33:25.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.350+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:25.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.389+0000 D2 ASIO [RS] Request 1172 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578805, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578805382), o: { $v: 1, $set: { ping: new Date(1567578805379), up: 2705 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: 
Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpApplied: { ts: Timestamp(1567578805, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 1) } 2019-09-04T06:33:25.389+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578805, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578805382), o: { $v: 1, $set: { ping: new Date(1567578805379), up: 2705 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpApplied: { ts: Timestamp(1567578805, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:25.389+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:25.389+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578805, 1) and ending at ts: Timestamp(1567578805, 1) 2019-09-04T06:33:25.389+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:35.279+0000 2019-09-04T06:33:25.389+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:35.581+0000 2019-09-04T06:33:25.389+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:25.389+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:54.839+0000 2019-09-04T06:33:25.389+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578805, 1), t: 1 } 2019-09-04T06:33:25.389+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:25.389+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:25.389+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578799, 1) 2019-09-04T06:33:25.389+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17276 2019-09-04T06:33:25.389+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: 
UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:25.389+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:25.389+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17276 2019-09-04T06:33:25.389+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:25.389+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:33:25.389+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:25.389+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578799, 1) 2019-09-04T06:33:25.389+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17279 2019-09-04T06:33:25.389+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578805, 1) } 2019-09-04T06:33:25.389+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:25.389+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:25.389+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17279 2019-09-04T06:33:25.389+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17271 2019-09-04T06:33:25.389+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17271 2019-09-04T06:33:25.389+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17282 2019-09-04T06:33:25.389+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17282 2019-09-04T06:33:25.389+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:25.389+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 17284 2019-09-04T06:33:25.389+0000 D4 STORAGE [repl-writer-worker-0] inserting record with timestamp Timestamp(1567578805, 1) 2019-09-04T06:33:25.389+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578805, 1) 2019-09-04T06:33:25.389+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 17284 2019-09-04T06:33:25.389+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:25.389+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:33:25.389+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17283 2019-09-04T06:33:25.390+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17283 2019-09-04T06:33:25.390+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17286 2019-09-04T06:33:25.390+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17286 2019-09-04T06:33:25.390+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578805, 1), t: 1 }({ ts: Timestamp(1567578805, 1), t: 1 }) 
2019-09-04T06:33:25.390+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578805, 1)
2019-09-04T06:33:25.390+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17287
2019-09-04T06:33:25.390+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578805, 1) } } ] } sort: {} projection: {}
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578805, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578805, 1) || First: notFirst: full path: ts
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578805, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578805, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578805, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578805, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:25.390+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17287
2019-09-04T06:33:25.390+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:25.390+0000 D3 STORAGE [repl-writer-worker-15] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:25.390+0000 D3 REPL [repl-writer-worker-15] applying op: { ts: Timestamp(1567578805, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578805382), o: { $v: 1, $set: { ping: new Date(1567578805379), up: 2705 } } }, oplog application mode: Secondary
2019-09-04T06:33:25.390+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578805, 1)
2019-09-04T06:33:25.390+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 17289
2019-09-04T06:33:25.390+0000 D2 QUERY [repl-writer-worker-15] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:33:25.390+0000 D4 WRITE [repl-writer-worker-15] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:33:25.390+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 17289
2019-09-04T06:33:25.390+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:25.390+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578805, 1), t: 1 }({ ts: Timestamp(1567578805, 1), t: 1 })
2019-09-04T06:33:25.390+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578805, 1)
2019-09-04T06:33:25.390+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17288
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
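The sub-query split above is worth decoding: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578805, 1) } } ] } is how "optime strictly below (t: 1, ts: 1567578805/1)" is spelled as a query, with the term compared first and the timestamp breaking ties. Since the only index on local.replset.minvalid is the mandatory { _id: 1 }, neither branch is indexable and every child plan falls back to a COLLSCAN, which is harmless on this single-document collection. A tiny illustration of the predicate, with plain tuples standing in for BSON Timestamps:

    def below_optime(doc, term, ts):
        # The $or the subplanner splits above: optime(doc) < (term, ts).
        return doc["t"] < term or (doc["t"] == term and doc["ts"] < ts)

    minvalid = {"t": 1, "ts": (1567578805, 1)}          # tuple stands in for Timestamp(1567578805, 1)
    print(below_optime(minvalid, 1, (1567578805, 1)))   # False: equal optimes do not qualify
    print(below_optime(minvalid, 1, (1567578805, 2)))   # True: same term, strictly earlier ts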
2019-09-04T06:33:25.390+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:25.390+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:25.390+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17288 2019-09-04T06:33:25.390+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578805, 1) 2019-09-04T06:33:25.390+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17292 2019-09-04T06:33:25.390+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17292 2019-09-04T06:33:25.390+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578805, 1), t: 1 }({ ts: Timestamp(1567578805, 1), t: 1 }) 2019-09-04T06:33:25.391+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:25.391+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578805, 1), t: 1 }, appliedWallTime: new Date(1567578805382), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:25.391+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1176 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:55.391+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578805, 1), t: 1 }, appliedWallTime: new Date(1567578805382), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:25.391+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:55.390+0000 2019-09-04T06:33:25.391+0000 D3 REPL [replication-0] oplog fetcher setting last 
fetched optime ahead after batch: { ts: Timestamp(1567578805, 1), t: 1 } 2019-09-04T06:33:25.391+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1177 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:35.391+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578799, 1), t: 1 } } 2019-09-04T06:33:25.391+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:55.390+0000 2019-09-04T06:33:25.391+0000 D2 ASIO [RS] Request 1176 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 1) } 2019-09-04T06:33:25.391+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:25.391+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:25.391+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:55.390+0000 2019-09-04T06:33:25.409+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:33:25.409+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:25.409+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 1), t: 1 }, durableWallTime: new Date(1567578805382), appliedOpTime: { ts: Timestamp(1567578805, 1), t: 1 }, appliedWallTime: new Date(1567578805382), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:25.410+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1178 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:55.410+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 1), t: 1 }, durableWallTime: new Date(1567578805382), appliedOpTime: { ts: Timestamp(1567578805, 1), t: 1 }, appliedWallTime: new Date(1567578805382), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:25.410+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:55.390+0000 2019-09-04T06:33:25.410+0000 D2 ASIO [RS] Request 1178 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 1) } 2019-09-04T06:33:25.410+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578799, 1), t: 1 }, lastCommittedWall: new Date(1567578799275), lastOpVisible: { ts: Timestamp(1567578799, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578799, 1), $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:25.410+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:25.410+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:55.390+0000 2019-09-04T06:33:25.410+0000 D2 ASIO [RS] Request 1177 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 1), t: 1 }, lastCommittedWall: new Date(1567578805382), lastOpVisible: { ts: Timestamp(1567578805, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, 
syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578805, 1), t: 1 }, lastCommittedWall: new Date(1567578805382), lastOpApplied: { ts: Timestamp(1567578805, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 1), $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 1) } 2019-09-04T06:33:25.410+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 1), t: 1 }, lastCommittedWall: new Date(1567578805382), lastOpVisible: { ts: Timestamp(1567578805, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578805, 1), t: 1 }, lastCommittedWall: new Date(1567578805382), lastOpApplied: { ts: Timestamp(1567578805, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 1), $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:25.410+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:25.410+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:25.410+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.410+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578800, 1) 2019-09-04T06:33:25.411+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:35.581+0000 2019-09-04T06:33:25.411+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:36.378+0000 2019-09-04T06:33:25.411+0000 D2 COMMAND [conn21] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578805, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5ab502d1a496712d727f'), operName: "", parentOperId: "5d6f5ab502d1a496712d727d" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 5F2F82F187610862000CECA1B758833740D8F5F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578805, 1), t: 1 } }, $db: "config" } 2019-09-04T06:33:25.411+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1179 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:35.411+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578805, 1), t: 1 } } 
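The getMore just scheduled (cursor 2779728788818727477 on local.oplog.rs, maxTimeMS 5000) is one turn of the oplog fetcher's tailing loop: an awaitable cursor that blocks on the sync source until new entries arrive or the await window lapses. The same pattern is available from a driver; a minimal pymongo sketch, where the host is the sync source seen in these entries and oplog_replay is a 4.2-era hint that newer servers simply ignore:

    from pymongo import CursorType, MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("cmodb804.togewa.com", 27019)  # direct connection to the sync source
    last_seen = Timestamp(1567578805, 1)                # resume from the last fetched optime

    # Tailable + await: the server holds the getMore open (cf. maxTimeMS: 5000 above)
    # instead of returning an empty batch immediately.
    cursor = client.local["oplog.rs"].find(
        {"ts": {"$gt": last_seen}},
        cursor_type=CursorType.TAILABLE_AWAIT,
        oplog_replay=True,
        max_await_time_ms=5000,
    )
    for entry in cursor:
        print(entry["ts"], entry["op"], entry["ns"])
        last_seen = entry["ts"]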
2019-09-04T06:33:25.411+0000 D1 TRACKING [conn21] Cmd: find, TrackingId: 5d6f5ab502d1a496712d727d|5d6f5ab502d1a496712d727f 2019-09-04T06:33:25.411+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:55.390+0000 2019-09-04T06:33:25.411+0000 D1 COMMAND [conn21] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578805, 1), t: 1 } } } 2019-09-04T06:33:25.411+0000 D3 STORAGE [conn21] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:33:25.411+0000 D3 REPL [conn356] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D1 COMMAND [conn21] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578805, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5ab502d1a496712d727f'), operName: "", parentOperId: "5d6f5ab502d1a496712d727d" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 5F2F82F187610862000CECA1B758833740D8F5F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578805, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578805, 1) 2019-09-04T06:33:25.411+0000 D2 QUERY [conn21] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:33:25.411+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:25.411+0000 D3 REPL [conn356] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.043+0000 2019-09-04T06:33:25.411+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:54.839+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn379] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn379] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:43.509+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn370] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn370] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:29.874+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn369] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn369] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.039+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn359] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn359] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.882+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn366] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn366] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.913+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn367] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn367] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:33:41.934+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn383] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn383] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.025+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn384] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn384] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.054+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn371] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn371] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn368] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn368] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.038+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn385] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn385] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.131+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn372] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn372] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.660+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn388] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn388] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.054+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn374] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn374] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.876+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn375] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn375] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.877+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn381] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn381] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn378] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn378] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.046+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn376] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn376] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.897+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn373] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn373] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:33:41.934+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn386] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn386] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.645+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn362] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn362] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.661+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn377] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn377] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.752+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn389] Got notified of new snapshot: { ts: Timestamp(1567578805, 1), t: 1 }, 2019-09-04T06:33:25.382+0000 2019-09-04T06:33:25.411+0000 D3 REPL [conn389] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.596+0000 2019-09-04T06:33:25.411+0000 I COMMAND [conn21] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578805, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5ab502d1a496712d727f'), operName: "", parentOperId: "5d6f5ab502d1a496712d727d" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 5F2F82F187610862000CECA1B758833740D8F5F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578805, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:33:25.450+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:25.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.489+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578805, 1) 2019-09-04T06:33:25.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.550+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:25.558+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.558+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:33:25.558+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.558+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:33:25.558+0000 D2 COMMAND [conn90] run command admin.$cmd { 
ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:33:25.558+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:33:25.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.647+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.647+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.651+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:25.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.662+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.662+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.739+0000 D2 ASIO [RS] Request 1179 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578805, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578805723), o: { $v: 1, $set: { ping: new Date(1567578805723) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 1), t: 1 }, lastCommittedWall: new Date(1567578805382), lastOpVisible: { ts: Timestamp(1567578805, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578805, 1), t: 1 }, lastCommittedWall: new Date(1567578805382), lastOpApplied: { ts: Timestamp(1567578805, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 1), $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) } 2019-09-04T06:33:25.739+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578805, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578805723), o: { $v: 1, $set: { ping: new Date(1567578805723) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 1), t: 1 }, lastCommittedWall: new Date(1567578805382), lastOpVisible: { ts: Timestamp(1567578805, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578805, 1), t: 1 }, lastCommittedWall: new Date(1567578805382), lastOpApplied: { ts: Timestamp(1567578805, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 1), $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:25.739+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:25.739+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578805, 2) and ending at ts: Timestamp(1567578805, 2) 2019-09-04T06:33:25.739+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:36.378+0000 2019-09-04T06:33:25.739+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:37.139+0000 2019-09-04T06:33:25.739+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:25.739+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578805, 2), t: 1 } 2019-09-04T06:33:25.739+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:25.739+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:25.739+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578805, 1) 2019-09-04T06:33:25.739+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17310 2019-09-04T06:33:25.739+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:25.739+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:25.739+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17310 2019-09-04T06:33:25.739+0000 D3 STORAGE [ReplBatcher] setting 
timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:25.739+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:33:25.739+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:25.739+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578805, 1) 2019-09-04T06:33:25.739+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578805, 2) } 2019-09-04T06:33:25.739+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17313 2019-09-04T06:33:25.739+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17293 2019-09-04T06:33:25.739+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:25.739+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:25.739+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17313 2019-09-04T06:33:25.739+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:54.839+0000 2019-09-04T06:33:25.740+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17293 2019-09-04T06:33:25.740+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17316 2019-09-04T06:33:25.740+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17316 2019-09-04T06:33:25.740+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:25.740+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 17318 2019-09-04T06:33:25.740+0000 D4 STORAGE [repl-writer-worker-1] inserting record with timestamp Timestamp(1567578805, 2) 2019-09-04T06:33:25.740+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578805, 2) 2019-09-04T06:33:25.740+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 17318 2019-09-04T06:33:25.740+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:25.740+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:33:25.740+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17317 2019-09-04T06:33:25.740+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17317 2019-09-04T06:33:25.740+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17320 2019-09-04T06:33:25.740+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17320 2019-09-04T06:33:25.740+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578805, 2), t: 1 }({ ts: Timestamp(1567578805, 2), t: 1 }) 2019-09-04T06:33:25.740+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578805, 2) 2019-09-04T06:33:25.740+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17321 2019-09-04T06:33:25.740+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578805, 2) } } ] } sort: {} projection: {} 
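Both documents fetched in this stretch are op: "u" entries (the config.mongos ping earlier, the config.lockpings ping here), and the writer pool applies each one by _id, the "Using idhack" fast path. Application must also be replay-safe, since after a crash the node may re-apply everything from the truncate-after point forward. A simplified, dict-backed sketch of that property for a $set update (real oplog application handles far more modifiers and cases):

    def apply_update_op(store, op):
        # Apply an oplog 'u' entry to a dict keyed by _id (the idhack path).
        # Only the $set modifier is modeled here.
        assert op["op"] == "u"
        doc = store.setdefault(op["o2"]["_id"], {"_id": op["o2"]["_id"]})
        for field, value in op["o"].get("$set", {}).items():
            doc[field] = value
        return doc

    lockpings = {}
    entry = {
        "op": "u",
        "ns": "config.lockpings",
        "o2": {"_id": "cmodb807.togewa.com:27018:1566460180:7657529699693886924"},
        "o": {"$v": 1, "$set": {"ping": "2019-09-04T06:33:25.723Z"}},
    }
    apply_update_op(lockpings, entry)
    apply_update_op(lockpings, entry)   # re-applying yields the same document: replay-safe
    print(lockpings)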
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578805, 2)
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578805, 2) || First: notFirst: full path: ts
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578805, 2)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578805, 2)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578805, 2) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:25.740+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578805, 2)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:25.740+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17321
2019-09-04T06:33:25.740+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:25.741+0000 D3 STORAGE [repl-writer-worker-5] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:25.741+0000 D3 REPL [repl-writer-worker-5] applying op: { ts: Timestamp(1567578805, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578805723), o: { $v: 1, $set: { ping: new Date(1567578805723) } } }, oplog application mode: Secondary
2019-09-04T06:33:25.741+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578805, 2)
2019-09-04T06:33:25.741+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 17323
2019-09-04T06:33:25.741+0000 D2 QUERY [repl-writer-worker-5] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }
2019-09-04T06:33:25.741+0000 D4 WRITE [repl-writer-worker-5] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:33:25.741+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 17323
2019-09-04T06:33:25.741+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:25.741+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578805, 2), t: 1 }({ ts: Timestamp(1567578805, 2), t: 1 })
2019-09-04T06:33:25.741+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578805, 2)
2019-09-04T06:33:25.741+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17322
2019-09-04T06:33:25.741+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:25.741+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:25.741+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:33:25.741+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:25.741+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:25.741+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:25.741+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17322 2019-09-04T06:33:25.741+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578805, 2) 2019-09-04T06:33:25.741+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17327 2019-09-04T06:33:25.741+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17327 2019-09-04T06:33:25.741+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578805, 2), t: 1 }({ ts: Timestamp(1567578805, 2), t: 1 }) 2019-09-04T06:33:25.741+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:25.741+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 1), t: 1 }, durableWallTime: new Date(1567578805382), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 1), t: 1 }, lastCommittedWall: new Date(1567578805382), lastOpVisible: { ts: Timestamp(1567578805, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:25.741+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1180 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:55.741+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 1), t: 1 }, durableWallTime: new Date(1567578805382), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 1), t: 1 }, lastCommittedWall: new Date(1567578805382), lastOpVisible: { ts: Timestamp(1567578805, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:25.741+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:55.741+0000 2019-09-04T06:33:25.741+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578805, 2), t: 1 } 2019-09-04T06:33:25.741+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1181 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:35.741+0000 cmd:{ getMore: 2779728788818727477, 
collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578805, 1), t: 1 } } 2019-09-04T06:33:25.741+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:55.741+0000 2019-09-04T06:33:25.741+0000 D2 ASIO [RS] Request 1180 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 1), t: 1 }, lastCommittedWall: new Date(1567578805382), lastOpVisible: { ts: Timestamp(1567578805, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 1), $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) } 2019-09-04T06:33:25.742+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 1), t: 1 }, lastCommittedWall: new Date(1567578805382), lastOpVisible: { ts: Timestamp(1567578805, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 1), $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:25.742+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:25.742+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:55.741+0000 2019-09-04T06:33:25.752+0000 D2 ASIO [RS] Request 1181 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpApplied: { ts: Timestamp(1567578805, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) } 2019-09-04T06:33:25.752+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new 
Date(1567578805723), lastOpApplied: { ts: Timestamp(1567578805, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:25.752+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:25.752+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:25.752+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.752+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.752+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578800, 2) 2019-09-04T06:33:25.752+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:37.139+0000 2019-09-04T06:33:25.752+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:36.687+0000 2019-09-04T06:33:25.752+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1182 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:35.752+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578805, 2), t: 1 } } 2019-09-04T06:33:25.752+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:55.741+0000 2019-09-04T06:33:25.752+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:25.752+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:54.839+0000 2019-09-04T06:33:25.752+0000 D3 REPL [conn376] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.752+0000 D3 REPL [conn376] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.897+0000 2019-09-04T06:33:25.752+0000 D3 REPL [conn386] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.752+0000 D3 REPL [conn386] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.645+0000 2019-09-04T06:33:25.752+0000 D3 REPL [conn381] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.752+0000 D3 REPL [conn381] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000 2019-09-04T06:33:25.752+0000 D3 REPL [conn377] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.752+0000 D3 REPL [conn377] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.752+0000 2019-09-04T06:33:25.752+0000 D3 REPL [conn367] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.752+0000 D3 REPL [conn367] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000 2019-09-04T06:33:25.752+0000 D3 REPL [conn368] Got notified of new snapshot: { ts: 
Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.752+0000 D3 REPL [conn368] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.038+0000 2019-09-04T06:33:25.752+0000 D3 REPL [conn375] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.752+0000 D3 REPL [conn375] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.877+0000 2019-09-04T06:33:25.752+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:33:25.752+0000 D3 REPL [conn383] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.753+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:25.753+0000 D3 REPL [conn383] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.025+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn356] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn356] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.043+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn366] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn366] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.913+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn370] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn370] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:29.874+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn372] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn372] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.660+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn359] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn359] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.882+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn374] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn374] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.876+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn379] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn379] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:43.509+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn373] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn373] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn389] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn389] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.596+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn362] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.753+0000 D3 
REPL [conn362] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.661+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn385] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn385] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.131+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn378] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn378] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.046+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn371] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn371] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000 2019-09-04T06:33:25.753+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:25.753+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:25.753+0000 D3 REPL [conn388] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.753+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1183 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:55.753+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, durableWallTime: new Date(1567578799275), appliedOpTime: { ts: Timestamp(1567578799, 1), t: 1 }, appliedWallTime: new Date(1567578799275), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:25.753+0000 D3 REPL [conn388] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:33:52.054+0000 2019-09-04T06:33:25.753+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:55.741+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn384] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn384] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.054+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn369] Got notified of new snapshot: { ts: Timestamp(1567578805, 2), t: 1 }, 2019-09-04T06:33:25.723+0000 2019-09-04T06:33:25.753+0000 D3 REPL [conn369] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.039+0000 2019-09-04T06:33:25.753+0000 D2 ASIO [RS] Request 1183 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) } 2019-09-04T06:33:25.753+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:25.753+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:25.753+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:33:55.741+0000 2019-09-04T06:33:25.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.840+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578805, 2) 2019-09-04T06:33:25.853+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:25.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.859+0000 I 
COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.953+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:25.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:25.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:25.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:26.053+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:26.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.154+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:26.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 5F2F82F187610862000CECA1B758833740D8F5F8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:26.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:26.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 
1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 5F2F82F187610862000CECA1B758833740D8F5F8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:26.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 5F2F82F187610862000CECA1B758833740D8F5F8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:26.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), opTime: { ts: Timestamp(1567578805, 2), t: 1 }, wallTime: new Date(1567578805723) } 2019-09-04T06:33:26.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 5F2F82F187610862000CECA1B758833740D8F5F8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:26.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:26.254+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:26.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.296+0000 I NETWORK [listener] connection accepted from 10.108.2.60:44960 #391 (85 connections now open) 2019-09-04T06:33:26.296+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:26.296+0000 D2 COMMAND [conn391] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:26.296+0000 I NETWORK [conn391] received client metadata from 10.108.2.60:44960 conn391: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:26.296+0000 I COMMAND [conn391] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 
3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:26.299+0000 D2 COMMAND [conn391] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578803, 1), signature: { hash: BinData(0, 4C13B88C6EFA3D421BD308E53A9A6ACE16902623), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:26.299+0000 D1 REPL [conn391] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578805, 2), t: 1 } 2019-09-04T06:33:26.299+0000 D3 REPL [conn391] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:56.309+0000 2019-09-04T06:33:26.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.354+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:26.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.454+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:26.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.553+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.553+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.554+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:26.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.601+0000 D2 COMMAND [conn49] run command config.$cmd { find: 
"settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578799, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578803, 2), signature: { hash: BinData(0, B6E7D85218B1514730EA5A6250AAD0B29854F2CC), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578799, 1), t: 1 } }, $db: "config" } 2019-09-04T06:33:26.601+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578799, 1), t: 1 } } } 2019-09-04T06:33:26.601+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:33:26.601+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578799, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578803, 2), signature: { hash: BinData(0, B6E7D85218B1514730EA5A6250AAD0B29854F2CC), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578799, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578805, 2) 2019-09-04T06:33:26.601+0000 D2 QUERY [conn49] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:33:26.601+0000 I COMMAND [conn49] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578799, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578803, 2), signature: { hash: BinData(0, B6E7D85218B1514730EA5A6250AAD0B29854F2CC), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578799, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:33:26.601+0000 D2 COMMAND [conn49] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578805, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 5F2F82F187610862000CECA1B758833740D8F5F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578805, 2), t: 1 } }, $db: "config" } 2019-09-04T06:33:26.601+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578805, 2), t: 1 } } } 2019-09-04T06:33:26.601+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:33:26.601+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578805, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 5F2F82F187610862000CECA1B758833740D8F5F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578805, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578805, 2) 2019-09-04T06:33:26.601+0000 D2 QUERY [conn49] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:33:26.601+0000 I COMMAND [conn49] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578805, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 5F2F82F187610862000CECA1B758833740D8F5F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578805, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:33:26.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.654+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:26.695+0000 D2 COMMAND [conn49] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578805, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 5F2F82F187610862000CECA1B758833740D8F5F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578805, 2), t: 1 } }, $db: "config" } 2019-09-04T06:33:26.695+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578805, 2), t: 1 } } } 2019-09-04T06:33:26.695+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:33:26.695+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578805, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 5F2F82F187610862000CECA1B758833740D8F5F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578805, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578805, 2) 2019-09-04T06:33:26.695+0000 D5 QUERY [conn49] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:33:26.695+0000 D5 QUERY [conn49] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:33:26.695+0000 D5 QUERY [conn49] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:33:26.695+0000 D5 QUERY [conn49] Rated tree: $and 2019-09-04T06:33:26.695+0000 D5 QUERY [conn49] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:26.695+0000 D5 QUERY [conn49] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:26.695+0000 D2 QUERY [conn49] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:26.695+0000 D3 STORAGE [conn49] WT begin_transaction for snapshot id 17361 2019-09-04T06:33:26.695+0000 D3 STORAGE [conn49] WT rollback_transaction for snapshot id 17361 2019-09-04T06:33:26.695+0000 I COMMAND [conn49] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578805, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 5F2F82F187610862000CECA1B758833740D8F5F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578805, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:33:26.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.740+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:26.740+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:26.740+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578805, 2) 2019-09-04T06:33:26.740+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17365 2019-09-04T06:33:26.740+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:26.740+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 
1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:26.740+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17365 2019-09-04T06:33:26.741+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17368 2019-09-04T06:33:26.741+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17368 2019-09-04T06:33:26.741+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578805, 2), t: 1 }({ ts: Timestamp(1567578805, 2), t: 1 }) 2019-09-04T06:33:26.754+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:26.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:26.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1184) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:26.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1184 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:36.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:26.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:54.839+0000 2019-09-04T06:33:26.838+0000 D2 ASIO [Replication] Request 1184 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), opTime: { ts: Timestamp(1567578805, 2), t: 1 }, wallTime: new Date(1567578805723), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) } 2019-09-04T06:33:26.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), opTime: { ts: Timestamp(1567578805, 2), t: 1 }, wallTime: new Date(1567578805723), $replData: { 
term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:26.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:26.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1184) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), opTime: { ts: Timestamp(1567578805, 2), t: 1 }, wallTime: new Date(1567578805723), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) } 2019-09-04T06:33:26.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:33:26.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:28.838Z 2019-09-04T06:33:26.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:54.839+0000 2019-09-04T06:33:26.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:26.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1185) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:26.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1185 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:36.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:26.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:54.839+0000 2019-09-04T06:33:26.839+0000 D2 ASIO [Replication] Request 1185 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), opTime: { ts: Timestamp(1567578805, 2), t: 1 }, wallTime: new Date(1567578805723), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), 
primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) } 2019-09-04T06:33:26.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), opTime: { ts: Timestamp(1567578805, 2), t: 1 }, wallTime: new Date(1567578805723), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:26.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:26.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1185) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), opTime: { ts: Timestamp(1567578805, 2), t: 1 }, wallTime: new Date(1567578805723), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) } 2019-09-04T06:33:26.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:26.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:33:36.687+0000 2019-09-04T06:33:26.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:33:37.324+0000 2019-09-04T06:33:26.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:26.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:28.839Z 2019-09-04T06:33:26.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:26.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:56.839+0000 2019-09-04T06:33:26.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:56.839+0000 2019-09-04T06:33:26.855+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:33:26.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.955+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:26.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:26.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:26.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:27.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:27.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:27.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:27.055+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:27.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 5F2F82F187610862000CECA1B758833740D8F5F8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:27.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:27.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 5F2F82F187610862000CECA1B758833740D8F5F8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:27.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 5F2F82F187610862000CECA1B758833740D8F5F8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:27.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { 
ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), opTime: { ts: Timestamp(1567578805, 2), t: 1 }, wallTime: new Date(1567578805723) } 2019-09-04T06:33:27.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 2), signature: { hash: BinData(0, 5F2F82F187610862000CECA1B758833740D8F5F8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:27.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:27.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:27.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:27.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:27.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:27.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:27.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:27.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:27.155+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:27.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:27.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:27.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:27.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:27.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:27.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
2019-09-04T06:33:27.255+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:27.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.355+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:27.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.455+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:27.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.555+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:27.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.656+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:27.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.740+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:27.740+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:27.740+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578805, 2)
2019-09-04T06:33:27.740+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17404
2019-09-04T06:33:27.740+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:27.740+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:27.740+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17404
2019-09-04T06:33:27.741+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17407
2019-09-04T06:33:27.741+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17407
2019-09-04T06:33:27.741+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578805, 2), t: 1 }({ ts: Timestamp(1567578805, 2), t: 1 })
2019-09-04T06:33:27.756+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:27.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.856+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:27.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
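Editor's note: the ReplBatcher/rsSync-0 block above repeats about once per second on an idle secondary: the batcher opens and immediately rolls back a WiredTiger snapshot on local.oplog.rs looking for new entries, and rsSync re-reads the minvalid document. A sketch for inspecting the same two pieces of state from a client; connection details assumed, pymongo used for illustration:

    # Sketch only (assumed pymongo): the newest oplog entry and the minvalid
    # document that the ReplBatcher/rsSync threads poll in the log above.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    last_op = client.local["oplog.rs"].find_one(sort=[("$natural", -1)])
    minvalid = client.local["replset.minvalid"].find_one()
    print("newest oplog ts:", last_op["ts"], "minvalid ts:", minvalid["ts"])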
2019-09-04T06:33:27.956+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:27.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:27.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:27.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:28.022+0000 D2 COMMAND [conn50] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.022+0000 I COMMAND [conn50] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.056+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:28.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.156+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:28.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 635DF7D11C7961CD1035E76751260806B82FC937), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:28.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:33:28.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 635DF7D11C7961CD1035E76751260806B82FC937), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:28.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 635DF7D11C7961CD1035E76751260806B82FC937), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:28.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), opTime: { ts: Timestamp(1567578805, 2), t: 1 }, wallTime: new Date(1567578805723) }
2019-09-04T06:33:28.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 635DF7D11C7961CD1035E76751260806B82FC937), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:28.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:28.256+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:28.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.357+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:28.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.457+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:28.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
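Editor's note: in the heartbeat exchange above, state: 2 in the generated response means SECONDARY and v: 2 is the replica-set config version; the same fields are exposed to clients through replSetGetStatus. A sketch, with pymongo and the connection string assumed:

    # Sketch (assumed pymongo): replSetGetStatus surfaces the member
    # state/sync-source data carried by the replSetHeartbeat traffic above.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # stateStr is e.g. PRIMARY or SECONDARY (state: 2 above = SECONDARY)
        print(member["name"], member["stateStr"], member.get("syncingTo", "-"))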
2019-09-04T06:33:28.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.497+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.557+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:28.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.657+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:28.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.740+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:28.740+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:28.740+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578805, 2)
2019-09-04T06:33:28.740+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17441
2019-09-04T06:33:28.740+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:28.740+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:28.740+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17441
2019-09-04T06:33:28.742+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17444
2019-09-04T06:33:28.742+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17444
2019-09-04T06:33:28.742+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578805, 2), t: 1 }({ ts: Timestamp(1567578805, 2), t: 1 })
2019-09-04T06:33:28.746+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46728 #392 (86 connections now open)
2019-09-04T06:33:28.746+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:28.746+0000 D2 COMMAND [conn392] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:28.746+0000 I NETWORK [conn392] received client metadata from 10.108.2.64:46728 conn392: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:28.746+0000 I COMMAND [conn392] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:28.750+0000 D2 COMMAND [conn392] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 89755D210D4D749FC922939A0BD0751F8197C269), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:28.750+0000 D1 REPL [conn392] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578805, 2), t: 1 }
2019-09-04T06:33:28.750+0000 D3 REPL [conn392] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:58.760+0000
2019-09-04T06:33:28.757+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:28.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
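Editor's note: conn392 is a freshly accepted internal connection (driver name NetworkInterfaceTL, i.e. another cluster node or mongos) reading config.shards with readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }. Note the mismatch in the waitUntilOpTime line: the caller wants an optime from term 92, while this node's newest majority snapshot is from term 1, so the wait apparently cannot be satisfied and will sit until the 30000 ms maxTimeMS expires (exactly what fails for conn370 further down). A sketch of the same read without the cluster-internal fields; $replData, $configServerState, and readConcern.afterOpTime are internal and not public driver API, and pymongo is assumed:

    # Sketch (assumed pymongo): the public-API equivalent of conn392's read
    # of config.shards with majority read concern and a 30 s time limit.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    reply = client.config.command({
        "find": "shards",
        "readConcern": {"level": "majority"},
        "maxTimeMS": 30000,
    })
    print(reply["cursor"]["firstBatch"])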
2019-09-04T06:33:28.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:28.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1186) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:28.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1186 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:38.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:28.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:56.839+0000
2019-09-04T06:33:28.838+0000 D2 ASIO [Replication] Request 1186 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), opTime: { ts: Timestamp(1567578805, 2), t: 1 }, wallTime: new Date(1567578805723), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) }
2019-09-04T06:33:28.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), opTime: { ts: Timestamp(1567578805, 2), t: 1 }, wallTime: new Date(1567578805723), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:28.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:28.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1186) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), opTime: { ts: Timestamp(1567578805, 2), t: 1 }, wallTime: new Date(1567578805723), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) }
2019-09-04T06:33:28.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:33:28.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:30.838Z
2019-09-04T06:33:28.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:56.839+0000
2019-09-04T06:33:28.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:28.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1187) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:28.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1187 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:38.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:28.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:56.839+0000
2019-09-04T06:33:28.839+0000 D2 ASIO [Replication] Request 1187 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), opTime: { ts: Timestamp(1567578805, 2), t: 1 }, wallTime: new Date(1567578805723), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) }
2019-09-04T06:33:28.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), opTime: { ts: Timestamp(1567578805, 2), t: 1 }, wallTime: new Date(1567578805723), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:33:28.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:28.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1187) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), opTime: { ts: Timestamp(1567578805, 2), t: 1 }, wallTime: new Date(1567578805723), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578805, 2) }
2019-09-04T06:33:28.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:33:28.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:33:37.324+0000
2019-09-04T06:33:28.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:33:39.453+0000
2019-09-04T06:33:28.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:33:28.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:30.839Z
2019-09-04T06:33:28.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:28.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:58.839+0000
2019-09-04T06:33:28.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:58.839+0000
2019-09-04T06:33:28.857+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:28.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.958+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:28.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:28.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:28.997+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.001+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:29.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
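Editor's note: requests 1186/1187 above are this node's own outbound heartbeats. The response from cmodb802 carries state: 1 (PRIMARY), and each heartbeat from the primary pushes the election timeout forward (callback canceled at ...37.324, rescheduled at ...39.453). The timeout length is governed by the replica-set setting electionTimeoutMillis. A sketch for reading it, with pymongo and the connection string assumed:

    # Sketch (assumed pymongo): the election timeout being rescheduled in the
    # log above is settings.electionTimeoutMillis (server default 10000 ms).
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    cfg = client.admin.command("replSetGetConfig")["config"]
    print("electionTimeoutMillis:", cfg["settings"]["electionTimeoutMillis"])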
2019-09-04T06:33:29.058+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:29.063+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:29.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:33:28.839+0000
2019-09-04T06:33:29.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:33:28.838+0000
2019-09-04T06:33:29.063+0000 D3 REPL [replexec-3] stalest member MemberId(2) date: 2019-09-04T06:33:28.838+0000
2019-09-04T06:33:29.063+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:33:38.838+0000
2019-09-04T06:33:29.063+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:58.839+0000
2019-09-04T06:33:29.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 635DF7D11C7961CD1035E76751260806B82FC937), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:29.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:33:29.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 635DF7D11C7961CD1035E76751260806B82FC937), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:29.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 635DF7D11C7961CD1035E76751260806B82FC937), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:29.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), opTime: { ts: Timestamp(1567578805, 2), t: 1 }, wallTime: new Date(1567578805723) }
2019-09-04T06:33:29.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 635DF7D11C7961CD1035E76751260806B82FC937), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.158+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:29.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:29.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:29.258+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:29.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.358+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:29.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.458+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:29.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.497+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.558+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:29.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.659+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:29.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.741+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:29.741+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:29.741+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578805, 2)
2019-09-04T06:33:29.741+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17482
2019-09-04T06:33:29.741+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:29.741+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:29.741+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17482
2019-09-04T06:33:29.742+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17485
2019-09-04T06:33:29.742+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17485
2019-09-04T06:33:29.742+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578805, 2), t: 1 }({ ts: Timestamp(1567578805, 2), t: 1 })
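Editor's note: the once-per-second FlowControlRefresher lines in this stretch (Before: 1000000000 Now: 1000000000) show flow control idling: with no majority-commit lag the ticket pool stays pinned at its maximum. A sketch for checking the same state from a client; pymongo is assumed, and the flowControl section of serverStatus exists on MongoDB 4.2 and later:

    # Sketch (assumed pymongo): the ticket pool the FlowControlRefresher logs
    # above is visible in the serverStatus flowControl section (4.2+).
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    fc = client.admin.command("serverStatus").get("flowControl", {})
    print("isLagged:", fc.get("isLagged"), "targetRateLimit:", fc.get("targetRateLimit"))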
2019-09-04T06:33:29.759+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:29.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.859+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:29.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.863+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45856 #393 (87 connections now open)
2019-09-04T06:33:29.863+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:29.863+0000 D2 COMMAND [conn393] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:29.863+0000 I NETWORK [conn393] received client metadata from 10.108.2.72:45856 conn393: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:29.863+0000 I COMMAND [conn393] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:29.878+0000 I COMMAND [conn370] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578772, 1), signature: { hash: BinData(0, A61A1531E35D372770CFA852083A638C0C2C3E5B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:29.878+0000 D1 - [conn370] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:33:29.878+0000 W - [conn370] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:29.896+0000 I - [conn370] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
 mongod(+0x10FBF24) [0x56174a083f24]
 mongod(+0x10FCE0E) [0x56174a084e0e]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
-----  END BACKTRACE  -----
2019-09-04T06:33:29.896+0000 D1 COMMAND [conn370] assertion while executing command 'find' on database 'config' with arguments '{ find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578772, 1), signature: { hash: BinData(0, A61A1531E35D372770CFA852083A638C0C2C3E5B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:29.896+0000 D1 - [conn370] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:33:29.896+0000 W - [conn370] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:29.918+0000 I - [conn370] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(+0xC90D34) [0x561749c18d34]
 mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
 mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
 mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
 mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
 mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F)
[0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:33:29.918+0000 W COMMAND [conn370] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:33:29.918+0000 I COMMAND [conn370] command config.$cmd command: find { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578772, 1), signature: { hash: BinData(0, A61A1531E35D372770CFA852083A638C0C2C3E5B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms
2019-09-04T06:33:29.918+0000 D2 NETWORK [conn370] Session from 10.108.2.72:45836 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:29.918+0000 I NETWORK [conn370] end connection 10.108.2.72:45836 (86 connections now open)
2019-09-04T06:33:29.959+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:29.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:29.997+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:29.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:30.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:33:30.003+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:33:30.004+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms
2019-09-04T06:33:30.012+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:33:30.012+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:33:30.022+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:33:30.023+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:33:30.023+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456
2019-09-04T06:33:30.023+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:33:30.036+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:30.036+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:30.047+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:30.047+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:33:30.048+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:33:30.048+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:33:30.048+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:30.048+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:30.048+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:33:30.048+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:33:30.048+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:33:30.048+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:33:30.048+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:33:30.048+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:33:30.048+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:30.048+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:30.048+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:33:30.048+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578805, 2)
2019-09-04T06:33:30.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17500
2019-09-04T06:33:30.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17500
2019-09-04T06:33:30.048+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:30.048+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:30.048+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:33:30.048+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:33:30.048+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:30.048+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:33:30.048+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:33:30.048+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578805, 2)
2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17503
2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17503
2019-09-04T06:33:30.049+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:30.049+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:30.049+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:33:30.049+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:33:30.049+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578805, 2)
2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17505
2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17505
2019-09-04T06:33:30.049+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:30.049+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:33:30.049+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist.
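conn90's session is a monitoring sweep: the saslStart/saslContinue exchange above is the standard three-leg SCRAM-SHA-1 handshake for dba_root, followed by serverStatus, replSetGetStatus, a jumbo-chunk count (a COLLSCAN, since none of the config.chunks indexes cover the "jumbo" field), and the two forced $natural-order oplog scans just above, which bracket the oplog window. A hedged pymongo sketch of the same checks (credentials and client code assumed; PASSWORD is a placeholder):

    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://dba_root:PASSWORD@cmodb803.togewa.com:27019/admin",
        authMechanism="SCRAM-SHA-1",          # the saslStart/saslContinue legs in the log
        readPreference="secondaryPreferred",
    )
    print(client.admin.command("serverStatus")["uptime"])
    print(client.admin.command("replSetGetStatus")["myState"])

    # count_documents() issues an aggregate rather than the legacy count command
    # shown in the log, but it exercises the same COLLSCAN plan.
    print("jumbo chunks:", client.config.chunks.count_documents({"jumbo": True}))

    oplog = client.local["oplog.rs"]
    first = next(oplog.find({"ts": {"$exists": True}}).sort("$natural", 1).limit(1))
    last = next(oplog.find({"ts": {"$exists": True}}).sort("$natural", -1).limit(1))
    print("oplog window (s):", last["ts"].time - first["ts"].time)  # bson.Timestamp.time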
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:33:30.049+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:33:30.049+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:33:30.049+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17508 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17508 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17509 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17509 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17510 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17510 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:33:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17511 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17511 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17512 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17512 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17513 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17513 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17514 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17514 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17515 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17515 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17516 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17516 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17517 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17517 
2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17518 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
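Each "ident" in the catalog records above and below names a WiredTiger table; with this per-database, per-index directory layout the table lives at <dbpath>/<ident>.wt, which is how a collection or index is matched to its file on disk. A tiny helper sketch (the dbpath value is assumed, not taken from these records):

    import os

    DBPATH = "/data/db"  # assumed dbpath for this node

    def ident_to_file(ident: str) -> str:
        # "config/collection/58--6194257481163143499" ->
        # "/data/db/config/collection/58--6194257481163143499.wt"
        return os.path.join(DBPATH, ident + ".wt")

    print(ident_to_file("config/collection/58--6194257481163143499"))  # config.chunks table
    print(ident_to_file("config/index/59--6194257481163143499"))       # its ns_1_min_1 index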
2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17518 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17519 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17519 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17520 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17520 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17521 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17521 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17522 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
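The long D3 STORAGE walk around this point is the body of a single listDatabases call: every collection's catalog entry (the "fetched CCE metadata" records) is read inside a short-lived WiredTiger transaction that is begun and immediately rolled back, and per-database dbStats commands follow further below. A pymongo sketch of the same pair of calls (client code assumed):

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         readPreference="secondaryPreferred")
    listing = client.admin.command("listDatabases")    # drives the metadata walk
    for db in listing["databases"]:
        stats = client[db["name"]].command("dbStats")  # one dbStats per database
        print(db["name"], db["sizeOnDisk"], stats["objects"])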
2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17522 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17523 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17523 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17524 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17524 2019-09-04T06:33:30.050+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17525 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17525 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17526 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 17526 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17527 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17527 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17528 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17528 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17529 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17529 2019-09-04T06:33:30.051+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:33:30.051+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17531 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17531 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17532 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17532 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17533 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17533 2019-09-04T06:33:30.051+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:30.051+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17535 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17535 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17536 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17536 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17537 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17537 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17538 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17538 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17539 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17539 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17540 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17540 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17541 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17541 2019-09-04T06:33:30.051+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 17542 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17542 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17543 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17543 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17544 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17544 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17545 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17545 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17546 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17546 2019-09-04T06:33:30.052+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:30.052+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17548 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17548 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17549 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17549 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17550 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17550 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17551 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17551 2019-09-04T06:33:30.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17553 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17553 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17554 2019-09-04T06:33:30.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:30.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17554 2019-09-04T06:33:30.052+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:30.059+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:30.069+0000 D2 COMMAND [conn52] run 
2019-09-04T06:33:30.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.160+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:30.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 635DF7D11C7961CD1035E76751260806B82FC937), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:30.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:33:30.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 635DF7D11C7961CD1035E76751260806B82FC937), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:30.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 635DF7D11C7961CD1035E76751260806B82FC937), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:30.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), opTime: { ts: Timestamp(1567578805, 2), t: 1 }, wallTime: new Date(1567578805723) }
2019-09-04T06:33:30.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578807, 1), signature: { hash: BinData(0, 635DF7D11C7961CD1035E76751260806B82FC937), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:30.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:30.250+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49304 #394 (87 connections now open)
2019-09-04T06:33:30.250+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:30.250+0000 D2 COMMAND [conn394] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:30.250+0000 I NETWORK [conn394] received client metadata from 10.108.2.54:49304 conn394: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:30.250+0000 I COMMAND [conn394] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:30.251+0000 D2 COMMAND [conn394] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578808, 1), signature: { hash: BinData(0, E0BDAF1F918F5649152C69BBF6299873DB91E045), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:30.251+0000 D1 REPL [conn394] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578805, 2), t: 1 }
2019-09-04T06:33:30.251+0000 D3 REPL [conn394] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.261+0000
2019-09-04T06:33:30.260+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:30.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
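
The conn394 find on config.shards above carries readConcern { level: "majority", afterOpTime: ... }, so this secondary parks the operation in waitUntilOpTime until its majority snapshot reaches the requested optime. afterOpTime is internal sharding plumbing; the closest client-visible equivalent is a causally consistent session, where reads carry afterClusterTime. A hedged sketch of that client-side analog:

    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")
    shards = client.config.get_collection("shards",
                                          read_concern=ReadConcern("majority"))

    # In a causally consistent session every read carries afterClusterTime, so a
    # lagging secondary blocks (as in waitUntilOpTime above) rather than serving
    # stale data.
    with client.start_session(causal_consistency=True) as sess:
        for doc in shards.find({}, session=sess):
            print(doc["_id"], doc.get("host"))
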
2019-09-04T06:33:30.341+0000 D2 ASIO [RS] Request 1182 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578810, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578810339), o: { $v: 1, $set: { ping: new Date(1567578810337) } } }, { ts: Timestamp(1567578810, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578810339), o: { $v: 1, $set: { ping: new Date(1567578810338) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpApplied: { ts: Timestamp(1567578810, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578810, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 2) }
2019-09-04T06:33:30.341+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578810, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578810339), o: { $v: 1, $set: { ping: new Date(1567578810337) } } }, { ts: Timestamp(1567578810, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578810339), o: { $v: 1, $set: { ping: new Date(1567578810338) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpApplied: { ts: Timestamp(1567578810, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578810, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:30.341+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:30.341+0000 D2 REPL [replication-0] oplog fetcher read 2 operations from remote oplog starting at ts: Timestamp(1567578810, 1) and ending at ts: Timestamp(1567578810, 2)
2019-09-04T06:33:30.341+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:39.453+0000
2019-09-04T06:33:30.341+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:40.444+0000
2019-09-04T06:33:30.341+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:30.341+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578810, 2), t: 1 }
2019-09-04T06:33:30.341+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:30.341+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:30.341+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578805, 2)
2019-09-04T06:33:30.341+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17569
2019-09-04T06:33:30.341+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:30.341+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:30.341+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17569
2019-09-04T06:33:30.341+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:30.341+0000 D2 REPL [rsSync-0] replication batch size is 2
2019-09-04T06:33:30.341+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:58.839+0000
2019-09-04T06:33:30.341+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:30.341+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578805, 2)
2019-09-04T06:33:30.341+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578810, 1) }
2019-09-04T06:33:30.341+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17572
2019-09-04T06:33:30.341+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:30.341+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:30.341+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17572
2019-09-04T06:33:30.341+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17486
2019-09-04T06:33:30.341+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17486
2019-09-04T06:33:30.341+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17575
2019-09-04T06:33:30.341+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17575
2019-09-04T06:33:30.341+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:30.341+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 17577
2019-09-04T06:33:30.341+0000 D4 STORAGE [repl-writer-worker-9] inserting record with timestamp Timestamp(1567578810, 1)
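
Each document in the nextBatch above is an oplog entry: ts/t are the optime, op: "u" marks an update, ns is the target namespace, ui the collection UUID, o2 the _id of the document being updated, and o the update applied. The same stream can be observed from a client by tailing local.oplog.rs; a sketch (direct connection to one member assumed; the oplog is internal, so this is for inspection only):

    from pymongo import CursorType, MongoClient

    # Direct connection to one member, as required for reads from the local db.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    oplog = client.local["oplog.rs"]

    last = oplog.find_one(sort=[("$natural", -1)])  # newest entry
    cursor = oplog.find({"ts": {"$gt": last["ts"]}},
                        cursor_type=CursorType.TAILABLE_AWAIT)
    for entry in cursor:
        # e.g. Timestamp(1567578810, 1) u config.lockpings
        print(entry["ts"], entry["op"], entry.get("ns"))
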
2019-09-04T06:33:30.341+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578810, 1)
2019-09-04T06:33:30.341+0000 D4 STORAGE [repl-writer-worker-9] inserting record with timestamp Timestamp(1567578810, 2)
2019-09-04T06:33:30.341+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578810, 2)
2019-09-04T06:33:30.341+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 17577
2019-09-04T06:33:30.341+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:30.341+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:33:30.341+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17576
2019-09-04T06:33:30.341+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17576
2019-09-04T06:33:30.341+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17579
2019-09-04T06:33:30.341+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17579
2019-09-04T06:33:30.341+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578810, 2), t: 1 }({ ts: Timestamp(1567578810, 2), t: 1 })
2019-09-04T06:33:30.341+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578810, 2)
2019-09-04T06:33:30.341+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17580
2019-09-04T06:33:30.341+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578810, 2) } } ] } sort: {} projection: {}
2019-09-04T06:33:30.341+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:30.341+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578810, 2) Sort: {} Proj: {} =============================
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578810, 2) || First: notFirst: full path: ts
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578810, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578810, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578810, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578810, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:30.342+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17580
2019-09-04T06:33:30.342+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:30.342+0000 D3 STORAGE [repl-writer-worker-11] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:30.342+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:30.342+0000 D3 REPL [repl-writer-worker-11] applying op: { ts: Timestamp(1567578810, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578810339), o: { $v: 1, $set: { ping: new Date(1567578810337) } } }, oplog application mode: Secondary
2019-09-04T06:33:30.342+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:30.342+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578810, 1)
2019-09-04T06:33:30.342+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 17582
2019-09-04T06:33:30.342+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578810, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578810339), o: { $v: 1, $set: { ping: new Date(1567578810338) } } }, oplog application mode: Secondary
2019-09-04T06:33:30.342+0000 D2 QUERY [repl-writer-worker-11] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }
2019-09-04T06:33:30.342+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578810, 2)
2019-09-04T06:33:30.342+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 17583
2019-09-04T06:33:30.342+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }
2019-09-04T06:33:30.342+0000 D4 WRITE [repl-writer-worker-11] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:33:30.342+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 17582
2019-09-04T06:33:30.342+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:30.342+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:33:30.342+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 17583
2019-09-04T06:33:30.342+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:30.342+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578810, 2), t: 1 }({ ts: Timestamp(1567578810, 2), t: 1 })
2019-09-04T06:33:30.342+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578810, 2)
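
The D5 QUERY trace above is the planner's verbose form of what explain() returns: local.replset.minvalid has only the _id index, neither $or branch is indexable, so every subplan and the final plan collapse to a COLLSCAN. A sketch reproducing the same plan from a client (direct connection; illustrative only, since replset.minvalid is an internal collection):

    from bson.timestamp import Timestamp
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    minvalid = client.local["replset.minvalid"]

    plan = minvalid.find({"$or": [
        {"t": {"$lt": 1}},
        {"t": 1, "ts": {"$lt": Timestamp(1567578810, 3)}},
    ]}).explain()

    # With only { _id: 1 } available, the winning plan is a collection scan.
    print(plan["queryPlanner"]["winningPlan"])
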
2019-09-04T06:33:30.342+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17581
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:30.342+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:30.342+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:33:30.342+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17581
2019-09-04T06:33:30.342+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578810, 2)
2019-09-04T06:33:30.342+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17587
2019-09-04T06:33:30.342+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17587
2019-09-04T06:33:30.342+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578810, 2), t: 1 }({ ts: Timestamp(1567578810, 2), t: 1 })
2019-09-04T06:33:30.342+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:30.342+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578810, 2), t: 1 }, appliedWallTime: new Date(1567578810339), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:30.342+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1188 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:00.342+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578810, 2), t: 1 }, appliedWallTime: new Date(1567578810339), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:30.342+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:00.342+0000
2019-09-04T06:33:30.343+0000 D2 ASIO [RS] Request 1188 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) }
2019-09-04T06:33:30.343+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:30.343+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:30.343+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:00.343+0000
2019-09-04T06:33:30.343+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578810, 2), t: 1 }
2019-09-04T06:33:30.343+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1189 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:40.343+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578805, 2), t: 1 } }
2019-09-04T06:33:30.343+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:00.343+0000
2019-09-04T06:33:30.343+0000 D2 ASIO [RS] Request 1189 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578810, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578810340), o: { $v: 1, $set: { ping: new Date(1567578810340) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpApplied: { ts: Timestamp(1567578810, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) }
2019-09-04T06:33:30.343+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578810, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578810340), o: { $v: 1, $set: { ping: new Date(1567578810340) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpApplied: { ts: Timestamp(1567578810, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:30.343+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:30.343+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578810, 3) and ending at ts: Timestamp(1567578810, 3)
2019-09-04T06:33:30.343+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:40.444+0000
2019-09-04T06:33:30.343+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:41.543+0000
2019-09-04T06:33:30.343+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578810, 3), t: 1 }
2019-09-04T06:33:30.343+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:30.343+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:58.839+0000
2019-09-04T06:33:30.343+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:30.344+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:30.344+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578810, 2)
2019-09-04T06:33:30.344+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17591
2019-09-04T06:33:30.344+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:30.344+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:30.344+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17591
2019-09-04T06:33:30.344+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:30.344+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:33:30.344+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:30.344+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578810, 3) }
2019-09-04T06:33:30.344+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578810, 2)
2019-09-04T06:33:30.344+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17594
2019-09-04T06:33:30.344+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17588
2019-09-04T06:33:30.344+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:30.344+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:30.344+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17594
2019-09-04T06:33:30.344+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17588
2019-09-04T06:33:30.344+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17597
2019-09-04T06:33:30.344+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17597
2019-09-04T06:33:30.344+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:30.344+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 17599
2019-09-04T06:33:30.344+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578810, 3)
2019-09-04T06:33:30.344+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578810, 3)
2019-09-04T06:33:30.344+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 17599
2019-09-04T06:33:30.344+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:30.344+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:33:30.344+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17598
2019-09-04T06:33:30.344+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17598
2019-09-04T06:33:30.344+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17601
2019-09-04T06:33:30.344+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17601
2019-09-04T06:33:30.344+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578810, 3), t: 1 }({ ts: Timestamp(1567578810, 3), t: 1 })
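
The replSetUpdatePosition commands above are how this secondary reports per-member durable and applied optimes up its sync-source chain; replSetGetStatus exposes the same optimes to operators. A quick lag check built on that (any member, any driver; shown with pymongo, using the newest applied optime as a stand-in for the primary's position):

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")
    status = client.admin.command("replSetGetStatus")

    # bson Timestamp.time is the seconds component of the optime.
    newest = max(m["optime"]["ts"].time for m in status["members"])
    for m in status["members"]:
        lag = newest - m["optime"]["ts"].time
        print(m["name"], m["stateStr"], "lag=%ds" % lag)
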
2019-09-04T06:33:30.344+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578810, 3)
2019-09-04T06:33:30.344+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17602
2019-09-04T06:33:30.344+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578810, 3) } } ] } sort: {} projection: {}
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578810, 3) Sort: {} Proj: {} =============================
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578810, 3) || First: notFirst: full path: ts
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578810, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578810, 3) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578810, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:30.344+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578810, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:30.345+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17602
2019-09-04T06:33:30.345+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:30.345+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:30.345+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578810, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578810340), o: { $v: 1, $set: { ping: new Date(1567578810340) } } }, oplog application mode: Secondary
2019-09-04T06:33:30.345+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578810, 3)
2019-09-04T06:33:30.345+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 17604
2019-09-04T06:33:30.345+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }
2019-09-04T06:33:30.345+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:33:30.345+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 17604
2019-09-04T06:33:30.345+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:30.345+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578810, 3), t: 1 }({ ts: Timestamp(1567578810, 3), t: 1 })
2019-09-04T06:33:30.345+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578810, 3)
2019-09-04T06:33:30.345+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17603
2019-09-04T06:33:30.345+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:33:30.345+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:30.345+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:33:30.345+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:30.345+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:30.345+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:33:30.345+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17603
2019-09-04T06:33:30.345+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578810, 3)
2019-09-04T06:33:30.345+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17607
2019-09-04T06:33:30.345+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17607
2019-09-04T06:33:30.345+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578810, 3), t: 1 }({ ts: Timestamp(1567578810, 3), t: 1 })
2019-09-04T06:33:30.345+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:30.345+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, appliedWallTime: new Date(1567578810340), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
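
With debug levels D1-D5 enabled, a fraction of a second of idle replication produces the volume above, so slicing by component and context beats reading linearly. Each 4.2-era plain-text log line is "timestamp severity component [context] message"; a small parser sketch (the log path is illustrative):

    import re
    from collections import Counter

    # mongod 4.2 plain-text log line: ts severity component [context] message
    LINE = re.compile(r"^(?P<ts>\S+) (?P<sev>[IWEF]|D[1-5]?) (?P<comp>\S+)\s+"
                      r"\[(?P<ctx>[^\]]+)\] (?P<msg>.*)$")

    counts = Counter()
    with open("mongod.log") as fh:  # adjust to your log path
        for line in fh:
            m = LINE.match(line)
            if m:
                counts[m.group("comp")] += 1

    print(counts.most_common(10))  # STORAGE, EXECUTOR and REPL dominate here
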
2019-09-04T06:33:30.345+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:30.345+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, appliedWallTime: new Date(1567578810340), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:30.345+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1190 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:00.345+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, appliedWallTime: new Date(1567578810340), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:30.345+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:00.345+0000
2019-09-04T06:33:30.345+0000 D2 ASIO [RS] Request 1190 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) }
2019-09-04T06:33:30.345+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578805, 2), t: 1 }, lastCommittedWall: new Date(1567578805723), lastOpVisible: { ts: Timestamp(1567578805, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578805, 2), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:30.345+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:30.345+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:00.345+0000
2019-09-04T06:33:30.346+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578810, 3), t: 1 }
2019-09-04T06:33:30.346+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1191 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:40.346+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578805, 2), t: 1 } }
2019-09-04T06:33:30.346+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:00.345+0000
2019-09-04T06:33:30.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.362+0000 D2 ASIO [RS] Request 1191 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpApplied: { ts: Timestamp(1567578810, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) }
2019-09-04T06:33:30.362+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpApplied: { ts: Timestamp(1567578810, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:30.362+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:30.362+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:33:30.362+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578805, 3)
2019-09-04T06:33:30.362+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:41.543+0000
2019-09-04T06:33:30.362+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:41.504+0000
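
Requests 1191/1192 are this secondary's oplog fetcher at work: a getMore on its tailable cursor over local.oplog.rs with a 5-second server-side wait (maxTimeMS: 5000); an empty nextBatch just means no new writes arrived before the wait expired. The same tailing pattern is available to any client. A sketch, assuming a replica-set member at a hypothetical localhost URI:

    # Sketch of oplog tailing with a tailable, awaitData cursor, analogous
    # to the fetcher's getMore above (the URI is an illustrative assumption).
    import pymongo
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")
    oplog = client["local"]["oplog.rs"]

    cursor = oplog.find(
        {}, cursor_type=pymongo.CursorType.TAILABLE_AWAIT
    ).max_await_time_ms(5000)  # cf. maxTimeMS: 5000 in RemoteCommand 1191

    for entry in cursor:  # blocks server-side between batches
        print(entry["ts"], entry["op"], entry.get("ns"))
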
2019-09-04T06:33:30.362+0000 D3 REPL [conn376] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn376] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.897+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn366] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:30.362+0000 D3 REPL [conn366] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.913+0000
2019-09-04T06:33:30.362+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:58.839+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn383] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn383] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.025+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn359] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn359] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.882+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn379] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn379] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:43.509+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn389] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn389] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.596+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn385] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn385] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.131+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn371] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn371] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn392] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn392] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:58.760+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn391] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn391] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:56.309+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn369] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn369] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.039+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn384] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn384] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.054+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn388] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn388] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.054+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn378] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn378] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.046+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn362] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn362] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.661+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn386] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn386] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.645+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn381] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn381] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000
2019-09-04T06:33:30.362+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1192 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:40.362+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578810, 3), t: 1 } }
2019-09-04T06:33:30.362+0000 D3 REPL [conn367] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn367] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn375] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn375] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.877+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn356] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.362+0000 D3 REPL [conn356] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.043+0000
2019-09-04T06:33:30.363+0000 D3 REPL [conn368] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.363+0000 D3 REPL [conn368] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.038+0000
2019-09-04T06:33:30.363+0000 D3 REPL [conn377] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.363+0000 D3 REPL [conn377] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.752+0000
2019-09-04T06:33:30.363+0000 D3 REPL [conn372] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.363+0000 D3 REPL [conn372] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.660+0000
2019-09-04T06:33:30.362+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:00.345+0000
2019-09-04T06:33:30.363+0000 D3 REPL [conn374] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.363+0000 D3 REPL [conn374] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.876+0000
2019-09-04T06:33:30.363+0000 D3 REPL [conn373] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.363+0000 D3 REPL [conn373] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000
2019-09-04T06:33:30.363+0000 D3 REPL [conn394] Got notified of new snapshot: { ts: Timestamp(1567578810, 3), t: 1 }, 2019-09-04T06:33:30.340+0000
2019-09-04T06:33:30.363+0000 D3 REPL [conn394] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.261+0000
2019-09-04T06:33:30.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.389+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
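
The burst of "Got notified of new snapshot" / "waitUntilOpTime" pairs appears to be client connections (conn356 through conn394) parked in optime waits being woken as the commit point advances to { ts: Timestamp(1567578810, 3), t: 1 }. On the client side, the closest public counterpart of such a wait is a majority write concern, which blocks until the write is in the committed snapshot. A sketch under the same hypothetical URI:

    # Sketch: a w="majority" write parks the client until the write is
    # majority-committed, analogous to the server-side waits logged above.
    from pymongo import MongoClient, WriteConcern

    client = MongoClient("mongodb://localhost:27019")
    coll = client["test"].get_collection(
        "demo", write_concern=WriteConcern(w="majority", wtimeout=10000)
    )
    coll.insert_one({"probe": 1})  # returns once a majority has the write
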
2019-09-04T06:33:30.389+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:30.389+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578810, 2), t: 1 }, durableWallTime: new Date(1567578810339), appliedOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, appliedWallTime: new Date(1567578810340), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:30.389+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1193 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:00.389+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578810, 2), t: 1 }, durableWallTime: new Date(1567578810339), appliedOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, appliedWallTime: new Date(1567578810340), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:30.390+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:00.345+0000
2019-09-04T06:33:30.390+0000 D2 ASIO [RS] Request 1193 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) }
2019-09-04T06:33:30.390+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:30.390+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:30.390+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:00.345+0000
2019-09-04T06:33:30.391+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:33:30.391+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:30.391+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), appliedOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, appliedWallTime: new Date(1567578810340), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:30.391+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1194 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:00.391+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), appliedOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, appliedWallTime: new Date(1567578810340), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, durableWallTime: new Date(1567578805723), appliedOpTime: { ts: Timestamp(1567578805, 2), t: 1 }, appliedWallTime: new Date(1567578805723), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:30.391+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:00.345+0000
2019-09-04T06:33:30.392+0000 D2 ASIO [RS] Request 1194 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) }
2019-09-04T06:33:30.392+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:30.392+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:30.392+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:00.345+0000
2019-09-04T06:33:30.441+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578810, 3)
2019-09-04T06:33:30.445+0000 D2 COMMAND [conn387] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578801, 1), signature: { hash: BinData(0, 3D476481B84657583831CD371DB7EF0A1606D6C0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:33:30.445+0000 D1 REPL [conn387] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578810, 3), t: 1 }
2019-09-04T06:33:30.445+0000 D3 REPL [conn387] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.455+0000
2019-09-04T06:33:30.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.491+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:30.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.592+0000 D4 STORAGE [WTJournalFlusher] flushed journal
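
conn387 is reading config.shards with readConcern { level: "majority", afterOpTime: ... }; since the requested optime is already behind the current committed snapshot { ts: Timestamp(1567578810, 3), t: 1 }, the wait can complete at once. afterOpTime is internal, but ordinary clients get the same semantics from a majority read concern, optionally inside a causally consistent session. A sketch under the same hypothetical URI:

    # Sketch: read config.shards at majority read concern, the public
    # equivalent of the internal afterOpTime wait logged above.
    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://localhost:27019")
    shards = client["config"].get_collection(
        "shards", read_concern=ReadConcern("majority")
    )
    # A causally consistent session additionally waits for the session's
    # observed cluster time before returning results.
    with client.start_session(causal_consistency=True) as s:
        for doc in shards.find({}, session=s):
            print(doc)
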
2019-09-04T06:33:30.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.691+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51906 #395 (88 connections now open)
2019-09-04T06:33:30.691+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:30.692+0000 D2 COMMAND [conn395] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:30.692+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:30.692+0000 I NETWORK [conn395] received client metadata from 10.108.2.74:51906 conn395: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:30.692+0000 I COMMAND [conn395] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:30.692+0000 D2 COMMAND [conn395] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 89755D210D4D749FC922939A0BD0751F8197C269), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:33:30.692+0000 D1 REPL [conn395] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578810, 3), t: 1 }
2019-09-04T06:33:30.692+0000 D3 REPL [conn395] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.702+0000
2019-09-04T06:33:30.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.792+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:30.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:30.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1195) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:30.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1195 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:40.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:30.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:58.839+0000
2019-09-04T06:33:30.838+0000 D2 ASIO [Replication] Request 1195 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), opTime: { ts: Timestamp(1567578810, 3), t: 1 }, wallTime: new Date(1567578810340), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) }
2019-09-04T06:33:30.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), opTime: { ts: Timestamp(1567578810, 3), t: 1 }, wallTime: new Date(1567578810340), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:30.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:30.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1195) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), opTime: { ts: Timestamp(1567578810, 3), t: 1 }, wallTime: new Date(1567578810340), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) }
2019-09-04T06:33:30.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:33:30.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:32.838Z
2019-09-04T06:33:30.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:33:58.839+0000
2019-09-04T06:33:30.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:30.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1196) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:30.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1196 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:40.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:30.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:33:58.839+0000
2019-09-04T06:33:30.839+0000 D2 ASIO [Replication] Request 1196 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), opTime: { ts: Timestamp(1567578810, 3), t: 1 }, wallTime: new Date(1567578810340), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) }
2019-09-04T06:33:30.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), opTime: { ts: Timestamp(1567578810, 3), t: 1 }, wallTime: new Date(1567578810340), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) } target: cmodb802.togewa.com:27019
2019-09-04T06:33:30.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:30.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1196) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), opTime: { ts: Timestamp(1567578810, 3), t: 1 }, wallTime: new Date(1567578810340), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) }
2019-09-04T06:33:30.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:33:30.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:33:41.504+0000
2019-09-04T06:33:30.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:33:41.527+0000
2019-09-04T06:33:30.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:33:30.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:32.839Z
2019-09-04T06:33:30.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:30.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:00.839+0000
2019-09-04T06:33:30.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:00.839+0000
2019-09-04T06:33:30.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:30.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:30.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
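
Requests 1195/1196 are this member's outbound replSetHeartbeat probes: cmodb804 answers with state: 2 (a secondary syncing from cmodb802), and cmodb802 with state: 1 (the primary), whose heartbeat postpones this node's election timeout and schedules the next probe roughly two seconds out. The heartbeat-derived picture of the set is exposed to clients via replSetGetStatus; a sketch under the same hypothetical URI:

    # Sketch: fetch the member states, optimes and sync sources that the
    # REPL_HB traffic above is maintaining.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        print(m["name"], m["stateStr"], m.get("syncingTo", ""))
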
2019-09-04T06:33:30.886+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38796 #396 (89 connections now open)
2019-09-04T06:33:30.886+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:30.887+0000 D2 COMMAND [conn396] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:30.887+0000 I NETWORK [conn396] received client metadata from 10.108.2.44:38796 conn396: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:30.887+0000 I COMMAND [conn396] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:30.887+0000 D2 COMMAND [conn396] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578808, 1), signature: { hash: BinData(0, E0BDAF1F918F5649152C69BBF6299873DB91E045), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:30.887+0000 D1 REPL [conn396] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578810, 3), t: 1 }
2019-09-04T06:33:30.887+0000 D3 REPL [conn396] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.897+0000
2019-09-04T06:33:30.892+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:30.952+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52260 #397 (90 connections now open)
2019-09-04T06:33:30.952+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:30.952+0000 D2 COMMAND [conn397] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:30.952+0000 I NETWORK [conn397] received client metadata from 10.108.2.58:52260 conn397: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:30.952+0000 I COMMAND [conn397] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:30.952+0000 D2 COMMAND [conn397] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578803, 1), signature: { hash: BinData(0, 4C13B88C6EFA3D421BD308E53A9A6ACE16902623), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:30.952+0000 D1 REPL [conn397] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578810, 3), t: 1 }
2019-09-04T06:33:30.952+0000 D3 REPL [conn397] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.962+0000
2019-09-04T06:33:30.960+0000 D2 NETWORK [conn20] Session from 10.108.2.15:39012 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:30.960+0000 I NETWORK [conn20] end connection 10.108.2.15:39012 (89 connections now open)
2019-09-04T06:33:30.961+0000 D2 NETWORK [conn21] Session from 10.108.2.15:39014 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:30.961+0000 I NETWORK [conn21] end connection 10.108.2.15:39014 (88 connections now open)
2019-09-04T06:33:30.976+0000 I NETWORK [listener] connection accepted from 10.108.2.46:41104 #398 (89 connections now open)
2019-09-04T06:33:30.976+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:30.976+0000 D2 COMMAND [conn398] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:30.976+0000 I NETWORK [conn398] received client metadata from 10.108.2.46:41104 conn398: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:30.976+0000 I COMMAND [conn398] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:30.977+0000 D2 COMMAND [conn398] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 89755D210D4D749FC922939A0BD0751F8197C269), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:33:30.977+0000 D1 REPL [conn398] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578810, 3), t: 1 }
2019-09-04T06:33:30.977+0000 D3 REPL [conn398] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.987+0000
2019-09-04T06:33:30.992+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:31.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:31.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, ADBF8CD6CD74CDEEA2806F461D4F66E80ECEFFB7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:31.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:33:31.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, ADBF8CD6CD74CDEEA2806F461D4F66E80ECEFFB7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:31.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, ADBF8CD6CD74CDEEA2806F461D4F66E80ECEFFB7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:31.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), opTime: { ts: Timestamp(1567578810, 3), t: 1 }, wallTime: new Date(1567578810340) }
2019-09-04T06:33:31.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, ADBF8CD6CD74CDEEA2806F461D4F66E80ECEFFB7), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.092+0000 D4 STORAGE [WTJournalFlusher] flushed journal
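
The steady drumbeat of isMaster calls (conn22, conn31, conn33, and the rest, roughly twice a second per connection) is driver and mongos topology monitoring, and each newly accepted connection (#395 through #398) opens with an isMaster handshake carrying client metadata before issuing its real command. Any client can issue the same probe; a sketch:

    # Sketch: the same topology probe the monitors above are sending.
    # (On 4.2 the command is "isMaster"; newer servers also accept "hello".)
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")
    reply = client.admin.command("isMaster")
    print(reply["ismaster"], reply.get("secondary"), reply.get("setName"))
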
2019-09-04T06:33:31.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.192+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:31.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:31.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:31.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.293+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:31.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.344+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:31.344+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:31.344+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578810, 3)
2019-09-04T06:33:31.344+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17647
2019-09-04T06:33:31.344+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:31.344+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:31.344+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17647
2019-09-04T06:33:31.345+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17650
2019-09-04T06:33:31.345+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17650
2019-09-04T06:33:31.345+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578810, 3), t: 1 }({ ts: Timestamp(1567578810, 3), t: 1 })
2019-09-04T06:33:31.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.393+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:31.493+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:31.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.593+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:31.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.693+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:31.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:31.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:31.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
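
The ReplBatcher entries expose the oplog's catalog metadata: local.oplog.rs is a capped collection (capped: true, size: 1073741824, a 1 GiB cap) with no indexes (autoIndexId: false, indexes: []). Capped collections can be created and inspected from any client; a sketch with a made-up collection name:

    # Sketch: create and inspect a capped collection. "ringbuf" is a
    # hypothetical name, and 1 MiB stands in for the oplog's 1 GiB cap.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")
    db = client["test"]
    db.create_collection("ringbuf", capped=True, size=1024 * 1024)
    print(db["ringbuf"].options())  # e.g. {'capped': True, 'size': 1048576}
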
0ms 2019-09-04T06:33:31.793+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:31.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:31.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:31.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:31.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:31.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:31.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:31.894+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:31.994+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:32.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:32.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:32.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:32.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:32.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:32.094+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:32.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:32.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:32.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:32.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:32.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:32.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:32.194+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:32.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:32.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:32.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:32.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:32.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, ADBF8CD6CD74CDEEA2806F461D4F66E80ECEFFB7), keyId: 6727891476899954718 } }, $db: "admin" } 
2019-09-04T06:33:32.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:33:32.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, ADBF8CD6CD74CDEEA2806F461D4F66E80ECEFFB7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:32.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, ADBF8CD6CD74CDEEA2806F461D4F66E80ECEFFB7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:32.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), opTime: { ts: Timestamp(1567578810, 3), t: 1 }, wallTime: new Date(1567578810340) }
2019-09-04T06:33:32.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578810, 3), signature: { hash: BinData(0, ADBF8CD6CD74CDEEA2806F461D4F66E80ECEFFB7), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:32.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:32.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:32.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:32.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.294+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:32.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:32.325+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.344+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:32.344+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:32.344+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578810, 3)
2019-09-04T06:33:32.344+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17679
2019-09-04T06:33:32.344+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:32.344+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:32.344+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17679
2019-09-04T06:33:32.345+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17682
2019-09-04T06:33:32.345+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17682
2019-09-04T06:33:32.345+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578810, 3), t: 1 }({ ts: Timestamp(1567578810, 3), t: 1 })
2019-09-04T06:33:32.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:32.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:32.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.394+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:32.468+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48470 #399 (90 connections now open)
2019-09-04T06:33:32.468+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:32.468+0000 D2 COMMAND [conn399] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:32.468+0000 I NETWORK [conn399] received client metadata from 10.108.2.59:48470 conn399: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:32.468+0000 I COMMAND [conn399] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:32.468+0000 D2 COMMAND [conn399] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578811, 1), signature: { hash: BinData(0, E5356C76D2F90A760D65BDFF11E1DF1886F143E2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:33:32.468+0000 D1 REPL [conn399] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578810, 3), t: 1 }
2019-09-04T06:33:32.468+0000 D3 REPL [conn399] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:02.478+0000
2019-09-04T06:33:32.495+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:32.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:32.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:32.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.595+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:32.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:32.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:32.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:32.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.695+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:32.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:32.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:32.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:32.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:32.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.795+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:32.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:32.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:32.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1197) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:32.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1197 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:42.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:32.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:00.839+0000
2019-09-04T06:33:32.838+0000 D2 ASIO [Replication] Request 1197 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), opTime: { ts: Timestamp(1567578810, 3), t: 1 }, wallTime: new Date(1567578810340), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578811, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) }
2019-09-04T06:33:32.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), opTime: { ts: Timestamp(1567578810, 3), t: 1 }, wallTime: new Date(1567578810340), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578811, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:32.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:32.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1197) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), opTime: { ts: Timestamp(1567578810, 3), t: 1 }, wallTime: new Date(1567578810340), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578811, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) }
2019-09-04T06:33:32.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:33:32.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:34.838Z
2019-09-04T06:33:32.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:00.839+0000
2019-09-04T06:33:32.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:32.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1198) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:32.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1198 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:42.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:32.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:00.839+0000
2019-09-04T06:33:32.839+0000 D2 ASIO [Replication] Request 1198 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), opTime: { ts: Timestamp(1567578810, 3), t: 1 }, wallTime: new Date(1567578810340), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578811, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) }
2019-09-04T06:33:32.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), opTime: { ts: Timestamp(1567578810, 3), t: 1 }, wallTime: new Date(1567578810340), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578811, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) } target: cmodb802.togewa.com:27019
2019-09-04T06:33:32.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:32.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1198) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), opTime: { ts: Timestamp(1567578810, 3), t: 1 }, wallTime: new Date(1567578810340), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578811, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578810, 3) }
2019-09-04T06:33:32.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:33:32.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:33:41.527+0000
2019-09-04T06:33:32.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:33:43.901+0000
2019-09-04T06:33:32.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:33:32.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:34.839Z
2019-09-04T06:33:32.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:32.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:02.839+0000
2019-09-04T06:33:32.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:02.839+0000
2019-09-04T06:33:32.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:32.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:32.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:32.895+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:32.995+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:33.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:33.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:33.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:33.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578811, 1), signature: { hash: BinData(0, 8B62FAF558DC8512CF3171DB91BE7673F61F4ACC), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:33.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:33:33.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578811, 1), signature: { hash: BinData(0, 8B62FAF558DC8512CF3171DB91BE7673F61F4ACC), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:33.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578811, 1), signature: { hash: BinData(0, 8B62FAF558DC8512CF3171DB91BE7673F61F4ACC), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:33.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), opTime: { ts: Timestamp(1567578810, 3), t: 1 }, wallTime: new Date(1567578810340) }
2019-09-04T06:33:33.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578811, 1), signature: { hash: BinData(0, 8B62FAF558DC8512CF3171DB91BE7673F61F4ACC), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:33.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:33.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:33.095+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:33.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:33.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:33.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:33.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:33.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:33.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:33.196+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:33.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:33.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:33.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:33.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:33.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:33.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:33.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:33.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:33.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:33.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:33.296+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:33.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:33.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:33.345+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:33.345+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:33.345+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578810, 3)
2019-09-04T06:33:33.345+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17713
2019-09-04T06:33:33.345+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:33.345+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:33.345+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17713
2019-09-04T06:33:33.346+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17716
2019-09-04T06:33:33.346+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17716
2019-09-04T06:33:33.346+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578810, 3), t: 1 }({ ts: Timestamp(1567578810, 3), t: 1 })
2019-09-04T06:33:33.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:33.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:33.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:33.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:33.396+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:33.474+0000 I NETWORK [listener] connection accepted from 10.108.2.63:36406 #400 (91 connections now open)
2019-09-04T06:33:33.474+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:33.475+0000 D2 COMMAND [conn400] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:33.475+0000 I NETWORK [conn400] received client metadata from 10.108.2.63:36406 conn400: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:33.475+0000 I COMMAND [conn400] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:33.480+0000 D2 COMMAND [conn400] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578812, 1), signature: { hash: BinData(0, 4B378BB31368CDD862D6FBF154A78A3408447D9E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:33.480+0000 D1 REPL [conn400] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578810, 3), t: 1 }
2019-09-04T06:33:33.480+0000 D3 REPL [conn400] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:03.490+0000
2019-09-04T06:33:33.496+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:33.518+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:33:33.518+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:33:33.518+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:33.518+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:33:33.544+0000 D2 ASIO [RS] Request 1192 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578813, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578813541), o: { $v: 1, $set: { ping: new Date(1567578813536) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpApplied: { ts: Timestamp(1567578813, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) }
2019-09-04T06:33:33.544+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578813, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578813541), o: { $v: 1, $set: { ping: new Date(1567578813536) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpApplied: { ts: Timestamp(1567578813, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578810, 3), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:33.544+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:33.544+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578813, 1) and ending at ts: Timestamp(1567578813, 1)
2019-09-04T06:33:33.544+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:43.901+0000
2019-09-04T06:33:33.544+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:43.793+0000
2019-09-04T06:33:33.544+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:33.544+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:02.839+0000
2019-09-04T06:33:33.544+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578813, 1), t: 1 }
2019-09-04T06:33:33.544+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:33.544+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:33.544+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578810, 3)
2019-09-04T06:33:33.544+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17725
2019-09-04T06:33:33.544+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:33.544+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:33.544+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17725
2019-09-04T06:33:33.544+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:33.544+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:33:33.544+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:33.544+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578810, 3)
2019-09-04T06:33:33.544+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578813, 1) }
2019-09-04T06:33:33.544+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17728
2019-09-04T06:33:33.544+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:33.544+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:33.544+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17728
2019-09-04T06:33:33.544+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17717
2019-09-04T06:33:33.544+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17717
2019-09-04T06:33:33.544+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17731
2019-09-04T06:33:33.544+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17731
2019-09-04T06:33:33.544+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:33.544+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 17733
2019-09-04T06:33:33.544+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578813, 1)
2019-09-04T06:33:33.544+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578813, 1)
2019-09-04T06:33:33.544+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 17733
2019-09-04T06:33:33.544+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:33.544+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:33:33.544+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17732
2019-09-04T06:33:33.544+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17732
2019-09-04T06:33:33.544+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17735
2019-09-04T06:33:33.544+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17735
2019-09-04T06:33:33.544+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578813, 1), t: 1 }({ ts: Timestamp(1567578813, 1), t: 1 })
2019-09-04T06:33:33.545+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578813, 1)
2019-09-04T06:33:33.545+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17736
2019-09-04T06:33:33.545+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578813, 1) } } ] } sort: {} projection: {}
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578813, 1) Sort: {} Proj: {} =============================
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578813, 1) || First: notFirst: full path: ts
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578813, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578813, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578813, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578813, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:33.545+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17736
2019-09-04T06:33:33.545+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:33.545+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:33.545+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578813, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578813541), o: { $v: 1, $set: { ping: new Date(1567578813536) } } }, oplog application mode: Secondary
2019-09-04T06:33:33.545+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578813, 1)
2019-09-04T06:33:33.545+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 17738
2019-09-04T06:33:33.545+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }
2019-09-04T06:33:33.545+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:33:33.545+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 17738
2019-09-04T06:33:33.545+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:33.545+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578813, 1), t: 1 }({ ts: Timestamp(1567578813, 1), t: 1 })
2019-09-04T06:33:33.545+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578813, 1)
2019-09-04T06:33:33.545+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17737
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:33.545+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:33.545+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:33:33.545+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17737
2019-09-04T06:33:33.545+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578813, 1)
2019-09-04T06:33:33.545+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17741
2019-09-04T06:33:33.545+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17741
2019-09-04T06:33:33.545+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578813, 1), t: 1 }({ ts: Timestamp(1567578813, 1), t: 1 })
2019-09-04T06:33:33.545+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:33.545+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), appliedOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, appliedWallTime: new Date(1567578810340), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), appliedOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, appliedWallTime: new Date(1567578810340), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:33.545+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1199 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:03.545+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), appliedOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, appliedWallTime: new Date(1567578810340), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), appliedOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, appliedWallTime: new Date(1567578810340), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578810, 3), t: 1 }, lastCommittedWall: new Date(1567578810340), lastOpVisible: { ts: Timestamp(1567578810, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:33.545+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:03.545+0000
2019-09-04T06:33:33.546+0000 D2 ASIO [RS] Request 1199 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) }
2019-09-04T06:33:33.546+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:33.546+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:33.546+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:03.546+0000
2019-09-04T06:33:33.546+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578813, 1), t: 1 }
2019-09-04T06:33:33.546+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1200 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:43.546+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578810, 3), t: 1 } }
2019-09-04T06:33:33.546+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:03.546+0000
2019-09-04T06:33:33.546+0000 D2 ASIO [RS] Request 1200 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpApplied: { ts: Timestamp(1567578813, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) }
2019-09-04T06:33:33.546+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpApplied: { ts: Timestamp(1567578813, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:33.546+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:33.546+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:33:33.546+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.546+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.546+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578808, 1)
2019-09-04T06:33:33.546+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:43.793+0000
2019-09-04T06:33:33.546+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:44.788+0000
2019-09-04T06:33:33.546+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:33.546+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:02.839+0000
2019-09-04T06:33:33.546+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1201 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:43.546+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578813, 1), t: 1 } }
2019-09-04T06:33:33.547+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:03.546+0000
2019-09-04T06:33:33.546+0000 D3 REPL [conn383] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn383] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.025+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn395] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn395] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.702+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn389] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn389] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.596+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn371] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn371] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn391] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn391] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:56.309+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn384] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn384] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.054+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn378] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn378] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.046+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn386] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn386] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.645+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn381] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn381] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn367] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn367] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn375] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn375] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.877+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn356] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn356] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.043+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn368] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn368] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.038+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn377] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn377] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.752+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn372] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn372] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.660+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn374] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn374] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.876+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn394] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn394] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.261+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn396] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn396] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.897+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn397] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn397] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.962+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn376] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn376] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.897+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn399] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn399] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:02.478+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn400] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn400] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:03.490+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn366] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn366] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.913+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn379] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn379] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:43.509+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn385] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn385] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.131+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn392] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn392] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:58.760+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn369] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn369] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.039+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn388] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn388] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.054+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn362] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn362] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.661+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn373] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn373] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn387] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn387] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.455+0000
2019-09-04T06:33:33.547+0000 D3 REPL [conn359] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.548+0000 D3 REPL [conn359] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.882+0000
2019-09-04T06:33:33.548+0000 D3 REPL [conn398] Got notified of new snapshot: { ts: Timestamp(1567578813, 1), t: 1 }, 2019-09-04T06:33:33.541+0000
2019-09-04T06:33:33.548+0000 D3 REPL [conn398] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.987+0000
2019-09-04T06:33:33.548+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:33:33.548+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:33.548+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), appliedOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, appliedWallTime: new Date(1567578810340), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), appliedOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, appliedWallTime: new Date(1567578810340), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:33.548+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1202 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:03.548+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), appliedOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, appliedWallTime: new Date(1567578810340), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, durableWallTime: new Date(1567578810340), appliedOpTime: { ts: Timestamp(1567578810, 3), t: 1 }, appliedWallTime: new Date(1567578810340), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:33.548+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:03.546+0000
2019-09-04T06:33:33.548+0000 D2 ASIO [RS] Request 1202 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) }
2019-09-04T06:33:33.548+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:33.548+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:33.548+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:03.546+0000
2019-09-04T06:33:33.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:33.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:33.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:33.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:33.596+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:33.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:33.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:33.644+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578813, 1)
2019-09-04T06:33:33.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:33.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:33.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:33.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster {
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:33.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:33.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:33.696+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:33.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:33.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:33.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:33.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:33.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:33.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:33.797+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:33.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:33.825+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:33.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:33.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:33.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:33.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:33.897+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:33.997+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:34.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:34.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.097+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:34.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.185+0000 D2 
COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.197+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:34.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:34.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:34.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:34.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:34.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), opTime: { ts: Timestamp(1567578813, 1), t: 1 }, wallTime: new Date(1567578813541) } 2019-09-04T06:33:34.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:34.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:33:34.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.297+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:34.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.398+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:34.498+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:34.544+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:34.544+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:34.544+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578813, 1) 2019-09-04T06:33:34.544+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17772 2019-09-04T06:33:34.544+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:34.544+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:34.544+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17772 2019-09-04T06:33:34.546+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17775 2019-09-04T06:33:34.546+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17775 2019-09-04T06:33:34.546+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578813, 1), t: 1 }({ ts: Timestamp(1567578813, 1), t: 1 }) 2019-09-04T06:33:34.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.598+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:34.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.598+0000 I COMMAND [conn45] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.698+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:34.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.798+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:34.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.825+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:34.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1203) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:34.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1203 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:44.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:34.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:02.839+0000 2019-09-04T06:33:34.838+0000 D2 ASIO [Replication] Request 1203 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), opTime: { ts: Timestamp(1567578813, 1), t: 1 }, wallTime: new Date(1567578813541), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), 
primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) } 2019-09-04T06:33:34.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), opTime: { ts: Timestamp(1567578813, 1), t: 1 }, wallTime: new Date(1567578813541), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:34.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:34.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1203) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), opTime: { ts: Timestamp(1567578813, 1), t: 1 }, wallTime: new Date(1567578813541), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) } 2019-09-04T06:33:34.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:33:34.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:36.838Z 2019-09-04T06:33:34.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:02.839+0000 2019-09-04T06:33:34.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:34.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1204) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:34.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1204 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:44.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 
} 2019-09-04T06:33:34.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:02.839+0000 2019-09-04T06:33:34.839+0000 D2 ASIO [Replication] Request 1204 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), opTime: { ts: Timestamp(1567578813, 1), t: 1 }, wallTime: new Date(1567578813541), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) } 2019-09-04T06:33:34.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), opTime: { ts: Timestamp(1567578813, 1), t: 1 }, wallTime: new Date(1567578813541), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:34.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:34.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1204) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), opTime: { ts: Timestamp(1567578813, 1), t: 1 }, wallTime: new Date(1567578813541), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) } 2019-09-04T06:33:34.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:34.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:33:44.788+0000 
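[editor's note] The exchanges above (requestIds 1203 to cmodb804 and 1204 to cmodb802) are one full heartbeat round as seen from this secondary: it pings both peers, learns that cmodb802 is still primary (state: 1), pushes its election timeout out roughly 10 seconds (from 06:33:44.788 to 06:33:44.885), and schedules the next round 2 seconds later. The same per-member state carried in these heartbeat responses (opTime, sync source, primary) can be read on demand with the replSetGetStatus command. A minimal pymongo sketch, assuming pymongo >= 3.12, direct network access to this node, and authorization disabled as in this deployment; the "optime"/"syncingTo" field names match the 4.2-era output quoted in these logs (newer servers report "syncSourceHost" instead):

from pymongo import MongoClient

# Connect straight to this config-server node, bypassing replica-set discovery.
client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    print(member["name"],                       # e.g. cmodb802.togewa.com:27019
          member["stateStr"],                   # PRIMARY / SECONDARY
          member.get("optime", {}).get("ts"),   # last applied opTime, as in the heartbeat responses
          member.get("syncingTo", ""))          # sync source (4.2 field name)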
2019-09-04T06:33:34.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:33:44.885+0000 2019-09-04T06:33:34.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:34.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:36.839Z 2019-09-04T06:33:34.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:34.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:04.839+0000 2019-09-04T06:33:34.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:04.839+0000 2019-09-04T06:33:34.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:34.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:34.898+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:34.999+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:35.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:35.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:35.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:35.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:35.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:35.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), opTime: { ts: Timestamp(1567578813, 1), t: 1 }, wallTime: new Date(1567578813541) } 2019-09-04T06:33:35.063+0000 I COMMAND [conn34] command admin.$cmd command: 
replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.098+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.099+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:35.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.199+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:35.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:35.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:33:35.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.299+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:35.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.399+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:35.499+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:35.545+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:35.545+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:35.545+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578813, 1) 2019-09-04T06:33:35.545+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17805 2019-09-04T06:33:35.545+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:35.545+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:35.545+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17805 2019-09-04T06:33:35.546+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17808 2019-09-04T06:33:35.546+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17808 2019-09-04T06:33:35.546+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578813, 1), t: 1 }({ ts: Timestamp(1567578813, 1), t: 1 }) 2019-09-04T06:33:35.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 
reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.599+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:35.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.700+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:35.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.800+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:35.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:35.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:35.900+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:36.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:36.000+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:36.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.098+0000 D2 COMMAND [conn45] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.098+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.100+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:36.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.200+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:36.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:36.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:36.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:36.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:36.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), opTime: { ts: Timestamp(1567578813, 1), t: 1 }, wallTime: new Date(1567578813541) } 2019-09-04T06:33:36.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 
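[editor's note] The isMaster traffic that dominates this window (a dozen-plus connections, conn22 through conn75, each repeating every ~500 ms) is ordinary topology monitoring by connected clients, most likely the replica-set monitors of mongos or driver processes polling this config server; it is not application load. The same handshake can be issued by hand. A sketch under the same assumptions as above, using the legacy "isMaster" command name that 4.2 logs here (newer servers call the equivalent command "hello"):

from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
reply = client.admin.command("isMaster")
# Reply fields mirror what the monitors consume: role, set name, current primary.
print(reply.get("setName"),    # configrs
      reply.get("ismaster"),   # False on this node
      reply.get("secondary"),  # True on this node
      reply.get("primary"))    # cmodb802.togewa.com:27019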
2019-09-04T06:33:36.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:36.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:36.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.300+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:36.325+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.401+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:36.501+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:36.545+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:36.545+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:36.545+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578813, 1) 2019-09-04T06:33:36.545+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17837 2019-09-04T06:33:36.545+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:36.545+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:36.545+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17837 2019-09-04T06:33:36.546+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17840 2019-09-04T06:33:36.546+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17840 2019-09-04T06:33:36.546+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578813, 1), t: 1 }({ ts: Timestamp(1567578813, 1), t: 1 }) 2019-09-04T06:33:36.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.569+0000 D2 COMMAND [conn52] run command admin.$cmd 
{ isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.598+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.598+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.601+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:36.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.701+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:36.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.801+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:36.825+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:36.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1205) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:36.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1205 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:46.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:36.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:04.839+0000 2019-09-04T06:33:36.838+0000 D2 ASIO [Replication] Request 1205 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, 
durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), opTime: { ts: Timestamp(1567578813, 1), t: 1 }, wallTime: new Date(1567578813541), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) } 2019-09-04T06:33:36.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), opTime: { ts: Timestamp(1567578813, 1), t: 1 }, wallTime: new Date(1567578813541), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:36.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:36.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1205) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), opTime: { ts: Timestamp(1567578813, 1), t: 1 }, wallTime: new Date(1567578813541), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) } 2019-09-04T06:33:36.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:33:36.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:38.838Z 2019-09-04T06:33:36.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:04.839+0000 2019-09-04T06:33:36.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:36.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1206) to 
cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:36.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1206 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:46.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:36.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:04.839+0000 2019-09-04T06:33:36.839+0000 D2 ASIO [Replication] Request 1206 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), opTime: { ts: Timestamp(1567578813, 1), t: 1 }, wallTime: new Date(1567578813541), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) } 2019-09-04T06:33:36.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), opTime: { ts: Timestamp(1567578813, 1), t: 1 }, wallTime: new Date(1567578813541), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:36.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:36.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1206) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), opTime: { ts: Timestamp(1567578813, 1), t: 1 }, wallTime: new Date(1567578813541), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578813, 1) } 2019-09-04T06:33:36.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:36.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:33:44.885+0000 2019-09-04T06:33:36.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:33:47.799+0000 2019-09-04T06:33:36.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:36.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:38.839Z 2019-09-04T06:33:36.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:36.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:06.839+0000 2019-09-04T06:33:36.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:06.839+0000 2019-09-04T06:33:36.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:36.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:36.901+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:37.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:37.001+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:37.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:37.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:37.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:37.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:37.063+0000 D2 REPL_HB 
[conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), opTime: { ts: Timestamp(1567578813, 1), t: 1 }, wallTime: new Date(1567578813541) } 2019-09-04T06:33:37.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.102+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:37.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.202+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:37.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:37.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:33:37.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.302+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:37.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.402+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:37.502+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:37.545+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:37.545+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:37.545+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578813, 1) 2019-09-04T06:33:37.545+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17866 2019-09-04T06:33:37.545+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:37.545+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:37.545+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17866 2019-09-04T06:33:37.546+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17869 2019-09-04T06:33:37.546+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17869 2019-09-04T06:33:37.546+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578813, 1), t: 1 }({ ts: Timestamp(1567578813, 1), t: 1 }) 2019-09-04T06:33:37.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.602+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:37.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.696+0000 I COMMAND [conn31] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.702+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:37.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.803+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:37.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:37.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:37.903+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:38.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:38.003+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:38.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:38.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:38.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:38.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:38.103+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:38.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:38.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:38.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:38.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:38.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:38.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:38.203+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:38.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:38.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:38.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 
1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:38.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:38.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:38.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:38.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), opTime: { ts: Timestamp(1567578813, 1), t: 1 }, wallTime: new Date(1567578813541) } 2019-09-04T06:33:38.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578813, 1), signature: { hash: BinData(0, A7D7A1EB9CAEEC755FD84C760904D3DBF089BF30), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:38.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:38.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:38.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:38.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:33:38.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:38.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:38.303+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:38.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:38.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:38.366+0000 D2 ASIO [RS] Request 1201 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578818, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578818364), o: { $v: 1, $set: { ping: new Date(1567578818364) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpApplied: { ts: Timestamp(1567578818, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578818, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 1) } 2019-09-04T06:33:38.366+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578818, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578818364), o: { $v: 1, $set: { ping: new Date(1567578818364) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpApplied: { ts: Timestamp(1567578818, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578818, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:38.366+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:38.366+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578818, 1) and ending at ts: Timestamp(1567578818, 1) 2019-09-04T06:33:38.366+0000 D4 REPL [replication-1] Canceling 
election timeout callback at 2019-09-04T06:33:47.799+0000 2019-09-04T06:33:38.366+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:48.384+0000 2019-09-04T06:33:38.366+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:38.366+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:06.839+0000 2019-09-04T06:33:38.366+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578818, 1), t: 1 } 2019-09-04T06:33:38.366+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:38.366+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:38.366+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578813, 1) 2019-09-04T06:33:38.366+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17893 2019-09-04T06:33:38.366+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:38.366+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:38.366+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17893 2019-09-04T06:33:38.366+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:38.366+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:33:38.366+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:38.366+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578813, 1) 2019-09-04T06:33:38.366+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17896 2019-09-04T06:33:38.366+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578818, 1) } 2019-09-04T06:33:38.366+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:38.366+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:38.366+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17896 2019-09-04T06:33:38.366+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17870 2019-09-04T06:33:38.366+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17870 2019-09-04T06:33:38.366+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17899 2019-09-04T06:33:38.366+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17899 2019-09-04T06:33:38.366+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:38.366+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot 
id 17901 2019-09-04T06:33:38.366+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578818, 1) 2019-09-04T06:33:38.366+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578818, 1) 2019-09-04T06:33:38.366+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 17901 2019-09-04T06:33:38.366+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:38.366+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:33:38.366+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17900 2019-09-04T06:33:38.366+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17900 2019-09-04T06:33:38.366+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17903 2019-09-04T06:33:38.366+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17903 2019-09-04T06:33:38.366+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578818, 1), t: 1 }({ ts: Timestamp(1567578818, 1), t: 1 }) 2019-09-04T06:33:38.366+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578818, 1) 2019-09-04T06:33:38.366+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17904 2019-09-04T06:33:38.366+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578818, 1) } } ] } sort: {} projection: {} 2019-09-04T06:33:38.366+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:38.366+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:33:38.366+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578818, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:38.366+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:38.366+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:38.366+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:38.366+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578818, 1) || First: notFirst: full path: ts
2019-09-04T06:33:38.366+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:38.366+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578818, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578818, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578818, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578818, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:38.367+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17904 2019-09-04T06:33:38.367+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:38.367+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:38.367+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578818, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578818364), o: { $v: 1, $set: { ping: new Date(1567578818364) } } }, oplog application mode: Secondary 2019-09-04T06:33:38.367+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578818, 1) 2019-09-04T06:33:38.367+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 17906 2019-09-04T06:33:38.367+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "ConfigServer" } 2019-09-04T06:33:38.367+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:33:38.367+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 17906 2019-09-04T06:33:38.367+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:38.367+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578818, 1), t: 1 }({ ts: Timestamp(1567578818, 1), t: 1 }) 2019-09-04T06:33:38.367+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578818, 1) 2019-09-04T06:33:38.367+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17905 2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:38.367+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:38.367+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:38.367+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17905 2019-09-04T06:33:38.367+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578818, 1) 2019-09-04T06:33:38.367+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17910 2019-09-04T06:33:38.367+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17910 2019-09-04T06:33:38.367+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578818, 1), t: 1 }({ ts: Timestamp(1567578818, 1), t: 1 }) 2019-09-04T06:33:38.367+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:38.367+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578818, 1), t: 1 }, appliedWallTime: new Date(1567578818364), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:38.367+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1207 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:08.367+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578818, 1), t: 1 }, appliedWallTime: new Date(1567578818364), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:38.367+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.367+0000 2019-09-04T06:33:38.368+0000 D2 ASIO [RS] Request 1207 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578818, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 1) } 2019-09-04T06:33:38.368+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578818, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:38.368+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:38.368+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.368+0000 2019-09-04T06:33:38.368+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578818, 1), t: 1 } 2019-09-04T06:33:38.368+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1208 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:48.368+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578813, 1), t: 1 } } 2019-09-04T06:33:38.368+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.368+0000 2019-09-04T06:33:38.368+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:33:38.368+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:38.368+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 1), t: 1 }, durableWallTime: new Date(1567578818364), appliedOpTime: { ts: Timestamp(1567578818, 1), t: 1 }, appliedWallTime: new Date(1567578818364), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:38.368+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1209 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:08.368+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { 
ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 1), t: 1 }, durableWallTime: new Date(1567578818364), appliedOpTime: { ts: Timestamp(1567578818, 1), t: 1 }, appliedWallTime: new Date(1567578818364), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:38.368+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.368+0000 2019-09-04T06:33:38.368+0000 D2 ASIO [RS] Request 1209 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578818, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 1) } 2019-09-04T06:33:38.368+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578813, 1), t: 1 }, lastCommittedWall: new Date(1567578813541), lastOpVisible: { ts: Timestamp(1567578813, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578813, 1), $clusterTime: { clusterTime: Timestamp(1567578818, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:38.368+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:38.368+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.368+0000 2019-09-04T06:33:38.369+0000 D2 ASIO [RS] Request 1208 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 1), t: 1 }, lastCommittedWall: new Date(1567578818364), lastOpVisible: { ts: Timestamp(1567578818, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578818, 1), t: 1 }, lastCommittedWall: new Date(1567578818364), lastOpApplied: { ts: Timestamp(1567578818, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578818, 1), $clusterTime: { clusterTime: Timestamp(1567578818, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 1) } 2019-09-04T06:33:38.369+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 1), t: 1 }, lastCommittedWall: new Date(1567578818364), lastOpVisible: { ts: Timestamp(1567578818, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578818, 1), t: 1 }, lastCommittedWall: new Date(1567578818364), lastOpApplied: { ts: Timestamp(1567578818, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 1), $clusterTime: { clusterTime: Timestamp(1567578818, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:38.369+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:38.369+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:38.369+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.369+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.369+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578813, 1) 2019-09-04T06:33:38.369+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:48.384+0000 2019-09-04T06:33:38.369+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:49.705+0000 2019-09-04T06:33:38.369+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1210 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:48.369+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578818, 1), t: 1 } } 2019-09-04T06:33:38.369+0000 D3 REPL [conn371] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.369+0000 D3 REPL [conn371] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000 2019-09-04T06:33:38.369+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.368+0000 2019-09-04T06:33:38.369+0000 D3 REPL [conn384] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.369+0000 D3 REPL [conn384] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.054+0000 2019-09-04T06:33:38.369+0000 D3 REPL [conn391] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.369+0000 D3 REPL [conn391] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:56.309+0000 2019-09-04T06:33:38.369+0000 D3 REPL [conn378] Got notified 
of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.369+0000 D3 REPL [conn378] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.046+0000 2019-09-04T06:33:38.369+0000 D3 REPL [conn356] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.369+0000 D3 REPL [conn356] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.043+0000 2019-09-04T06:33:38.369+0000 D3 REPL [conn368] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.369+0000 D3 REPL [conn368] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.038+0000 2019-09-04T06:33:38.369+0000 D3 REPL [conn374] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.369+0000 D3 REPL [conn374] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.876+0000 2019-09-04T06:33:38.369+0000 D3 REPL [conn366] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.369+0000 D3 REPL [conn366] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.913+0000 2019-09-04T06:33:38.369+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:38.369+0000 D3 REPL [conn385] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.369+0000 D3 REPL [conn385] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.131+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn387] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn387] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.455+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn399] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.369+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:06.839+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn399] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:02.478+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn388] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn388] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.054+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn373] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn373] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn359] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn359] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.882+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn400] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn400] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:03.490+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn383] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 
2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn383] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.025+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn395] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn395] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.702+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn389] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn389] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.596+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn386] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn386] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.645+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn367] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn367] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn381] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn381] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn375] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn375] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.877+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn377] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn377] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.752+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn372] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn372] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.660+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn394] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn394] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.261+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn397] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn397] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.962+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn396] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn396] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.897+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn398] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn398] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.987+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn369] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 
2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn369] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.039+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn362] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn362] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.661+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn376] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn376] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.897+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn392] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn392] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:58.760+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn379] Got notified of new snapshot: { ts: Timestamp(1567578818, 1), t: 1 }, 2019-09-04T06:33:38.364+0000 2019-09-04T06:33:38.370+0000 D3 REPL [conn379] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:43.509+0000 2019-09-04T06:33:38.403+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:38.466+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578818, 1) 2019-09-04T06:33:38.504+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:38.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:38.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:38.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:38.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:38.604+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:38.622+0000 D2 ASIO [RS] Request 1210 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578818, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578818610), o: { $v: 1, $set: { ping: new Date(1567578818609) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 1), t: 1 }, lastCommittedWall: new Date(1567578818364), lastOpVisible: { ts: Timestamp(1567578818, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578818, 1), t: 1 }, lastCommittedWall: new Date(1567578818364), lastOpApplied: { ts: Timestamp(1567578818, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 1), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } 2019-09-04T06:33:38.622+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { 
ts: Timestamp(1567578818, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578818610), o: { $v: 1, $set: { ping: new Date(1567578818609) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 1), t: 1 }, lastCommittedWall: new Date(1567578818364), lastOpVisible: { ts: Timestamp(1567578818, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578818, 1), t: 1 }, lastCommittedWall: new Date(1567578818364), lastOpApplied: { ts: Timestamp(1567578818, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 1), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:38.622+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:38.622+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578818, 2) and ending at ts: Timestamp(1567578818, 2) 2019-09-04T06:33:38.622+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:49.705+0000 2019-09-04T06:33:38.622+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:49.938+0000 2019-09-04T06:33:38.622+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578818, 2), t: 1 } 2019-09-04T06:33:38.622+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:38.622+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:06.839+0000 2019-09-04T06:33:38.622+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:38.622+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:38.622+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578818, 1) 2019-09-04T06:33:38.622+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17915 2019-09-04T06:33:38.622+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:38.622+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:38.622+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17915 2019-09-04T06:33:38.622+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:33:38.622+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:38.622+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578818, 2) } 
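At this point the oplog fetcher has read the config.lockpings update from the sync source and rsSync-0 has pinned the oplog truncate-after point at the batch's last timestamp; the records that follow show a repl-writer worker inserting the entry into local.oplog.rs at Timestamp(1567578818, 2) and the truncate-after point being cleared back to Timestamp(0, 0). A minimal way to inspect the same entry by hand is sketched below for the mongo shell, connected to any configrs member; the collection name and timestamp are copied from the records above, and rs.slaveOk() is only needed when connected directly to a secondary.

    // Fetch the exact entry the oplog fetcher just read (ts copied from the log).
    rs.slaveOk();
    var oplog = db.getSiblingDB("local").getCollection("oplog.rs");
    oplog.find({ ts: Timestamp(1567578818, 2) }).pretty();
    // The newest entry should line up with the fetcher's _lastOpTimeFetched record.
    oplog.find().sort({ $natural: -1 }).limit(1).pretty();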
2019-09-04T06:33:38.622+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:38.622+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578818, 1)
2019-09-04T06:33:38.622+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17918
2019-09-04T06:33:38.622+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17911
2019-09-04T06:33:38.622+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:38.622+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:38.622+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17918
2019-09-04T06:33:38.622+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17911
2019-09-04T06:33:38.622+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17921
2019-09-04T06:33:38.622+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17921
2019-09-04T06:33:38.622+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:38.622+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 17923
2019-09-04T06:33:38.622+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578818, 2)
2019-09-04T06:33:38.622+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578818, 2)
2019-09-04T06:33:38.622+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 17923
2019-09-04T06:33:38.622+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:38.622+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:33:38.622+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17922
2019-09-04T06:33:38.622+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17922
2019-09-04T06:33:38.622+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17925
2019-09-04T06:33:38.623+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17925
2019-09-04T06:33:38.623+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578818, 2), t: 1 }({ ts: Timestamp(1567578818, 2), t: 1 })
2019-09-04T06:33:38.623+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578818, 2)
2019-09-04T06:33:38.623+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17926
2019-09-04T06:33:38.623+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578818, 2) } } ] } sort: {} projection: {}
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query: ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578818, 2)
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578818, 2) || First: notFirst: full path: ts
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578818, 2)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query: ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query: ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578818, 2)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578818, 2) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578818, 2)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:38.623+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17926
2019-09-04T06:33:38.623+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:38.623+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:38.623+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578818, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578818610), o: { $v: 1, $set: { ping: new Date(1567578818609) } } }, oplog application mode: Secondary
2019-09-04T06:33:38.623+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578818, 2)
2019-09-04T06:33:38.623+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 17928
2019-09-04T06:33:38.623+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }
2019-09-04T06:33:38.623+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:33:38.623+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 17928
2019-09-04T06:33:38.623+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:38.623+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578818, 2), t: 1 }({ ts: Timestamp(1567578818, 2), t: 1 })
2019-09-04T06:33:38.623+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578818, 2)
2019-09-04T06:33:38.623+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17927
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query: ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:38.623+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:38.623+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
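The D5 QUERY trace above shows why every subquery of the minvalid lookup ends in a collection scan: local.replset.minvalid carries only the default _id index, so no indexed solution covers the predicates on t or ts, and the planner falls back to COLLSCAN (harmless for a single-document collection). A hedged sketch to reproduce the plan selection (names and the optime copied from the log; read access to the local database assumed):

    # Sketch: ask the planner for the winning plan of the same $or query
    # rsSync-0 runs above; with only the _id index present, expect the
    # subplanned collection-scan solution reported in the trace.
    from bson.timestamp import Timestamp
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    cursor = client.local["replset.minvalid"].find(
        {"$or": [{"t": {"$lt": 1}},
                 {"t": 1, "ts": {"$lt": Timestamp(1567578818, 2)}}]}
    )
    print(cursor.explain()["queryPlanner"]["winningPlan"])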
2019-09-04T06:33:38.623+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 17927
2019-09-04T06:33:38.623+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578818, 2)
2019-09-04T06:33:38.623+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17931
2019-09-04T06:33:38.623+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17931
2019-09-04T06:33:38.623+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578818, 2), t: 1 }({ ts: Timestamp(1567578818, 2), t: 1 })
2019-09-04T06:33:38.623+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:38.623+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 1), t: 1 }, durableWallTime: new Date(1567578818364), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 1), t: 1 }, lastCommittedWall: new Date(1567578818364), lastOpVisible: { ts: Timestamp(1567578818, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:38.623+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1211 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:08.623+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 1), t: 1 }, durableWallTime: new Date(1567578818364), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 1), t: 1 }, lastCommittedWall: new Date(1567578818364), lastOpVisible: { ts: Timestamp(1567578818, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:38.623+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.623+0000
2019-09-04T06:33:38.624+0000 D2 ASIO [RS] Request 1211 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 1), t: 1 }, lastCommittedWall: new Date(1567578818364), lastOpVisible: { ts: Timestamp(1567578818, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 1), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) }
2019-09-04T06:33:38.624+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 1), t: 1 }, lastCommittedWall: new Date(1567578818364), lastOpVisible: { ts: Timestamp(1567578818, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 1), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:38.624+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:38.624+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.624+0000
2019-09-04T06:33:38.624+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578818, 2), t: 1 }
2019-09-04T06:33:38.624+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1212 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:48.624+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578818, 1), t: 1 } }
2019-09-04T06:33:38.624+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.624+0000
2019-09-04T06:33:38.632+0000 D2 ASIO [RS] Request 1212 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpApplied: { ts: Timestamp(1567578818, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) }
2019-09-04T06:33:38.633+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpApplied: { ts: Timestamp(1567578818, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } target: cmodb804.togewa.com:27019
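Requests 1211 and 1212 are the two halves of steady-state replication: replSetUpdatePosition pushes this member's durable/applied optimes to the sync source, while an awaiting getMore (maxTimeMS: 5000, batchSize: 13981010) keeps the oplog cursor open upstream. The same per-member optimes are exposed by replSetGetStatus; a small sketch (host from the log):

    # Sketch: inspect the per-member applied optimes that the
    # replSetUpdatePosition command above is propagating.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        print(m["name"], m["stateStr"], m.get("optime", {}).get("ts"))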
2019-09-04T06:33:38.633+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:38.633+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:33:38.633+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578813, 2)
2019-09-04T06:33:38.633+0000 D3 REPL [conn376] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn376] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.897+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn368] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn368] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.038+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn377] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn377] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.752+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn395] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn395] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.702+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn388] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn388] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.054+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn359] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn359] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.882+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn371] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn371] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn389] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn389] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.596+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn367] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn367] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn375] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn375] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.877+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn372] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn372] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.660+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn397] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn397] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.962+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn398] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn398] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.987+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn362] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn362] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.661+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn392] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn392] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:58.760+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn383] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn383] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.025+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn379] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.633+0000 D3 REPL [conn379] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:43.509+0000
2019-09-04T06:33:38.633+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:49.938+0000
2019-09-04T06:33:38.633+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:49.954+0000
2019-09-04T06:33:38.633+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:38.633+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:06.839+0000
2019-09-04T06:33:38.633+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1213 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:48.633+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578818, 2), t: 1 } }
2019-09-04T06:33:38.633+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.624+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn378] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn378] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.046+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn369] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn369] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.039+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn396] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn396] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.897+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn394] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn394] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.261+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn366] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn366] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.913+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn385] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn385] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.131+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn373] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn373] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.934+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn381] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn381] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.023+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn386] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn386] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.645+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn387] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn387] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.455+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn374] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn374] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:41.876+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn356] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn356] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.043+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn384] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn384] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:47.054+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn399] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn399] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:02.478+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn400] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
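Each conn* pair above is one client operation parked in waitUntilOpTime: the caller (here mongos, via afterOpTime in its readConcern) asked to read at an opTime the committed snapshot had not yet reached, each new snapshot notification re-checks the wait, and the "until" timestamps are per-operation deadlines. A driver-side sketch that produces the same server-side wait, using a causally consistent session rather than a raw afterOpTime (collection choice illustrative):

    # Sketch: a majority read in a causally consistent session; the server
    # may block in waitUntilOpTime until the committed snapshot reaches
    # the session's operationTime, as the conn* threads above do.
    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    coll = client.config.get_collection("lockpings",
                                        read_concern=ReadConcern("majority"))
    with client.start_session(causal_consistency=True) as session:
        doc = coll.find_one({}, session=session)
        print(session.operation_time, doc and doc["_id"])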
2019-09-04T06:33:38.634+0000 D3 REPL [conn400] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:03.490+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn391] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000
2019-09-04T06:33:38.634+0000 D3 REPL [conn391] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:56.309+0000
2019-09-04T06:33:38.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:38.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:38.677+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:33:38.677+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:38.677+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:38.677+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1214 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:08.677+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, durableWallTime: new Date(1567578813541), appliedOpTime: { ts: Timestamp(1567578813, 1), t: 1 }, appliedWallTime: new Date(1567578813541), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:38.677+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.624+0000
2019-09-04T06:33:38.677+0000 D2 ASIO [RS] Request 1214 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) }
2019-09-04T06:33:38.677+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:38.677+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:38.677+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.624+0000
2019-09-04T06:33:38.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:38.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:38.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:38.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:38.704+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:38.722+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578818, 2)
2019-09-04T06:33:38.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:38.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:38.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:38.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:38.804+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:38.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:38.838+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:33:37.063+0000
2019-09-04T06:33:38.838+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:33:38.232+0000
2019-09-04T06:33:38.838+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:33:37.063+0000
2019-09-04T06:33:38.838+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:33:47.063+0000
2019-09-04T06:33:38.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:38.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1215) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:38.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1215 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:48.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:38.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.838+0000
2019-09-04T06:33:38.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.838+0000
2019-09-04T06:33:38.838+0000 D2 ASIO [Replication] Request 1215 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) }
2019-09-04T06:33:38.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:38.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:38.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1215) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) }
2019-09-04T06:33:38.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:33:38.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:40.838Z
2019-09-04T06:33:38.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.838+0000
2019-09-04T06:33:38.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:38.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1216) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:38.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1216 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:48.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:38.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.838+0000
2019-09-04T06:33:38.839+0000 D2 ASIO [Replication] Request 1216 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) }
2019-09-04T06:33:38.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:33:38.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:38.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1216) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) }
2019-09-04T06:33:38.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:33:38.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:33:49.954+0000
2019-09-04T06:33:38.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:33:50.049+0000
2019-09-04T06:33:38.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:33:38.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:40.839Z
2019-09-04T06:33:38.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:38.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.839+0000
2019-09-04T06:33:38.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.839+0000
2019-09-04T06:33:38.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:38.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:38.904+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:39.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:39.004+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:39.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:39.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:39.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:39.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:33:39.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $db: "admin" }
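Heartbeats 1215 and 1216 show the replica-set liveness machinery: each member pings its peers every two seconds (the next heartbeats are scheduled for 06:33:40.838Z and 06:33:40.839Z), and a good response from the primary postpones the election timeout, here rescheduled to 06:33:50.049, roughly now plus the 10-second default electionTimeoutMillis with a small randomized offset. A sketch for reading the tunables behind this cadence (field names per the 4.2 replica-set configuration document; defaults shown as fallbacks):

    # Sketch: read the heartbeat/election tunables that drive the REPL_HB
    # and ELECTION scheduling above. The fallback values are 4.2 defaults.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    cfg = client.admin.command("replSetGetConfig")["config"]
    settings = cfg.get("settings", {})
    print("heartbeatIntervalMillis:", settings.get("heartbeatIntervalMillis", 2000))
    print("electionTimeoutMillis:", settings.get("electionTimeoutMillis", 10000))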
2019-09-04T06:33:39.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:39.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610) }
2019-09-04T06:33:39.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:39.068+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:39.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:39.104+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:39.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:39.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:39.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:39.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:39.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:39.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:39.205+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:39.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:39.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:39.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:39.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:39.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:39.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:39.305+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:39.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:39.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:39.405+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:39.505+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:39.537+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578818, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578818, 2), t: 1 } }, $db: "config" }
2019-09-04T06:33:39.537+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578818, 2), t: 1 } } }
2019-09-04T06:33:39.537+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:33:39.537+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578818, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578818, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578818, 2)
2019-09-04T06:33:39.537+0000 D2 QUERY [conn61] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1
2019-09-04T06:33:39.537+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578818, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578818, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:33:39.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:39.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:39.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:39.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:39.605+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:39.622+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:39.622+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:39.622+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578818, 2)
2019-09-04T06:33:39.622+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 17956
2019-09-04T06:33:39.622+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:39.622+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:39.622+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 17956
2019-09-04T06:33:39.623+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 17959
2019-09-04T06:33:39.623+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 17959
2019-09-04T06:33:39.623+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578818, 2), t: 1 }({ ts: Timestamp(1567578818, 2), t: 1 })
2019-09-04T06:33:39.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:39.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:39.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:39.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
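The conn61 exchange above is mongos polling the balancer document with readConcern { level: "majority", afterOpTime: ... }: the server first waits for a committed snapshot at or beyond the requested opTime, then serves the find at readTs Timestamp(1567578818, 2); because config.settings does not exist yet, the plan is EOF and nothing is returned. A client-side equivalent of that read (majority read concern; on this cluster the document is absent, so this prints None):

    # Sketch: the balancer-settings lookup mongos issues above, expressed
    # as a driver query with majority read concern.
    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    settings = client.config.get_collection("settings",
                                            read_concern=ReadConcern("majority"))
    print(settings.find_one({"_id": "balancer"}))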
2019-09-04T06:33:39.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:39.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:39.705+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:39.727+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:39.727+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:39.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:39.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:39.805+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:39.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:39.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:39.906+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:40.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:40.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:33:40.003+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:33:40.003+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms
2019-09-04T06:33:40.006+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:40.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:33:40.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:33:40.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:33:40.010+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:33:40.010+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456
2019-09-04T06:33:40.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:33:40.011+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:40.011+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35129 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
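conn90 is a monitoring client authenticating as dba_root through the three-step SCRAM-SHA-1 handshake (one saslStart, two saslContinue round trips; the payloads are redacted as "xxx" in the log) before running serverStatus. From a driver, the whole exchange collapses into connection options (the password is a placeholder; it is not recoverable from the log):

    # Sketch: the SCRAM-SHA-1 handshake conn90 performs, driven by pymongo.
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://dba_root:<password>@cmodb803.togewa.com:27019/"
        "?authSource=admin&authMechanism=SCRAM-SHA-1"
    )
    print(client.admin.command("serverStatus")["uptime"])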
2019-09-04T06:33:40.012+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:40.012+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:33:40.012+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:33:40.013+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:33:40.013+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:40.013+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} =============================
2019-09-04T06:33:40.013+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:33:40.013+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:33:40.013+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:33:40.013+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:33:40.013+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:33:40.013+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:33:40.013+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:40.013+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:40.013+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:33:40.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578818, 2)
2019-09-04T06:33:40.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17973
2019-09-04T06:33:40.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17973
2019-09-04T06:33:40.014+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:40.014+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:40.014+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:33:40.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:33:40.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:40.014+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} =============================
2019-09-04T06:33:40.014+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:33:40.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:33:40.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578818, 2)
2019-09-04T06:33:40.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17976
2019-09-04T06:33:40.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17976
2019-09-04T06:33:40.014+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:40.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:33:40.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:40.015+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} =============================
2019-09-04T06:33:40.015+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:33:40.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578818, 2)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17978
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17978
2019-09-04T06:33:40.015+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:33:40.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:33:40.015+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:33:40.015+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2019-09-04T06:33:40.015+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:40.015+0000 D2 COMMAND [conn90] command: listDatabases
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17981
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17981
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17982
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17982
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17983
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17983
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17984
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" }
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17984
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17985
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" }
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17985
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17986
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
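Taken together, the conn90 commands above are a monitoring sweep: replSetGetStatus, a count of jumbo chunks (answered with a COLLSCAN, since the planner found no indexed solution for jumbo), and the first and last oplog entries fetched with a hinted $natural sort, which, as the planner notes, always forces a table scan. A rough pymongo equivalent of that sweep, reusing the authenticated client from the earlier sketch:

```python
# Sketch of the conn90 sweep: jumbo-chunk count plus oplog window.
# A hinted $natural sort is always a COLLSCAN, which is fine here because
# both queries stop after one document.
jumbo = client.config.chunks.count_documents({"jumbo": True})

oldest = client.local["oplog.rs"].find_one({"ts": {"$exists": True}},
                                           sort=[("$natural", 1)])
newest = client.local["oplog.rs"].find_one({"ts": {"$exists": True}},
                                           sort=[("$natural", -1)])
# bson.Timestamp.time is seconds since the epoch, so this is the
# replication window covered by the oplog.
window = newest["ts"].time - oldest["ts"].time
print(f"jumbo chunks: {jumbo}, oplog window: {window}s")
```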
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17986
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17987
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17987
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17988
2019-09-04T06:33:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17988
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17989
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17989
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17990
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17990
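This run of "looking up metadata ... fetched CCE metadata" lines (which continues below) is not a series of client queries: it is a single listDatabases command walking every collection entry in the _mdb_catalog, with its WiredTiger idents, to size each database. The closest client-side view of the same catalog, again reusing the client from the first sketch:

```python
# The repeated "fetched CCE metadata" entries are one listDatabases call
# walking the _mdb_catalog. A client sees roughly the same catalog via
# listDatabases + listCollections:
for db_info in client.list_databases():            # listDatabases
    name = db_info["name"]
    for coll in client[name].list_collections():   # listCollections per db
        print(name, coll["name"], coll.get("options", {}))
```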
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17991
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
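The config.chunks metadata above also explains the earlier plan: the collection carries only ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1, and _id_, none of which leads with jumbo, so the planner output "0 indexed solutions" and a COLLSCAN for the jumbo count. A client can confirm that plan with the generic explain command; a sketch (the command form is the standard explain wrapper, not something this log shows):

```python
# Confirm why { jumbo: true } scans config.chunks: no index covers "jumbo".
plan = client.config.command(
    "explain",
    {"count": "chunks", "query": {"jumbo": True}},  # wrapped count command
    verbosity="queryPlanner",
)
print(plan["queryPlanner"]["winningPlan"])          # expect a COLLSCAN stage
```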
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17991
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17992
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17992
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17993
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17993
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17994
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17994
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17995
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
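config.shards, whose host_1 and _id_ indexes appear just above, is the cluster's shard registry; on a config server member it can be read like any other collection. A short sketch, once more reusing the authenticated client:

```python
# Read the shard registry directly from the config database.
for shard in client.config.shards.find({}, {"_id": 1, "host": 1}):
    print(shard["_id"], shard["host"])
```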
2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17995 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17996 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17996 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17997 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17997 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17998 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 17998 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 17999 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 17999 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18000 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18000 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18001 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18001 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18002 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:33:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18002 2019-09-04T06:33:40.016+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:33:40.030+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:33:40.030+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18004 2019-09-04T06:33:40.030+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18004 2019-09-04T06:33:40.030+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18005 2019-09-04T06:33:40.030+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18005 2019-09-04T06:33:40.030+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18006 2019-09-04T06:33:40.030+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18006 2019-09-04T06:33:40.030+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:40.041+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.041+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.041+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18009 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18009 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18010 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18010 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18011 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18011 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18012 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18012 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18013 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18013 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18014 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 18014 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18015 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18015 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18016 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18016 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18017 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18017 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18018 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18018 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18019 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18019 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18020 2019-09-04T06:33:40.041+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18020 2019-09-04T06:33:40.041+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:40.050+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:33:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18022 2019-09-04T06:33:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18022 2019-09-04T06:33:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18023 2019-09-04T06:33:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18023 2019-09-04T06:33:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18024 2019-09-04T06:33:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18024 2019-09-04T06:33:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18025 2019-09-04T06:33:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18025 2019-09-04T06:33:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18026 2019-09-04T06:33:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18026 2019-09-04T06:33:40.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18027 2019-09-04T06:33:40.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18027 2019-09-04T06:33:40.050+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:40.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.052+0000 I COMMAND 
[conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.055+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.056+0000 I COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.106+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:40.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.196+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.196+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.206+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:40.227+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.227+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:40.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:40.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:40.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:40.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new 
Date(1567578818610) } 2019-09-04T06:33:40.232+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:40.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:40.306+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:40.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.407+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:40.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.453+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.507+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:40.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.568+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.607+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:40.623+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:40.623+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:40.623+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578818, 2) 2019-09-04T06:33:40.623+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18043 2019-09-04T06:33:40.623+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:40.623+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:40.623+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18043 2019-09-04T06:33:40.624+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18046 2019-09-04T06:33:40.624+0000 D3 STORAGE 
[rsSync-0] WT rollback_transaction for snapshot id 18046 2019-09-04T06:33:40.624+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578818, 2), t: 1 }({ ts: Timestamp(1567578818, 2), t: 1 }) 2019-09-04T06:33:40.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.696+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.696+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.707+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:40.721+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578818, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578818, 2), t: 1 } }, $db: "config" } 2019-09-04T06:33:40.722+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578818, 2), t: 1 } } } 2019-09-04T06:33:40.722+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:33:40.722+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578818, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578818, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578818, 2) 2019-09-04T06:33:40.722+0000 D2 QUERY [conn72] Collection config.settings does not exist. 
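[Note] The config.settings lookup above is the standard router/balancer check for a custom chunk size; since no "chunksize" document was ever written on this cluster, the collection itself is absent and the planner can answer with a trivial EOF plan (no documents read, no index work). A minimal shell equivalent of the logged read follows; this is a sketch assuming a mongo shell connected to this config server, not something taken from the log itself:

    // Reproduces the logged find on config.settings; returns an empty batch here.
    db.getSiblingDB("config").runCommand({
        find: "settings",
        filter: { _id: "chunksize" },
        limit: 1,
        readConcern: { level: "majority" },
        maxTimeMS: 30000
    })
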
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:33:40.722+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578818, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578818, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:33:40.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.807+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:40.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:40.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1217) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:40.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1217 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:50.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:40.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.839+0000 2019-09-04T06:33:40.838+0000 D2 ASIO [Replication] Request 1217 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } 2019-09-04T06:33:40.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, 
lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:40.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:40.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1217) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } 2019-09-04T06:33:40.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:33:40.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:42.838Z 2019-09-04T06:33:40.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.839+0000 2019-09-04T06:33:40.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:40.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1218) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:40.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1218 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:50.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:40.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.839+0000 2019-09-04T06:33:40.839+0000 D2 ASIO [Replication] Request 1218 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: 
Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } 2019-09-04T06:33:40.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:40.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:40.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1218) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } 2019-09-04T06:33:40.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:40.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:33:50.049+0000 2019-09-04T06:33:40.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:33:51.134+0000 2019-09-04T06:33:40.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:40.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:42.839Z 2019-09-04T06:33:40.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:40.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:10.839+0000 2019-09-04T06:33:40.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:10.839+0000 2019-09-04T06:33:40.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 
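[Note] The heartbeat round trip above (requests 1217 and 1218) gives this node's view of the replica set: cmodb804 answers with state 2 (secondary, syncing from cmodb802), cmodb802 answers with state 1 (primary), and the good response from the primary postpones the election timeout from 06:33:50.049 to 06:33:51.134. The same topology can be read interactively; a sketch, assuming a shell connected to any configrs member (syncingTo is the 4.2-era field name, matching the log's wording):

    // Print each member's state and sync source as reported by replSetGetStatus.
    rs.status().members.forEach(function (m) {
        print(m.name + " state=" + m.stateStr + " syncingTo=" + (m.syncingTo || "-"));
    });
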
2019-09-04T06:33:40.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.907+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:40.944+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.945+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:40.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:40.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:41.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:41.007+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:41.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:41.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:41.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:41.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:41.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:41.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:41.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610) } 2019-09-04T06:33:41.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 2), signature: { hash: BinData(0, F3C9F8E21C268115F715F26001626323F3337B44), keyId: 6727891476899954718 } }, $db: 
"admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:41.108+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:41.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:41.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:41.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:41.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:41.208+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:41.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:41.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:41.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:41.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:41.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:41.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:41.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:41.274+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:41.283+0000 D2 COMMAND [conn113] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:41.284+0000 I COMMAND [conn113] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:41.288+0000 D2 COMMAND [conn113] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578813, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578819, 1), signature: { hash: BinData(0, 497630E3BA63337AB3201C037EBCFF01AF35E9F6), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578813, 1), t: 1 } }, $db: "config" } 2019-09-04T06:33:41.289+0000 D1 COMMAND [conn113] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578813, 1), t: 1 } } } 2019-09-04T06:33:41.289+0000 D3 STORAGE [conn113] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:33:41.289+0000 D1 COMMAND [conn113] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578813, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578819, 1), signature: { hash: BinData(0, 497630E3BA63337AB3201C037EBCFF01AF35E9F6), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578813, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578818, 2) 2019-09-04T06:33:41.289+0000 D5 QUERY [conn113] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shards Tree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:33:41.289+0000 D5 QUERY [conn113] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:33:41.289+0000 D5 QUERY [conn113] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:33:41.289+0000 D5 QUERY [conn113] Rated tree: $and 2019-09-04T06:33:41.289+0000 D5 QUERY [conn113] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:41.289+0000 D5 QUERY [conn113] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:41.289+0000 D2 QUERY [conn113] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:41.289+0000 D3 STORAGE [conn113] WT begin_transaction for snapshot id 18067 2019-09-04T06:33:41.289+0000 D3 STORAGE [conn113] WT rollback_transaction for snapshot id 18067 2019-09-04T06:33:41.289+0000 I COMMAND [conn113] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578813, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578819, 1), signature: { hash: BinData(0, 497630E3BA63337AB3201C037EBCFF01AF35E9F6), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578813, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:33:41.308+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:41.363+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:41.363+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:41.408+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:41.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:41.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:41.508+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:41.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:41.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:41.608+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:41.623+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:41.623+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:41.623+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578818, 2) 2019-09-04T06:33:41.623+0000 D3 
STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18072 2019-09-04T06:33:41.623+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:41.623+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:41.623+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18072 2019-09-04T06:33:41.624+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18075 2019-09-04T06:33:41.624+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18075 2019-09-04T06:33:41.624+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578818, 2), t: 1 }({ ts: Timestamp(1567578818, 2), t: 1 }) 2019-09-04T06:33:41.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:41.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:41.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:41.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:41.708+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:41.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:41.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:41.809+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:41.863+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:41.863+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:41.865+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42220 #401 (92 connections now open) 2019-09-04T06:33:41.865+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:41.865+0000 D2 COMMAND [conn401] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:41.865+0000 I NETWORK [conn401] received client metadata from 10.108.2.48:42220 conn401: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:41.865+0000 I COMMAND [conn401] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", 
architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:41.879+0000 I COMMAND [conn374] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578790, 1), signature: { hash: BinData(0, 48B30A814E9398F4B8D261A3888218A13169B002), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:41.879+0000 I COMMAND [conn375] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, D1ABB9AD25439133DC00295B9D506CD1C692B624), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:33:41.879+0000 D1 - [conn374] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:41.879+0000 D1 - [conn375] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:41.879+0000 W - [conn374] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:41.879+0000 W - [conn375] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:41.883+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34362 #402 (93 connections now open) 2019-09-04T06:33:41.883+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:41.883+0000 D2 COMMAND [conn402] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:41.883+0000 I NETWORK [conn402] received client metadata from 10.108.2.57:34362 conn402: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:41.883+0000 I COMMAND [conn402] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", 
"zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:41.884+0000 I COMMAND [conn359] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 7C0CF3FE10B568392F8A27C39D14989CCAE09485), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:41.884+0000 D1 - [conn359] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:41.885+0000 W - [conn359] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:41.896+0000 I - [conn374] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadG
uardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", 
"elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:41.896+0000 D1 COMMAND [conn374] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578790, 1), signature: { hash: BinData(0, 48B30A814E9398F4B8D261A3888218A13169B002), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:41.896+0000 D1 - [conn374] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:41.896+0000 W - [conn374] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:41.899+0000 I COMMAND [conn376] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 4F311D3E6BCB85B8F76274CF8447E974E8C1C19D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:41.900+0000 D1 - [conn376] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:41.900+0000 W - [conn376] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:41.909+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:41.914+0000 I - [conn359] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:41.914+0000 D1 COMMAND [conn359] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 7C0CF3FE10B568392F8A27C39D14989CCAE09485), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:41.914+0000 D1 - [conn359] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:41.914+0000 W - [conn359] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:41.915+0000 I COMMAND [conn366] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 17FA8619F0EE1C4A6A3FDB688C10EE020E3FDDEE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:41.916+0000 D1 - [conn366] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:41.916+0000 W - [conn366] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:41.926+0000 I NETWORK [listener] connection accepted from 10.108.2.61:38044 #403 (94 connections now open) 2019-09-04T06:33:41.926+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:41.926+0000 D2 COMMAND [conn403] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:41.927+0000 I NETWORK [conn403] received client metadata from 10.108.2.61:38044 conn403: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:41.927+0000 I COMMAND [conn403] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:41.931+0000 I - [conn376] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:41.931+0000 D1 COMMAND [conn376] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 4F311D3E6BCB85B8F76274CF8447E974E8C1C19D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:41.931+0000 D1 - [conn376] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:41.931+0000 W - [conn376] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:41.936+0000 I COMMAND [conn367] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 7C0CF3FE10B568392F8A27C39D14989CCAE09485), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:41.937+0000 D1 - [conn367] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:41.937+0000 W - [conn367] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:41.937+0000 I COMMAND [conn373] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578782, 1), signature: { hash: BinData(0, F83DDE0D03ABE82D40CAD12EB3845998BB8EFADA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:41.937+0000 D1 - [conn373] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:41.937+0000 W - [conn373] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:41.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:41.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:41.959+0000 I - [conn374] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:41.959+0000 W COMMAND [conn374] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:41.959+0000 I COMMAND [conn374] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578790, 1), signature: { hash: BinData(0, 48B30A814E9398F4B8D261A3888218A13169B002), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms
2019-09-04T06:33:41.959+0000 D2 NETWORK [conn374] Session from 10.108.2.48:42202 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:41.959+0000 I NETWORK [conn374] end connection 10.108.2.48:42202 (93 connections now open)
2019-09-04T06:33:41.977+0000 D1 COMMAND [conn375] assertion while executing command 'find' on database 'admin'
with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, D1ABB9AD25439133DC00295B9D506CD1C692B624), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:41.977+0000 D1 - [conn375] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:33:41.977+0000 W - [conn375] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:41.990+0000 W COMMAND [conn359] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:33:41.990+0000 I COMMAND [conn359] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 7C0CF3FE10B568392F8A27C39D14989CCAE09485), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30042ms
2019-09-04T06:33:41.990+0000 D2 NETWORK [conn359] Session from 10.108.2.56:35782 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:41.990+0000 I NETWORK [conn359] end connection 10.108.2.56:35782 (92 connections now open)
2019-09-04T06:33:42.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:42.007+0000 D1 COMMAND [conn373] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578782, 1), signature: { hash: BinData(0, F83DDE0D03ABE82D40CAD12EB3845998BB8EFADA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:42.007+0000 D1 - [conn373] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:42.007+0000 W - [conn373] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:42.009+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:42.024+0000 I - [conn367] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 
----- BEGIN BACKTRACE ----- [editor's note: backtrace JSON, processInfo/somap, and symbolized frames omitted here; they are a verbatim duplicate of the preceding backtrace] ----- END BACKTRACE -----
2019-09-04T06:33:42.025+0000 D1 COMMAND [conn367] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 7C0CF3FE10B568392F8A27C39D14989CCAE09485), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:42.025+0000 D1 - [conn367] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:33:42.025+0000 W - [conn367] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:42.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:42.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:42.055+0000 I - [conn367] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { 
"sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:33:42.055+0000 W COMMAND [conn367] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:33:42.055+0000 I COMMAND [conn367] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 7C0CF3FE10B568392F8A27C39D14989CCAE09485), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30100ms
2019-09-04T06:33:42.055+0000 D2 NETWORK [conn367] Session from 10.108.2.61:38018 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:42.055+0000 I NETWORK [conn367] end connection 10.108.2.61:38018 (91 connections now open)
2019-09-04T06:33:42.062+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52276 #404 (92 connections now open)
2019-09-04T06:33:42.062+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:42.062+0000 D2 COMMAND [conn404] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:42.062+0000 I NETWORK [conn404] received client metadata from 10.108.2.73:52276 conn404: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:42.062+0000 I COMMAND [conn404] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:42.063+0000 D2 COMMAND [conn404] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578812, 1), signature: { hash: BinData(0, 4B378BB31368CDD862D6FBF154A78A3408447D9E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
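[editor's note] The conn367 trace above differs from the first one: it fires while mongod is writing the slow-operation log entry itself. The symbolized frames (CurOp::completeAndLogOperation -> Lock::GlobalLock -> LockerImpl::lock) show the server taking a short-deadline global lock to collect storage statistics for the slow-op line; when that acquisition times out it emits the "Unable to gather storage statistics ... lock aquire timeout" warning (the "aquire" typo is mongod's own) and reports locks:{} in the COMMAND entry. A hedged mongo-shell sketch for checking whether operations are queued up behind the global lock on a 4.2 node:

    // List operations currently blocked waiting for a lock, then dump the
    // global-lock queues; field names follow 4.2 $currentOp/serverStatus output.
    db.getSiblingDB("admin")
      .aggregate([{ $currentOp: { allUsers: true } },
                  { $match: { waitingForLock: true } },
                  { $project: { opid: 1, op: 1, ns: 1, secs_running: 1 } }])
      .forEach(printjson);
    printjson(db.serverStatus().globalLock.currentQueue);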
2019-09-04T06:33:42.063+0000 D1 REPL [conn404] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578818, 2), t: 1 }
2019-09-04T06:33:42.063+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000
2019-09-04T06:33:42.066+0000 I - [conn373] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE ----- [editor's note: verbatim duplicate of the conn367 lock-acquisition backtrace above; JSON backtrace, processInfo/somap, and symbolized frames omitted] ----- END BACKTRACE -----
2019-09-04T06:33:42.066+0000 W COMMAND [conn373] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:33:42.066+0000 I COMMAND [conn373] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578782, 1), signature: { hash: BinData(0, F83DDE0D03ABE82D40CAD12EB3845998BB8EFADA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30082ms
2019-09-04T06:33:42.066+0000 D2 NETWORK [conn373] Session from 10.108.2.63:36390 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:42.066+0000 I NETWORK [conn373] end connection 10.108.2.63:36390 (91 connections now open)
2019-09-04T06:33:42.067+0000 D2 COMMAND [conn390] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, C4CFA6E23777FD680628F991FF985CB66BF77E05), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:33:42.067+0000 D1 REPL [conn390] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578818, 2), t: 1 }
2019-09-04T06:33:42.067+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000
2019-09-04T06:33:42.071+0000 D2 COMMAND [conn393] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578812, 1), signature: { hash: BinData(0, 4B378BB31368CDD862D6FBF154A78A3408447D9E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:33:42.071+0000 D1 REPL [conn393] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578818, 2), t: 1 }
2019-09-04T06:33:42.071+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000
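[editor's note] The waitUntilOpTime lines show why every one of these reads burns its full 30000ms before failing with MaxTimeMSExpired: the callers ask for afterOpTime { ts: Timestamp(1566459168, 1), t: 92 }, while this node's newest majority snapshot is { ts: Timestamp(1567578818, 2), t: 1 }. Replication optimes compare by term before timestamp, so a snapshot at term 1 can never satisfy a wait for term 92 even though its timestamp is newer; the requesters appear to hold a config optime from before this replica set was re-initialized at term 1 (a hypothesis consistent with, but not proven by, this log). A hedged shell sketch for comparing the node's state against the requested optime:

    // Compare this node's replication term and majority-committed optime with
    // the afterOpTime the clients keep requesting (values from the log above).
    var s = db.getSiblingDB("admin").runCommand({ replSetGetStatus: 1 });
    printjson({
        currentTerm: s.term,
        majorityOpTime: s.optimes.readConcernMajorityOpTime,
        requestedAfterOpTime: { ts: Timestamp(1566459168, 1), t: 92 }
    });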
2019-09-04T06:33:42.087+0000 I - [conn376] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE ----- [editor's note: verbatim duplicate of the conn367 lock-acquisition backtrace above; JSON backtrace, processInfo/somap, and symbolized frames omitted] ----- END BACKTRACE -----
2019-09-04T06:33:42.087+0000 W COMMAND [conn376] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:33:42.087+0000 I COMMAND [conn376] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 4F311D3E6BCB85B8F76274CF8447E974E8C1C19D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30043ms
2019-09-04T06:33:42.087+0000 D2 NETWORK [conn376] Session from 10.108.2.57:34346 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:42.087+0000 I NETWORK [conn376] end connection 10.108.2.57:34346 (90 connections now open)
2019-09-04T06:33:42.105+0000 I - [conn375] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE ----- [editor's note: verbatim duplicate of the conn367 lock-acquisition backtrace above; JSON backtrace, processInfo/somap, and symbolized frames omitted] ----- END BACKTRACE -----
2019-09-04T06:33:42.105+0000 W COMMAND [conn375] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:33:42.105+0000 I COMMAND [conn375] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578785, 1), signature: { hash: BinData(0, D1ABB9AD25439133DC00295B9D506CD1C692B624), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30109ms
2019-09-04T06:33:42.105+0000 D2 NETWORK [conn375] Session from 10.108.2.74:51892 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:42.105+0000 I NETWORK [conn375] end connection 10.108.2.74:51892 (89 connections now open)
2019-09-04T06:33:42.109+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:42.122+0000 I - [conn366] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:42.122+0000 D1 COMMAND [conn366] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 17FA8619F0EE1C4A6A3FDB688C10EE020E3FDDEE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:42.122+0000 D1 - [conn366] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:42.122+0000 W - [conn366] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:42.142+0000 I - [conn366] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:42.142+0000 W COMMAND [conn366] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:42.142+0000 I COMMAND [conn366] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578783, 1), signature: { hash: BinData(0, 17FA8619F0EE1C4A6A3FDB688C10EE020E3FDDEE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30218ms
2019-09-04T06:33:42.142+0000 D2 NETWORK [conn366] Session from 10.108.2.60:44942 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:42.142+0000 I NETWORK [conn366] end connection 10.108.2.60:44942 (88 connections now open)
2019-09-04T06:33:42.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:42.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:42.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:42.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:42.209+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:42.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 16F433E481960220FC894CAD78A7AA1E2A2F8A37), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:42.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:33:42.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 16F433E481960220FC894CAD78A7AA1E2A2F8A37), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:42.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 16F433E481960220FC894CAD78A7AA1E2A2F8A37), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:42.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610) }
2019-09-04T06:33:42.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 16F433E481960220FC894CAD78A7AA1E2A2F8A37), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:42.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:42.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:42.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:42.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:42.309+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:42.398+0000 D2 COMMAND [conn143] run command admin.$cmd { ping: 1, $db: "admin" }
2019-09-04T06:33:42.399+0000 I COMMAND [conn143] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:42.408+0000 D2 COMMAND [conn206] run command config.$cmd { ping: 1.0, lsid: { id: UUID("2fef7d2a-ea06-44d7-a315-b0e911b7f5bf") }, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:33:42.409+0000 I COMMAND [conn206] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("2fef7d2a-ea06-44d7-a315-b0e911b7f5bf") }, $clusterTime: { clusterTime: Timestamp(1567578761, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:42.409+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:42.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:42.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:42.510+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:42.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:42.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:42.610+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:42.623+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:42.623+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:42.623+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578818, 2)
2019-09-04T06:33:42.623+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18100
2019-09-04T06:33:42.623+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:42.623+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:42.623+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18100
2019-09-04T06:33:42.624+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18103
2019-09-04T06:33:42.624+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18103
2019-09-04T06:33:42.624+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578818, 2), t: 1 }({ ts: Timestamp(1567578818, 2), t: 1 })
2019-09-04T06:33:42.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:42.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:42.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:42.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:42.710+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:42.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:42.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:42.810+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:42.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:42.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1219) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:42.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1219 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:52.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:42.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:10.839+0000
2019-09-04T06:33:42.838+0000 D2 ASIO [Replication] Request 1219 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) }
2019-09-04T06:33:42.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:42.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:42.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1219) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) }
2019-09-04T06:33:42.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:33:42.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:44.838Z
2019-09-04T06:33:42.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:10.839+0000
2019-09-04T06:33:42.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:42.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1220) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:42.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1220 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:52.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:42.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:10.839+0000
2019-09-04T06:33:42.839+0000 D2 ASIO [Replication] Request 1220 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) }
2019-09-04T06:33:42.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:33:42.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:42.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1220) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) }
2019-09-04T06:33:42.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:33:42.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:33:51.134+0000
2019-09-04T06:33:42.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:33:53.170+0000
2019-09-04T06:33:42.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:33:42.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:44.839Z
2019-09-04T06:33:42.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:42.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:12.839+0000
2019-09-04T06:33:42.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:12.839+0000
2019-09-04T06:33:42.910+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:42.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:42.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:43.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:43.010+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:43.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:43.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:43.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 16F433E481960220FC894CAD78A7AA1E2A2F8A37), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:43.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:33:43.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 16F433E481960220FC894CAD78A7AA1E2A2F8A37), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:43.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 16F433E481960220FC894CAD78A7AA1E2A2F8A37), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:43.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610) }
2019-09-04T06:33:43.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 16F433E481960220FC894CAD78A7AA1E2A2F8A37), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:43.111+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:43.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:43.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:43.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:43.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:43.211+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:43.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:43.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:43.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:43.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:43.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:43.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:43.311+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:43.402+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36658 #405 (89 connections now open)
2019-09-04T06:33:43.402+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:43.402+0000 D2 COMMAND [conn405] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:43.402+0000 I NETWORK [conn405] received client metadata from 10.108.2.45:36658 conn405: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:43.402+0000 I COMMAND [conn405] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:43.406+0000 D2 COMMAND [conn405] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 1), signature: { hash: BinData(0, 9C9EF00EE4F407A7E772C97AEC68CC0A05914703), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:43.406+0000 D1 REPL [conn405] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578818, 2), t: 1 }
2019-09-04T06:33:43.406+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000
2019-09-04T06:33:43.411+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:43.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:43.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:43.486+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48482 #406 (90 connections now open)
2019-09-04T06:33:43.486+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:43.486+0000 D2 COMMAND [conn406] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:43.486+0000 I NETWORK [conn406] received client metadata from 10.108.2.59:48482 conn406: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:43.486+0000 I COMMAND [conn406] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:43.486+0000 D2 COMMAND [conn406] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, C4CFA6E23777FD680628F991FF985CB66BF77E05), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:33:43.486+0000 D1 REPL [conn406] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578818, 2), t: 1 }
2019-09-04T06:33:43.486+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000
2019-09-04T06:33:43.511+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:43.511+0000 I COMMAND [conn379] Command on database admin timed out waiting for read concern to be satisfied.
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 4F311D3E6BCB85B8F76274CF8447E974E8C1C19D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:43.511+0000 D1 - [conn379] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:43.511+0000 W - [conn379] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:43.529+0000 I - [conn379] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:43.529+0000 D1 COMMAND [conn379] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 4F311D3E6BCB85B8F76274CF8447E974E8C1C19D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:43.529+0000 D1 - [conn379] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:43.529+0000 W - [conn379] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:43.552+0000 I - [conn379] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},
{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3"
}, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:43.552+0000 W COMMAND [conn379] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:33:43.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:43.552+0000 I COMMAND [conn379] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 4F311D3E6BCB85B8F76274CF8447E974E8C1C19D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:33:43.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:43.552+0000 D2 NETWORK [conn379] Session from 10.108.2.54:49294 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:43.552+0000 I NETWORK [conn379] end connection 10.108.2.54:49294 (89 connections now open) 2019-09-04T06:33:43.612+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:43.624+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:43.624+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:43.624+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578818, 2) 2019-09-04T06:33:43.624+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18124 2019-09-04T06:33:43.624+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:43.624+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:43.624+0000 D3
STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18124 2019-09-04T06:33:43.624+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18127 2019-09-04T06:33:43.624+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18127 2019-09-04T06:33:43.624+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578818, 2), t: 1 }({ ts: Timestamp(1567578818, 2), t: 1 }) 2019-09-04T06:33:43.633+0000 D2 ASIO [RS] Request 1213 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpApplied: { ts: Timestamp(1567578818, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } 2019-09-04T06:33:43.633+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpApplied: { ts: Timestamp(1567578818, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:43.633+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:43.634+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:43.634+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:53.170+0000 2019-09-04T06:33:43.634+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:54.742+0000 2019-09-04T06:33:43.634+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:43.634+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:12.839+0000 2019-09-04T06:33:43.634+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1221 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:53.634+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578818, 2), t: 1 } } 2019-09-04T06:33:43.634+0000 D3 EXECUTOR 
[replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.624+0000 2019-09-04T06:33:43.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:43.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:43.677+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:43.677+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:43.677+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1222 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:13.677+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:43.677+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.624+0000 2019-09-04T06:33:43.677+0000 D2 ASIO [RS] Request 1222 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } 2019-09-04T06:33:43.677+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:43.677+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:43.677+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.624+0000 2019-09-04T06:33:43.680+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47304 #407 (90 connections now open) 2019-09-04T06:33:43.680+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:43.680+0000 D2 COMMAND [conn407] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:43.680+0000 I NETWORK [conn407] received client metadata from 10.108.2.52:47304 conn407: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:43.680+0000 I COMMAND [conn407] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:43.681+0000 D2 COMMAND [conn407] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 2206AF5AF7A132B84E6DD84FBEAA8BC020142021), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:43.681+0000 D1 REPL [conn407] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578818, 2), t: 1 } 2019-09-04T06:33:43.681+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:34:13.691+0000 2019-09-04T06:33:43.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:43.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:43.712+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:43.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:43.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:43.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:43.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:43.812+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:43.912+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:43.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:43.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:44.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:44.012+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:44.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:44.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:44.112+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:44.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:44.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:44.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:44.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:44.213+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:44.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 16F433E481960220FC894CAD78A7AA1E2A2F8A37), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:44.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:44.232+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 16F433E481960220FC894CAD78A7AA1E2A2F8A37), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:44.232+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, 
$replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 16F433E481960220FC894CAD78A7AA1E2A2F8A37), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:44.232+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610) } 2019-09-04T06:33:44.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 16F433E481960220FC894CAD78A7AA1E2A2F8A37), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:44.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:44.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:44.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:44.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:44.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:44.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:44.313+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:44.380+0000 D2 COMMAND [conn380] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 2206AF5AF7A132B84E6DD84FBEAA8BC020142021), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:44.380+0000 D1 REPL [conn380] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578818, 2), t: 1 } 2019-09-04T06:33:44.380+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000 2019-09-04T06:33:44.413+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:44.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:44.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:44.513+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:44.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:44.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:44.613+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
2019-09-04T06:33:44.624+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:44.624+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:44.624+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578818, 2) 2019-09-04T06:33:44.624+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18148 2019-09-04T06:33:44.624+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:44.624+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:44.624+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18148 2019-09-04T06:33:44.624+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18151 2019-09-04T06:33:44.624+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18151 2019-09-04T06:33:44.624+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578818, 2), t: 1 }({ ts: Timestamp(1567578818, 2), t: 1 }) 2019-09-04T06:33:44.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:44.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:44.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:44.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:44.713+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:44.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:44.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:44.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:44.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:44.813+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:44.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:44.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1223) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:44.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1223 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:54.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:44.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:12.839+0000 2019-09-04T06:33:44.838+0000 D2 ASIO [Replication] Request 1223 finished with response: { ok: 1.0, state: 2, 
v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } 2019-09-04T06:33:44.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:44.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:44.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1223) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } 2019-09-04T06:33:44.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:33:44.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:46.838Z 2019-09-04T06:33:44.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:12.839+0000 2019-09-04T06:33:44.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 
2019-09-04T06:33:44.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1224) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:44.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1224 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:54.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:44.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:12.839+0000 2019-09-04T06:33:44.839+0000 D2 ASIO [Replication] Request 1224 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } 2019-09-04T06:33:44.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:44.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:44.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1224) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: 
Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } 2019-09-04T06:33:44.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:44.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:33:54.742+0000 2019-09-04T06:33:44.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:33:55.033+0000 2019-09-04T06:33:44.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:44.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:46.839Z 2019-09-04T06:33:44.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:44.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:14.839+0000 2019-09-04T06:33:44.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:14.839+0000 2019-09-04T06:33:44.914+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:44.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:44.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:45.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:45.014+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:45.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:45.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:45.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 737E47063A9423B086BEDBD5E092FB91E50F4FC5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:45.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:45.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 737E47063A9423B086BEDBD5E092FB91E50F4FC5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:45.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 737E47063A9423B086BEDBD5E092FB91E50F4FC5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:45.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, 
durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610) } 2019-09-04T06:33:45.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 737E47063A9423B086BEDBD5E092FB91E50F4FC5), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:45.114+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:45.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:45.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:45.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:45.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:45.214+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:45.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:45.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:45.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:45.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:45.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:45.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:33:45.314+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:45.414+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:45.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:45.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:45.514+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:45.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:45.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:45.615+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:45.624+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:45.624+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:45.624+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578818, 2) 2019-09-04T06:33:45.624+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18168 2019-09-04T06:33:45.624+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:45.624+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:45.624+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18168 2019-09-04T06:33:45.625+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18171 2019-09-04T06:33:45.625+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18171 2019-09-04T06:33:45.625+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578818, 2), t: 1 }({ ts: Timestamp(1567578818, 2), t: 1 }) 2019-09-04T06:33:45.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:45.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:45.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:45.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:45.715+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:45.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:45.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:45.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:45.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:45.815+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
2019-09-04T06:33:45.915+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:45.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:45.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:46.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:46.015+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:46.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:46.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:46.115+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:46.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:46.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:46.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:46.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:46.216+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:46.232+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 737E47063A9423B086BEDBD5E092FB91E50F4FC5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:46.232+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:46.233+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 737E47063A9423B086BEDBD5E092FB91E50F4FC5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:46.233+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 737E47063A9423B086BEDBD5E092FB91E50F4FC5), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:46.233+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610) } 2019-09-04T06:33:46.233+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 737E47063A9423B086BEDBD5E092FB91E50F4FC5), keyId: 6727891476899954718 } }, $db: "admin" } 
numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:46.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:46.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:46.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:46.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:46.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:46.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:46.316+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:46.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:46.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:46.416+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:46.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:46.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:46.516+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:46.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:46.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:46.616+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:46.624+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:46.625+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:46.625+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578818, 2) 2019-09-04T06:33:46.625+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18189 2019-09-04T06:33:46.625+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:46.625+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:46.625+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18189 2019-09-04T06:33:46.625+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18192 2019-09-04T06:33:46.625+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18192 2019-09-04T06:33:46.625+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578818, 2), t: 1 }({ ts: Timestamp(1567578818, 2), t: 1 }) 2019-09-04T06:33:46.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:46.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" 
} numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:46.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:46.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:46.716+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:46.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:46.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:46.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:46.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:46.817+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:46.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:46.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:46.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:46.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1225) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:46.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1225 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:56.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:46.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:14.839+0000 2019-09-04T06:33:46.838+0000 D2 ASIO [Replication] Request 1225 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } 2019-09-04T06:33:46.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new 
Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:46.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:46.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1225) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } 2019-09-04T06:33:46.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:33:46.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:48.838Z 2019-09-04T06:33:46.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:14.839+0000 2019-09-04T06:33:46.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:46.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1226) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:46.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1226 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:56.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:46.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:14.839+0000 2019-09-04T06:33:46.839+0000 D2 ASIO [Replication] Request 1226 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } 2019-09-04T06:33:46.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:46.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:46.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1226) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578818, 2) } 2019-09-04T06:33:46.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:46.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:33:55.033+0000 2019-09-04T06:33:46.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:33:58.226+0000 2019-09-04T06:33:46.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:46.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:48.839Z 2019-09-04T06:33:46.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:46.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:16.839+0000 2019-09-04T06:33:46.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:16.839+0000 2019-09-04T06:33:46.917+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:46.952+0000 D2 COMMAND [conn13] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:46.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:47.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:47.017+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:47.031+0000 I NETWORK [listener] connection accepted from 10.108.2.49:53492 #408 (91 connections now open) 2019-09-04T06:33:47.031+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:47.031+0000 D2 COMMAND [conn408] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:47.031+0000 I NETWORK [conn408] received client metadata from 10.108.2.49:53492 conn408: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:47.031+0000 I COMMAND [conn381] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 4F311D3E6BCB85B8F76274CF8447E974E8C1C19D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:47.031+0000 I COMMAND [conn371] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578789, 1), signature: { hash: BinData(0, 0549CF4B3FA5453F46F43F76FB42633A9A8F3D20), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:47.031+0000 D1 - [conn371] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:47.031+0000 W - [conn371] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.031+0000 I COMMAND [conn408] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:47.031+0000 D1 - [conn381] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:47.031+0000 W - [conn381] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.033+0000 I COMMAND [conn383] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 815C511F9B78AD09330F18D8946C64B00E8E0783), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:33:47.034+0000 D1 - [conn383] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:47.034+0000 W - [conn383] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.037+0000 I NETWORK [listener] connection accepted from 10.108.2.53:50826 #409 (92 connections now open) 2019-09-04T06:33:47.037+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:47.037+0000 D2 COMMAND [conn409] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:47.037+0000 I NETWORK [conn409] received client metadata from 10.108.2.53:50826 conn409: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", 
version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:47.037+0000 I COMMAND [conn409] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:47.038+0000 I NETWORK [listener] connection accepted from 10.108.2.47:56660 #410 (93 connections now open) 2019-09-04T06:33:47.038+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:47.038+0000 D2 COMMAND [conn410] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:47.038+0000 I NETWORK [conn410] received client metadata from 10.108.2.47:56660 conn410: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:47.038+0000 I COMMAND [conn410] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:47.046+0000 I COMMAND [conn368] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 815C511F9B78AD09330F18D8946C64B00E8E0783), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:47.046+0000 D1 - [conn368] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:47.046+0000 W - [conn368] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.047+0000 I COMMAND [conn369] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578790, 1), signature: { hash: BinData(0, 48B30A814E9398F4B8D261A3888218A13169B002), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:47.047+0000 D1 - [conn369] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:47.047+0000 W - [conn369] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.048+0000 I - [conn371] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:47.049+0000 D1 COMMAND [conn371] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578789, 1), signature: { hash: BinData(0, 0549CF4B3FA5453F46F43F76FB42633A9A8F3D20), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.049+0000 D1 - [conn371] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:47.049+0000 W - [conn371] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.050+0000 I COMMAND [conn356] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 688717A47F9056E54E2A82934E3E688DB3858182), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:47.051+0000 D1 - [conn356] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:47.051+0000 W - [conn356] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:47.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:47.053+0000 I COMMAND [conn378] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 4F311D3E6BCB85B8F76274CF8447E974E8C1C19D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:47.054+0000 D1 - [conn378] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:47.054+0000 W - [conn378] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.061+0000 I COMMAND [conn384] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 815C511F9B78AD09330F18D8946C64B00E8E0783), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:47.061+0000 D1 - [conn384] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:47.061+0000 W - [conn384] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.063+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:47.063+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:33:46.839+0000 2019-09-04T06:33:47.063+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:33:46.838+0000 2019-09-04T06:33:47.063+0000 D3 REPL [replexec-0] stalest member MemberId(2) date: 2019-09-04T06:33:46.838+0000 2019-09-04T06:33:47.063+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:33:56.838+0000 2019-09-04T06:33:47.063+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:16.839+0000 2019-09-04T06:33:47.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578823, 1), signature: { hash: BinData(0, 755BF950B3368E815B1316344674BA22FB242FFB), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:47.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:47.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578823, 1), signature: { hash: BinData(0, 755BF950B3368E815B1316344674BA22FB242FFB), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:47.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578823, 1), signature: { hash: BinData(0, 755BF950B3368E815B1316344674BA22FB242FFB), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:47.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), opTime: { ts: Timestamp(1567578818, 2), t: 1 }, wallTime: new Date(1567578818610) } 2019-09-04T06:33:47.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578823, 1), signature: { hash: BinData(0, 755BF950B3368E815B1316344674BA22FB242FFB), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 
reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:47.065+0000 I - [conn383] ----- BEGIN BACKTRACE ----- [stack frames, processInfo, and somap identical to the conn371 backtrace at 06:33:47.048 above] ----- END BACKTRACE ----- 2019-09-04T06:33:47.065+0000 D1 COMMAND [conn383] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 815C511F9B78AD09330F18D8946C64B00E8E0783), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.065+0000 D1 - [conn383] User Assertion: MaxTimeMSExpired: operation exceeded time limit
src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:47.065+0000 W - [conn383] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.082+0000 I - [conn378] ----- BEGIN BACKTRACE ----- [stack frames, processInfo, and somap identical to the conn371 backtrace at 06:33:47.048 above] ----- END BACKTRACE ----- 2019-09-04T06:33:47.082+0000 D1 COMMAND [conn378] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 4F311D3E6BCB85B8F76274CF8447E974E8C1C19D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:47.082+0000 D1 - [conn378] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:47.082+0000 W - [conn378] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.102+0000 I - [conn371] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : 
"4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:47.102+0000 W COMMAND [conn371] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:47.102+0000 I COMMAND [conn371] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578789, 1), signature: { hash: BinData(0, 0549CF4B3FA5453F46F43F76FB42633A9A8F3D20), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30035ms 2019-09-04T06:33:47.102+0000 D2 NETWORK [conn371] Session from 10.108.2.50:50214 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:47.102+0000 I NETWORK [conn371] end connection 10.108.2.50:50214 (92 connections now open) 2019-09-04T06:33:47.117+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:47.119+0000 I - [conn356] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:47.119+0000 D1 COMMAND [conn356] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 688717A47F9056E54E2A82934E3E688DB3858182), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.119+0000 D1 - [conn356] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:47.119+0000 W - [conn356] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.136+0000 I - [conn369] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 
0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : 
"/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, 
"buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:47.136+0000 D1 COMMAND [conn369] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578790, 1), signature: { hash: BinData(0, 48B30A814E9398F4B8D261A3888218A13169B002), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.136+0000 D1 - [conn369] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:47.136+0000 W - [conn369] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.156+0000 I - [conn356] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 
0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 
3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : 
"94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:47.156+0000 W COMMAND [conn356] Unable to gather storage statistics for a slow operation due to lock aquire timeout 
2019-09-04T06:33:47.156+0000 I COMMAND [conn356] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 688717A47F9056E54E2A82934E3E688DB3858182), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30085ms 2019-09-04T06:33:47.156+0000 D2 NETWORK [conn356] Session from 10.108.2.53:50790 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:47.156+0000 I NETWORK [conn356] end connection 10.108.2.53:50790 (91 connections now open) 2019-09-04T06:33:47.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:47.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:47.173+0000 I - [conn368] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo
19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:47.174+0000 D1 COMMAND [conn368] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 815C511F9B78AD09330F18D8946C64B00E8E0783), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.174+0000 D1 - [conn368] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:47.174+0000 W - [conn368] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:47.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:47.188+0000 I NETWORK [listener] connection accepted from 10.108.2.15:39194 #411 (92 connections now open) 2019-09-04T06:33:47.188+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:47.188+0000 D2 COMMAND [conn411] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:47.188+0000 I NETWORK [conn411] received client metadata from 10.108.2.15:39194 conn411: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:47.188+0000 I COMMAND [conn411] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:47.188+0000 D2 COMMAND [conn411] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:47.188+0000 I COMMAND [conn411] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:47.190+0000 I - [conn381] 0x56174b707c81 
0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" 
: "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, 
"buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:47.190+0000 D1 COMMAND [conn381] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 4F311D3E6BCB85B8F76274CF8447E974E8C1C19D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:47.190+0000 D1 - [conn381] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:47.190+0000 W - [conn381] DBException thrown :: caused by :: MaxTimeMSExpired: operation 
2019-09-04T06:33:47.192+0000 D2 ASIO [RS] Request 1221 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578827, 1), t: 1, h: 0, v: 2, op: "i", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), wall: new Date(1567578827191), o: { _id: "cmodb801.togewa.com:27017:1567578827:-2504617745590980237", ping: new Date(1567578827185) } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpApplied: { ts: Timestamp(1567578827, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578827, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 1) } 2019-09-04T06:33:47.192+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578827, 1), t: 1, h: 0, v: 2, op: "i", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), wall: new Date(1567578827191), o: { _id: "cmodb801.togewa.com:27017:1567578827:-2504617745590980237", ping: new Date(1567578827185) } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpVisible: { ts: Timestamp(1567578818, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578818, 2), t: 1 }, lastCommittedWall: new Date(1567578818610), lastOpApplied: { ts: Timestamp(1567578827, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578818, 2), $clusterTime: { clusterTime: Timestamp(1567578827, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:47.192+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:47.192+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578827, 1) and ending at ts: Timestamp(1567578827, 1) 2019-09-04T06:33:47.192+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:58.226+0000 2019-09-04T06:33:47.192+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:57.860+0000 2019-09-04T06:33:47.192+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578827, 1), t: 1 } 2019-09-04T06:33:47.192+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:47.192+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:16.839+0000 2019-09-04T06:33:47.192+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
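
The [RS] and [replication-*] entries around here show this node tailing the sync source cmodb804.togewa.com:27019: Request 1221 above returned one config.lockpings insert, and the fetcher immediately schedules the next getMore (RemoteCommand 1227 below) with maxTimeMS: 5000. A rough pymongo equivalent of such a tailable, awaiting oplog read, under the assumption that a plain client may read local.oplog.rs directly:

    from bson.timestamp import Timestamp
    from pymongo import CursorType, MongoClient

    # Hedged sketch, not from the capture: tail the sync source's oplog the
    # way the fetcher entries describe. Host and starting timestamp come
    # from the log; the driver and options are illustrative assumptions.
    client = MongoClient("mongodb://cmodb804.togewa.com:27019/")
    cursor = client.local["oplog.rs"].find(
        {"ts": {"$gte": Timestamp(1567578827, 1)}},  # last fetched optime
        cursor_type=CursorType.TAILABLE_AWAIT,       # keep the cursor open
        max_await_time_ms=5000,                      # like maxTimeMS on getMore
    )
    for entry in cursor:  # e.g. { op: "i", ns: "config.lockpings", ... }
        print(entry["ts"], entry["op"], entry["ns"])
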
2019-09-04T06:33:47.192+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:47.192+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578818, 2) 2019-09-04T06:33:47.192+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18211 2019-09-04T06:33:47.192+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:47.192+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:47.193+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18211 2019-09-04T06:33:47.193+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:47.193+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:47.193+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578818, 2) 2019-09-04T06:33:47.193+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18214 2019-09-04T06:33:47.193+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:47.193+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:47.193+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18214 2019-09-04T06:33:47.193+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:33:47.194+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578827, 1), t: 1 } 2019-09-04T06:33:47.194+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1227 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:57.194+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578818, 2), t: 1 } } 2019-09-04T06:33:47.195+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.624+0000 2019-09-04T06:33:47.195+0000 D2 ASIO [RS] Request 1227 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpApplied: { ts: Timestamp(1567578827, 1), t: 1 }, rbid: 1, 
primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 1), $clusterTime: { clusterTime: Timestamp(1567578827, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 1) } 2019-09-04T06:33:47.195+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpApplied: { ts: Timestamp(1567578827, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 1), $clusterTime: { clusterTime: Timestamp(1567578827, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:47.195+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:47.195+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:47.195+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000 2019-09-04T06:33:47.195+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.195+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:57.860+0000 2019-09-04T06:33:47.195+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:57.950+0000 2019-09-04T06:33:47.195+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1228 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:57.195+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578827, 1), t: 1 } } 2019-09-04T06:33:47.195+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.624+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn377] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn377] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.752+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn388] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn388] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.054+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn372] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn372] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.660+0000 2019-09-04T06:33:47.195+0000 D3 
REPL [conn398] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn398] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.987+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn392] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn392] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:58.760+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn394] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn394] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.261+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn385] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn385] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.131+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn387] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn387] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.455+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn399] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn399] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:02.478+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn391] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn391] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:56.309+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.195+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000 2019-09-04T06:33:47.195+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:47.195+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:16.839+0000 2019-09-04T06:33:47.196+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.196+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000 2019-09-04T06:33:47.196+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: 
Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.196+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000 2019-09-04T06:33:47.196+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.196+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000 2019-09-04T06:33:47.196+0000 D3 REPL [conn400] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.196+0000 D3 REPL [conn400] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:03.490+0000 2019-09-04T06:33:47.197+0000 D3 REPL [conn386] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.197+0000 D3 REPL [conn386] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.645+0000 2019-09-04T06:33:47.197+0000 D3 REPL [conn396] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.198+0000 D3 REPL [conn396] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.897+0000 2019-09-04T06:33:47.199+0000 D3 REPL [conn362] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.199+0000 D3 REPL [conn362] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.661+0000 2019-09-04T06:33:47.199+0000 D3 REPL [conn397] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.200+0000 D3 REPL [conn397] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.962+0000 2019-09-04T06:33:47.200+0000 D3 REPL [conn395] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.201+0000 D3 REPL [conn395] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.702+0000 2019-09-04T06:33:47.201+0000 D3 REPL [conn389] Got notified of new snapshot: { ts: Timestamp(1567578818, 2), t: 1 }, 2019-09-04T06:33:38.610+0000 2019-09-04T06:33:47.201+0000 D3 REPL [conn389] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.596+0000 2019-09-04T06:33:47.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:47.213+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52270 #412 (93 connections now open) 2019-09-04T06:33:47.213+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:47.213+0000 D2 COMMAND [conn412] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:47.213+0000 I NETWORK [conn412] received client metadata from 10.108.2.58:52270 conn412: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:47.213+0000 D2 COMMAND [conn382] run command admin.$cmd { 
find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578819, 1), signature: { hash: BinData(0, F48F10C5414C16EB9E237EDDB5359A70016AB5D8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:47.213+0000 D1 REPL [conn382] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578818, 2), t: 1 } 2019-09-04T06:33:47.213+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000 2019-09-04T06:33:47.217+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:47.229+0000 I - [conn383] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWith
GuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { 
"b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:47.229+0000 W COMMAND [conn383] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:33:47.229+0000 I COMMAND [conn383] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 815C511F9B78AD09330F18D8946C64B00E8E0783), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30049ms 2019-09-04T06:33:47.229+0000 D2 NETWORK [conn383] Session from 10.108.2.46:41096 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:47.229+0000 I NETWORK [conn383] end connection 10.108.2.46:41096 (92 connections now open) 2019-09-04T06:33:47.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:47.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:47.234+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:47.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:47.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
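
The backtraces in this capture print raw Itanium-mangled C++ symbols, both in the "s" fields of the JSON and in the mongod(...) frame lists. They demangle to readable signatures with a standard demangler; a small sketch, assuming binutils' c++filt is on PATH, applied to three frames taken from the traces above and below (the conn381 trace that follows shows the slow-operation logger itself, CurOp::completeAndLogOperation, timing out while taking the global lock, matching the "lock acquire timeout" warnings):

    import subprocess

    # Hedged sketch: demangle frame names copied from the backtraces in this
    # log. Assumes the binutils c++filt utility is installed and on PATH.
    frames = [
        "_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj",
        "_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE",
        "_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE",
    ]
    result = subprocess.run(["c++filt"], input="\n".join(frames),
                            capture_output=True, text=True, check=True)
    print(result.stdout)
    # Expected, one readable signature per line:
    #   mongo::uassertedWithLocation(mongo::Status const&, char const*, unsigned int)
    #   mongo::Lock::GlobalLock::_enqueue(mongo::LockMode, mongo::Date_t)
    #   mongo::ServiceStateMachine::_processMessage(mongo::ServiceStateMachine::ThreadGuard)
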
2019-09-04T06:33:47.249+0000 I - [conn381] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, 
"somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : 
"F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) 
[0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:33:47.249+0000 W COMMAND [conn381] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:33:47.249+0000 I COMMAND [conn381] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 4F311D3E6BCB85B8F76274CF8447E974E8C1C19D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30177ms
2019-09-04T06:33:47.249+0000 D2 NETWORK [conn381] Session from 10.108.2.44:38782 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:47.249+0000 I NETWORK [conn381] end connection 10.108.2.44:38782 (91 connections now open)
2019-09-04T06:33:47.265+0000 I - [conn384] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceSta
teMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:33:47.265+0000 D1 COMMAND [conn384] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 815C511F9B78AD09330F18D8946C64B00E8E0783), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:47.268+0000 I - [conn378] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, 
"buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:33:47.268+0000 W COMMAND [conn378] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:33:47.268+0000 I COMMAND [conn378] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578788, 1), signature: { hash: BinData(0, 4F311D3E6BCB85B8F76274CF8447E974E8C1C19D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30045ms
2019-09-04T06:33:47.269+0000 D2 NETWORK [conn378] Session from 10.108.2.45:36642 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:47.269+0000 I NETWORK [conn378] end connection 10.108.2.45:36642 (90 connections now open)
2019-09-04T06:33:47.287+0000 I - [conn369] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:47.287+0000 W COMMAND [conn369] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:47.287+0000 I COMMAND [conn369] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578790, 1), signature: { hash: BinData(0, 48B30A814E9398F4B8D261A3888218A13169B002), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30106ms 2019-09-04T06:33:47.287+0000 D2 NETWORK [conn369] Session from 10.108.2.49:53466 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:47.287+0000 I NETWORK [conn369] end connection 10.108.2.49:53466 (89 connections now open) 2019-09-04T06:33:47.306+0000 I - [conn368] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_Z
N5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:33:47.306+0000 W COMMAND [conn368] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:33:47.306+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578827, 1) }
2019-09-04T06:33:47.307+0000 I COMMAND [conn368] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 815C511F9B78AD09330F18D8946C64B00E8E0783), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30145ms
2019-09-04T06:33:47.307+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18193
2019-09-04T06:33:47.307+0000 D2 NETWORK [conn368] Session from 10.108.2.64:46710 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:47.307+0000 I NETWORK [conn368] end connection 10.108.2.64:46710 (88 connections now open)
2019-09-04T06:33:47.307+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18193
2019-09-04T06:33:47.307+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18224
2019-09-04T06:33:47.307+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18224
2019-09-04T06:33:47.307+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:47.307+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 18226
2019-09-04T06:33:47.307+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578827, 1)
2019-09-04T06:33:47.307+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578827, 1)
2019-09-04T06:33:47.307+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 18226
2019-09-04T06:33:47.307+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:47.307+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:33:47.307+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18225
2019-09-04T06:33:47.307+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18225
2019-09-04T06:33:47.307+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18228
2019-09-04T06:33:47.307+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18228
2019-09-04T06:33:47.307+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578827, 1), t: 1 }({ ts: Timestamp(1567578827, 1), t: 1 })
2019-09-04T06:33:47.307+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578827, 1)
2019-09-04T06:33:47.307+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18229
2019-09-04T06:33:47.307+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578827, 1) } } ] } sort: {} projection: {}
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578827, 1) Sort: {} Proj: {} =============================
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578827, 1) || First: notFirst: full path: ts
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578827, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578827, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578827, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578827, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:47.307+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18229
2019-09-04T06:33:47.307+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:47.307+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:47.307+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578827, 1), t: 1, h: 0, v: 2, op: "i", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), wall: new Date(1567578827191), o: { _id: "cmodb801.togewa.com:27017:1567578827:-2504617745590980237", ping: new Date(1567578827185) } }, oplog application mode: Secondary
2019-09-04T06:33:47.307+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 18231
2019-09-04T06:33:47.307+0000 D4 STORAGE [repl-writer-worker-0] inserting record with timestamp Timestamp(1567578827, 1)
2019-09-04T06:33:47.307+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578827, 1)
2019-09-04T06:33:47.307+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578827, 1)
2019-09-04T06:33:47.307+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578827, 1)
2019-09-04T06:33:47.307+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 18231
2019-09-04T06:33:47.307+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:47.307+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578827, 1), t: 1 }({ ts: Timestamp(1567578827, 1), t: 1 })
2019-09-04T06:33:47.307+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:47.307+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578827, 1)
2019-09-04T06:33:47.307+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18230
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:33:47.307+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:47.308+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:33:47.308+0000 D1 - [conn384] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:33:47.308+0000 W - [conn384] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:47.308+0000 I COMMAND [conn412] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:47.308+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:47.308+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:47.308+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:47.308+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:33:47.308+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18230
2019-09-04T06:33:47.308+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578827, 1)
2019-09-04T06:33:47.308+0000 D2 REPL [rsSync-0] Setting replication's stable optime to { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D2 STORAGE [rsSync-0] oldest_timestamp set to Timestamp(1567578822, 1)
2019-09-04T06:33:47.308+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18234
2019-09-04T06:33:47.308+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18234
2019-09-04T06:33:47.308+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578827, 1), t: 1 }({ ts: Timestamp(1567578827, 1), t: 1 })
2019-09-04T06:33:47.308+0000 D3 REPL [conn377] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn377] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.752+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn388] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn388] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.054+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn372] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn372] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.660+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn398] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn398] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.987+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn392] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn392] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:58.760+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn394] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn394] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.261+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn385] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn385] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.131+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn387] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn387] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.455+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn399] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn399] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:02.478+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn391] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn391] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:56.309+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn400] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn400] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:03.490+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn386] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn386] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.645+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn396] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn396] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.897+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn362] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn362] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.661+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn397] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000
2019-09-04T06:33:47.308+0000 D3 REPL [conn397] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.962+0000
2019-09-04T06:33:47.308+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:47.308+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: {
replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578827, 1), t: 1 }, appliedWallTime: new Date(1567578827191), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:47.308+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1229 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:17.308+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578827, 1), t: 1 }, appliedWallTime: new Date(1567578827191), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:47.308+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.624+0000 2019-09-04T06:33:47.308+0000 D2 COMMAND [conn412] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578823, 1), signature: { hash: BinData(0, 0AD6AB951BD56BA6078970F05FAF7F8D9E5E1F3F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:33:47.308+0000 D1 REPL [conn412] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578827, 1), t: 1 } 2019-09-04T06:33:47.308+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000 2019-09-04T06:33:47.309+0000 D2 ASIO [RS] Request 1229 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, 
replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 1), $clusterTime: { clusterTime: Timestamp(1567578827, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 1) } 2019-09-04T06:33:47.309+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 1), $clusterTime: { clusterTime: Timestamp(1567578827, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:47.309+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:47.309+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.624+0000 2019-09-04T06:33:47.310+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000 2019-09-04T06:33:47.310+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000 2019-09-04T06:33:47.310+0000 D3 REPL [conn389] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000 2019-09-04T06:33:47.310+0000 D3 REPL [conn389] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.596+0000 2019-09-04T06:33:47.310+0000 D3 REPL [conn395] Got notified of new snapshot: { ts: Timestamp(1567578827, 1), t: 1 }, 2019-09-04T06:33:47.191+0000 2019-09-04T06:33:47.310+0000 D3 REPL [conn395] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.702+0000 2019-09-04T06:33:47.311+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:47.313+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:47.314+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:33:47.314+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:47.314+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578827, 1), t: 1 }, durableWallTime: new Date(1567578827191), appliedOpTime: { ts: Timestamp(1567578827, 1), t: 1 }, appliedWallTime: new Date(1567578827191), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new 
Date(1567578818610), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:47.314+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1230 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:17.314+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578827, 1), t: 1 }, durableWallTime: new Date(1567578827191), appliedOpTime: { ts: Timestamp(1567578827, 1), t: 1 }, appliedWallTime: new Date(1567578827191), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:47.314+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.624+0000 2019-09-04T06:33:47.315+0000 D2 ASIO [RS] Request 1230 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 1), $clusterTime: { clusterTime: Timestamp(1567578827, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 1) } 2019-09-04T06:33:47.315+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 1), $clusterTime: { clusterTime: Timestamp(1567578827, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:47.315+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:47.315+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:08.624+0000 2019-09-04T06:33:47.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:47.317+0000 I COMMAND [conn26] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:47.317+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:47.328+0000 I - [conn384] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" 
: "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:33:47.328+0000 W COMMAND [conn384] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:33:47.328+0000 I COMMAND [conn384] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 815C511F9B78AD09330F18D8946C64B00E8E0783), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30220ms
2019-09-04T06:33:47.328+0000 D2 NETWORK [conn384] Session from 10.108.2.47:56642 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:47.328+0000 I NETWORK [conn384] end connection 10.108.2.47:56642 (87 connections now open)
2019-09-04T06:33:47.356+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:47.356+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:47.406+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578827, 1)
2019-09-04T06:33:47.417+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:47.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:47.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:47.517+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:47.618+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:47.646+0000 D2 ASIO [RS] Request 1228 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578827, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578827635), o: { $v: 1, $set: { ping: new Date(1567578827635) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpApplied: { ts: Timestamp(1567578827, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 1), $clusterTime: { clusterTime: Timestamp(1567578827, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 2) }
2019-09-04T06:33:47.646+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578827, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578827635), o: { $v: 1, $set: { ping: new Date(1567578827635) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpApplied: { ts: Timestamp(1567578827, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 1), $clusterTime: { clusterTime: Timestamp(1567578827, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:47.646+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:47.646+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578827, 2) and ending at ts: Timestamp(1567578827, 2)
2019-09-04T06:33:47.646+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:57.950+0000
2019-09-04T06:33:47.646+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:57.696+0000
2019-09-04T06:33:47.646+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:47.646+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:16.839+0000
2019-09-04T06:33:47.646+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578827, 2), t: 1 }
2019-09-04T06:33:47.646+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:47.646+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:47.646+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578827, 1)
2019-09-04T06:33:47.646+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18243
2019-09-04T06:33:47.646+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:47.646+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:47.646+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18243
2019-09-04T06:33:47.646+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:47.646+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:47.646+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578827, 1)
2019-09-04T06:33:47.646+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18246
2019-09-04T06:33:47.646+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:33:47.646+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:47.646+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:47.646+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578827, 2) }
2019-09-04T06:33:47.646+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18246
2019-09-04T06:33:47.646+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18235
2019-09-04T06:33:47.646+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18235
2019-09-04T06:33:47.646+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18249
2019-09-04T06:33:47.646+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18249
2019-09-04T06:33:47.646+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:47.646+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 18251
2019-09-04T06:33:47.646+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578827, 2)
2019-09-04T06:33:47.646+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578827, 2)
2019-09-04T06:33:47.646+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 18251
2019-09-04T06:33:47.646+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:47.646+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:33:47.646+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18250
2019-09-04T06:33:47.646+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18250
2019-09-04T06:33:47.646+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18253
2019-09-04T06:33:47.646+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18253
2019-09-04T06:33:47.646+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578827, 2), t: 1 }({ ts: Timestamp(1567578827, 2), t: 1 })
2019-09-04T06:33:47.646+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578827, 2)
2019-09-04T06:33:47.646+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18254
2019-09-04T06:33:47.646+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578827, 2) } } ] } sort: {} projection: {}
2019-09-04T06:33:47.646+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578827, 2) Sort: {} Proj: {} =============================
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578827, 2) || First: notFirst: full path: ts
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578827, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578827, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578827, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578827, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:47.647+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18254 2019-09-04T06:33:47.647+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:47.647+0000 D3 STORAGE [repl-writer-worker-1] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:47.647+0000 D3 REPL [repl-writer-worker-1] applying op: { ts: Timestamp(1567578827, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578827635), o: { $v: 1, $set: { ping: new Date(1567578827635) } } }, oplog application mode: Secondary 2019-09-04T06:33:47.647+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578827, 2) 2019-09-04T06:33:47.647+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 18256 2019-09-04T06:33:47.647+0000 D2 QUERY [repl-writer-worker-1] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" } 2019-09-04T06:33:47.647+0000 D4 WRITE [repl-writer-worker-1] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:33:47.647+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 18256 2019-09-04T06:33:47.647+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:47.647+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578827, 2), t: 1 }({ ts: Timestamp(1567578827, 2), t: 1 }) 2019-09-04T06:33:47.647+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578827, 2) 2019-09-04T06:33:47.647+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18255 2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:47.647+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:47.647+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:47.647+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18255 2019-09-04T06:33:47.647+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578827, 2) 2019-09-04T06:33:47.647+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:47.647+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18259 2019-09-04T06:33:47.647+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18259 2019-09-04T06:33:47.647+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578827, 2), t: 1 }({ ts: Timestamp(1567578827, 2), t: 1 }) 2019-09-04T06:33:47.647+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578827, 1), t: 1 }, durableWallTime: new Date(1567578827191), appliedOpTime: { ts: Timestamp(1567578827, 2), t: 1 }, appliedWallTime: new Date(1567578827635), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:47.647+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1231 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:17.647+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578827, 1), t: 1 }, durableWallTime: new Date(1567578827191), appliedOpTime: { ts: Timestamp(1567578827, 2), t: 1 }, appliedWallTime: new Date(1567578827635), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:47.647+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:17.647+0000 2019-09-04T06:33:47.648+0000 D2 ASIO [RS] Request 1231 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 1), $clusterTime: { clusterTime: Timestamp(1567578827, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 2) } 2019-09-04T06:33:47.648+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 1), $clusterTime: { clusterTime: Timestamp(1567578827, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:47.648+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:47.648+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:17.648+0000 2019-09-04T06:33:47.648+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578827, 2), t: 1 } 2019-09-04T06:33:47.648+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1232 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:57.648+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578827, 1), t: 1 } } 2019-09-04T06:33:47.648+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:17.648+0000 2019-09-04T06:33:47.654+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:33:47.654+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:47.654+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578827, 2), t: 1 }, durableWallTime: new Date(1567578827635), appliedOpTime: { ts: Timestamp(1567578827, 2), t: 1 }, appliedWallTime: new Date(1567578827635), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:47.654+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1233 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:17.654+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { 
ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578827, 2), t: 1 }, durableWallTime: new Date(1567578827635), appliedOpTime: { ts: Timestamp(1567578827, 2), t: 1 }, appliedWallTime: new Date(1567578827635), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:47.654+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:17.648+0000 2019-09-04T06:33:47.654+0000 D2 ASIO [RS] Request 1233 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 1), $clusterTime: { clusterTime: Timestamp(1567578827, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 2) } 2019-09-04T06:33:47.654+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 1), t: 1 }, lastCommittedWall: new Date(1567578827191), lastOpVisible: { ts: Timestamp(1567578827, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 1), $clusterTime: { clusterTime: Timestamp(1567578827, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:47.654+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:47.654+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:17.648+0000 2019-09-04T06:33:47.655+0000 D2 ASIO [RS] Request 1232 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 2), t: 1 }, lastCommittedWall: new Date(1567578827635), lastOpVisible: { ts: Timestamp(1567578827, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578827, 2), t: 1 }, lastCommittedWall: new Date(1567578827635), lastOpApplied: { ts: Timestamp(1567578827, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578827, 2), $clusterTime: { clusterTime: Timestamp(1567578827, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 2) } 2019-09-04T06:33:47.655+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 2), t: 1 }, lastCommittedWall: new Date(1567578827635), lastOpVisible: { ts: Timestamp(1567578827, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578827, 2), t: 1 }, lastCommittedWall: new Date(1567578827635), lastOpApplied: { ts: Timestamp(1567578827, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 2), $clusterTime: { clusterTime: Timestamp(1567578827, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:47.655+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:47.655+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:47.655+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000 2019-09-04T06:33:47.655+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000 2019-09-04T06:33:47.655+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578822, 2) 2019-09-04T06:33:47.655+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:33:57.696+0000 2019-09-04T06:33:47.655+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:33:57.674+0000 2019-09-04T06:33:47.655+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:47.655+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000 2019-09-04T06:33:47.655+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:16.839+0000 2019-09-04T06:33:47.655+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000 2019-09-04T06:33:47.655+0000 D3 REPL [conn387] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000 2019-09-04T06:33:47.655+0000 D3 REPL [conn387] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.455+0000 2019-09-04T06:33:47.655+0000 D3 REPL [conn397] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000 2019-09-04T06:33:47.655+0000 D3 REPL [conn397] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.962+0000 2019-09-04T06:33:47.655+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000 2019-09-04T06:33:47.655+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000 2019-09-04T06:33:47.655+0000 D3 REPL [conn389] Got notified of 
2019-09-04T06:33:47.655+0000 D3 REPL [conn389] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.596+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn391] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn391] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:56.309+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn399] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn399] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:02.478+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn377] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn377] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.752+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn396] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn396] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.897+0000
2019-09-04T06:33:47.655+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1234 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:57.655+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578827, 2), t: 1 } }
2019-09-04T06:33:47.655+0000 D3 REPL [conn398] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.655+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:17.648+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn398] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.987+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn372] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn372] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.660+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn388] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn388] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.054+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn386] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn386] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.645+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn394] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.655+0000 D3 REPL [conn394] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.261+0000
2019-09-04T06:33:47.656+0000 D3 REPL [conn392] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.656+0000 D3 REPL [conn392] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:58.760+0000
2019-09-04T06:33:47.656+0000 D3 REPL [conn362] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.656+0000 D3 REPL [conn362] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.661+0000
2019-09-04T06:33:47.656+0000 D3 REPL [conn400] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.656+0000 D3 REPL [conn400] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:03.490+0000
2019-09-04T06:33:47.656+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.656+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000
2019-09-04T06:33:47.656+0000 D3 REPL [conn395] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.656+0000 D3 REPL [conn395] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.702+0000
2019-09-04T06:33:47.656+0000 D3 REPL [conn385] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.656+0000 D3 REPL [conn385] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.131+0000
2019-09-04T06:33:47.656+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578827, 2), t: 1 }, 2019-09-04T06:33:47.635+0000
2019-09-04T06:33:47.656+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000
2019-09-04T06:33:47.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:47.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:47.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:47.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:47.687+0000 D2 COMMAND [conn411] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:47.687+0000 I COMMAND [conn411] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:47.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:47.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:47.718+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:47.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:47.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:47.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:47.734+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:47.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:47.734+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:47.746+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578827, 2)
2019-09-04T06:33:47.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:47.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:47.818+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:47.884+0000 D2 ASIO [RS] Request 1234 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578827, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578827862), o: { $v: 1, $set: { ping: new Date(1567578827856) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 2), t: 1 }, lastCommittedWall: new Date(1567578827635), lastOpVisible: { ts: Timestamp(1567578827, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578827, 2), t: 1 }, lastCommittedWall: new Date(1567578827635), lastOpApplied: { ts: Timestamp(1567578827, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 2), $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 3) }
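
The getMore/response pairs above (RemoteCommand 1234 and its successors) are the oplog fetcher's steady-state loop: an awaitable batch read against local.oplog.rs on the sync source, where maxTimeMS: 5000 is the server-side await before an empty batch is returned. A rough pymongo equivalent of that loop is sketched below; the connection string and direct-connection option are illustrative assumptions, not taken from this deployment.

    # Minimal sketch of tailing the oplog the way the fetcher does.
    # Assumes: pymongo installed, auth disabled (as in this node's startup
    # options), and the sync source reachable at the hostname from the log.
    from pymongo import MongoClient, CursorType

    client = MongoClient("mongodb://cmodb804.togewa.com:27019/?directConnection=true")
    oplog = client.local["oplog.rs"]

    # Start from the newest entry, analogous to the fetcher's lastFetched optime.
    last = next(oplog.find().sort("$natural", -1).limit(1))["ts"]

    # TAILABLE_AWAIT makes each getMore block server-side instead of busy-polling.
    cursor = oplog.find({"ts": {"$gt": last}}, cursor_type=CursorType.TAILABLE_AWAIT)
    while cursor.alive:
        for entry in cursor:
            print(entry["ts"], entry["op"], entry["ns"])

The server-side await is why a quiet cluster still produces these log lines only every few seconds rather than spinning on empty batches.
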
2019-09-04T06:33:47.884+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578827, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578827862), o: { $v: 1, $set: { ping: new Date(1567578827856) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 2), t: 1 }, lastCommittedWall: new Date(1567578827635), lastOpVisible: { ts: Timestamp(1567578827, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578827, 2), t: 1 }, lastCommittedWall: new Date(1567578827635), lastOpApplied: { ts: Timestamp(1567578827, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 2), $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:47.884+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:47.884+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578827, 3) and ending at ts: Timestamp(1567578827, 3)
2019-09-04T06:33:47.884+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:57.674+0000
2019-09-04T06:33:47.884+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:59.002+0000
2019-09-04T06:33:47.884+0000 D2 REPL [replication-0] oplog buffer has 0 bytes
2019-09-04T06:33:47.884+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578827, 3), t: 1 }
2019-09-04T06:33:47.884+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:47.884+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:16.839+0000
2019-09-04T06:33:47.884+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:47.884+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:47.884+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578827, 2)
2019-09-04T06:33:47.884+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18271
2019-09-04T06:33:47.884+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:47.884+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:47.884+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18271
2019-09-04T06:33:47.884+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:33:47.884+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:47.884+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578827, 3) }
2019-09-04T06:33:47.885+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:47.885+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578827, 2)
2019-09-04T06:33:47.885+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18274
2019-09-04T06:33:47.885+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:47.885+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:47.885+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18260
2019-09-04T06:33:47.885+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18274
2019-09-04T06:33:47.885+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18260
2019-09-04T06:33:47.885+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18277
2019-09-04T06:33:47.885+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18277
2019-09-04T06:33:47.885+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:47.885+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 18279
2019-09-04T06:33:47.885+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578827, 3)
2019-09-04T06:33:47.885+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578827, 3)
2019-09-04T06:33:47.885+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 18279
2019-09-04T06:33:47.885+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:47.885+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:33:47.885+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18278
2019-09-04T06:33:47.885+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18278
2019-09-04T06:33:47.885+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18281
2019-09-04T06:33:47.885+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18281
2019-09-04T06:33:47.885+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578827, 3), t: 1 }({ ts: Timestamp(1567578827, 3), t: 1 })
2019-09-04T06:33:47.885+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578827, 3)
2019-09-04T06:33:47.885+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18282
2019-09-04T06:33:47.885+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578827, 3) } } ] } sort: {} projection: {}
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578827, 3)
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Rated tree:
$and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578827, 3) || First: notFirst: full path: ts
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578827, 3)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Rated tree:
t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578827, 3)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Rated tree:
$or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578827, 3) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:47.885+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578827, 3)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:47.885+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18282
2019-09-04T06:33:47.885+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:47.885+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:47.885+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578827, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578827862), o: { $v: 1, $set: { ping: new Date(1567578827856) } } }, oplog application mode: Secondary
2019-09-04T06:33:47.885+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578827, 3)
2019-09-04T06:33:47.885+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 18284
2019-09-04T06:33:47.885+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }
2019-09-04T06:33:47.885+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:33:47.885+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 18284
2019-09-04T06:33:47.885+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:47.885+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578827, 3), t: 1 }({ ts: Timestamp(1567578827, 3), t: 1 })
2019-09-04T06:33:47.885+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578827, 3)
2019-09-04T06:33:47.885+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18283
2019-09-04T06:33:47.886+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:33:47.886+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:47.886+0000 D5 QUERY [rsSync-0] Rated tree:
$and
2019-09-04T06:33:47.886+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:47.886+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:33:47.886+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
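The D5 QUERY trace above shows why every branch ends in a collection scan: local.replset.minvalid carries only the _id index, so neither the t nor the ts predicate is indexable and the subplanner falls back to COLLSCAN for each child of the $or. The same decision can be reproduced from a client with explain(); a minimal sketch, assuming direct access to this config server and pymongo:

    # Re-run the minvalid predicate the replication code uses and inspect the plan.
    # Timestamp values are the ones from the log; any values show the same shape.
    from pymongo import MongoClient
    from bson import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    minvalid = client.local["replset.minvalid"]

    query = {"$or": [{"t": {"$lt": 1}},
                     {"t": 1, "ts": {"$lt": Timestamp(1567578827, 3)}}]}
    plan = minvalid.find(query).explain()
    print(plan["queryPlanner"]["winningPlan"])   # expect a COLLSCAN stage

For a one-document internal collection a collection scan is the cheapest plan anyway, which is why this is logged at debug level rather than flagged as a problem.
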
2019-09-04T06:33:47.886+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18283
2019-09-04T06:33:47.886+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578827, 3)
2019-09-04T06:33:47.886+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18287
2019-09-04T06:33:47.886+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18287
2019-09-04T06:33:47.886+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578827, 3), t: 1 }({ ts: Timestamp(1567578827, 3), t: 1 })
2019-09-04T06:33:47.886+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:47.886+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578827, 2), t: 1 }, durableWallTime: new Date(1567578827635), appliedOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, appliedWallTime: new Date(1567578827862), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 2), t: 1 }, lastCommittedWall: new Date(1567578827635), lastOpVisible: { ts: Timestamp(1567578827, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:47.886+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1235 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:17.886+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578827, 2), t: 1 }, durableWallTime: new Date(1567578827635), appliedOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, appliedWallTime: new Date(1567578827862), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 2), t: 1 }, lastCommittedWall: new Date(1567578827635), lastOpVisible: { ts: Timestamp(1567578827, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:47.886+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:17.886+0000
2019-09-04T06:33:47.886+0000 D2 ASIO [RS] Request 1235 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 2), t: 1 }, lastCommittedWall: new Date(1567578827635), lastOpVisible: { ts: Timestamp(1567578827, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 2), $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 3) }
2019-09-04T06:33:47.886+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 2), t: 1 }, lastCommittedWall: new Date(1567578827635), lastOpVisible: { ts: Timestamp(1567578827, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 2), $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:47.886+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578827, 3), t: 1 }
2019-09-04T06:33:47.886+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1236 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:57.886+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578827, 2), t: 1 } }
2019-09-04T06:33:47.886+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:47.886+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:17.886+0000
2019-09-04T06:33:47.886+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:17.886+0000
2019-09-04T06:33:47.925+0000 D2 ASIO [RS] Request 1236 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpApplied: { ts: Timestamp(1567578827, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 3), $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 3) }
2019-09-04T06:33:47.925+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpApplied: { ts: Timestamp(1567578827, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 3), $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 3) } target: cmodb804.togewa.com:27019
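
Request 1235 above is this secondary pushing replSetUpdatePosition upstream; the optimes array is the same per-member durable/applied progress that replSetGetStatus exposes. A small sketch for watching those optimes from a client (hostname taken from the log; auth is disabled per this node's startup options):

    # Print each member's replication position, the data behind the
    # replSetUpdatePosition traffic in the log.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        print(m["name"], m["stateStr"], m["optime"]["ts"])
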
Date(1567578827862), lastOpApplied: { ts: Timestamp(1567578827, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 3), $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:47.925+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:47.925+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:33:47.925+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578822, 3)
2019-09-04T06:33:47.925+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:33:59.002+0000
2019-09-04T06:33:47.925+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:33:58.891+0000
2019-09-04T06:33:47.925+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:47.925+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:16.839+0000
2019-09-04T06:33:47.925+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1237 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:57.925+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578827, 3), t: 1 } }
2019-09-04T06:33:47.925+0000 D3 REPL [conn387] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn387] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.455+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn397] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn397] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.962+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn396] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn396] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.897+0000
2019-09-04T06:33:47.925+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:17.886+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn388] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn388] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.054+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn362] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn362] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.661+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn400] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn400] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:03.490+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn395] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn395] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.702+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn398] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn398] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.987+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn394] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn394] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.261+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn372] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn372] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.660+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn386] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn386] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.645+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.925+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn385] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn385] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.131+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn392] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn392] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:58.760+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn399] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn399] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:02.478+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn389] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn389] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.596+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn377] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn377] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.752+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn391] Got notified of new snapshot: { ts: Timestamp(1567578827, 3), t: 1 }, 2019-09-04T06:33:47.862+0000
2019-09-04T06:33:47.926+0000 D3 REPL [conn391] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:56.309+0000
2019-09-04T06:33:47.936+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:33:47.936+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:47.936+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:47.936+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), appliedOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, appliedWallTime: new Date(1567578827862), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
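
Each "Got notified of new snapshot" / "waitUntilOpTime" pair above is a parked reader waking up as the majority snapshot advances; reads that arrive with an afterClusterTime requirement (for example from causally consistent sessions) block on this path when the secondary has not yet applied the requested optime. An illustrative pymongo sketch of a read that exercises that wait; the collection mirrors the config.lockpings traffic in this log, and this is not claimed to be the code that produced these connections:

    # Causally consistent reads carry afterClusterTime, which a lagging
    # secondary satisfies by waiting for a new committed snapshot.
    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    coll = client.config.lockpings.with_options(read_concern=ReadConcern("majority"))

    with client.start_session(causal_consistency=True) as s:
        coll.find_one(session=s)   # pins the session's operationTime
        coll.find_one(session=s)   # subsequent reads wait for that optime if needed
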
2019-09-04T06:33:47.936+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1238 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:17.936+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), appliedOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, appliedWallTime: new Date(1567578827862), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, durableWallTime: new Date(1567578818610), appliedOpTime: { ts: Timestamp(1567578818, 2), t: 1 }, appliedWallTime: new Date(1567578818610), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:47.936+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:17.886+0000
2019-09-04T06:33:47.936+0000 D2 ASIO [RS] Request 1238 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 3), $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 3) }
2019-09-04T06:33:47.936+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 3), $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:47.936+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:47.936+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:17.886+0000
2019-09-04T06:33:47.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:47.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:47.984+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578827, 3)
2019-09-04T06:33:48.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:48.002+0000 D4 FTDC [ftdc] full-time diagnostic data capture schema change: current document is longer than reference document
2019-09-04T06:33:48.036+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:48.136+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:48.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:48.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:48.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:48.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:48.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:48.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:48.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:48.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:48.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:48.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:48.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 1E2B5943F9D7B7F8405D151A769BF00BBEF1C791), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:48.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:33:48.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 1E2B5943F9D7B7F8405D151A769BF00BBEF1C791), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:48.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 1E2B5943F9D7B7F8405D151A769BF00BBEF1C791), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:48.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), opTime: { ts: Timestamp(1567578827, 3), t: 1 }, wallTime: new Date(1567578827862) }
2019-09-04T06:33:48.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 1E2B5943F9D7B7F8405D151A769BF00BBEF1C791), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:48.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:48.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:48.236+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:48.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:48.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:48.336+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:48.437+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:48.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:48.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:48.519+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:33:48.519+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:33:48.519+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:48.519+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:33:48.537+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:48.637+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:48.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:48.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:48.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:48.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:48.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:48.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:48.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:48.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:48.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:48.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:48.737+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:48.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:48.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
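
The steady isMaster traffic above (conn5, conn6, conn13, conn26, conn59, conn60, conn75, roughly every 500 ms per connection) is client/driver topology monitoring rather than application load; reslen:907 is simply the size of this node's isMaster reply. Any client can issue the same command; a minimal sketch, assuming pymongo:

    # Ask the node how it sees itself; this is the 4.2-era isMaster
    # handshake ("hello" replaces it on 4.4+).
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    reply = client.admin.command("isMaster")
    print(reply["ismaster"], reply["secondary"], reply.get("setName"))
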
2019-09-04T06:33:48.837+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:48.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:48.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1239) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:48.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1239 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:33:58.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:48.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:16.839+0000
2019-09-04T06:33:48.838+0000 D2 ASIO [Replication] Request 1239 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), opTime: { ts: Timestamp(1567578827, 3), t: 1 }, wallTime: new Date(1567578827862), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 3), $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 3) }
2019-09-04T06:33:48.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), opTime: { ts: Timestamp(1567578827, 3), t: 1 }, wallTime: new Date(1567578827862), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 3), $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:48.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:48.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1239) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), opTime: { ts: Timestamp(1567578827, 3), t: 1 }, wallTime: new Date(1567578827862), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 3), $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 3) }
2019-09-04T06:33:48.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:33:48.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:50.838Z
2019-09-04T06:33:48.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:16.839+0000
2019-09-04T06:33:48.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:48.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1240) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:48.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1240 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:33:58.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:48.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:16.839+0000
2019-09-04T06:33:48.839+0000 D2 ASIO [Replication] Request 1240 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), opTime: { ts: Timestamp(1567578827, 3), t: 1 }, wallTime: new Date(1567578827862), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578827, 3), $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 3) }
2019-09-04T06:33:48.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), opTime: { ts: Timestamp(1567578827, 3), t: 1 }, wallTime: new Date(1567578827862), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578827, 3), $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 3) } target: cmodb802.togewa.com:27019
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:48.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:48.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1240) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), opTime: { ts: Timestamp(1567578827, 3), t: 1 }, wallTime: new Date(1567578827862), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578827, 3), $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 3) } 2019-09-04T06:33:48.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:48.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:33:58.891+0000 2019-09-04T06:33:48.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:34:00.074+0000 2019-09-04T06:33:48.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:48.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:50.839Z 2019-09-04T06:33:48.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:48.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:18.839+0000 2019-09-04T06:33:48.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:18.839+0000 2019-09-04T06:33:48.885+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:48.885+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:48.885+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578827, 3) 2019-09-04T06:33:48.885+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18309 2019-09-04T06:33:48.885+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:48.885+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:48.885+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18309 2019-09-04T06:33:48.886+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18312 2019-09-04T06:33:48.886+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18312 
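The REPL_HB/ASIO entries above trace one complete heartbeat round trip per member: replexec-3 schedules the replSetHeartbeat command, the ASIO layer logs the raw response, and the election timeout is postponed because the primary (cmodb802, state 1) answered. The same member states and optimes can be read back interactively; a minimal mongo-shell sketch, assuming a connection to this node (cmodb803.togewa.com:27019):

    // One line per member: name, state, and last applied optime,
    // i.e. the fields the heartbeat responses above carry.
    rs.status().members.forEach(function (m) {
        print(m.name, m.stateStr, tojson(m.optime));
    });

    // The D2 REPL_HB lines come from the replication.heartbeats log component;
    // raising just that component is cheaper than a global debug level.
    db.setLogLevel(2, "replication.heartbeats");
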
2019-09-04T06:33:48.886+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578827, 3), t: 1 }({ ts: Timestamp(1567578827, 3), t: 1 }) 2019-09-04T06:33:48.937+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:48.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:48.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:49.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:49.038+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:49.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 1E2B5943F9D7B7F8405D151A769BF00BBEF1C791), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:49.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:49.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 1E2B5943F9D7B7F8405D151A769BF00BBEF1C791), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:49.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 1E2B5943F9D7B7F8405D151A769BF00BBEF1C791), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:49.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), opTime: { ts: Timestamp(1567578827, 3), t: 1 }, wallTime: new Date(1567578827862) } 2019-09-04T06:33:49.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 1E2B5943F9D7B7F8405D151A769BF00BBEF1C791), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:49.138+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:49.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:49.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:49.190+0000 I NETWORK [listener] connection accepted from 10.108.2.15:39218 #413 (88 connections now open) 2019-09-04T06:33:49.190+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:49.190+0000 D2 COMMAND [conn413] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux 
release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:49.190+0000 I NETWORK [conn413] received client metadata from 10.108.2.15:39218 conn413: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:49.191+0000 I COMMAND [conn413] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:49.191+0000 D2 COMMAND [conn413] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578827, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5acd0f8f28dab2b56d05'), operName: "", parentOperId: "5d6f5acb0f8f28dab2b56cf8" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578827, 1), signature: { hash: BinData(0, 1E2B5943F9D7B7F8405D151A769BF00BBEF1C791), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578827, 1), t: 1 } }, $db: "config" } 2019-09-04T06:33:49.191+0000 D1 TRACKING [conn413] Cmd: find, TrackingId: 5d6f5acb0f8f28dab2b56cf8|5d6f5acd0f8f28dab2b56d05 2019-09-04T06:33:49.191+0000 D1 COMMAND [conn413] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578827, 1), t: 1 } } } 2019-09-04T06:33:49.191+0000 D3 STORAGE [conn413] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:33:49.191+0000 D1 COMMAND [conn413] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578827, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5acd0f8f28dab2b56d05'), operName: "", parentOperId: "5d6f5acb0f8f28dab2b56cf8" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578827, 1), signature: { hash: BinData(0, 1E2B5943F9D7B7F8405D151A769BF00BBEF1C791), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578827, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578827, 3) 2019-09-04T06:33:49.191+0000 D2 QUERY [conn413] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:33:49.191+0000 I COMMAND [conn413] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578827, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5acd0f8f28dab2b56d05'), operName: "", parentOperId: "5d6f5acb0f8f28dab2b56cf8" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578827, 1), signature: { hash: BinData(0, 1E2B5943F9D7B7F8405D151A769BF00BBEF1C791), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578827, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:33:49.194+0000 D2 COMMAND [conn413] run command config.$cmd { find: "collections", filter: { _id: /^config\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578827, 3), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5acd0f8f28dab2b56d0d'), operName: "", parentOperId: "5d6f5acd0f8f28dab2b56d0c" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 1E2B5943F9D7B7F8405D151A769BF00BBEF1C791), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578827, 3), t: 1 } }, $db: "config" } 2019-09-04T06:33:49.194+0000 D1 TRACKING [conn413] Cmd: find, TrackingId: 5d6f5acd0f8f28dab2b56d0c|5d6f5acd0f8f28dab2b56d0d 2019-09-04T06:33:49.194+0000 D1 COMMAND [conn413] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578827, 3), t: 1 } } } 2019-09-04T06:33:49.194+0000 D3 STORAGE [conn413] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:33:49.194+0000 D1 COMMAND [conn413] Using 'committed' snapshot: { find: "collections", filter: { _id: /^config\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578827, 3), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5acd0f8f28dab2b56d0d'), operName: "", parentOperId: "5d6f5acd0f8f28dab2b56d0c" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 1E2B5943F9D7B7F8405D151A769BF00BBEF1C791), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578827, 3), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578827, 3) 2019-09-04T06:33:49.194+0000 D5 QUERY [conn413] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.collectionsTree: _id regex /^config\./ Sort: {} Proj: {} ============================= 2019-09-04T06:33:49.194+0000 D5 QUERY [conn413] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" } 2019-09-04T06:33:49.194+0000 D5 QUERY [conn413] Predicate over field '_id' 2019-09-04T06:33:49.194+0000 D2 QUERY [conn413] Relevant index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" } 2019-09-04T06:33:49.195+0000 D5 QUERY [conn413] Rated tree: _id regex /^config\./ || First: 0 notFirst: full path: _id 2019-09-04T06:33:49.195+0000 D5 QUERY [conn413] Tagging memoID 1 2019-09-04T06:33:49.195+0000 D5 QUERY [conn413] Enumerator: memo just before moving: [Node #1]: AND enumstate counter 0 choice 0: subnodes: idx[0] pos 0 pred _id regex /^config\./ 2019-09-04T06:33:49.195+0000 D5 QUERY [conn413] About to build solntree from tagged tree: _id regex /^config\./ || Selected Index #0 pos 0 combine 1 2019-09-04T06:33:49.195+0000 D5 QUERY [conn413] Planner: adding solution: FETCH ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [{ _id: 1 }, ] ---Child: ------IXSCAN ---------indexName = _id_ keyPattern = { _id: 1 } ---------direction = 1 ---------bounds = field #0['_id']: ["config.", "config/"), [/^config\./, /^config\./] ---------fetched = 0 ---------sortedByDiskLoc = 0 ---------getSort = [{ _id: 1 }, ] 2019-09-04T06:33:49.195+0000 D5 QUERY [conn413] Planner: outputted 1 indexed solutions. 2019-09-04T06:33:49.195+0000 D2 QUERY [conn413] Only one plan is available; it will be run but will not be cached. query: { _id: /^config\./ } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 } 2019-09-04T06:33:49.195+0000 D3 STORAGE [conn413] WT begin_transaction for snapshot id 18320 2019-09-04T06:33:49.195+0000 D3 STORAGE [conn413] WT rollback_transaction for snapshot id 18320 2019-09-04T06:33:49.195+0000 I COMMAND [conn413] command config.collections command: find { find: "collections", filter: { _id: /^config\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578827, 3), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5acd0f8f28dab2b56d0d'), operName: "", parentOperId: "5d6f5acd0f8f28dab2b56d0c" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578827, 3), signature: { hash: BinData(0, 1E2B5943F9D7B7F8405D151A769BF00BBEF1C791), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578827, 3), t: 1 } }, $db: "config" } planSummary: IXSCAN { _id: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:4A611094 planCacheKey:B6794B7A reslen:702 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:33:49.196+0000 D2 COMMAND [conn413] run command config.$cmd { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(0, 0) } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578827, 3), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5acd0f8f28dab2b56d10'), operName: "", parentOperId: "5d6f5acd0f8f28dab2b56d0e" }, $readPreference: { mode: "nearest" }, $replData: 
1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 2A076294130E581EA4A3E1DBDACDB41C1F5E112C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578827, 3), t: 1 } }, $db: "config" } 2019-09-04T06:33:49.196+0000 D1 TRACKING [conn413] Cmd: find, TrackingId: 5d6f5acd0f8f28dab2b56d0e|5d6f5acd0f8f28dab2b56d10 2019-09-04T06:33:49.196+0000 D1 COMMAND [conn413] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578827, 3), t: 1 } } } 2019-09-04T06:33:49.196+0000 D3 STORAGE [conn413] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:33:49.196+0000 D1 COMMAND [conn413] Using 'committed' snapshot: { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(0, 0) } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578827, 3), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5acd0f8f28dab2b56d10'), operName: "", parentOperId: "5d6f5acd0f8f28dab2b56d0e" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 2A076294130E581EA4A3E1DBDACDB41C1F5E112C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578827, 3), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578827, 3) 2019-09-04T06:33:49.196+0000 D5 QUERY [conn413] Tagging the match expression according to cache data: Filter: $and ns $eq "config.system.sessions" lastmod $gte Timestamp(0, 0) Cache data: (index-tagged expression tree: tree=Node ---Leaf (ns_1_lastmod_1, ), pos: 0, can combine? 1 ---Leaf (ns_1_lastmod_1, ), pos: 1, can combine? 
1 ) 2019-09-04T06:33:49.196+0000 D5 QUERY [conn413] Index 0: (ns_1_min_1, ) 2019-09-04T06:33:49.196+0000 D5 QUERY [conn413] Index 1: (ns_1_shard_1_min_1, ) 2019-09-04T06:33:49.196+0000 D5 QUERY [conn413] Index 2: (ns_1_lastmod_1, ) 2019-09-04T06:33:49.196+0000 D5 QUERY [conn413] Index 3: (_id_, ) 2019-09-04T06:33:49.196+0000 D2 ASIO [RS] Request 1237 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578829, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578829194), o: { $v: 1, $set: { ping: new Date(1567578829191), up: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpApplied: { ts: Timestamp(1567578829, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 3), $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } 2019-09-04T06:33:49.196+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578829, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578829194), o: { $v: 1, $set: { ping: new Date(1567578829191), up: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpApplied: { ts: Timestamp(1567578829, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 3), $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:49.196+0000 D5 QUERY [conn413] Tagged tree: $and ns $eq "config.system.sessions" || Selected Index #2 pos 0 combine 1 lastmod $gte Timestamp(0, 0) || Selected Index #2 pos 1 combine 1 2019-09-04T06:33:49.196+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:49.196+0000 D5 QUERY [conn413] Planner: solution constructed from the cache: FETCH ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ] ---Child: ------IXSCAN ---------indexName = ns_1_lastmod_1 keyPattern = { ns: 1, lastmod: 1 } ---------direction = 1 ---------bounds = field #0['ns']: 
["config.system.sessions", "config.system.sessions"], field #1['lastmod']: [Timestamp(0, 0), Timestamp(4294967295, 4294967295)] ---------fetched = 0 ---------sortedByDiskLoc = 0 ---------getSort = [{ lastmod: 1 }, { ns: 1 }, { ns: 1, lastmod: 1 }, ] 2019-09-04T06:33:49.196+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578829, 1) and ending at ts: Timestamp(1567578829, 1) 2019-09-04T06:33:49.196+0000 D3 STORAGE [conn413] WT begin_transaction for snapshot id 18322 2019-09-04T06:33:49.196+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:00.074+0000 2019-09-04T06:33:49.196+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:00.322+0000 2019-09-04T06:33:49.196+0000 D2 QUERY [conn413] score(1.5003) = baseScore(1) + productivity((1 advanced)/(2 works) = 0.5) + tieBreakers(0.0001 noFetchBonus + 0.0001 noSortBonus + 0.0001 noIxisectBonus = 0.0003) 2019-09-04T06:33:49.196+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578829, 1), t: 1 } 2019-09-04T06:33:49.196+0000 D3 STORAGE [conn413] WT rollback_transaction for snapshot id 18322 2019-09-04T06:33:49.196+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:49.196+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:49.196+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578827, 3) 2019-09-04T06:33:49.196+0000 I COMMAND [conn413] command config.chunks command: find { find: "chunks", filter: { ns: "config.system.sessions", lastmod: { $gte: Timestamp(0, 0) } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578827, 3), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5acd0f8f28dab2b56d10'), operName: "", parentOperId: "5d6f5acd0f8f28dab2b56d0e" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 2A076294130E581EA4A3E1DBDACDB41C1F5E112C), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578827, 3), t: 1 } }, $db: "config" } planSummary: IXSCAN { ns: 1, lastmod: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:1DDA71BE planCacheKey:167D77D5 reslen:788 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:33:49.196+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:49.196+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:18.839+0000 2019-09-04T06:33:49.196+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18324 2019-09-04T06:33:49.196+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:49.196+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, 
autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:49.196+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18324 2019-09-04T06:33:49.196+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:33:49.196+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:49.196+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:49.196+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578829, 1) } 2019-09-04T06:33:49.196+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578827, 3) 2019-09-04T06:33:49.196+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18328 2019-09-04T06:33:49.196+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:49.197+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18313 2019-09-04T06:33:49.197+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:49.197+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18328 2019-09-04T06:33:49.197+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18313 2019-09-04T06:33:49.197+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18331 2019-09-04T06:33:49.197+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18331 2019-09-04T06:33:49.197+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:49.197+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 18333 2019-09-04T06:33:49.197+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578829, 1) 2019-09-04T06:33:49.197+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578829, 1) 2019-09-04T06:33:49.197+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 18333 2019-09-04T06:33:49.197+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:49.197+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:33:49.197+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18332 2019-09-04T06:33:49.197+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18332 2019-09-04T06:33:49.197+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18335 2019-09-04T06:33:49.197+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18335 2019-09-04T06:33:49.197+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578829, 1), t: 1 }({ ts: Timestamp(1567578829, 1), t: 1 }) 2019-09-04T06:33:49.197+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578829, 1) 2019-09-04T06:33:49.197+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18336 2019-09-04T06:33:49.197+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, 
ts: { $lt: Timestamp(1567578829, 1) } } ] } sort: {} projection: {} 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578829, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578829, 1) || First: notFirst: full path: ts 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578829, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578829, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578829, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
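The D5 QUERY entries around this point show the subplanner decomposing the rooted $or that rsSync-0 runs against local.replset.minvalid: each branch rates the lone _id index as unusable ("outputted 0 indexed solutions"), so the planner falls back to a collection scan. The same plan selection can be reproduced without executing the query; a minimal mongo-shell sketch, with the filter copied from the trace above (the timestamp value is simply the one that happened to be logged):

    rs.slaveOk();  // this node is a secondary; permit reads against it

    // Plan, but do not execute, the minvalid query from the trace above.
    // winningPlan should report a COLLSCAN, since local.replset.minvalid
    // carries only the default _id index.
    db.getSiblingDB("local")
      .getCollection("replset.minvalid")
      .find({ $or: [ { t: { $lt: 1 } },
                     { t: 1, ts: { $lt: Timestamp(1567578829, 1) } } ] })
      .explain("queryPlanner");
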
2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578829, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:49.197+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18336 2019-09-04T06:33:49.197+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:49.197+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:49.197+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578829, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578829194), o: { $v: 1, $set: { ping: new Date(1567578829191), up: 0 } } }, oplog application mode: Secondary 2019-09-04T06:33:49.197+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578829, 1) 2019-09-04T06:33:49.197+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 18338 2019-09-04T06:33:49.197+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:33:49.197+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:33:49.197+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 18338 2019-09-04T06:33:49.197+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:49.197+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578829, 1), t: 1 }({ ts: Timestamp(1567578829, 1), t: 1 }) 2019-09-04T06:33:49.197+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578829, 1) 2019-09-04T06:33:49.197+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18337 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:49.197+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:49.197+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:49.197+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18337 2019-09-04T06:33:49.197+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578829, 1) 2019-09-04T06:33:49.197+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18341 2019-09-04T06:33:49.197+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18341 2019-09-04T06:33:49.197+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578829, 1), t: 1 }({ ts: Timestamp(1567578829, 1), t: 1 }) 2019-09-04T06:33:49.197+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:49.197+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), appliedOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, appliedWallTime: new Date(1567578827862), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), appliedOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, appliedWallTime: new Date(1567578827862), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:49.197+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1241 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:19.197+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), appliedOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, appliedWallTime: new Date(1567578827862), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), appliedOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, appliedWallTime: new Date(1567578827862), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:49.198+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:19.197+0000 2019-09-04T06:33:49.198+0000 D2 ASIO [RS] Request 1241 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 3), $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } 2019-09-04T06:33:49.198+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 3), $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:49.198+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:49.198+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:19.198+0000 2019-09-04T06:33:49.198+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578829, 1), t: 1 } 2019-09-04T06:33:49.198+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1242 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:59.198+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578827, 3), t: 1 } } 2019-09-04T06:33:49.198+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:19.198+0000 2019-09-04T06:33:49.200+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:33:49.200+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:49.200+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), appliedOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, appliedWallTime: new Date(1567578827862), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), appliedOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, appliedWallTime: new Date(1567578827862), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:49.200+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1243 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:19.200+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { 
ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), appliedOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, appliedWallTime: new Date(1567578827862), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, durableWallTime: new Date(1567578827862), appliedOpTime: { ts: Timestamp(1567578827, 3), t: 1 }, appliedWallTime: new Date(1567578827862), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578827, 3), t: 1 }, lastCommittedWall: new Date(1567578827862), lastOpVisible: { ts: Timestamp(1567578827, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:49.200+0000 D2 ASIO [RS] Request 1242 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpApplied: { ts: Timestamp(1567578829, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } 2019-09-04T06:33:49.200+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:19.198+0000 2019-09-04T06:33:49.200+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpApplied: { ts: Timestamp(1567578829, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:49.200+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:49.200+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:49.200+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.200+0000 D2 REPL [replication-1] 
Setting replication's stable optime to { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.200+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578824, 1) 2019-09-04T06:33:49.200+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:00.322+0000 2019-09-04T06:33:49.200+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:00.306+0000 2019-09-04T06:33:49.200+0000 D2 ASIO [RS] Request 1243 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } 2019-09-04T06:33:49.200+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:49.200+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.200+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000 2019-09-04T06:33:49.200+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:49.200+0000 D3 REPL [conn397] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.200+0000 D3 REPL [conn397] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.962+0000 2019-09-04T06:33:49.200+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1244 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:33:59.200+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578829, 1), t: 1 } } 2019-09-04T06:33:49.200+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.200+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000 2019-09-04T06:33:49.200+0000 D3 REPL [conn400] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.200+0000 D3 REPL [conn400] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:03.490+0000 2019-09-04T06:33:49.200+0000 D3 REPL [conn398] Got notified of new 
snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.200+0000 D3 REPL [conn398] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.987+0000 2019-09-04T06:33:49.200+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:19.200+0000 2019-09-04T06:33:49.200+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.200+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000 2019-09-04T06:33:49.200+0000 D3 REPL [conn391] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.200+0000 D3 REPL [conn391] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:56.309+0000 2019-09-04T06:33:49.201+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:49.201+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:18.839+0000 2019-09-04T06:33:49.200+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:19.200+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn392] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn392] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:58.760+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn399] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn399] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:02.478+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn387] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn387] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.455+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn396] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn396] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.897+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn388] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn388] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.054+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 
2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn362] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn362] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.661+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn395] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn395] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.702+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn394] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn394] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.261+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn386] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn386] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.645+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn385] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn385] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.131+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn389] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn389] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:52.596+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn377] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn377] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.752+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn372] Got notified of new snapshot: { ts: Timestamp(1567578829, 1), t: 1 }, 2019-09-04T06:33:49.194+0000 2019-09-04T06:33:49.201+0000 D3 REPL [conn372] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:51.660+0000 2019-09-04T06:33:49.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:49.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:49.233+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:49.233+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:49.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:49.234+0000 I COMMAND 
[conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:49.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:49.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:49.238+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:49.296+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578829, 1) 2019-09-04T06:33:49.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:49.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:49.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions. 2019-09-04T06:33:49.338+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:49.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry 2019-09-04T06:33:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:33:49.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } 2019-09-04T06:33:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:49.370+0000 D5 QUERY [shard-registry-reload] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:33:49.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:33:49.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:33:49.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and 2019-09-04T06:33:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:49.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached. 
2019-09-04T06:33:49.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578829, 1)
2019-09-04T06:33:49.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 18351
2019-09-04T06:33:49.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 18351
2019-09-04T06:33:49.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:646 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:33:49.370+0000 D1 SHARDING [shard-registry-reload] found 3 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578829, 1), t: 1 }
2019-09-04T06:33:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018
2019-09-04T06:33:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018
2019-09-04T06:33:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018
2019-09-04T06:33:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018
2019-09-04T06:33:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018
2019-09-04T06:33:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018
2019-09-04T06:33:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS
2019-09-04T06:33:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002
2019-09-04T06:33:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1245 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:33:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:33:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1246 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:33:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:33:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001
2019-09-04T06:33:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1247 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:33:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:33:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1248 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:33:54.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:33:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000
2019-09-04T06:33:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1249 --
target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:33:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:33:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1250 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:33:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:33:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1247 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578827, 1), t: 1 }, lastWriteDate: new Date(1567578827000), majorityOpTime: { ts: Timestamp(1567578827, 1), t: 1 }, majorityWriteDate: new Date(1567578827000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578829386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578827, 1), $configServerState: { opTime: { ts: Timestamp(1567578818, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578827, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 1) } 2019-09-04T06:33:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578827, 1), t: 1 }, lastWriteDate: new Date(1567578827000), majorityOpTime: { ts: Timestamp(1567578827, 1), t: 1 }, majorityWriteDate: new Date(1567578827000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578829386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578827, 1), $configServerState: { opTime: { ts: Timestamp(1567578818, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578827, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 1) } target: cmodb808.togewa.com:27018 2019-09-04T06:33:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1248 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578827, 1), t: 1 }, lastWriteDate: new Date(1567578827000), majorityOpTime: { ts: Timestamp(1567578827, 1), t: 1 }, majorityWriteDate: new Date(1567578827000) }, 
maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578829386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 1), $configServerState: { opTime: { ts: Timestamp(1567578810, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578827, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 1) } 2019-09-04T06:33:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578827, 1), t: 1 }, lastWriteDate: new Date(1567578827000), majorityOpTime: { ts: Timestamp(1567578827, 1), t: 1 }, majorityWriteDate: new Date(1567578827000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578829386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578827, 1), $configServerState: { opTime: { ts: Timestamp(1567578810, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578827, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578827, 1) } target: cmodb809.togewa.com:27018 2019-09-04T06:33:49.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms 2019-09-04T06:33:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1245 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578819, 1), t: 1 }, lastWriteDate: new Date(1567578819000), majorityOpTime: { ts: Timestamp(1567578819, 1), t: 1 }, majorityWriteDate: new Date(1567578819000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578829386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578819, 1), $configServerState: { opTime: { ts: Timestamp(1567578818, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578823, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578819, 1) } 2019-09-04T06:33:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ 
"cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578819, 1), t: 1 }, lastWriteDate: new Date(1567578819000), majorityOpTime: { ts: Timestamp(1567578819, 1), t: 1 }, majorityWriteDate: new Date(1567578819000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578829386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578819, 1), $configServerState: { opTime: { ts: Timestamp(1567578818, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578823, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578819, 1) } target: cmodb810.togewa.com:27018 2019-09-04T06:33:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1249 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578823, 1), t: 1 }, lastWriteDate: new Date(1567578823000), majorityOpTime: { ts: Timestamp(1567578823, 1), t: 1 }, majorityWriteDate: new Date(1567578823000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578829385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578823, 1), $configServerState: { opTime: { ts: Timestamp(1567578818, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578823, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578823, 1) } 2019-09-04T06:33:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578823, 1), t: 1 }, lastWriteDate: new Date(1567578823000), majorityOpTime: { ts: Timestamp(1567578823, 1), t: 1 }, majorityWriteDate: new Date(1567578823000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578829385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1567578823, 1), $configServerState: { opTime: { ts: Timestamp(1567578818, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578823, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578823, 1) } target: cmodb806.togewa.com:27018 2019-09-04T06:33:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1250 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578823, 1), t: 1 }, lastWriteDate: new Date(1567578823000), majorityOpTime: { ts: Timestamp(1567578823, 1), t: 1 }, majorityWriteDate: new Date(1567578823000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578829386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578823, 1), $configServerState: { opTime: { ts: Timestamp(1567578818, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578823, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578823, 1) } 2019-09-04T06:33:49.386+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578823, 1), t: 1 }, lastWriteDate: new Date(1567578823000), majorityOpTime: { ts: Timestamp(1567578823, 1), t: 1 }, majorityWriteDate: new Date(1567578823000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578829386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578823, 1), $configServerState: { opTime: { ts: Timestamp(1567578818, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578823, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578823, 1) } target: cmodb807.togewa.com:27018 2019-09-04T06:33:49.386+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms 2019-09-04T06:33:49.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1246 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578819, 1), t: 1 }, lastWriteDate: new Date(1567578819000), majorityOpTime: { ts: Timestamp(1567578819, 1), t: 1 }, majorityWriteDate: new Date(1567578819000) }, 
maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578829386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578819, 1), $configServerState: { opTime: { ts: Timestamp(1567578818, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578823, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578819, 1) } 2019-09-04T06:33:49.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578819, 1), t: 1 }, lastWriteDate: new Date(1567578819000), majorityOpTime: { ts: Timestamp(1567578819, 1), t: 1 }, majorityWriteDate: new Date(1567578819000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578829386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578819, 1), $configServerState: { opTime: { ts: Timestamp(1567578818, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578823, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578819, 1) } target: cmodb811.togewa.com:27018 2019-09-04T06:33:49.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms 2019-09-04T06:33:49.438+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:49.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:49.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:49.538+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:49.603+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578829603) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } 2019-09-04T06:33:49.603+0000 D4 - [replSetDistLockPinger] Taking ticket. 
Available: 1000000000
2019-09-04T06:33:49.603+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178
2019-09-04T06:33:49.603+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:33:49.623+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" :
3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : 
"7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xADB043) [0x561749a63043] mongod(+0x13B2606) [0x56174a33a606] mongod(+0x13B3A55) [0x56174a33ba55] mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894] mongod(+0x10FA899) [0x56174a082899] mongod(+0x10FBF53) [0x56174a083f53] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(+0x1FBD2EE) [0x56174af452ee] mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa] mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2] mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b] mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e] mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc] mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1] mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:49.623+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. 
2019-09-04T06:33:49.623+0000 D4 STORAGE [replSetDistLockPinger] flushed journal
2019-09-04T06:33:49.623+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578829603) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:33:49.623+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578829603) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 20ms
2019-09-04T06:33:49.638+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:49.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:49.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:49.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:49.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:49.733+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:49.733+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:49.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:49.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:49.739+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:49.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:49.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
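[Annotation] The distlock pinger's findAndModify above is rejected with NotMaster (errCode 10107) because this config server member is currently a secondary; the pinger logs the exception (with the backtrace, since traceAllExceptions is on) and simply retries on its next ping interval, so the error is noisy but expected. A hedged PyMongo sketch of the same config.lockpings upsert, including the log's write concern; host and the directConnection flag are assumptions, and a write sent to a secondary raises the not-primary error shown:

# Sketch: reproduce the replSetDistLockPinger's lockpings upsert; PyMongo
# surfaces NotMaster/NotPrimary rejections via AutoReconnect subclasses.
from datetime import datetime, timezone
from pymongo import MongoClient, WriteConcern
from pymongo.errors import AutoReconnect, OperationFailure

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
lockpings = client.config.get_collection(
    "lockpings", write_concern=WriteConcern(w="majority", wtimeout=15000)
)
try:
    lockpings.find_one_and_update(
        {"_id": "ConfigServer"},
        {"$set": {"ping": datetime.now(timezone.utc)}},
        upsert=True,
    )
except (AutoReconnect, OperationFailure) as exc:
    print("not primary, retry later:", exc)  # mirrors the pinger's retry loop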
"local.__system", $db: "admin" } 2019-09-04T06:33:49.834+0000 I NETWORK [conn414] received client metadata from 10.108.2.51:59272 conn414: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:49.835+0000 I COMMAND [conn414] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:49.839+0000 D2 COMMAND [conn414] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 6FA9AC81673D99E3CEF9E3D81A3010F2FE222A78), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:49.839+0000 D1 REPL [conn414] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578829, 1), t: 1 } 2019-09-04T06:33:49.839+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000 2019-09-04T06:33:49.839+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:49.939+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:49.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:49.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:50.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:50.002+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:33:50.002+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:33:50.002+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:33:50.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:33:50.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:33:50.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:33:50.011+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:33:50.011+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:33:50.011+0000 I COMMAND [conn90] command admin.$cmd 
command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:33:50.011+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:33:50.011+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:50.012+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:33:50.012+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:33:50.012+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:33:50.014+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:33:50.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:50.014+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:33:50.014+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:33:50.014+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:33:50.014+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:33:50.014+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:33:50.014+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:33:50.014+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:33:50.014+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:50.014+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:50.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:50.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578829, 1) 2019-09-04T06:33:50.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18369 2019-09-04T06:33:50.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18369 2019-09-04T06:33:50.014+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:50.014+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:33:50.014+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:33:50.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:33:50.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:50.014+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:33:50.014+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:33:50.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:33:50.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578829, 1) 2019-09-04T06:33:50.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18372 2019-09-04T06:33:50.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18372 2019-09-04T06:33:50.014+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:50.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:50.015+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:33:50.015+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:33:50.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578829, 1) 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18374 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18374 2019-09-04T06:33:50.015+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:549 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:50.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:33:50.015+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. 
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:33:50.015+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:33:50.015+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:33:50.015+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18377 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18377 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18378 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18378 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18379 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18379 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18380 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18380 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18381 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18381 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18382 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
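[Annotation] For reference, the pair of $natural-order finds conn90 ran against local.oplog.rs a few entries back is the standard oplog-window probe: the timestamps of the first and last oplog entries bound how far a secondary can fall behind and still catch up. A hedged PyMongo sketch of the same measurement (host, port, and directConnection are assumptions; reading the local database needs suitable privileges):

# Sketch: reproduce conn90's oplog-window probe; bson.Timestamp.time is the
# seconds component, so the delta below is the window in seconds.
from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
oplog = client.local["oplog.rs"]
first = oplog.find_one(sort=[("$natural", 1)])
last = oplog.find_one(sort=[("$natural", -1)])
print("oplog window (s):", last["ts"].time - first["ts"].time)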
2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:33:50.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18382 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18383 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18383 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18384 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18384 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18385 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18385 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18386 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18386 
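The WT begin_transaction / rollback_transaction bookends around each fetch are routine: read-only units of work in mongod never commit, so the WiredTiger snapshot is always released via rollback_transaction. These rollback lines are cleanup, not failures. For reference, the supported external equivalent of the catalog reads shown here is listCollections; a sketch, with the connection string assumed as before:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")

    # listCollections reads the same collection catalog entries (CCE)
    # that the D3 lines above show the server fetching internally.
    print(sorted(client.get_database("config").list_collection_names()))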
2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18387 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
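The four consecutive "looking up metadata for: config.chunks" lines just above are not duplicates: the catalog entry is consulted once per index, and config.chunks carries exactly four (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1, _id_), as its idxIdent map shows. The same pattern holds throughout this excerpt (two lookups for config.lockpings and config.shards, three for config.locks and config.tags). A quick cross-check from a driver, assuming the same connection:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")

    # index_information() returns one entry per index; for config.chunks the
    # names should match the idxIdent map in the metadata above.
    idx = client.config.chunks.index_information()
    assert set(idx) == {"_id_", "ns_1_min_1", "ns_1_shard_1_min_1", "ns_1_lastmod_1"}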
2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18387 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18388 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18388 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18389 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18389 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18390 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18390 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18391 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18391 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18392 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18392 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18393 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18393 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18394 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18394 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18395 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:33:50.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 18395 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18396 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18396 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18397 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18397 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18398 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18398 2019-09-04T06:33:50.017+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:33:50.017+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18400 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18400 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18401 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18401 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18402 2019-09-04T06:33:50.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18402 2019-09-04T06:33:50.017+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:50.018+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18404 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18404 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18405 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18405 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18406 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18406 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18407 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18407 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18408 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18408 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18409 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18409 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18410 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18410 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 18411 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18411 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18412 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18412 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18413 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18413 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18414 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18414 2019-09-04T06:33:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18415 2019-09-04T06:33:50.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18415 2019-09-04T06:33:50.019+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:50.031+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:33:50.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18417 2019-09-04T06:33:50.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18417 2019-09-04T06:33:50.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18418 2019-09-04T06:33:50.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18418 2019-09-04T06:33:50.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18419 2019-09-04T06:33:50.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18419 2019-09-04T06:33:50.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18420 2019-09-04T06:33:50.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18420 2019-09-04T06:33:50.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18421 2019-09-04T06:33:50.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18421 2019-09-04T06:33:50.031+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18422 2019-09-04T06:33:50.031+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18422 2019-09-04T06:33:50.031+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:33:50.039+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:50.139+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:50.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:50.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:33:50.197+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:50.197+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:50.197+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578829, 1) 2019-09-04T06:33:50.197+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18425 2019-09-04T06:33:50.197+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:50.197+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:50.197+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18425 2019-09-04T06:33:50.198+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18428 2019-09-04T06:33:50.198+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18428 2019-09-04T06:33:50.198+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578829, 1), t: 1 }({ ts: Timestamp(1567578829, 1), t: 1 }) 2019-09-04T06:33:50.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:50.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:50.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:50.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:50.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 2A076294130E581EA4A3E1DBDACDB41C1F5E112C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:50.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:50.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 2A076294130E581EA4A3E1DBDACDB41C1F5E112C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:50.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 2A076294130E581EA4A3E1DBDACDB41C1F5E112C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:50.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, 
durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194) } 2019-09-04T06:33:50.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 2A076294130E581EA4A3E1DBDACDB41C1F5E112C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:50.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:50.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999999 Now: 1000000000 2019-09-04T06:33:50.239+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:50.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:50.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:50.340+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:50.440+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:50.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:50.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:50.540+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:50.640+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:50.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:50.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:50.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:50.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:50.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:50.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:50.740+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:50.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:50.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:50.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:50.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1251) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:50.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1251 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:00.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:50.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the 
earliest retirement date is 2019-09-04T06:34:18.839+0000 2019-09-04T06:33:50.838+0000 D2 ASIO [Replication] Request 1251 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } 2019-09-04T06:33:50.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:50.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:50.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1251) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } 2019-09-04T06:33:50.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:33:50.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:52.838Z 2019-09-04T06:33:50.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest 
retirement date is 2019-09-04T06:34:18.839+0000 2019-09-04T06:33:50.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:50.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1252) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:50.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1252 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:00.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:50.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:18.839+0000 2019-09-04T06:33:50.839+0000 D2 ASIO [Replication] Request 1252 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } 2019-09-04T06:33:50.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:50.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:50.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1252) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 
1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } 2019-09-04T06:33:50.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:50.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:34:00.306+0000 2019-09-04T06:33:50.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:34:00.894+0000 2019-09-04T06:33:50.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:50.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:52.839Z 2019-09-04T06:33:50.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:50.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:20.839+0000 2019-09-04T06:33:50.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:20.839+0000 2019-09-04T06:33:50.840+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:50.940+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:50.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:50.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:51.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:51.040+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:51.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 2A076294130E581EA4A3E1DBDACDB41C1F5E112C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:51.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:51.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 2A076294130E581EA4A3E1DBDACDB41C1F5E112C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:51.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 2A076294130E581EA4A3E1DBDACDB41C1F5E112C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:51.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, 
durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194) } 2019-09-04T06:33:51.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 2A076294130E581EA4A3E1DBDACDB41C1F5E112C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:51.116+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35816 #415 (90 connections now open) 2019-09-04T06:33:51.116+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:51.116+0000 D2 COMMAND [conn415] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:51.116+0000 I NETWORK [conn415] received client metadata from 10.108.2.56:35816 conn415: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:51.116+0000 I COMMAND [conn415] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:51.133+0000 I COMMAND [conn385] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 7C0CF3FE10B568392F8A27C39D14989CCAE09485), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:51.133+0000 D1 - [conn385] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:51.133+0000 W - [conn385] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:51.141+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:51.151+0000 I - [conn385] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", 
"gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { 
"b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:51.151+0000 D1 COMMAND [conn385] assertion while executing command 'find' on database 
'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 7C0CF3FE10B568392F8A27C39D14989CCAE09485), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:51.151+0000 D1 - [conn385] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:51.151+0000 W - [conn385] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:51.173+0000 I - [conn385] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3",
"s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", 
"path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:51.173+0000 W COMMAND [conn385] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:33:51.173+0000 I COMMAND [conn385] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578791, 1), signature: { hash: BinData(0, 7C0CF3FE10B568392F8A27C39D14989CCAE09485), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:33:51.173+0000 D2 NETWORK [conn385] Session from 10.108.2.56:35802 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:51.173+0000 I NETWORK [conn385] end connection 10.108.2.56:35802 (89 connections now open) 2019-09-04T06:33:51.185+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:51.185+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:51.197+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:51.197+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:51.197+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578829, 1) 2019-09-04T06:33:51.197+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18445 2019-09-04T06:33:51.197+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:51.197+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:51.197+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18445 2019-09-04T06:33:51.198+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18448 2019-09-04T06:33:51.198+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18448 2019-09-04T06:33:51.198+0000 D3 REPL [rsSync-0] returning
minvalid: { ts: Timestamp(1567578829, 1), t: 1 }({ ts: Timestamp(1567578829, 1), t: 1 }) 2019-09-04T06:33:51.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:51.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:51.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:51.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:51.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:51.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:51.241+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:51.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:51.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:51.341+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:51.441+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:51.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:51.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:51.541+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:51.641+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:51.647+0000 I COMMAND [conn386] Command on database config timed out waiting for read concern to be satisfied. 
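The command document for conn386 follows below, and the waitUntilOpTime entries for conn401, conn416 and conn417 just after it make the cause of all these timeouts easy to read off: each stalled find asks for afterOpTime { ts: Timestamp(1566459168, 1), t: 92 }, while the newest majority snapshot this node has is { ts: Timestamp(1567578829, 1), t: 1 }. The requested timestamp is almost two weeks older, yet its term (92) is far higher than the node's current term (1), which is consistent with the config replica set having been rebuilt while the mongos routers kept their cached $configServerState. A wait keyed to term 92 can then never be satisfied, so every such command burns its full 30-second budget. Decoding the two opTimes as a plain sanity check:

    from datetime import datetime, timezone

    # (seconds, term) pairs taken from the log entries around this point.
    requested = (1566459168, 92)  # afterOpTime the routers keep asking for
    current = (1567578829, 1)     # newest majority snapshot on this node

    for label, (secs, term) in (("requested", requested), ("current", current)):
        when = datetime.fromtimestamp(secs, tz=timezone.utc)
        print(f"{label}: {when:%Y-%m-%dT%H:%M:%SZ} term={term}")

    # requested: 2019-08-22T07:32:48Z term=92
    # current:   2019-09-04T06:33:49Z term=1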
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578798, 1), signature: { hash: BinData(0, F12D585EE0967CC5135F5BADB38B8673484018E2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:51.648+0000 D1 - [conn386] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:51.648+0000 W - [conn386] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:51.650+0000 D2 COMMAND [conn401] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 5FF0CCAFC787DA30065B8DDE1CC8B095FDBF43F0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:51.650+0000 D1 REPL [conn401] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578829, 1), t: 1 } 2019-09-04T06:33:51.650+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000 2019-09-04T06:33:51.650+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:51.650+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45870 #416 (90 connections now open) 2019-09-04T06:33:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:51.650+0000 D2 COMMAND [conn416] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:51.650+0000 I NETWORK [conn416] received client metadata from 10.108.2.72:45870 conn416: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:51.650+0000 I COMMAND [conn416] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49318 #417 (91 
connections now open) 2019-09-04T06:33:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:51.651+0000 D2 COMMAND [conn416] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 2206AF5AF7A132B84E6DD84FBEAA8BC020142021), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:51.651+0000 D1 REPL [conn416] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578829, 1), t: 1 } 2019-09-04T06:33:51.651+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:33:51.651+0000 D2 COMMAND [conn417] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:51.651+0000 I NETWORK [conn417] received client metadata from 10.108.2.54:49318 conn417: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:51.651+0000 I COMMAND [conn417] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:51.651+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:51.651+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:51.651+0000 D2 COMMAND [conn417] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578828, 1), signature: { hash: BinData(0, 12856C2B1973243F9842A98B6AAFA0F9B961DA7C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:51.651+0000 D1 REPL [conn417] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578829, 1), t: 1 } 2019-09-04T06:33:51.651+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:33:51.652+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: 
"admin" } 2019-09-04T06:33:51.652+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:51.663+0000 I COMMAND [conn362] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 688717A47F9056E54E2A82934E3E688DB3858182), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:33:51.663+0000 D1 - [conn362] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:51.663+0000 W - [conn362] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:51.663+0000 I COMMAND [conn372] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578793, 1), signature: { hash: BinData(0, EFC3E3CF57AD6C85EB1B9AA6CC52309780658F29), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:51.663+0000 D1 - [conn372] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:51.663+0000 W - [conn372] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:51.664+0000 I - [conn386] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:51.665+0000 D1 COMMAND [conn386] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578798, 1), signature: { hash: BinData(0, F12D585EE0967CC5135F5BADB38B8673484018E2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:51.665+0000 D1 - [conn386] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:51.665+0000 W - [conn386] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:51.685+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:51.685+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:51.695+0000 I - [conn362] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 
0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : 
"AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : 
"D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:51.695+0000 D1 COMMAND [conn362] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 688717A47F9056E54E2A82934E3E688DB3858182), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:51.695+0000 D1 - [conn362] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:51.695+0000 W - [conn362] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:51.712+0000 D2 COMMAND [conn6] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:51.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:51.718+0000 I - [conn362] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:51.718+0000 W COMMAND [conn362] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:33:51.718+0000 I COMMAND [conn362] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 688717A47F9056E54E2A82934E3E688DB3858182), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30043ms 2019-09-04T06:33:51.718+0000 D2 NETWORK [conn362] Session from 10.108.2.73:52238 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:51.718+0000 I NETWORK [conn362] end connection 10.108.2.73:52238 (90 connections now open) 2019-09-04T06:33:51.720+0000 I - [conn372] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:51.720+0000 D1 COMMAND [conn372] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578793, 1), signature: { hash: BinData(0, EFC3E3CF57AD6C85EB1B9AA6CC52309780658F29), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:51.720+0000 D1 - [conn372] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:51.720+0000 W - [conn372] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:51.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:51.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:51.739+0000 I - [conn386] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 
0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, 
"buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : 
"B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 
2019-09-04T06:33:51.740+0000 W COMMAND [conn386] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:33:51.740+0000 I COMMAND [conn386] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578798, 1), signature: { hash: BinData(0, F12D585EE0967CC5135F5BADB38B8673484018E2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:33:51.740+0000 D2 NETWORK [conn386] Session from 10.108.2.44:38790 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:51.740+0000 I NETWORK [conn386] end connection 10.108.2.44:38790 (89 connections now open) 2019-09-04T06:33:51.742+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:51.743+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:51.743+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:51.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47310 #418 (90 connections now open) 2019-09-04T06:33:51.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:51.743+0000 D2 COMMAND [conn418] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:51.743+0000 I NETWORK [conn418] received client metadata from 10.108.2.52:47310 conn418: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:51.743+0000 I COMMAND [conn418] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:51.744+0000 D2 COMMAND [conn418] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 2206AF5AF7A132B84E6DD84FBEAA8BC020142021), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
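Each of these failing operations is the same balancer-settings lookup that mongos and shard nodes issue against the config server: a single-document find on config.settings for { _id: "balancer" }, with readConcern majority pinned to an afterOpTime from term 92 and maxTimeMS 30000. As the waitUntilOpTime lines that follow show, this node's current majority snapshot is in term 1, so a wait for a term-92 opTime cannot be satisfied here, and each such read burns its full 30s budget before failing with errCode:50 (MaxTimeMSExpired). A minimal pymongo sketch of the same read, with a placeholder connection string; driver find helpers do not expose afterOpTime, so this sends the raw command, and the $-prefixed internal fields from the log are deliberately omitted:

    from bson import Timestamp
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")  # placeholder URI

    cmd = {
        "find": "settings",
        "filter": {"_id": "balancer"},
        "limit": 1,
        "maxTimeMS": 30000,
        # The read concern the clients are sending: majority, pinned to an
        # opTime from the old term 92 (values copied from the log above).
        "readConcern": {
            "level": "majority",
            "afterOpTime": {"ts": Timestamp(1566459168, 1), "t": 92},
        },
    }
    try:
        print(client["config"].command(cmd))
    except ExecutionTimeout as exc:
        # pymongo raises ExecutionTimeout for server code 50, matching the
        # errName:MaxTimeMSExpired entries above.
        print("timed out:", exc)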
2019-09-04T06:33:51.744+0000 D1 REPL [conn418] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578829, 1), t: 1 } 2019-09-04T06:33:51.744+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000 2019-09-04T06:33:51.755+0000 I COMMAND [conn377] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 688717A47F9056E54E2A82934E3E688DB3858182), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:51.755+0000 D1 - [conn377] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:51.755+0000 W - [conn377] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:51.756+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48490 #419 (91 connections now open) 2019-09-04T06:33:51.756+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:51.756+0000 D2 COMMAND [conn419] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:51.756+0000 I NETWORK [conn419] received client metadata from 10.108.2.59:48490 conn419: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:51.757+0000 I COMMAND [conn419] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:51.757+0000 D2 COMMAND [conn419] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, C4CFA6E23777FD680628F991FF985CB66BF77E05), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:33:51.757+0000 D1 REPL [conn419] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578829, 1), t: 1 } 2019-09-04T06:33:51.757+0000 D3 REPL 
[conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000 2019-09-04T06:33:51.759+0000 I - [conn372] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon 
Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : 
"/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:51.759+0000 W COMMAND [conn372] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:33:51.759+0000 I COMMAND [conn372] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578793, 1), signature: { hash: BinData(0, EFC3E3CF57AD6C85EB1B9AA6CC52309780658F29), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30069ms 2019-09-04T06:33:51.759+0000 D2 NETWORK [conn372] Session from 10.108.2.58:52240 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:51.759+0000 I NETWORK [conn372] end connection 10.108.2.58:52240 (90 connections now open) 2019-09-04T06:33:51.776+0000 I - [conn377] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMach
ine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:51.776+0000 D1 COMMAND [conn377] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 688717A47F9056E54E2A82934E3E688DB3858182), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:51.776+0000 D1 - [conn377] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:51.776+0000 W - [conn377] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:51.797+0000 I - [conn377] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:51.797+0000 W COMMAND [conn377] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:51.797+0000 I COMMAND [conn377] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578792, 1), signature: { hash: BinData(0, 688717A47F9056E54E2A82934E3E688DB3858182), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30033ms 2019-09-04T06:33:51.797+0000 D2 NETWORK [conn377] Session from 10.108.2.52:47284 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:51.797+0000 I NETWORK [conn377] end connection 10.108.2.52:47284 (89 connections now open) 2019-09-04T06:33:51.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:51.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:51.842+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:51.942+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:51.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:51.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:52.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:52.042+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:52.043+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50248 #420 (90 connections now open) 2019-09-04T06:33:52.043+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:52.043+0000 D2 COMMAND [conn420] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:52.043+0000 I NETWORK [conn420] received client metadata from 10.108.2.50:50248 conn420: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:52.043+0000 I COMMAND [conn420] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:52.057+0000 I COMMAND [conn388] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, F1B7200DB4115FB50539E5C981CB65CEC6CB7F66), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:52.058+0000 D1 - [conn388] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:52.058+0000 W - [conn388] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:52.076+0000 I - [conn388] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:52.077+0000 D1 COMMAND [conn388] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, F1B7200DB4115FB50539E5C981CB65CEC6CB7F66), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:52.077+0000 D1 - [conn388] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:52.077+0000 W - [conn388] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:52.098+0000 I - [conn388] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"
b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : 
"DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) 
[0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:52.098+0000 W COMMAND [conn388] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:33:52.098+0000 I COMMAND [conn388] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, F1B7200DB4115FB50539E5C981CB65CEC6CB7F66), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms 2019-09-04T06:33:52.099+0000 D2 NETWORK [conn388] Session from 10.108.2.50:50230 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:52.099+0000 I NETWORK [conn388] end connection 10.108.2.50:50230 (89 connections now open) 2019-09-04T06:33:52.142+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:52.150+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:52.150+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:52.150+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:52.150+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:52.151+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:52.151+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:52.197+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:52.197+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:52.197+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578829, 1) 2019-09-04T06:33:52.197+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18479 2019-09-04T06:33:52.197+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: 
UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:52.197+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:52.197+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18479 2019-09-04T06:33:52.198+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18482 2019-09-04T06:33:52.198+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18482 2019-09-04T06:33:52.198+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578829, 1), t: 1 }({ ts: Timestamp(1567578829, 1), t: 1 }) 2019-09-04T06:33:52.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:52.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:52.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:52.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:52.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 2A076294130E581EA4A3E1DBDACDB41C1F5E112C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:52.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:52.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 2A076294130E581EA4A3E1DBDACDB41C1F5E112C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:52.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 2A076294130E581EA4A3E1DBDACDB41C1F5E112C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:52.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194) } 2019-09-04T06:33:52.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 2A076294130E581EA4A3E1DBDACDB41C1F5E112C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:52.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. 
Num: 0 2019-09-04T06:33:52.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:52.242+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:52.242+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:52.242+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:52.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:52.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:52.343+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:52.443+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:52.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:52.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:52.543+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:52.600+0000 I COMMAND [conn389] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 815C511F9B78AD09330F18D8946C64B00E8E0783), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:33:52.600+0000 D1 - [conn389] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:52.600+0000 W - [conn389] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:52.617+0000 I - [conn389] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:52.617+0000 D1 COMMAND [conn389] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 815C511F9B78AD09330F18D8946C64B00E8E0783), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:52.617+0000 D1 - [conn389] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:52.617+0000 W - [conn389] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:52.637+0000 I - [conn389] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
2019-09-04T06:33:52.637+0000 I - [conn389] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(+0xC90D34) [0x561749c18d34]
 mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
 mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
 mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
 mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
 mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:33:52.637+0000 W COMMAND [conn389] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:33:52.637+0000 I COMMAND [conn389] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 815C511F9B78AD09330F18D8946C64B00E8E0783), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms
2019-09-04T06:33:52.637+0000 D2 NETWORK [conn389] Session from 10.108.2.74:51898 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:33:52.637+0000 I NETWORK [conn389] end connection 10.108.2.74:51898 (88 connections now open)
2019-09-04T06:33:52.643+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:52.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:52.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:52.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:52.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:52.743+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:52.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:52.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:52.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:52.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1253) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:52.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1253 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:02.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:52.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:20.839+0000
2019-09-04T06:33:52.838+0000 D2 ASIO [Replication] Request 1253 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) }
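The timed-out find above is evidently a mongos probing the balancer settings document. Its readConcern asks the server to wait until afterOpTime { ts: Timestamp(1566459161, 3), t: 92 } is majority-committed, yet every heartbeat in this log shows the config replica set running at term 1, and the waitUntilOpTime lines further down show the node waiting for a term-92 optime that its majority snapshot can never reach. The command can therefore only finish by exhausting its maxTimeMS of 30000 ms, which matches the MaxTimeMSExpired result after 30030ms. A minimal pymongo sketch of the client-visible part of such a request (illustrative only: the host comes from this log, and afterOpTime is omitted because it belongs to the internal mongos-to-config-server protocol rather than any driver API):

    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    # Connect straight to the config server that wrote this log (illustrative).
    client = MongoClient("mongodb://cmodb803.togewa.com:27019")

    # Same namespace, filter, read concern level, limit and time cap as the logged find.
    settings = client.config.get_collection("settings", read_concern=ReadConcern("majority"))
    try:
        for doc in settings.find({"_id": "balancer"}).limit(1).max_time_ms(30000):
            print(doc)
    except ExecutionTimeout:
        # The server answered MaxTimeMSExpired, exactly as conn389 saw above.
        print("operation exceeded time limit")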
2019-09-04T06:33:52.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:52.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:52.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1253) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) }
2019-09-04T06:33:52.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:33:52.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:54.838Z
2019-09-04T06:33:52.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:20.839+0000
2019-09-04T06:33:52.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:52.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1254) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:52.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1254 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:02.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:52.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:20.839+0000
2019-09-04T06:33:52.839+0000 D2 ASIO [Replication] Request 1254 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) }
2019-09-04T06:33:52.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:33:52.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:52.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1254) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) }
2019-09-04T06:33:52.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:33:52.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:34:00.894+0000
2019-09-04T06:33:52.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:34:03.026+0000
2019-09-04T06:33:52.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:33:52.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:54.839Z
2019-09-04T06:33:52.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:52.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:22.839+0000
2019-09-04T06:33:52.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:22.839+0000
2019-09-04T06:33:52.843+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:52.943+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:52.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:52.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:53.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:53.044+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:53.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 7CB99152FE0C561E29B22E03B59B619C4E7CD305), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:53.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:33:53.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 7CB99152FE0C561E29B22E03B59B619C4E7CD305), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:53.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 7CB99152FE0C561E29B22E03B59B619C4E7CD305), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:53.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194) }
2019-09-04T06:33:53.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 7CB99152FE0C561E29B22E03B59B619C4E7CD305), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:53.079+0000 D2 COMMAND [conn101] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:53.079+0000 I COMMAND [conn101] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:53.144+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:53.198+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
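The REPL_HB exchange above runs on a fixed rhythm: this member heartbeats each peer every 2 seconds (compare the send times with the "Scheduling heartbeat ... at ..." lines), and each successful response from the primary cancels and reschedules the election timeout roughly ten seconds out (the ELECTION "Postponing" and "Scheduling election timeout callback" pairs). A schematic Python model of that reset pattern, not MongoDB source; the 10-second base matches mongod's default electionTimeoutMillis of 10000, and the random offset stands in for the jitter visible in the scheduled callback times:

    import random
    import time

    HEARTBEAT_INTERVAL_S = 2.0   # heartbeats above are rescheduled 2s apart
    ELECTION_TIMEOUT_S = 10.0    # mongod default electionTimeoutMillis is 10000

    def next_election_deadline(now: float) -> float:
        # Randomized offset so that secondaries do not all stand for election at once.
        return now + ELECTION_TIMEOUT_S + random.uniform(0.0, 1.5)

    deadline = next_election_deadline(time.time())

    def on_primary_heartbeat() -> None:
        # Mirrors the "Canceling ..." / "Scheduling election timeout callback" pairs above.
        global deadline
        deadline = next_election_deadline(time.time())

    def should_call_election(now: float) -> bool:
        # Only true once the primary has been silent past the deadline.
        return now >= deadline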
2019-09-04T06:33:53.198+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:53.198+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578829, 1)
2019-09-04T06:33:53.198+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18498
2019-09-04T06:33:53.198+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:53.198+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:53.198+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18498
2019-09-04T06:33:53.198+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18501
2019-09-04T06:33:53.198+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18501
2019-09-04T06:33:53.198+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578829, 1), t: 1 }({ ts: Timestamp(1567578829, 1), t: 1 })
2019-09-04T06:33:53.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:53.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:53.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:53.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:53.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:53.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:53.244+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:53.304+0000 D2 COMMAND [conn157] run command admin.$cmd { ping: 1, $db: "admin" }
2019-09-04T06:33:53.304+0000 I COMMAND [conn157] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:53.315+0000 D2 COMMAND [conn218] run command admin.$cmd { ping: 1.0, lsid: { id: UUID("ac8e303f-4e60-4a79-b9a4-f7cba7354076") }, $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" }
2019-09-04T06:33:53.316+0000 I COMMAND [conn218] command admin.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("ac8e303f-4e60-4a79-b9a4-f7cba7354076") }, $clusterTime: { clusterTime: Timestamp(1567578770, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:53.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:53.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:53.344+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:53.444+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:53.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:53.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:53.544+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:53.644+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:53.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:53.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:53.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:53.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:53.745+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:53.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:53.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:53.845+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:53.945+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:53.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:53.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:54.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:54.045+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:54.141+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:54.141+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:54.142+0000 I NETWORK [listener] connection accepted from 10.108.2.46:41116 #421 (89 connections now open)
2019-09-04T06:33:54.142+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:54.142+0000 D2 COMMAND [conn421] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:54.142+0000 I NETWORK [conn421] received client metadata from 10.108.2.46:41116 conn421: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:54.142+0000 I COMMAND [conn421] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:54.143+0000 D2 COMMAND [conn421] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578825, 1), signature: { hash: BinData(0, 3537EBC8F1D4EC0EF230F13791772B6E3C891595), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:33:54.143+0000 D1 REPL [conn421] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578829, 1), t: 1 }
2019-09-04T06:33:54.143+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000
2019-09-04T06:33:54.145+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:54.198+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:54.198+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:33:54.198+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578829, 1)
2019-09-04T06:33:54.198+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18518
2019-09-04T06:33:54.198+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:33:54.198+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:33:54.198+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18518
2019-09-04T06:33:54.198+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18521
2019-09-04T06:33:54.198+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18521
2019-09-04T06:33:54.198+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578829, 1), t: 1 }({ ts: Timestamp(1567578829, 1), t: 1 })
2019-09-04T06:33:54.200+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:54.200+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:54.200+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1255 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:24.200+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:54.200+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:19.200+0000
2019-09-04T06:33:54.200+0000 D2 ASIO [RS] Request 1255 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) }
2019-09-04T06:33:54.200+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:54.200+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:54.200+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:19.200+0000
2019-09-04T06:33:54.200+0000 D2 ASIO [RS] Request 1244 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpApplied: { ts: Timestamp(1567578829, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) }
2019-09-04T06:33:54.200+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpApplied: { ts: Timestamp(1567578829, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:54.200+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:54.200+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:33:54.200+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:03.026+0000
2019-09-04T06:33:54.200+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:05.067+0000
2019-09-04T06:33:54.200+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1256 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:04.200+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578829, 1), t: 1 } }
2019-09-04T06:33:54.200+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:54.200+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:22.839+0000
2019-09-04T06:33:54.200+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:19.200+0000
2019-09-04T06:33:54.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:54.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:54.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:54.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:54.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 23A1301B109301FF7B16D17A3D65F6AA7B35C51A), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:54.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:33:54.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 23A1301B109301FF7B16D17A3D65F6AA7B35C51A), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:54.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 23A1301B109301FF7B16D17A3D65F6AA7B35C51A), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:54.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194) }
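RemoteCommand 1256 above is the replication oplog fetcher at work: a getMore against the sync source's local.oplog.rs with maxTimeMS: 5000, that is, an awaitData tail that returns an empty batch after about five seconds of waiting when nothing new arrives, which is exactly the "oplog fetcher read 0 operations from remote oplog" outcome logged for request 1244. The same tailing pattern can be sketched with pymongo against the sync source named in this log (illustrative only; real replication uses the internal fetcher, not a driver connection like this one):

    from pymongo import CursorType, MongoClient

    # The sync source from this log; a driver-side stand-in for the internal fetcher.
    sync_source = MongoClient("mongodb://cmodb804.togewa.com:27019")
    oplog = sync_source.local["oplog.rs"]

    # Tail the capped oplog collection, blocking up to 5s per getMore,
    # mirroring the maxTimeMS: 5000 on RemoteCommand 1256 above.
    cursor = oplog.find({}, cursor_type=CursorType.TAILABLE_AWAIT).max_await_time_ms(5000)
    for entry in cursor:
        print(entry["ts"], entry["op"], entry.get("ns"))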
2019-09-04T06:33:54.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 23A1301B109301FF7B16D17A3D65F6AA7B35C51A), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:54.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:54.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:54.245+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:54.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:54.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:54.345+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:54.445+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:54.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:54.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:54.546+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:54.641+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:54.641+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:54.646+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:54.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:54.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:54.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:54.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:54.746+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:54.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:54.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:54.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:54.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1257) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:54.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1257 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:04.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:54.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:22.839+0000
2019-09-04T06:33:54.838+0000 D2 ASIO [Replication] Request 1257 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) }
2019-09-04T06:33:54.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:54.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:54.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1257) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) }
2019-09-04T06:33:54.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:33:54.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:33:56.838Z
2019-09-04T06:33:54.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:22.839+0000
2019-09-04T06:33:54.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:54.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1258) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:54.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1258 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:04.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:33:54.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:22.839+0000
2019-09-04T06:33:54.839+0000 D2 ASIO [Replication] Request 1258 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) }
2019-09-04T06:33:54.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:33:54.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:54.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1258) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578829, 1) }
2019-09-04T06:33:54.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:33:54.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:34:05.067+0000
2019-09-04T06:33:54.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:34:05.522+0000
2019-09-04T06:33:54.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:33:54.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:56.839Z
2019-09-04T06:33:54.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:33:54.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:24.839+0000
2019-09-04T06:33:54.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:24.839+0000
2019-09-04T06:33:54.846+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:54.946+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:54.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:54.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:55.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:55.046+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:55.049+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36786 #422 (90 connections now open)
2019-09-04T06:33:55.049+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:55.049+0000 D2 COMMAND [conn422] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:55.049+0000 I NETWORK [conn422] received client metadata from 10.108.2.55:36786 conn422: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:55.049+0000 I COMMAND [conn422] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:55.050+0000 D2 COMMAND [conn422] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578831, 1), signature: { hash: BinData(0, 5BA3C2225B23EDB083DE9DCD7C1516D49AA24879), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:55.050+0000 D1 REPL [conn422] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578829, 1), t: 1 }
2019-09-04T06:33:55.050+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000
2019-09-04T06:33:55.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 23A1301B109301FF7B16D17A3D65F6AA7B35C51A), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:55.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:33:55.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 23A1301B109301FF7B16D17A3D65F6AA7B35C51A), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:55.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 23A1301B109301FF7B16D17A3D65F6AA7B35C51A), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:55.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), opTime: { ts: Timestamp(1567578829, 1), t: 1 }, wallTime: new Date(1567578829194) }
2019-09-04T06:33:55.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578832, 1), signature: { hash: BinData(0, 23A1301B109301FF7B16D17A3D65F6AA7B35C51A), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:55.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:55.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:55.146+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:55.151+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:55.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:55.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:55.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:55.198+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:33:55.198+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ 
RecordId(10) 2019-09-04T06:33:55.198+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578829, 1) 2019-09-04T06:33:55.198+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18542 2019-09-04T06:33:55.198+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:55.198+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:55.198+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18542 2019-09-04T06:33:55.199+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18545 2019-09-04T06:33:55.199+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18545 2019-09-04T06:33:55.199+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578829, 1), t: 1 }({ ts: Timestamp(1567578829, 1), t: 1 }) 2019-09-04T06:33:55.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:55.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:55.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:55.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:55.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:55.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:33:55.247+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:55.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:55.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:55.347+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:55.447+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:55.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:55.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:55.547+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:55.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:55.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:55.647+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:55.651+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:55.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:55.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:55.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:55.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:55.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:55.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:55.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:55.747+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:55.761+0000 D2 ASIO [RS] Request 1256 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578835, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578835753), o: { $v: 1, $set: { ping: new Date(1567578835752) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpApplied: { ts: Timestamp(1567578835, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: 
BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) } 2019-09-04T06:33:55.761+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578835, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578835753), o: { $v: 1, $set: { ping: new Date(1567578835752) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpApplied: { ts: Timestamp(1567578835, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578829, 1), $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:55.761+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:55.761+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578835, 1) and ending at ts: Timestamp(1567578835, 1) 2019-09-04T06:33:55.761+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:05.522+0000 2019-09-04T06:33:55.761+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:06.191+0000 2019-09-04T06:33:55.761+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:55.761+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:24.839+0000 2019-09-04T06:33:55.761+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578835, 1), t: 1 } 2019-09-04T06:33:55.761+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:55.761+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:55.761+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578829, 1) 2019-09-04T06:33:55.761+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18559 2019-09-04T06:33:55.761+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:55.761+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:55.761+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18559 2019-09-04T06:33:55.761+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, 
provided timestamp: none 2019-09-04T06:33:55.761+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:33:55.761+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:55.761+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578829, 1) 2019-09-04T06:33:55.761+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18562 2019-09-04T06:33:55.761+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578835, 1) } 2019-09-04T06:33:55.761+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:55.761+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:55.761+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18562 2019-09-04T06:33:55.761+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18546 2019-09-04T06:33:55.762+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18546 2019-09-04T06:33:55.762+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18565 2019-09-04T06:33:55.762+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18565 2019-09-04T06:33:55.762+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:55.762+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 18567 2019-09-04T06:33:55.762+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578835, 1) 2019-09-04T06:33:55.762+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578835, 1) 2019-09-04T06:33:55.762+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 18567 2019-09-04T06:33:55.762+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:55.762+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:33:55.762+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18566 2019-09-04T06:33:55.762+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18566 2019-09-04T06:33:55.762+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18569 2019-09-04T06:33:55.762+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18569 2019-09-04T06:33:55.762+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578835, 1), t: 1 }({ ts: Timestamp(1567578835, 1), t: 1 }) 2019-09-04T06:33:55.762+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578835, 1) 2019-09-04T06:33:55.762+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18570 2019-09-04T06:33:55.762+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578835, 1) } } ] } sort: {} projection: {} 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"local.replset.minvalid" } 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578835, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578835, 1) || First: notFirst: full path: ts 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578835, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578835, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578835, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578835, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:55.762+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18570
2019-09-04T06:33:55.762+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:33:55.762+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:55.762+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578835, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578835753), o: { $v: 1, $set: { ping: new Date(1567578835752) } } }, oplog application mode: Secondary
2019-09-04T06:33:55.762+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578835, 1)
2019-09-04T06:33:55.762+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 18572
2019-09-04T06:33:55.762+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }
2019-09-04T06:33:55.762+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:33:55.762+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 18572
2019-09-04T06:33:55.762+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:33:55.762+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578835, 1), t: 1 }({ ts: Timestamp(1567578835, 1), t: 1 })
2019-09-04T06:33:55.762+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578835, 1)
2019-09-04T06:33:55.762+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18571
2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:33:55.762+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:33:55.762+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:33:55.762+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18571
2019-09-04T06:33:55.762+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578835, 1)
2019-09-04T06:33:55.762+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18575
2019-09-04T06:33:55.763+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18575
2019-09-04T06:33:55.763+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578835, 1), t: 1 }({ ts: Timestamp(1567578835, 1), t: 1 })
2019-09-04T06:33:55.763+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:55.763+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:55.763+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1259 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:25.763+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578829, 1), t: 1 }, lastCommittedWall: new Date(1567578829194), lastOpVisible: { ts: Timestamp(1567578829, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:55.763+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:25.762+0000
2019-09-04T06:33:55.763+0000 D2 ASIO [RS] Request 1259 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) }
2019-09-04T06:33:55.763+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:55.763+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:55.763+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:25.763+0000
2019-09-04T06:33:55.763+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578835, 1), t: 1 }
2019-09-04T06:33:55.763+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1260 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:05.763+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578829, 1), t: 1 } }
2019-09-04T06:33:55.763+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:25.763+0000
2019-09-04T06:33:55.764+0000 D2 ASIO [RS] Request 1260 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpApplied: { ts: Timestamp(1567578835, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) }
2019-09-04T06:33:55.764+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpApplied: { ts: Timestamp(1567578835, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:55.764+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:55.764+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:33:55.764+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578830, 1)
2019-09-04T06:33:55.764+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:06.191+0000
2019-09-04T06:33:55.764+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:05.797+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn400] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn400] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:03.490+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn392] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn392] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:58.760+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000
2019-09-04T06:33:55.764+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1261 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:05.764+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578835, 1), t: 1 } }
2019-09-04T06:33:55.764+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:33:55.764+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:24.839+0000
2019-09-04T06:33:55.764+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:25.763+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn395] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn395] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.702+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn394] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn394] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.261+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn397] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn397] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.962+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn398] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn398] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.987+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn391] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn391] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:33:56.309+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn399] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn399] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:02.478+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn387] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn387] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.455+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn396] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn396] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.897+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578835, 1), t: 1 }, 2019-09-04T06:33:55.753+0000
2019-09-04T06:33:55.764+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000
2019-09-04T06:33:55.773+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:33:55.773+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:33:55.773+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:55.773+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1262 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:25.773+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, durableWallTime: new Date(1567578829194), appliedOpTime: { ts: Timestamp(1567578829, 1), t: 1 }, appliedWallTime: new Date(1567578829194), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:33:55.773+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:25.763+0000
2019-09-04T06:33:55.773+0000 D2 ASIO [RS] Request 1262 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) }
2019-09-04T06:33:55.773+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:33:55.773+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:33:55.773+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:25.763+0000
2019-09-04T06:33:55.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:55.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:55.847+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:55.861+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578835, 1)
2019-09-04T06:33:55.948+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:55.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:55.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:56.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:33:56.048+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:56.059+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:33:56.059+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:33:56.060+0000 D2 COMMAND [conn90] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:33:56.060+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms
2019-09-04T06:33:56.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:56.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:56.148+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:56.151+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:56.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:56.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:56.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:56.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:56.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:56.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:56.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:56.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, F761C317C528CCC9FC1F26067730B593020558DD), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:56.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:33:56.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, F761C317C528CCC9FC1F26067730B593020558DD), keyId: 6727891476899954718 } }, $db: "admin" }
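
Note: the conn422 and conn391 finds in this log carry readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } } with maxTimeMS: 30000, so the server holds each request until that optime is majority-committed; the current snapshot is still in term 1, and the 30-second budget runs out first (the MaxTimeMSExpired assertion for conn391 appears just below). A hedged pymongo sketch of the client-side shape of such a read; drivers expose the "majority" level and the time limit, while afterOpTime itself is internal (mongos attaches it when querying config servers), so it does not appear here. Host/port are from this log; the rest is illustrative:

    # sketch.py -- hypothetical illustration, not part of the log
    from pymongo import MongoClient, ReadPreference
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
    config = client.get_database("config", read_concern=ReadConcern("majority"))
    shards = config.shards.with_options(read_preference=ReadPreference.NEAREST)

    try:
        # 30 s cap, mirroring maxTimeMS: 30000 in the logged command.
        print(shards.find_one({}, max_time_ms=30000))
    except ExecutionTimeout:
        # Raised when the server reports MaxTimeMSExpired, as conn391 does below.
        pass
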
2019-09-04T06:33:56.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, F761C317C528CCC9FC1F26067730B593020558DD), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:33:56.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), opTime: { ts: Timestamp(1567578835, 1), t: 1 }, wallTime: new Date(1567578835753) }
2019-09-04T06:33:56.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, F761C317C528CCC9FC1F26067730B593020558DD), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:56.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:33:56.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:33:56.248+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:33:56.296+0000 I NETWORK [listener] connection accepted from 10.108.2.60:44974 #423 (91 connections now open)
2019-09-04T06:33:56.296+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:33:56.296+0000 D2 COMMAND [conn423] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:33:56.296+0000 I NETWORK [conn423] received client metadata from 10.108.2.60:44974 conn423: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:33:56.296+0000 I COMMAND [conn423] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:33:56.310+0000 I COMMAND [conn391] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578803, 1), signature: { hash: BinData(0, 4C13B88C6EFA3D421BD308E53A9A6ACE16902623), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:33:56.310+0000 D1 - [conn391] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:33:56.310+0000 W - [conn391] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:56.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:33:56.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:33:56.327+0000 I - [conn391] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
 mongod(+0x10FBF24) [0x56174a083f24]
 mongod(+0x10FCE0E) [0x56174a084e0e]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:33:56.327+0000 D1 COMMAND [conn391] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578803, 1), signature: { hash: BinData(0, 4C13B88C6EFA3D421BD308E53A9A6ACE16902623), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:56.327+0000 D1 - [conn391] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:33:56.327+0000 W - [conn391] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:33:56.347+0000 I - [conn391] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19Servi
ceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : 
"799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:56.347+0000 W COMMAND [conn391] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:33:56.347+0000 I COMMAND [conn391] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578803, 1), signature: { hash: BinData(0, 4C13B88C6EFA3D421BD308E53A9A6ACE16902623), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:33:56.347+0000 D2 NETWORK [conn391] Session from 10.108.2.60:44960 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:56.347+0000 I NETWORK [conn391] end connection 10.108.2.60:44960 (90 connections now open) 2019-09-04T06:33:56.348+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:56.448+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:56.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:56.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:56.548+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:56.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:56.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:56.649+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:56.651+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:56.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:56.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:56.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
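
The pair of MaxTimeMSExpired assertions above come from a single request: a find on config.shards carrying maxTimeMS: 30000 and readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }; the slow-op entry confirms the server gave up at 30027ms with errCode:50. A minimal pymongo sketch of this kind of bounded majority read follows. The connection string reuses the host and port from this log, but the client code itself is illustrative; note that the afterOpTime field seen in the logged command is injected internally by mongos and is not something a driver exposes.

    # Hedged sketch: a find on config.shards with read concern "majority" and a
    # 30s server-side time limit, mirroring the logged command.
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    shards = client.config.get_collection("shards", read_concern=ReadConcern("majority"))

    try:
        # max_time_ms corresponds to maxTimeMS: 30000 in the logged command.
        print(list(shards.find({}, max_time_ms=30000)))
    except ExecutionTimeout:
        # Raised when the server replies errName:MaxTimeMSExpired (errCode:50),
        # as in the 30027ms slow-op entry above.
        print("read on config.shards exceeded its 30s time limit")
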
2019-09-04T06:33:56.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:56.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:56.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:56.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:56.749+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:56.762+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:56.762+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:56.762+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578835, 1) 2019-09-04T06:33:56.762+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18598 2019-09-04T06:33:56.762+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:56.762+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:56.762+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18598 2019-09-04T06:33:56.763+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18601 2019-09-04T06:33:56.763+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18601 2019-09-04T06:33:56.763+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578835, 1), t: 1 }({ ts: Timestamp(1567578835, 1), t: 1 }) 2019-09-04T06:33:56.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:56.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:56.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:56.838+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:33:55.063+0000 2019-09-04T06:33:56.838+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:33:56.234+0000 2019-09-04T06:33:56.838+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:33:55.063+0000 2019-09-04T06:33:56.838+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:34:05.063+0000 2019-09-04T06:33:56.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:56.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1263) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:56.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1263 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:06.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:56.838+0000 D3 EXECUTOR [replexec-3] Not reaping 
because the earliest retirement date is 2019-09-04T06:34:26.838+0000 2019-09-04T06:33:56.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:26.838+0000 2019-09-04T06:33:56.838+0000 D2 ASIO [Replication] Request 1263 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), opTime: { ts: Timestamp(1567578835, 1), t: 1 }, wallTime: new Date(1567578835753), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) } 2019-09-04T06:33:56.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), opTime: { ts: Timestamp(1567578835, 1), t: 1 }, wallTime: new Date(1567578835753), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:56.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:56.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1263) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), opTime: { ts: Timestamp(1567578835, 1), t: 1 }, wallTime: new Date(1567578835753), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) } 2019-09-04T06:33:56.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:33:56.838+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to 
cmodb804.togewa.com:27019 at 2019-09-04T06:33:58.838Z 2019-09-04T06:33:56.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:26.838+0000 2019-09-04T06:33:56.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:56.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1264) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:56.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1264 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:06.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:56.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:26.838+0000 2019-09-04T06:33:56.839+0000 D2 ASIO [Replication] Request 1264 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), opTime: { ts: Timestamp(1567578835, 1), t: 1 }, wallTime: new Date(1567578835753), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) } 2019-09-04T06:33:56.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), opTime: { ts: Timestamp(1567578835, 1), t: 1 }, wallTime: new Date(1567578835753), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:56.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:56.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1264) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), opTime: { ts: Timestamp(1567578835, 1), t: 1 }, wallTime: new Date(1567578835753), $replData: { term: 1, 
lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) } 2019-09-04T06:33:56.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:56.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:34:05.797+0000 2019-09-04T06:33:56.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:34:07.982+0000 2019-09-04T06:33:56.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:56.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:33:58.839Z 2019-09-04T06:33:56.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:56.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:26.839+0000 2019-09-04T06:33:56.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:26.839+0000 2019-09-04T06:33:56.849+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:56.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:56.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:56.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:56.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:56.949+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:56.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:56.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:57.049+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:57.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, F761C317C528CCC9FC1F26067730B593020558DD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:57.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:57.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, F761C317C528CCC9FC1F26067730B593020558DD), keyId: 6727891476899954718 } }, $db: "admin" } 
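
The heartbeat traffic above is routine replica-set liveness checking for configrs: this node (cmodb803, fromId: 1) exchanges replSetHeartbeat commands roughly every two seconds with the primary cmodb802 (state: 1) and the other secondary cmodb804 (state: 2), postponing its election timeout whenever a primary heartbeat arrives, and each response carries the sender's durable and applied optimes. A hedged sketch of reading the same member states interactively, assuming a recent pymongo and direct access to one member:

    # Hedged sketch: inspect the replica-set state that the heartbeat responses
    # above report (PRIMARY = state 1, SECONDARY = state 2).
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb802.togewa.com:27019/", directConnection=True)
    status = client.admin.command("replSetGetStatus")

    print(status["set"], "term:", status["term"])
    for member in status["members"]:
        # stateStr mirrors the numeric state field in the heartbeat responses.
        print(member["name"], member["stateStr"], member.get("optime", {}).get("ts"))
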
2019-09-04T06:33:57.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, F761C317C528CCC9FC1F26067730B593020558DD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:57.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), opTime: { ts: Timestamp(1567578835, 1), t: 1 }, wallTime: new Date(1567578835753) } 2019-09-04T06:33:57.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, F761C317C528CCC9FC1F26067730B593020558DD), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:57.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:57.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.149+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:57.150+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:57.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:57.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:57.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:57.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:57.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:33:57.249+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:57.264+0000 D2 COMMAND [conn160] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:33:57.264+0000 I COMMAND [conn160] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.274+0000 D2 COMMAND [conn220] run command admin.$cmd { ping: 1.0, lsid: { id: UUID("23af97f8-66f0-4a27-b5f1-59167651ca5f") }, $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } 2019-09-04T06:33:57.274+0000 I COMMAND [conn220] command admin.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("23af97f8-66f0-4a27-b5f1-59167651ca5f") }, $clusterTime: { clusterTime: Timestamp(1567578775, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:57.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.350+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:57.450+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:57.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:57.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.550+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:57.556+0000 D2 COMMAND [conn403] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578831, 1), signature: { hash: BinData(0, 5BA3C2225B23EDB083DE9DCD7C1516D49AA24879), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:57.556+0000 D1 REPL [conn403] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578835, 1), t: 1 } 2019-09-04T06:33:57.556+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000 2019-09-04T06:33:57.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:57.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:57.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.650+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:57.651+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:57.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
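
The waitUntilOpTime lines for conn403 show why these config.shards reads never complete: the client asks to read after { ts: Timestamp(1566459168, 1), t: 92 }, while the set's current majority snapshot is { ts: Timestamp(1567578835, 1), t: 1 }. The snapshot's timestamp is already well past the requested one, yet the wait still blocks, which implies the requested term (92) dominates the comparison; this replica set is only in term 1, so the target optime is unreachable and every such find runs into its 30s maxTimeMS. A small sketch of that ordering, inferred from the behaviour visible in this log rather than quoted from MongoDB source:

    # Hedged sketch of the optime ordering implied by the waitUntilOpTime lines:
    # term is compared before timestamp, so a target in term 92 is never reached
    # by a set in term 1, no matter how far its timestamps advance.
    from collections import namedtuple

    OpTime = namedtuple("OpTime", ["t", "ts"])  # term, then (seconds, increment)

    def reached(current, target):
        return (current.t, current.ts) >= (target.t, target.ts)

    target = OpTime(t=92, ts=(1566459168, 1))   # afterOpTime from the logged command
    current = OpTime(t=1, ts=(1567578835, 1))   # current snapshot from the log

    print(reached(current, target))  # False -> the wait times out after 30s

A plausible cause, consistent with the term dropping from 92 to 1 and the fresh replicaSetId in the heartbeat responses, is that the config server replica set was re-initialized while the mongos instances kept a cached $configServerState opTime from the old incarnation.
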
2019-09-04T06:33:57.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:57.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:57.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:57.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.750+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:57.762+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:57.762+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:57.762+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578835, 1) 2019-09-04T06:33:57.762+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18627 2019-09-04T06:33:57.762+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:57.762+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:57.762+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18627 2019-09-04T06:33:57.763+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18630 2019-09-04T06:33:57.763+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18630 2019-09-04T06:33:57.763+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578835, 1), t: 1 }({ ts: Timestamp(1567578835, 1), t: 1 }) 2019-09-04T06:33:57.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:57.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:57.850+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:57.950+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:57.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:57.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:58.051+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:58.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:58.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: 
"admin" } 2019-09-04T06:33:58.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.150+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:58.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.151+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:58.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:58.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:58.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:58.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578837, 1), signature: { hash: BinData(0, 6FE0AA1F5A21D3E29D5D1CE65E69E7AC8DF47A9B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:58.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:33:58.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578837, 1), signature: { hash: BinData(0, 6FE0AA1F5A21D3E29D5D1CE65E69E7AC8DF47A9B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:58.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578837, 1), signature: { hash: BinData(0, 6FE0AA1F5A21D3E29D5D1CE65E69E7AC8DF47A9B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:58.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), opTime: { ts: Timestamp(1567578835, 1), t: 1 }, wallTime: new Date(1567578835753) } 2019-09-04T06:33:58.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578837, 1), signature: { hash: BinData(0, 6FE0AA1F5A21D3E29D5D1CE65E69E7AC8DF47A9B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. 
Num: 0 2019-09-04T06:33:58.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:33:58.251+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:58.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:58.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.351+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:58.451+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:58.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:58.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.551+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:58.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:58.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:58.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.651+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:58.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.652+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:58.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:58.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:58.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:58.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.745+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46748 #424 (91 connections now open) 2019-09-04T06:33:58.745+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:33:58.745+0000 D2 COMMAND [conn424] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:33:58.745+0000 I NETWORK [conn424] received client metadata from 10.108.2.64:46748 conn424: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux 
release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:33:58.745+0000 I COMMAND [conn424] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:33:58.752+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:58.762+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:58.762+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:58.762+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578835, 1) 2019-09-04T06:33:58.762+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18652 2019-09-04T06:33:58.762+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:58.762+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:58.762+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18652 2019-09-04T06:33:58.763+0000 I COMMAND [conn392] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 89755D210D4D749FC922939A0BD0751F8197C269), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:58.763+0000 D1 - [conn392] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:33:58.763+0000 W - [conn392] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:58.763+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18655 2019-09-04T06:33:58.763+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18655 2019-09-04T06:33:58.763+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578835, 1), t: 1 }({ ts: Timestamp(1567578835, 1), t: 1 }) 2019-09-04T06:33:58.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:58.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.780+0000 I - [conn392] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o"
:"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : 
"7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) 
[0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:58.780+0000 D1 COMMAND [conn392] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 89755D210D4D749FC922939A0BD0751F8197C269), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:58.780+0000 D1 - [conn392] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:33:58.780+0000 W - [conn392] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:33:58.800+0000 I - [conn392] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:33:58.800+0000 W COMMAND [conn392] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:33:58.800+0000 I COMMAND [conn392] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 89755D210D4D749FC922939A0BD0751F8197C269), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:33:58.800+0000 D2 NETWORK [conn392] Session from 10.108.2.64:46728 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:33:58.800+0000 I NETWORK [conn392] end connection 10.108.2.64:46728 (90 connections now open) 2019-09-04T06:33:58.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:58.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:58.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1265) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:58.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1265 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:08.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:58.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:26.839+0000 2019-09-04T06:33:58.838+0000 D2 ASIO [Replication] Request 1265 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), opTime: { ts: Timestamp(1567578835, 1), t: 1 }, wallTime: new Date(1567578835753), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578838, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) } 2019-09-04T06:33:58.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), opTime: { ts: Timestamp(1567578835, 1), t: 1 }, wallTime: new Date(1567578835753), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: 
Timestamp(1567578838, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:58.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:58.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1265) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), opTime: { ts: Timestamp(1567578835, 1), t: 1 }, wallTime: new Date(1567578835753), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578838, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) } 2019-09-04T06:33:58.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:33:58.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:00.838Z 2019-09-04T06:33:58.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:26.839+0000 2019-09-04T06:33:58.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:58.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1266) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:58.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1266 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:08.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:33:58.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:26.839+0000 2019-09-04T06:33:58.839+0000 D2 ASIO [Replication] Request 1266 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), opTime: { ts: Timestamp(1567578835, 1), t: 1 }, wallTime: new Date(1567578835753), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578838, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) } 2019-09-04T06:33:58.839+0000 D3 EXECUTOR [Replication] Received remote response: 
RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), opTime: { ts: Timestamp(1567578835, 1), t: 1 }, wallTime: new Date(1567578835753), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578838, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:33:58.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:58.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1266) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), opTime: { ts: Timestamp(1567578835, 1), t: 1 }, wallTime: new Date(1567578835753), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578838, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578835, 1) } 2019-09-04T06:33:58.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:33:58.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:34:07.982+0000 2019-09-04T06:33:58.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:34:09.398+0000 2019-09-04T06:33:58.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:33:58.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:00.839Z 2019-09-04T06:33:58.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:58.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:28.839+0000 2019-09-04T06:33:58.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:28.839+0000 2019-09-04T06:33:58.852+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:58.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:58.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:58.952+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:59.000+0000 D3 STORAGE [ftdc] setting timestamp 
read source: 1, provided timestamp: none 2019-09-04T06:33:59.052+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:59.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578838, 1), signature: { hash: BinData(0, 80220AC3CEF7B766C879107B08885D1C915835E8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:59.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:33:59.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578838, 1), signature: { hash: BinData(0, 80220AC3CEF7B766C879107B08885D1C915835E8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:59.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578838, 1), signature: { hash: BinData(0, 80220AC3CEF7B766C879107B08885D1C915835E8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:33:59.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), opTime: { ts: Timestamp(1567578835, 1), t: 1 }, wallTime: new Date(1567578835753) } 2019-09-04T06:33:59.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578838, 1), signature: { hash: BinData(0, 80220AC3CEF7B766C879107B08885D1C915835E8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.151+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.152+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:59.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.167+0000 I COMMAND [conn18] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.204+0000 D2 ASIO [RS] Request 1261 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578839, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578839202), o: { $v: 1, $set: { ping: new Date(1567578839199), up: 10 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpApplied: { ts: Timestamp(1567578839, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578839, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578839, 1) } 2019-09-04T06:33:59.204+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578839, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578839202), o: { $v: 1, $set: { ping: new Date(1567578839199), up: 10 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpApplied: { ts: Timestamp(1567578839, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578839, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578839, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:59.204+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:59.204+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578839, 1) and ending at ts: Timestamp(1567578839, 1) 2019-09-04T06:33:59.204+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:09.398+0000 2019-09-04T06:33:59.205+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:10.484+0000 2019-09-04T06:33:59.205+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:33:59.205+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:28.839+0000 2019-09-04T06:33:59.205+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: 
Timestamp(1567578839, 1), t: 1 } 2019-09-04T06:33:59.205+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:59.205+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:59.205+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578835, 1) 2019-09-04T06:33:59.205+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18668 2019-09-04T06:33:59.205+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:59.205+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:59.205+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18668 2019-09-04T06:33:59.205+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:33:59.205+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:33:59.205+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:33:59.205+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578835, 1) 2019-09-04T06:33:59.205+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18671 2019-09-04T06:33:59.205+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:33:59.205+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:33:59.205+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578839, 1) } 2019-09-04T06:33:59.205+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18671 2019-09-04T06:33:59.205+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18656 2019-09-04T06:33:59.205+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18656 2019-09-04T06:33:59.205+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18674 2019-09-04T06:33:59.205+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18674 2019-09-04T06:33:59.205+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:59.205+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 18676 2019-09-04T06:33:59.205+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578839, 1) 2019-09-04T06:33:59.205+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578839, 1) 2019-09-04T06:33:59.205+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 18676 2019-09-04T06:33:59.205+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 
16 2019-09-04T06:33:59.205+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:33:59.205+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18675 2019-09-04T06:33:59.205+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18675 2019-09-04T06:33:59.205+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18678 2019-09-04T06:33:59.205+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18678 2019-09-04T06:33:59.205+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578839, 1), t: 1 }({ ts: Timestamp(1567578839, 1), t: 1 }) 2019-09-04T06:33:59.205+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578839, 1) 2019-09-04T06:33:59.205+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18679 2019-09-04T06:33:59.205+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578839, 1) } } ] } sort: {} projection: {} 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578839, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578839, 1) || First: notFirst: full path: ts 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578839, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
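
The D5 QUERY trace above and below shows the planner guarding the minvalid write: the $or is split into sub-queries, each branch is rated against the only available index (_id), yields zero indexed solutions, and degrades to a COLLSCAN before the subplanner merges both branches into the rooted-$or collection scan that follows. That is harmless here, since local.replset.minvalid holds a single document. A minimal sketch (mongo shell) that reproduces the same plan shape; wrapping t in NumberLong is an assumption about how the term is stored:

    // Re-run the sync thread's minvalid comparison and inspect the plan.
    // Expect a SUBPLAN over two branches, each resolved as a COLLSCAN,
    // matching the "Subplanner"/"outputting a collscan" lines in the log.
    db.getSiblingDB("local").replset.minvalid.find({
      $or: [
        { t: { $lt: NumberLong(1) } },
        { t: NumberLong(1), ts: { $lt: Timestamp(1567578839, 1) } }
      ]
    }).explain("queryPlanner")
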
2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578839, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578839, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:59.205+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578839, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:59.206+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18679 2019-09-04T06:33:59.206+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:33:59.206+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:33:59.206+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578839, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578839202), o: { $v: 1, $set: { ping: new Date(1567578839199), up: 10 } } }, oplog application mode: Secondary 2019-09-04T06:33:59.206+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578839, 1) 2019-09-04T06:33:59.206+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 18681 2019-09-04T06:33:59.206+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:33:59.206+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:33:59.206+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 18681 2019-09-04T06:33:59.206+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:33:59.206+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578839, 1), t: 1 }({ ts: Timestamp(1567578839, 1), t: 1 }) 2019-09-04T06:33:59.206+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578839, 1) 2019-09-04T06:33:59.206+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18680 2019-09-04T06:33:59.206+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:33:59.206+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:33:59.206+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:33:59.206+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:33:59.206+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:33:59.206+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:33:59.206+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18680 2019-09-04T06:33:59.206+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578839, 1) 2019-09-04T06:33:59.206+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18685 2019-09-04T06:33:59.206+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18685 2019-09-04T06:33:59.206+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578839, 1), t: 1 }({ ts: Timestamp(1567578839, 1), t: 1 }) 2019-09-04T06:33:59.206+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:59.206+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578839, 1), t: 1 }, appliedWallTime: new Date(1567578839202), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:59.206+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1267 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:29.206+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578839, 1), t: 1 }, appliedWallTime: new Date(1567578839202), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 2, cfgver: 2 } 
], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:59.206+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:29.206+0000 2019-09-04T06:33:59.206+0000 D2 ASIO [RS] Request 1267 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578839, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578839, 1) } 2019-09-04T06:33:59.206+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578835, 1), t: 1 }, lastCommittedWall: new Date(1567578835753), lastOpVisible: { ts: Timestamp(1567578835, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578835, 1), $clusterTime: { clusterTime: Timestamp(1567578839, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578839, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:59.206+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:59.206+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:29.206+0000 2019-09-04T06:33:59.207+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578839, 1), t: 1 } 2019-09-04T06:33:59.207+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1268 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:09.207+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578835, 1), t: 1 } } 2019-09-04T06:33:59.207+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:29.206+0000 2019-09-04T06:33:59.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:33:59.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:33:59.252+0000 D2 ASIO [RS] Request 1268 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpVisible: { ts: Timestamp(1567578839, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpApplied: { ts: Timestamp(1567578839, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578839, 1), $clusterTime: { clusterTime: Timestamp(1567578839, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578839, 1) } 2019-09-04T06:33:59.252+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpVisible: { ts: Timestamp(1567578839, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpApplied: { ts: Timestamp(1567578839, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578839, 1), $clusterTime: { clusterTime: Timestamp(1567578839, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578839, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:59.252+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:59.252+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:33:59.252+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578834, 1) 2019-09-04T06:33:59.252+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000 
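
With the empty getMore batch confirming that the commit point has reached { ts: Timestamp(1567578839, 1), t: 1 }, the node advances replication's stable optime and lets WiredTiger's oldest_timestamp trail behind it (Timestamp(1567578834, 1) above); the snapshot notifications that follow all hang off this advance. A minimal sketch (mongo shell) of the supported way to observe the same bookkeeping; treat the exact field set as an assumption to verify on your build:

    // Stable recovery timestamp and commit point, as exposed by
    // replSetGetStatus on 4.2 -- these should track the "stable optime"
    // and lastCommittedOpTime values in the log.
    var s = rs.status();
    printjson({
      stable: s.lastStableRecoveryTimestamp,
      committed: s.optimes.lastCommittedOpTime,
      applied: s.optimes.appliedOpTime
    })
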
2019-09-04T06:33:59.252+0000 D3 REPL [conn395] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn395] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.702+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn397] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn397] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.962+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn398] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn398] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.987+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn399] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn399] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:02.478+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn387] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn387] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.455+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn396] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn396] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.897+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000 
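
Every connection in this run is parked in waitUntilOpTime beneath a readConcern read gated on an optime, and the "until" deadline is just that operation's maxTimeMS expiry; the conn392 failure at the top of this excerpt is exactly one of these waits running out of budget. A minimal sketch (mongo shell) of the command shape involved, copied from the failing find; afterOpTime is normally injected by the sharding internals, so issuing it by hand is purely illustrative:

    // Same shape as the find that ended in MaxTimeMSExpired (code 50,
    // "operation exceeded time limit") after 30030ms: a majority read
    // that must wait for an optime the server has not yet committed.
    db.getSiblingDB("config").runCommand({
      find: "shards",
      readConcern: {
        level: "majority",
        afterOpTime: { ts: Timestamp(1566459168, 1), t: NumberLong(92) }
      },
      maxTimeMS: 30000
    })
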
2019-09-04T06:33:59.252+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000 2019-09-04T06:33:59.252+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:10.484+0000 2019-09-04T06:33:59.252+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:10.329+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000 2019-09-04T06:33:59.252+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:33:59.252+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:28.839+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000 2019-09-04T06:33:59.252+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1269 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:09.252+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578839, 1), t: 1 } } 2019-09-04T06:33:59.252+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn400] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn400] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:03.490+0000 2019-09-04T06:33:59.252+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:29.206+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn394] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.252+0000 D3 REPL [conn394] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.261+0000 2019-09-04T06:33:59.253+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.253+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000 2019-09-04T06:33:59.253+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.253+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000 2019-09-04T06:33:59.253+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578839, 1), t: 1 }, 
2019-09-04T06:33:59.202+0000 2019-09-04T06:33:59.253+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000 2019-09-04T06:33:59.256+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:33:59.256+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:33:59.256+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:59.256+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578839, 1), t: 1 }, durableWallTime: new Date(1567578839202), appliedOpTime: { ts: Timestamp(1567578839, 1), t: 1 }, appliedWallTime: new Date(1567578839202), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpVisible: { ts: Timestamp(1567578839, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:59.256+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1270 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:29.256+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578839, 1), t: 1 }, durableWallTime: new Date(1567578839202), appliedOpTime: { ts: Timestamp(1567578839, 1), t: 1 }, appliedWallTime: new Date(1567578839202), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpVisible: { ts: Timestamp(1567578839, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:33:59.256+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:29.206+0000 2019-09-04T06:33:59.256+0000 D2 ASIO [RS] Request 1270 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpVisible: { ts: Timestamp(1567578839, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578839, 1), $clusterTime: { clusterTime: Timestamp(1567578839, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, 
operationTime: Timestamp(1567578839, 1) } 2019-09-04T06:33:59.256+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpVisible: { ts: Timestamp(1567578839, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578839, 1), $clusterTime: { clusterTime: Timestamp(1567578839, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578839, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:33:59.256+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:33:59.256+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:29.206+0000 2019-09-04T06:33:59.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.305+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578839, 1) 2019-09-04T06:33:59.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.356+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:59.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.456+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:59.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.556+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:59.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.650+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
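The steady isMaster calls on conn13, conn25, conn26 and friends above are routine topology heartbeats from connected clients and mongos instances; note the identical reslen:907 replies and 0ms runtimes. The same probe issued once from pymongo, against a placeholder URI:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb804.togewa.com:27019")

    # One isMaster round trip; server monitors repeat this on a fixed interval.
    reply = client.admin.command("isMaster")
    print(reply.get("ismaster"), reply.get("secondary"), reply.get("setName"))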
2019-09-04T06:33:59.656+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:59.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.729+0000 D2 COMMAND [conn408] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 5FF0CCAFC787DA30065B8DDE1CC8B095FDBF43F0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:33:59.729+0000 D1 REPL [conn408] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578839, 1), t: 1 } 2019-09-04T06:33:59.729+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000 2019-09-04T06:33:59.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.757+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:59.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.857+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:59.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.957+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:33:59.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:33:59.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:33:59.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:00.002+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 
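The saslStart at the end of the block above opens a SCRAM-SHA-1 conversation on conn90 that finishes in the saslContinue and "Successfully authenticated" lines below; the server redacts the payloads as "xxx". A driver runs the whole handshake from a single connect call; a hedged pymongo sketch, with the password a pure placeholder (it never appears in the log):

    from pymongo import MongoClient

    # The saslStart / saslContinue / saslContinue exchange is handled
    # internally by the driver; only the user and mechanism are chosen here.
    client = MongoClient(
        "mongodb://cmodb804.togewa.com:27019",
        username="dba_root",
        password="<placeholder>",
        authSource="admin",
        authMechanism="SCRAM-SHA-1",
    )
    client.admin.command("ping")  # forces the handshake to complete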
2019-09-04T06:34:00.002+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:34:00.002+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:34:00.021+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:34:00.021+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:34:00.046+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:34:00.046+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:34:00.046+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:34:00.046+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:34:00.048+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:00.048+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:00.049+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:00.049+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:34:00.049+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:34:00.049+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:34:00.049+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:00.049+0000 D5 QUERY [conn90] Beginning planning... 
============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:34:00.049+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:34:00.049+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:34:00.049+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:34:00.049+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:34:00.049+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:34:00.049+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:34:00.049+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:00.049+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:00.049+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:00.049+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578839, 1) 2019-09-04T06:34:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18715 2019-09-04T06:34:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18715 2019-09-04T06:34:00.049+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:00.050+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:00.050+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:34:00.050+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:34:00.050+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:00.050+0000 D5 QUERY [conn90] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:34:00.050+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:34:00.050+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:34:00.050+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578839, 1) 2019-09-04T06:34:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18718 2019-09-04T06:34:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18718 2019-09-04T06:34:00.050+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:00.050+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:34:00.050+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:00.050+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:34:00.050+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:34:00.050+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:34:00.050+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578839, 1) 2019-09-04T06:34:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18720 2019-09-04T06:34:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18720 2019-09-04T06:34:00.050+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:549 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:00.050+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:34:00.050+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:34:00.050+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:34:00.050+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:00.050+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18723 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:34:00.051+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18723 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18724 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18724 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18725 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18725 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18726 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18726 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18727 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 
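Taken together, conn90's session above is a standard monitoring sweep: serverStatus, replSetGetStatus, a count of jumbo chunks (the COLLSCAN plan, since config.chunks has no index on jumbo), shardConnPoolStats, and the first and last oplog entries via forced $natural scans. Approximated with pymongo under a placeholder URI:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb804.togewa.com:27019")

    status = client.admin.command("serverStatus")
    rs = client.admin.command("replSetGetStatus")

    # No index covers `jumbo`, hence the collection scan in the planner log.
    jumbo = client.config.chunks.count_documents({"jumbo": True})

    # Oldest and newest oplog entries; $natural forces a table scan.
    oplog = client.local["oplog.rs"]
    first = oplog.find_one(sort=[("$natural", 1)])
    last = oplog.find_one(sort=[("$natural", -1)])
    print(status["version"], rs["myState"], jumbo, first["ts"], last["ts"])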
2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18727 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18728 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18728 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18729 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18729 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18730 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18730 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18731 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18731 
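Each fetched/returning CCE metadata pair above is one collection's catalog entry: its UUID, full index specs (for config.locks: ts_1, state_1_process_1, _id_), and the WiredTiger idents backing the collection and each index. Only the spec half is client-visible; a pymongo sketch with a placeholder URI:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb804.togewa.com:27019")

    # Index specs as listIndexes reports them; the on-disk idents in the
    # log (config/index/43--6194257481163143499, etc.) remain server-internal.
    for name, spec in client.config.locks.index_information().items():
        print(name, spec["key"])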
2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18732 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18732 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18733 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: 
BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18733 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18734 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18734 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18735 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: 
true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18735 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18736 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:34:00.051+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18736 
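config.shards, whose catalog entry was just dumped, is what the earlier find: "shards" on conn408 reads to learn cluster membership; the unique host_1 index guarantees one entry per shard host string. Read directly it is a tiny collection (pymongo, placeholder URI):

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb804.togewa.com:27019")

    # One document per shard; the unique host_1 index above enforces
    # distinct host strings.
    for shard in client.config.shards.find():
        print(shard["_id"], shard["host"])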
2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18737 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18737 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18738 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 
2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18738 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18739 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18739 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18740 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18740 
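Every metadata lookup since the listDatabases command arrived serves that one request: the server walks each collection in admin, config, and local to size the databases before replying (the command completes below in about 1ms with reslen:459), and conn90 then follows up with per-database dbStats. The equivalent pair of calls from pymongo, under a placeholder URI:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb804.togewa.com:27019")

    # One listDatabases round trip, then a dbStats per database, matching
    # the conn90 sequence in the log.
    for name in client.list_database_names():
        stats = client[name].command("dbStats")
        print(name, stats["collections"], stats["dataSize"])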
2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18741 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18741 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18742 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18742 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18743 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] 
looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18743 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18744 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18744 2019-09-04T06:34:00.052+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:34:00.052+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18746 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18746 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18747 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18747 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18748 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18748 2019-09-04T06:34:00.052+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:00.052+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, 
$db: "config" } 2019-09-04T06:34:00.052+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18750 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18750 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18751 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18751 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18752 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18752 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18753 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18753 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18754 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18754 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18755 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18755 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18756 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18756 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18757 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18757 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18758 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18758 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18759 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18759 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18760 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18760 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18761 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18761 2019-09-04T06:34:00.053+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:00.053+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18763 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18763 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18764 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18764 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18765 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18765 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18766 
2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18766 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18767 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18767 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 18768 2019-09-04T06:34:00.053+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 18768 2019-09-04T06:34:00.053+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:00.057+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:00.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.151+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.157+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:00.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.205+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:00.205+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:00.205+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578839, 1) 2019-09-04T06:34:00.205+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18775 2019-09-04T06:34:00.205+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:00.205+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:00.205+0000 D3 STORAGE [ReplBatcher] WT 
rollback_transaction for snapshot id 18775 2019-09-04T06:34:00.206+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18778 2019-09-04T06:34:00.206+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18778 2019-09-04T06:34:00.206+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578839, 1), t: 1 }({ ts: Timestamp(1567578839, 1), t: 1 }) 2019-09-04T06:34:00.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578839, 1), signature: { hash: BinData(0, 4A574F2C48E68D93108BB77CB388128D2D74B38C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:00.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:00.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578839, 1), signature: { hash: BinData(0, 4A574F2C48E68D93108BB77CB388128D2D74B38C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:00.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578839, 1), signature: { hash: BinData(0, 4A574F2C48E68D93108BB77CB388128D2D74B38C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:00.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578839, 1), t: 1 }, durableWallTime: new Date(1567578839202), opTime: { ts: Timestamp(1567578839, 1), t: 1 }, wallTime: new Date(1567578839202) } 2019-09-04T06:34:00.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578839, 1), signature: { hash: BinData(0, 4A574F2C48E68D93108BB77CB388128D2D74B38C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:00.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:00.257+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:00.262+0000 I COMMAND [conn394] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578808, 1), signature: { hash: BinData(0, E0BDAF1F918F5649152C69BBF6299873DB91E045), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:00.262+0000 D1 - [conn394] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:00.262+0000 W - [conn394] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:00.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.278+0000 I - [conn394] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"5617
48F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:00.279+0000 D1 COMMAND [conn394] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578808, 1), signature: { hash: BinData(0, E0BDAF1F918F5649152C69BBF6299873DB91E045), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:00.279+0000 D1 - [conn394] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:00.279+0000 W - [conn394] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:00.298+0000 I - [conn394] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19Servi
ceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : 
"799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:00.299+0000 W COMMAND [conn394] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:34:00.299+0000 I COMMAND [conn394] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578808, 1), signature: { hash: BinData(0, E0BDAF1F918F5649152C69BBF6299873DB91E045), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:34:00.299+0000 D2 NETWORK [conn394] Session from 10.108.2.54:49304 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:00.299+0000 I NETWORK [conn394] end connection 10.108.2.54:49304 (89 connections now open) 2019-09-04T06:34:00.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.357+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:00.370+0000 D2 ASIO [RS] Request 1269 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578840, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578840363), o: { $v: 1, $set: { ping: new Date(1567578840361) } } }, { ts: Timestamp(1567578840, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578840363), o: { $v: 1, $set: { ping: new Date(1567578840363) } } }, { ts: Timestamp(1567578840, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578840363), o: { $v: 1, $set: { ping: new Date(1567578840362) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0,
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpVisible: { ts: Timestamp(1567578839, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpApplied: { ts: Timestamp(1567578840, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578839, 1), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } 2019-09-04T06:34:00.371+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578840, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578840363), o: { $v: 1, $set: { ping: new Date(1567578840361) } } }, { ts: Timestamp(1567578840, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578840363), o: { $v: 1, $set: { ping: new Date(1567578840363) } } }, { ts: Timestamp(1567578840, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578840363), o: { $v: 1, $set: { ping: new Date(1567578840362) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpVisible: { ts: Timestamp(1567578839, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpApplied: { ts: Timestamp(1567578840, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578839, 1), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:00.371+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:00.371+0000 D2 REPL [replication-1] oplog fetcher read 3 operations from remote oplog starting at ts: Timestamp(1567578840, 1) and ending at ts: Timestamp(1567578840, 3) 2019-09-04T06:34:00.371+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:10.329+0000 2019-09-04T06:34:00.371+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:10.723+0000 2019-09-04T06:34:00.371+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:00.371+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:28.839+0000 
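
The 30027ms find on config.shards above is the interesting failure in this section: the router asked for readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, an opTime from term 92, while this replica set is in term 1, so the wait for read concern apparently can never be satisfied and the command burns its whole maxTimeMS: 30000 budget before failing with MaxTimeMSExpired; by then the client had already hung up (Connection closed by peer). The afterOpTime handshake is internal to the sharding protocol, but the client-visible part of the pattern, a majority read bounded by maxTimeMS, looks roughly like this in pymongo (a sketch; connection string is an assumption):

    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/", replicaSet="configrs")
    shards = client.config.get_collection("shards",
                                          read_concern=ReadConcern("majority"))

    try:
        # max_time_ms mirrors the maxTimeMS: 30000 in the logged command
        docs = list(shards.find({}).max_time_ms(30000))
    except ExecutionTimeout:
        # the server's MaxTimeMSExpired (code 50) surfaces as ExecutionTimeout
        print("majority read did not complete within 30s")

Bounding such reads with maxTimeMS is what keeps a stuck read-concern wait from pinning the connection forever; the cost, as logged, is a 30-second stall per attempt.
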
2019-09-04T06:34:00.371+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578840, 3), t: 1 } 2019-09-04T06:34:00.371+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:00.371+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:00.371+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578839, 1) 2019-09-04T06:34:00.371+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18787 2019-09-04T06:34:00.371+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:00.371+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:00.371+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18787 2019-09-04T06:34:00.371+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:00.371+0000 D2 REPL [rsSync-0] replication batch size is 3 2019-09-04T06:34:00.371+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:00.371+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578839, 1) 2019-09-04T06:34:00.371+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18790 2019-09-04T06:34:00.371+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578840, 1) } 2019-09-04T06:34:00.371+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:00.371+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:00.371+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18790 2019-09-04T06:34:00.371+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18779 2019-09-04T06:34:00.371+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18779 2019-09-04T06:34:00.371+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18793 2019-09-04T06:34:00.371+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18793 2019-09-04T06:34:00.371+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:00.371+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 18795 2019-09-04T06:34:00.371+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578840, 1) 2019-09-04T06:34:00.371+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578840, 1) 2019-09-04T06:34:00.371+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578840, 2) 2019-09-04T06:34:00.371+0000 
D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578840, 2) 2019-09-04T06:34:00.371+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578840, 3) 2019-09-04T06:34:00.371+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578840, 3) 2019-09-04T06:34:00.371+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 18795 2019-09-04T06:34:00.371+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:00.371+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:00.371+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18794 2019-09-04T06:34:00.371+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18794 2019-09-04T06:34:00.371+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18797 2019-09-04T06:34:00.371+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18797 2019-09-04T06:34:00.371+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578840, 3), t: 1 }({ ts: Timestamp(1567578840, 3), t: 1 }) 2019-09-04T06:34:00.371+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578840, 3) 2019-09-04T06:34:00.371+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18798 2019-09-04T06:34:00.371+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578840, 3) } } ] } sort: {} projection: {} 2019-09-04T06:34:00.371+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:00.371+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:34:00.371+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578840, 3) Sort: {} Proj: {} ============================= 2019-09-04T06:34:00.371+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:00.371+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:00.371+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:00.371+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578840, 3) || First: notFirst: full path: ts 2019-09-04T06:34:00.371+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:00.371+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578840, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:00.371+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:00.371+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:34:00.371+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:00.371+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:00.371+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:00.371+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:00.371+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:00.372+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:00.372+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:00.372+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578840, 3) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:00.372+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:00.372+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:00.372+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:00.372+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578840, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:00.372+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:34:00.372+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578840, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:00.372+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18798 2019-09-04T06:34:00.372+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:00.372+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:00.372+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:00.372+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578840, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578840363), o: { $v: 1, $set: { ping: new Date(1567578840363) } } }, oplog application mode: Secondary 2019-09-04T06:34:00.372+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:00.372+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578840, 2) 2019-09-04T06:34:00.372+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 18800 2019-09-04T06:34:00.372+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578840, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578840363), o: { $v: 1, $set: { ping: new Date(1567578840361) } } }, oplog application mode: Secondary 2019-09-04T06:34:00.372+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:00.372+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578840, 1) 2019-09-04T06:34:00.372+0000 D3 STORAGE [repl-writer-worker-6] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:00.372+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 18801 2019-09-04T06:34:00.372+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" } 2019-09-04T06:34:00.372+0000 D3 REPL [repl-writer-worker-6] applying op: { ts: Timestamp(1567578840, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578840363), o: { $v: 1, $set: { ping: new Date(1567578840362) } } }, oplog application mode: Secondary 2019-09-04T06:34:00.372+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:00.372+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 18801 2019-09-04T06:34:00.372+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:00.372+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578840, 3) 2019-09-04T06:34:00.372+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 18802 
2019-09-04T06:34:00.372+0000 D2 QUERY [repl-writer-worker-6] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" } 2019-09-04T06:34:00.372+0000 D4 WRITE [repl-writer-worker-6] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:00.372+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 18802 2019-09-04T06:34:00.372+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:00.372+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" } 2019-09-04T06:34:00.372+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:00.372+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 18800 2019-09-04T06:34:00.372+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:00.372+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578840, 3), t: 1 }({ ts: Timestamp(1567578840, 3), t: 1 }) 2019-09-04T06:34:00.372+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578840, 3) 2019-09-04T06:34:00.372+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18799 2019-09-04T06:34:00.372+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:00.372+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:00.372+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:00.372+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:00.372+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:00.372+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:00.372+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18799 2019-09-04T06:34:00.372+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578840, 3) 2019-09-04T06:34:00.372+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18807 2019-09-04T06:34:00.372+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18807 2019-09-04T06:34:00.372+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578840, 3), t: 1 }({ ts: Timestamp(1567578840, 3), t: 1 }) 2019-09-04T06:34:00.372+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:00.372+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578839, 1), t: 1 }, durableWallTime: new Date(1567578839202), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpVisible: { ts: Timestamp(1567578839, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:00.372+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1271 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:30.372+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578839, 1), t: 1 }, durableWallTime: new Date(1567578839202), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpVisible: { ts: Timestamp(1567578839, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:00.372+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:30.372+0000 2019-09-04T06:34:00.373+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578840, 3), t: 1 } 2019-09-04T06:34:00.373+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1272 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:10.373+0000 cmd:{ getMore: 2779728788818727477, 
collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578839, 1), t: 1 } } 2019-09-04T06:34:00.373+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:30.372+0000 2019-09-04T06:34:00.373+0000 D2 ASIO [RS] Request 1271 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpVisible: { ts: Timestamp(1567578839, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578839, 1), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } 2019-09-04T06:34:00.373+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpVisible: { ts: Timestamp(1567578839, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578839, 1), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:00.373+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:00.373+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:30.372+0000 2019-09-04T06:34:00.381+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:00.382+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:00.382+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpVisible: { ts: Timestamp(1567578839, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:00.382+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1273 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:30.382+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { 
durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, durableWallTime: new Date(1567578835753), appliedOpTime: { ts: Timestamp(1567578835, 1), t: 1 }, appliedWallTime: new Date(1567578835753), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpVisible: { ts: Timestamp(1567578839, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:00.382+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:30.372+0000 2019-09-04T06:34:00.382+0000 D2 ASIO [RS] Request 1273 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpVisible: { ts: Timestamp(1567578839, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578839, 1), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } 2019-09-04T06:34:00.382+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578839, 1), t: 1 }, lastCommittedWall: new Date(1567578839202), lastOpVisible: { ts: Timestamp(1567578839, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578839, 1), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:00.382+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:00.382+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:30.372+0000 2019-09-04T06:34:00.382+0000 D2 ASIO [RS] Request 1272 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpApplied: { ts: Timestamp(1567578840, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578840, 3), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } 2019-09-04T06:34:00.382+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpApplied: { ts: Timestamp(1567578840, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578840, 3), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:00.382+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:00.382+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:00.382+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.382+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.382+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578835, 3) 2019-09-04T06:34:00.383+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:10.723+0000 2019-09-04T06:34:00.383+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:11.227+0000 2019-09-04T06:34:00.383+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:00.383+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:28.839+0000 2019-09-04T06:34:00.383+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1274 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:10.383+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578840, 3), t: 1 } } 2019-09-04T06:34:00.383+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000 2019-09-04T06:34:00.383+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:30.372+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn395] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn395] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.702+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn418] Got notified 
of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn398] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn398] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.987+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn387] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn387] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.455+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn400] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn400] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:03.490+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn408] Got notified of new snapshot: { 
ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn396] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn396] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.897+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn397] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn397] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:00.962+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn399] Got notified of new snapshot: { ts: Timestamp(1567578840, 3), t: 1 }, 2019-09-04T06:34:00.363+0000 2019-09-04T06:34:00.383+0000 D3 REPL [conn399] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:02.478+0000 2019-09-04T06:34:00.452+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.452+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.455+0000 I COMMAND [conn387] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578801, 1), signature: { hash: BinData(0, 3D476481B84657583831CD371DB7EF0A1606D6C0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:34:00.455+0000 D1 - [conn387] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:00.455+0000 W - [conn387] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:00.457+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:00.471+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578840, 3) 2019-09-04T06:34:00.472+0000 I - [conn387] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"
7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : 
"7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- 
END BACKTRACE ----- 2019-09-04T06:34:00.472+0000 D1 COMMAND [conn387] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578801, 1), signature: { hash: BinData(0, 3D476481B84657583831CD371DB7EF0A1606D6C0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:00.472+0000 D1 - [conn387] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:00.472+0000 W - [conn387] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:00.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.492+0000 I - [conn387] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED774
2","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : 
"7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:00.492+0000 W COMMAND [conn387] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:34:00.492+0000 I COMMAND [conn387] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578801, 1), signature: { hash: BinData(0, 3D476481B84657583831CD371DB7EF0A1606D6C0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30026ms 2019-09-04T06:34:00.492+0000 D2 NETWORK [conn387] Session from 10.108.2.59:48454 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:00.492+0000 I NETWORK [conn387] end connection 10.108.2.59:48454 (88 connections now open) 2019-09-04T06:34:00.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.557+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:00.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.650+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{}
protocol:op_msg 0ms 2019-09-04T06:34:00.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.658+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:00.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.691+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51922 #425 (89 connections now open) 2019-09-04T06:34:00.691+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:00.692+0000 D2 COMMAND [conn425] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:00.692+0000 I NETWORK [conn425] received client metadata from 10.108.2.74:51922 conn425: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:00.692+0000 I COMMAND [conn425] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:00.702+0000 I COMMAND [conn395] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 89755D210D4D749FC922939A0BD0751F8197C269), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:34:00.702+0000 D1 - [conn395] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:00.702+0000 W - [conn395] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:00.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.719+0000 I - [conn395] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748
F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:00.719+0000 D1 COMMAND [conn395] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 89755D210D4D749FC922939A0BD0751F8197C269), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:00.719+0000 D1 - [conn395] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:00.719+0000 W - [conn395] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:00.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.739+0000 I - [conn395] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"
561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : 
"E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:00.739+0000 W COMMAND [conn395] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:34:00.739+0000 I COMMAND [conn395] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 89755D210D4D749FC922939A0BD0751F8197C269), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30026ms 2019-09-04T06:34:00.739+0000 D2 NETWORK [conn395] Session from 10.108.2.74:51906 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:00.739+0000 I NETWORK [conn395] end connection 10.108.2.74:51906 (88 connections now open) 2019-09-04T06:34:00.758+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:00.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:00.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1275) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:00.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1275 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:10.838+0000 cmd:{
replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:00.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:28.839+0000 2019-09-04T06:34:00.838+0000 D2 ASIO [Replication] Request 1275 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), opTime: { ts: Timestamp(1567578840, 3), t: 1 }, wallTime: new Date(1567578840363), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578840, 3), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } 2019-09-04T06:34:00.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), opTime: { ts: Timestamp(1567578840, 3), t: 1 }, wallTime: new Date(1567578840363), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578840, 3), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:00.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:00.838+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1275) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), opTime: { ts: Timestamp(1567578840, 3), t: 1 }, wallTime: new Date(1567578840363), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578840, 3), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } 2019-09-04T06:34:00.838+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:00.838+0000 D2 
REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:02.838Z 2019-09-04T06:34:00.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:28.839+0000 2019-09-04T06:34:00.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:00.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1276) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:00.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1276 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:10.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:00.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:28.839+0000 2019-09-04T06:34:00.839+0000 D2 ASIO [Replication] Request 1276 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), opTime: { ts: Timestamp(1567578840, 3), t: 1 }, wallTime: new Date(1567578840363), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578840, 3), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } 2019-09-04T06:34:00.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), opTime: { ts: Timestamp(1567578840, 3), t: 1 }, wallTime: new Date(1567578840363), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578840, 3), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:00.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:00.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1276) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), opTime: { ts: Timestamp(1567578840, 3), t: 1 }, wallTime: new 
Date(1567578840363), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578840, 3), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } 2019-09-04T06:34:00.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:00.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:34:11.227+0000 2019-09-04T06:34:00.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:34:12.300+0000 2019-09-04T06:34:00.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:00.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:02.839Z 2019-09-04T06:34:00.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:00.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:30.839+0000 2019-09-04T06:34:00.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:30.839+0000 2019-09-04T06:34:00.858+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:00.886+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38820 #426 (89 connections now open) 2019-09-04T06:34:00.886+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:00.887+0000 D2 COMMAND [conn426] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:00.887+0000 I NETWORK [conn426] received client metadata from 10.108.2.44:38820 conn426: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:00.887+0000 I COMMAND [conn426] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:00.898+0000 I COMMAND [conn396] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578808, 1), signature: { hash: BinData(0, E0BDAF1F918F5649152C69BBF6299873DB91E045), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:00.898+0000 D1 - [conn396] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:00.898+0000 W - [conn396] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:00.917+0000 I - [conn396] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:00.917+0000 D1 COMMAND [conn396] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578808, 1), signature: { hash: BinData(0, E0BDAF1F918F5649152C69BBF6299873DB91E045), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:00.917+0000 D1 - [conn396] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:00.917+0000 W - [conn396] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:00.938+0000 I - [conn396] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"5617
48F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : 
"BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:00.938+0000 W COMMAND [conn396] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:34:00.938+0000 I COMMAND [conn396] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578808, 1), signature: { hash: BinData(0, E0BDAF1F918F5649152C69BBF6299873DB91E045), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:34:00.938+0000 D2 NETWORK [conn396] Session from 10.108.2.44:38796 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:00.938+0000 I NETWORK [conn396] end connection 10.108.2.44:38796 (88 connections now open) 2019-09-04T06:34:00.952+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.952+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.958+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:00.963+0000 I COMMAND [conn397] Command on database config timed out waiting for read concern to be satisfied.
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578803, 1), signature: { hash: BinData(0, 4C13B88C6EFA3D421BD308E53A9A6ACE16902623), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:00.963+0000 D1 - [conn397] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:00.963+0000 W - [conn397] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:00.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:00.980+0000 I - [conn397] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"5617
48F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:00.980+0000 D1 COMMAND [conn397] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578803, 1), signature: { hash: BinData(0, 4C13B88C6EFA3D421BD308E53A9A6ACE16902623), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:00.980+0000 D1 - [conn397] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:00.980+0000 W - [conn397] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:00.987+0000 I COMMAND [conn398] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 89755D210D4D749FC922939A0BD0751F8197C269), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:34:00.987+0000 D1 - [conn398] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:00.987+0000 W - [conn398] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:00.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:00.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:01.000+0000 I - [conn397] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:01.001+0000 W COMMAND [conn397] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:01.001+0000 I COMMAND [conn397] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578803, 1), signature: { hash: BinData(0, 4C13B88C6EFA3D421BD308E53A9A6ACE16902623), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms
2019-09-04T06:34:01.001+0000 D2 NETWORK [conn397] Session from 10.108.2.58:52260 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:34:01.001+0000 I NETWORK [conn397] end connection 10.108.2.58:52260 (87 connections now open)
2019-09-04T06:34:01.018+0000 I - [conn398] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- [ backtrace dump omitted — frames and somap byte-for-byte identical to the waitForReadConcern backtrace above ] ----- END BACKTRACE -----
2019-09-04T06:34:01.018+0000 D1 COMMAND [conn398] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } },
maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 89755D210D4D749FC922939A0BD0751F8197C269), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:01.018+0000 D1 - [conn398] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:34:01.018+0000 W - [conn398] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:01.038+0000 I - [conn398] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- [ backtrace dump omitted — frames and somap byte-for-byte identical to the conn397 lock-acquisition backtrace above: MaxTimeMSExpired thrown from LockerImpl::lock while CurOp::completeAndLogOperation takes the global lock to gather storage statistics ] ----- END BACKTRACE -----
2019-09-04T06:34:01.038+0000 W COMMAND [conn398] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:34:01.038+0000 I COMMAND [conn398] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578805, 1), signature: { hash: BinData(0, 89755D210D4D749FC922939A0BD0751F8197C269), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30041ms
2019-09-04T06:34:01.038+0000 D2 NETWORK [conn398] Session from 10.108.2.46:41104 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:34:01.038+0000 I NETWORK [conn398] end connection 10.108.2.46:41104 (86 connections now open)
2019-09-04T06:34:01.058+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:01.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 11C87E8F30245402D173BD1186A780E9063ACC97), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:01.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:01.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:34:01.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 11C87E8F30245402D173BD1186A780E9063ACC97), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:01.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 11C87E8F30245402D173BD1186A780E9063ACC97), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:01.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
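For anyone symbolizing these dumps offline: in each {"backtrace":[...]} entry, the bracketed absolute address printed in the plain frame list is simply the module load base "b" from the somap plus the in-module offset "o". A minimal sketch of that arithmetic in Python (the frame below is copied verbatim from the dumps above; nothing else is assumed):

    # Recompute a frame's absolute address from the somap base ("b") and offset ("o").
    frame = {"b": "561748F88000", "o": "277FC81", "s": "_ZN5mongo15printStackTraceERSo"}
    addr = int(frame["b"], 16) + int(frame["o"], 16)
    print(hex(addr))  # 0x56174b707c81 -- matches "[0x56174b707c81]" in the frame lists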
2019-09-04T06:34:01.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), opTime: { ts: Timestamp(1567578840, 3), t: 1 }, wallTime: new Date(1567578840363) } 2019-09-04T06:34:01.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 11C87E8F30245402D173BD1186A780E9063ACC97), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.151+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.158+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:01.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:01.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:34:01.258+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:01.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.359+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:01.371+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:01.371+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:01.371+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578840, 3) 2019-09-04T06:34:01.371+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18839 2019-09-04T06:34:01.371+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:01.371+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:01.371+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18839 2019-09-04T06:34:01.373+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18842 2019-09-04T06:34:01.373+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18842 2019-09-04T06:34:01.373+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578840, 3), t: 1 }({ ts: Timestamp(1567578840, 3), t: 1 }) 2019-09-04T06:34:01.459+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:01.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.559+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:01.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.650+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.651+0000 I COMMAND [conn19] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.659+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:01.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.759+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:01.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.859+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:01.959+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:01.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:01.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:01.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:02.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:02.060+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:02.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:02.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:02.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:02.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:02.150+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:02.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
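At systemLog verbosity 5 the failing finds are buried in isMaster and heartbeat chatter like the records above; the quickest filter keys on the duration mongod appends to every completed command ("... protocol:op_msg <N>ms"). A rough sketch, assuming the log was saved to a local file (the mongod.log path and the 1000 ms threshold are illustrative, not from this capture):

    import re

    # "<timestamp> <level> COMMAND [connN] command <ns> ... protocol:op_msg <N>ms".
    # Several records share one physical line in this paste, hence findall per line.
    pat = re.compile(r"(\d{4}-\d{2}-\d{2}T\S+)\s+\S+\s+COMMAND\s+\[(\w+)\] command (\S+).*?protocol:op_msg (\d+)ms")

    with open("mongod.log") as f:  # illustrative path
        for line in f:
            for ts, conn, ns, ms in pat.findall(line):
                if int(ms) >= 1000:  # surfaces the 30027ms / 30041ms finds above
                    print(ts, conn, ns, ms + "ms")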
2019-09-04T06:34:02.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:02.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:02.160+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:02.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:02.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:02.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:02.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:02.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:02.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:02.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 11C87E8F30245402D173BD1186A780E9063ACC97), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:02.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:34:02.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 11C87E8F30245402D173BD1186A780E9063ACC97), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:02.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 11C87E8F30245402D173BD1186A780E9063ACC97), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:02.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), opTime: { ts: Timestamp(1567578840, 3), t: 1 }, wallTime: new Date(1567578840363) }
2019-09-04T06:34:02.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 11C87E8F30245402D173BD1186A780E9063ACC97), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:02.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:02.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:02.260+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:02.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:02.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:02.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:02.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:02.360+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:02.371+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:02.371+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:02.371+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578840, 3)
2019-09-04T06:34:02.371+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18869
2019-09-04T06:34:02.371+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:02.372+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:02.372+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18869
2019-09-04T06:34:02.373+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18872
2019-09-04T06:34:02.373+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18872
2019-09-04T06:34:02.373+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578840, 3), t: 1 }({ ts: Timestamp(1567578840, 3), t: 1 })
2019-09-04T06:34:02.460+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:02.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:02.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:02.480+0000 I COMMAND [conn399] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578811, 1), signature: { hash: BinData(0, E5356C76D2F90A760D65BDFF11E1DF1886F143E2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:34:02.480+0000 D1 - [conn399] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:34:02.480+0000 W - [conn399] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:02.497+0000 I - [conn399] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:02.497+0000 D1 COMMAND [conn399] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578811, 1), signature: { hash: BinData(0, E5356C76D2F90A760D65BDFF11E1DF1886F143E2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:02.497+0000 D1 - [conn399] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:02.497+0000 W - [conn399] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:02.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:02.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:02.517+0000 I - [conn399] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functi
onIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : 
"084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:02.517+0000 W COMMAND [conn399] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:02.517+0000 I COMMAND [conn399] command config.$cmd command: find { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578811, 1), signature: { hash: BinData(0, E5356C76D2F90A760D65BDFF11E1DF1886F143E2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:34:02.517+0000 D2 NETWORK [conn399] Session from 10.108.2.59:48470 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:02.517+0000 I NETWORK [conn399] end connection 10.108.2.59:48470 (85 connections now open) 2019-09-04T06:34:02.560+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:02.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:02.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:02.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:02.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:02.650+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:02.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:02.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:02.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:02.660+0000 D4 STORAGE [WTJournalFlusher] flushed 
journal 2019-09-04T06:34:02.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:02.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:02.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:02.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:02.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:02.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:02.760+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:02.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:02.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:02.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:02.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:02.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:02.838+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1277) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:02.838+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1277 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:12.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:02.838+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:30.839+0000 2019-09-04T06:34:02.838+0000 D2 ASIO [Replication] Request 1277 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), opTime: { ts: Timestamp(1567578840, 3), t: 1 }, wallTime: new Date(1567578840363), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578840, 3), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } 2019-09-04T06:34:02.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), opTime: { ts: Timestamp(1567578840, 3), t: 1 }, 
wallTime: new Date(1567578840363), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578840, 3), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:02.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:02.838+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1277) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), opTime: { ts: Timestamp(1567578840, 3), t: 1 }, wallTime: new Date(1567578840363), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578840, 3), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } 2019-09-04T06:34:02.838+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:02.838+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:04.838Z 2019-09-04T06:34:02.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:30.839+0000 2019-09-04T06:34:02.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:02.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1278) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:02.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1278 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:12.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:02.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:30.839+0000 2019-09-04T06:34:02.839+0000 D2 ASIO [Replication] Request 1278 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), opTime: { ts: Timestamp(1567578840, 3), t: 1 }, wallTime: new Date(1567578840363), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, 
replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578840, 3), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } 2019-09-04T06:34:02.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), opTime: { ts: Timestamp(1567578840, 3), t: 1 }, wallTime: new Date(1567578840363), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578840, 3), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:02.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:02.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1278) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), opTime: { ts: Timestamp(1567578840, 3), t: 1 }, wallTime: new Date(1567578840363), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578840, 3), $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578840, 3) } 2019-09-04T06:34:02.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:02.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:34:12.300+0000 2019-09-04T06:34:02.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:34:14.248+0000 2019-09-04T06:34:02.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:02.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:04.839Z 2019-09-04T06:34:02.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:02.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000 2019-09-04T06:34:02.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 
2019-09-04T06:34:32.839+0000 2019-09-04T06:34:02.861+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:02.961+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:02.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:02.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:02.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:02.999+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:03.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:03.061+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:03.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 11C87E8F30245402D173BD1186A780E9063ACC97), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:03.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:03.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 11C87E8F30245402D173BD1186A780E9063ACC97), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:03.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 11C87E8F30245402D173BD1186A780E9063ACC97), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:03.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), opTime: { ts: Timestamp(1567578840, 3), t: 1 }, wallTime: new Date(1567578840363) } 2019-09-04T06:34:03.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:03.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578840, 3), signature: { hash: BinData(0, 11C87E8F30245402D173BD1186A780E9063ACC97), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:03.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:03.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:03.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:34:03.151+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:03.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:03.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:03.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:03.161+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:03.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:03.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:03.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:03.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:03.234+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:03.234+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:03.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:03.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:03.261+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:03.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:03.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:03.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:03.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:03.361+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:03.372+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:03.372+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:03.372+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578840, 3)
2019-09-04T06:34:03.372+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18900
2019-09-04T06:34:03.372+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:03.372+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:03.372+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18900
2019-09-04T06:34:03.373+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18903
2019-09-04T06:34:03.373+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18903
2019-09-04T06:34:03.373+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578840, 3), t: 1 }({ ts: Timestamp(1567578840, 3), t: 1 })
2019-09-04T06:34:03.461+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:03.474+0000 I NETWORK [listener] connection accepted from 10.108.2.63:36422 #427 (86 connections now open)
2019-09-04T06:34:03.474+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:34:03.474+0000 D2 COMMAND [conn427] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:34:03.474+0000 I NETWORK [conn427] received client metadata from 10.108.2.63:36422 conn427: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:34:03.474+0000 I COMMAND [conn427] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:34:03.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:03.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:03.493+0000 I COMMAND [conn400] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578812, 1), signature: { hash: BinData(0, 4B378BB31368CDD862D6FBF154A78A3408447D9E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:34:03.493+0000 D1 - [conn400] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:34:03.493+0000 W - [conn400] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:03.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:03.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:03.510+0000 I - [conn400] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"5617
48F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:03.510+0000 D1 COMMAND [conn400] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578812, 1), signature: { hash: BinData(0, 4B378BB31368CDD862D6FBF154A78A3408447D9E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:03.510+0000 D1 - [conn400] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:03.510+0000 W - [conn400] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:03.519+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:34:03.519+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:34:03.519+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:03.519+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:34:03.530+0000 I - [conn400] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:03.530+0000 W COMMAND [conn400] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:03.530+0000 I COMMAND [conn400] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578812, 1), signature: { hash: BinData(0, 4B378BB31368CDD862D6FBF154A78A3408447D9E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:34:03.530+0000 D2 NETWORK [conn400] Session from 10.108.2.63:36406 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:03.530+0000 I NETWORK [conn400] end connection 10.108.2.63:36406 (85 connections now open) 2019-09-04T06:34:03.553+0000 D2 ASIO [RS] Request 1274 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578843, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578843551), o: { $v: 1, $set: { ping: new Date(1567578843546) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpApplied: { ts: Timestamp(1567578843, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578840, 3), $clusterTime: { clusterTime: Timestamp(1567578843, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578843, 1) } 2019-09-04T06:34:03.553+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578843, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578843551), o: { $v: 1, $set: { ping: new Date(1567578843546) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpApplied: { ts: Timestamp(1567578843, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578840, 3), $clusterTime: { clusterTime: Timestamp(1567578843, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578843, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:03.553+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:03.553+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578843, 1) 
and ending at ts: Timestamp(1567578843, 1) 2019-09-04T06:34:03.553+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:14.248+0000 2019-09-04T06:34:03.553+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:13.608+0000 2019-09-04T06:34:03.553+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:03.553+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000 2019-09-04T06:34:03.553+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578843, 1), t: 1 } 2019-09-04T06:34:03.553+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:03.553+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:03.553+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578840, 3) 2019-09-04T06:34:03.553+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18911 2019-09-04T06:34:03.553+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:03.553+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:03.553+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18911 2019-09-04T06:34:03.553+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:03.553+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:03.553+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578840, 3) 2019-09-04T06:34:03.553+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:34:03.553+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18914 2019-09-04T06:34:03.553+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:03.553+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:03.553+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18914 2019-09-04T06:34:03.553+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578843, 1) } 2019-09-04T06:34:03.553+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18904 2019-09-04T06:34:03.553+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18904 2019-09-04T06:34:03.553+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18917 2019-09-04T06:34:03.553+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18917 2019-09-04T06:34:03.553+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer 
worker Pool 2019-09-04T06:34:03.553+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 18919 2019-09-04T06:34:03.553+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578843, 1) 2019-09-04T06:34:03.553+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578843, 1) 2019-09-04T06:34:03.553+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 18919 2019-09-04T06:34:03.553+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:03.553+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:03.553+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18918 2019-09-04T06:34:03.553+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18918 2019-09-04T06:34:03.553+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18921 2019-09-04T06:34:03.553+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18921 2019-09-04T06:34:03.553+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578843, 1), t: 1 }({ ts: Timestamp(1567578843, 1), t: 1 }) 2019-09-04T06:34:03.553+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578843, 1) 2019-09-04T06:34:03.553+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18922 2019-09-04T06:34:03.553+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578843, 1) } } ] } sort: {} projection: {} 2019-09-04T06:34:03.553+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:03.553+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:34:03.553+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578843, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:34:03.553+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:03.553+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578843, 1) || First: notFirst: full path: ts 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578843, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578843, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578843, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578843, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:03.554+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18922 2019-09-04T06:34:03.554+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:03.554+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:03.554+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578843, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578843551), o: { $v: 1, $set: { ping: new Date(1567578843546) } } }, oplog application mode: Secondary 2019-09-04T06:34:03.554+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578843, 1) 2019-09-04T06:34:03.554+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 18924 2019-09-04T06:34:03.554+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" } 2019-09-04T06:34:03.554+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:03.554+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 18924 2019-09-04T06:34:03.554+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:03.554+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578843, 1), t: 1 }({ ts: Timestamp(1567578843, 1), t: 1 }) 2019-09-04T06:34:03.554+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578843, 1) 2019-09-04T06:34:03.554+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18923 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:03.554+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:03.554+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:03.554+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18923 2019-09-04T06:34:03.554+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578843, 1) 2019-09-04T06:34:03.554+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18928 2019-09-04T06:34:03.554+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18928 2019-09-04T06:34:03.554+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578843, 1), t: 1 }({ ts: Timestamp(1567578843, 1), t: 1 }) 2019-09-04T06:34:03.554+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:03.554+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578843, 1), t: 1 }, appliedWallTime: new Date(1567578843551), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:03.554+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1279 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:33.554+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578843, 1), t: 1 }, appliedWallTime: new Date(1567578843551), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578840, 3), t: 1 }, lastCommittedWall: new Date(1567578840363), lastOpVisible: { ts: Timestamp(1567578840, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:03.554+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:33.554+0000 2019-09-04T06:34:03.554+0000 D2 ASIO [RS] Request 1279 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new Date(1567578843551), lastOpVisible: { ts: Timestamp(1567578843, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578843, 1), $clusterTime: { clusterTime: Timestamp(1567578843, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578843, 1) } 2019-09-04T06:34:03.555+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new Date(1567578843551), lastOpVisible: { ts: Timestamp(1567578843, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578843, 1), $clusterTime: { clusterTime: Timestamp(1567578843, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578843, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:03.555+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:03.555+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:33.555+0000 2019-09-04T06:34:03.555+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578843, 1), t: 1 } 2019-09-04T06:34:03.555+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1280 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:13.555+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578840, 3), t: 1 } } 2019-09-04T06:34:03.555+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:33.555+0000 2019-09-04T06:34:03.555+0000 D2 ASIO [RS] Request 1280 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new Date(1567578843551), lastOpVisible: { ts: Timestamp(1567578843, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new Date(1567578843551), lastOpApplied: { ts: Timestamp(1567578843, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578843, 1), $clusterTime: { clusterTime: Timestamp(1567578843, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578843, 1) } 2019-09-04T06:34:03.555+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new Date(1567578843551), lastOpVisible: { ts: Timestamp(1567578843, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new 
Date(1567578843551), lastOpApplied: { ts: Timestamp(1567578843, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578843, 1), $clusterTime: { clusterTime: Timestamp(1567578843, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578843, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:03.555+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:03.555+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:03.555+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.555+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.555+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578838, 1) 2019-09-04T06:34:03.555+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:13.608+0000 2019-09-04T06:34:03.555+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:13.706+0000 2019-09-04T06:34:03.555+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.555+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000 2019-09-04T06:34:03.555+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1281 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:13.555+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578843, 1), t: 1 } } 2019-09-04T06:34:03.555+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.555+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000 2019-09-04T06:34:03.555+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:33.555+0000 2019-09-04T06:34:03.555+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.555+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000 2019-09-04T06:34:03.555+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.555+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000 2019-09-04T06:34:03.555+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.555+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn414] 
Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000 2019-09-04T06:34:03.556+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:03.556+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.556+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578843, 1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578843, 
1), t: 1 }, 2019-09-04T06:34:03.551+0000 2019-09-04T06:34:03.556+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:03.560+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:03.560+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:03.560+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578843, 1), t: 1 }, durableWallTime: new Date(1567578843551), appliedOpTime: { ts: Timestamp(1567578843, 1), t: 1 }, appliedWallTime: new Date(1567578843551), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new Date(1567578843551), lastOpVisible: { ts: Timestamp(1567578843, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:03.560+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1282 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:33.560+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578843, 1), t: 1 }, durableWallTime: new Date(1567578843551), appliedOpTime: { ts: Timestamp(1567578843, 1), t: 1 }, appliedWallTime: new Date(1567578843551), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new Date(1567578843551), lastOpVisible: { ts: Timestamp(1567578843, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:03.560+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:33.555+0000 2019-09-04T06:34:03.560+0000 D2 ASIO [RS] Request 1282 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new Date(1567578843551), lastOpVisible: { ts: Timestamp(1567578843, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578843, 1), $clusterTime: { clusterTime: Timestamp(1567578843, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578843, 1) } 2019-09-04T06:34:03.560+0000 
D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new Date(1567578843551), lastOpVisible: { ts: Timestamp(1567578843, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578843, 1), $clusterTime: { clusterTime: Timestamp(1567578843, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578843, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:03.560+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:03.560+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:33.555+0000 2019-09-04T06:34:03.561+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:03.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:03.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:03.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:03.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:03.651+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:03.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:03.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:03.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:03.653+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578843, 1) 2019-09-04T06:34:03.661+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:03.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:03.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:03.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:03.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:03.734+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:03.734+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:03.762+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:03.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:03.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:03.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 
1, $db: "admin" } 2019-09-04T06:34:03.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:03.862+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:03.962+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:03.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:03.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:03.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:03.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:04.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:04.062+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:04.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:04.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:04.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:04.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:04.150+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:04.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:04.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:04.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:04.162+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:04.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:04.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:04.171+0000 D2 COMMAND [conn71] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:04.171+0000 I COMMAND [conn71] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:04.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:04.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:04.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578843, 1), signature: { hash: BinData(0, D0CFC0550714F9210BCD0B22CB2CDE5E15218FC7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:04.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:04.234+0000 D2 REPL_HB [conn28] Received heartbeat request from 
cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578843, 1), signature: { hash: BinData(0, D0CFC0550714F9210BCD0B22CB2CDE5E15218FC7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:04.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578843, 1), signature: { hash: BinData(0, D0CFC0550714F9210BCD0B22CB2CDE5E15218FC7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:04.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578843, 1), t: 1 }, durableWallTime: new Date(1567578843551), opTime: { ts: Timestamp(1567578843, 1), t: 1 }, wallTime: new Date(1567578843551) } 2019-09-04T06:34:04.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578843, 1), signature: { hash: BinData(0, D0CFC0550714F9210BCD0B22CB2CDE5E15218FC7), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:04.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:04.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:04.262+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:04.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:04.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:04.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:04.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:04.362+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:04.462+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:04.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:04.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:04.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:04.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:04.501+0000 D2 ASIO [RS] Request 1281 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578844, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578844498), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5adcac9313827bca5ef0'), when: new Date(1567578844498) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { 
term: 1, lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new Date(1567578843551), lastOpVisible: { ts: Timestamp(1567578843, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new Date(1567578843551), lastOpApplied: { ts: Timestamp(1567578844, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578843, 1), $clusterTime: { clusterTime: Timestamp(1567578844, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 1) }
2019-09-04T06:34:04.501+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578844, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578844498), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5adcac9313827bca5ef0'), when: new Date(1567578844498) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new Date(1567578843551), lastOpVisible: { ts: Timestamp(1567578843, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new Date(1567578843551), lastOpApplied: { ts: Timestamp(1567578844, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578843, 1), $clusterTime: { clusterTime: Timestamp(1567578844, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:04.501+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:04.501+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578844, 1) and ending at ts: Timestamp(1567578844, 1)
2019-09-04T06:34:04.501+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:13.706+0000
2019-09-04T06:34:04.501+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:15.035+0000
2019-09-04T06:34:04.501+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:04.501+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000
2019-09-04T06:34:04.501+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578844, 1), t: 1 }
2019-09-04T06:34:04.501+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:04.501+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:04.501+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578843, 1)
2019-09-04T06:34:04.501+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18957
2019-09-04T06:34:04.501+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:04.501+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:04.501+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18957
2019-09-04T06:34:04.501+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:04.501+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:34:04.501+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:04.501+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578843, 1)
2019-09-04T06:34:04.501+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578844, 1) }
2019-09-04T06:34:04.501+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18960
2019-09-04T06:34:04.501+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:04.501+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:04.501+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18960
2019-09-04T06:34:04.501+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18929
2019-09-04T06:34:04.501+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18929
2019-09-04T06:34:04.501+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18963
2019-09-04T06:34:04.501+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18963
2019-09-04T06:34:04.501+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:04.501+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 18965
2019-09-04T06:34:04.501+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578844, 1)
2019-09-04T06:34:04.501+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578844, 1)
2019-09-04T06:34:04.501+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 18965
2019-09-04T06:34:04.501+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:04.501+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:34:04.501+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18964
2019-09-04T06:34:04.501+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18964
2019-09-04T06:34:04.501+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18967
2019-09-04T06:34:04.501+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18967
2019-09-04T06:34:04.501+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578844, 1), t: 1 }({ ts: Timestamp(1567578844, 1), t: 1 })
2019-09-04T06:34:04.501+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578844, 1)
2019-09-04T06:34:04.501+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18968
2019-09-04T06:34:04.501+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578844, 1) } } ] } sort: {} projection: {}
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578844, 1) Sort: {} Proj: {} =============================
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578844, 1) || First: notFirst: full path: ts
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578844, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578844, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578844, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:04.501+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:04.502+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578844, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:04.502+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18968
2019-09-04T06:34:04.502+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:04.502+0000 D3 STORAGE [repl-writer-worker-1] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:04.502+0000 D3 REPL [repl-writer-worker-1] applying op: { ts: Timestamp(1567578844, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578844498), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5adcac9313827bca5ef0'), when: new Date(1567578844498) } } }, oplog application mode: Secondary
2019-09-04T06:34:04.502+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578844, 1)
2019-09-04T06:34:04.502+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 18970
2019-09-04T06:34:04.502+0000 D2 QUERY [repl-writer-worker-1] Using idhack: { _id: "config" }
2019-09-04T06:34:04.502+0000 D2 STORAGE [repl-writer-worker-1] WiredTigerSizeStorer::store Marking table:config/collection/42--6194257481163143499 dirty, numRecords: 2, dataSize: 306, use_count: 3
2019-09-04T06:34:04.502+0000 D4 WRITE [repl-writer-worker-1] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:34:04.502+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 18970
2019-09-04T06:34:04.502+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:04.502+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578844, 1), t: 1 }({ ts: Timestamp(1567578844, 1), t: 1 })
2019-09-04T06:34:04.502+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578844, 1)
2019-09-04T06:34:04.502+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18969
2019-09-04T06:34:04.502+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:34:04.502+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:04.502+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:34:04.502+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:04.502+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:04.502+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:34:04.502+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18969
2019-09-04T06:34:04.502+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578844, 1)
2019-09-04T06:34:04.502+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18973
2019-09-04T06:34:04.502+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18973
2019-09-04T06:34:04.502+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578844, 1), t: 1 }({ ts: Timestamp(1567578844, 1), t: 1 })
2019-09-04T06:34:04.502+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:04.502+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578843, 1), t: 1 }, durableWallTime: new Date(1567578843551), appliedOpTime: { ts: Timestamp(1567578844, 1), t: 1 }, appliedWallTime: new Date(1567578844498), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new Date(1567578843551), lastOpVisible: { ts: Timestamp(1567578843, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:04.502+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1283 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:34.502+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578843, 1), t: 1 }, durableWallTime: new Date(1567578843551), appliedOpTime: { ts: Timestamp(1567578844, 1), t: 1 }, appliedWallTime: new Date(1567578844498), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new Date(1567578843551), lastOpVisible: { ts: Timestamp(1567578843, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:04.502+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.502+0000
2019-09-04T06:34:04.502+0000 D2 ASIO [RS] Request 1283 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new Date(1567578843551), lastOpVisible: { ts: Timestamp(1567578843, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578843, 1), $clusterTime: { clusterTime: Timestamp(1567578844, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 1) }
2019-09-04T06:34:04.502+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578843, 1), t: 1 }, lastCommittedWall: new Date(1567578843551), lastOpVisible: { ts: Timestamp(1567578843, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578843, 1), $clusterTime: { clusterTime: Timestamp(1567578844, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:04.502+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:04.503+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.502+0000
2019-09-04T06:34:04.503+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578844, 1), t: 1 }
2019-09-04T06:34:04.503+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1284 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:14.503+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578843, 1), t: 1 } }
2019-09-04T06:34:04.503+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.502+0000
2019-09-04T06:34:04.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:04.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:04.592+0000 D2 ASIO [RS] Request 1284 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 1), t: 1 }, lastCommittedWall: new Date(1567578844498), lastOpVisible: { ts: Timestamp(1567578844, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 1), t: 1 }, lastCommittedWall: new Date(1567578844498), lastOpApplied: { ts: Timestamp(1567578844, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 1), $clusterTime: { clusterTime: Timestamp(1567578844, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 1) }
2019-09-04T06:34:04.592+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 1), t: 1 }, lastCommittedWall: new Date(1567578844498), lastOpVisible: { ts: Timestamp(1567578844, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 1), t: 1 }, lastCommittedWall: new Date(1567578844498), lastOpApplied: { ts: Timestamp(1567578844, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 1), $clusterTime: { clusterTime: Timestamp(1567578844, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:04.592+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:04.592+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:34:04.592+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.592+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.592+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578839, 1)
2019-09-04T06:34:04.592+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000
2019-09-04T06:34:04.592+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:34:04.592+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.592+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:04.592+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:15.035+0000
2019-09-04T06:34:04.592+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:15.208+0000
2019-09-04T06:34:04.592+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:04.592+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1285 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:14.592+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578844, 1), t: 1 } }
2019-09-04T06:34:04.593+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.593+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000
2019-09-04T06:34:04.593+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.502+0000
2019-09-04T06:34:04.593+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.593+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000
2019-09-04T06:34:04.592+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000
2019-09-04T06:34:04.593+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.593+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000
2019-09-04T06:34:04.593+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.593+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000
2019-09-04T06:34:04.593+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578844, 1), t: 1 }, 2019-09-04T06:34:04.498+0000
2019-09-04T06:34:04.593+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000
2019-09-04T06:34:04.593+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:04.593+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:04.593+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 1), t: 1 }, durableWallTime: new Date(1567578844498), appliedOpTime: { ts: Timestamp(1567578844, 1), t: 1 }, appliedWallTime: new Date(1567578844498), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 1), t: 1 }, lastCommittedWall: new Date(1567578844498), lastOpVisible: { ts: Timestamp(1567578844, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:04.593+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1286 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:34.593+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 1), t: 1 }, durableWallTime: new Date(1567578844498), appliedOpTime: { ts: Timestamp(1567578844, 1), t: 1 }, appliedWallTime: new Date(1567578844498), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 1), t: 1 }, lastCommittedWall: new Date(1567578844498), lastOpVisible: { ts: Timestamp(1567578844, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:04.593+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.502+0000
2019-09-04T06:34:04.593+0000 D2 ASIO [RS] Request 1286 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 1), t: 1 }, lastCommittedWall: new Date(1567578844498), lastOpVisible: { ts: Timestamp(1567578844, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 1), $clusterTime: { clusterTime: Timestamp(1567578844, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 1) }
2019-09-04T06:34:04.593+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 1), t: 1 }, lastCommittedWall: new Date(1567578844498), lastOpVisible: { ts: Timestamp(1567578844, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 1), $clusterTime: { clusterTime: Timestamp(1567578844, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:04.593+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:04.593+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.502+0000
2019-09-04T06:34:04.600+0000 D2 ASIO [RS] Request 1285 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578844, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578844592), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5adcac9313827bca5efc'), when: new Date(1567578844592) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 1), t: 1 }, lastCommittedWall: new Date(1567578844498), lastOpVisible: { ts: Timestamp(1567578844, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 1), t: 1 }, lastCommittedWall: new Date(1567578844498), lastOpApplied: { ts: Timestamp(1567578844, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 1), $clusterTime: { clusterTime: Timestamp(1567578844, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 2) }
2019-09-04T06:34:04.600+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578844, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578844592), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5adcac9313827bca5efc'), when: new Date(1567578844592) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 1), t: 1 }, lastCommittedWall: new Date(1567578844498), lastOpVisible: { ts: Timestamp(1567578844, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 1), t: 1 }, lastCommittedWall: new Date(1567578844498), lastOpApplied: { ts: Timestamp(1567578844, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 1), $clusterTime: { clusterTime: Timestamp(1567578844, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:04.600+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:04.600+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578844, 2) and ending at ts: Timestamp(1567578844, 2)
2019-09-04T06:34:04.600+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:15.208+0000
2019-09-04T06:34:04.600+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:15.544+0000
2019-09-04T06:34:04.600+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:04.600+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000
2019-09-04T06:34:04.600+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578844, 2), t: 1 }
2019-09-04T06:34:04.600+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:04.600+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:04.600+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 1)
2019-09-04T06:34:04.600+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18978
2019-09-04T06:34:04.600+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:04.600+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:04.600+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18978
2019-09-04T06:34:04.600+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:04.600+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:04.600+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:34:04.600+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 1)
2019-09-04T06:34:04.600+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18981
2019-09-04T06:34:04.600+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578844, 2) }
2019-09-04T06:34:04.600+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:04.600+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:04.600+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18981
2019-09-04T06:34:04.600+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18974
2019-09-04T06:34:04.600+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18974
2019-09-04T06:34:04.600+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18984
2019-09-04T06:34:04.600+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18984
2019-09-04T06:34:04.600+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:04.600+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 18986
2019-09-04T06:34:04.600+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578844, 2)
2019-09-04T06:34:04.600+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578844, 2)
2019-09-04T06:34:04.600+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 18986
2019-09-04T06:34:04.600+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:04.600+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:34:04.600+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18985
2019-09-04T06:34:04.600+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18985
2019-09-04T06:34:04.600+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18988
2019-09-04T06:34:04.600+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18988
2019-09-04T06:34:04.600+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578844, 2), t: 1 }({ ts: Timestamp(1567578844, 2), t: 1 })
2019-09-04T06:34:04.600+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578844, 2)
2019-09-04T06:34:04.600+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18989
2019-09-04T06:34:04.600+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578844, 2) } } ] } sort: {} projection: {}
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578844, 2) Sort: {} Proj: {} =============================
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578844, 2) || First: notFirst: full path: ts
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578844, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578844, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578844, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578844, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:04.601+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18989
2019-09-04T06:34:04.601+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:04.601+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:04.601+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578844, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578844592), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5adcac9313827bca5efc'), when: new Date(1567578844592) } } }, oplog application mode: Secondary
2019-09-04T06:34:04.601+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578844, 2)
2019-09-04T06:34:04.601+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 18991
2019-09-04T06:34:04.601+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "config.system.sessions" }
2019-09-04T06:34:04.601+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:34:04.601+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 18991
2019-09-04T06:34:04.601+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:04.601+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578844, 2), t: 1 }({ ts: Timestamp(1567578844, 2), t: 1 })
2019-09-04T06:34:04.601+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578844, 2)
2019-09-04T06:34:04.601+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18990
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:04.601+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:04.601+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:34:04.601+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18990
2019-09-04T06:34:04.601+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578844, 2)
2019-09-04T06:34:04.601+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18994
2019-09-04T06:34:04.601+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 18994
2019-09-04T06:34:04.601+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578844, 2), t: 1 }({ ts: Timestamp(1567578844, 2), t: 1 })
2019-09-04T06:34:04.601+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:04.601+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 1), t: 1 }, durableWallTime: new Date(1567578844498), appliedOpTime: { ts: Timestamp(1567578844, 2), t: 1 }, appliedWallTime: new Date(1567578844592), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 1), t: 1 }, lastCommittedWall: new Date(1567578844498), lastOpVisible: { ts: Timestamp(1567578844, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:04.601+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1287 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:34.601+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 1), t: 1 }, durableWallTime: new Date(1567578844498), appliedOpTime: { ts: Timestamp(1567578844, 2), t: 1 }, appliedWallTime: new Date(1567578844592), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 1), t: 1 }, lastCommittedWall: new Date(1567578844498), lastOpVisible: { ts: Timestamp(1567578844, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:04.601+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.601+0000
2019-09-04T06:34:04.601+0000 D2 ASIO [RS] Request 1287 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 2), t: 1 }, lastCommittedWall: new Date(1567578844592), lastOpVisible: { ts: Timestamp(1567578844, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 2), $clusterTime: { clusterTime: Timestamp(1567578844, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 2) }
2019-09-04T06:34:04.601+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 2), t: 1 }, lastCommittedWall: new Date(1567578844592), lastOpVisible: { ts: Timestamp(1567578844, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 2), $clusterTime: { clusterTime: Timestamp(1567578844, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:04.602+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:04.602+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.602+0000
2019-09-04T06:34:04.602+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578844, 2), t: 1 }
2019-09-04T06:34:04.602+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1288 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:14.602+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578844, 1), t: 1 } }
2019-09-04T06:34:04.602+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.602+0000
2019-09-04T06:34:04.602+0000 D2 ASIO [RS] Request 1288 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 2), t: 1 }, lastCommittedWall: new Date(1567578844592), lastOpVisible: { ts: Timestamp(1567578844, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 2), t: 1 }, lastCommittedWall: new Date(1567578844592), lastOpApplied: { ts: Timestamp(1567578844, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 2), $clusterTime: { clusterTime: Timestamp(1567578844, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 2) }
2019-09-04T06:34:04.602+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 2), t: 1 }, lastCommittedWall: new Date(1567578844592), lastOpVisible: { ts: Timestamp(1567578844, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 2), t: 1 }, lastCommittedWall: new Date(1567578844592), lastOpApplied: { ts: Timestamp(1567578844, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 2), $clusterTime: { clusterTime: Timestamp(1567578844, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:04.602+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:04.602+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:34:04.602+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.602+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.602+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578839, 2)
2019-09-04T06:34:04.602+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:15.544+0000
2019-09-04T06:34:04.602+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:15.001+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.602+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1289 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:14.602+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578844, 2), t: 1 } }
2019-09-04T06:34:04.602+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000
2019-09-04T06:34:04.602+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.602+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000
2019-09-04T06:34:04.602+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:04.602+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.602+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578844, 2), t: 1 }, 2019-09-04T06:34:04.592+0000
2019-09-04T06:34:04.603+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000
2019-09-04T06:34:04.611+0000 D2 ASIO [RS] Request 1289 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578844, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578844602), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 2), t: 1 }, lastCommittedWall: new Date(1567578844592), lastOpVisible: { ts: Timestamp(1567578844, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 2), t: 1 }, lastCommittedWall: new Date(1567578844592), lastOpApplied: { ts: Timestamp(1567578844, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 2), $clusterTime: { clusterTime: Timestamp(1567578844, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 3) }
2019-09-04T06:34:04.611+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578844, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578844602), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 2), t: 1 }, lastCommittedWall: new Date(1567578844592), lastOpVisible: { ts: Timestamp(1567578844, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 2), t: 1 }, lastCommittedWall: new Date(1567578844592), lastOpApplied: { ts: Timestamp(1567578844, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 2), $clusterTime: { clusterTime: Timestamp(1567578844, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:04.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:04.611+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:04.612+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578844, 3) and ending at ts: Timestamp(1567578844, 3)
2019-09-04T06:34:04.612+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:15.001+0000
2019-09-04T06:34:04.612+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:15.358+0000
2019-09-04T06:34:04.612+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578844, 3), t: 1 }
2019-09-04T06:34:04.612+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:04.612+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000
2019-09-04T06:34:04.612+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:04.612+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:04.612+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 2)
2019-09-04T06:34:04.612+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 18999
2019-09-04T06:34:04.612+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:04.612+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:04.612+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 18999
2019-09-04T06:34:04.612+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:04.612+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:34:04.612+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:04.612+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578844, 3) }
2019-09-04T06:34:04.612+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 2)
2019-09-04T06:34:04.612+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19002
2019-09-04T06:34:04.612+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:04.612+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:04.612+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19002
2019-09-04T06:34:04.612+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 18995
2019-09-04T06:34:04.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:04.612+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 18995
2019-09-04T06:34:04.612+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19005
2019-09-04T06:34:04.612+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19005
2019-09-04T06:34:04.612+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:04.612+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 19007
2019-09-04T06:34:04.612+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578844, 3)
2019-09-04T06:34:04.612+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of
future write operations to Timestamp(1567578844, 3) 2019-09-04T06:34:04.612+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 19007 2019-09-04T06:34:04.612+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:04.612+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:04.612+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19006 2019-09-04T06:34:04.612+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19006 2019-09-04T06:34:04.612+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19009 2019-09-04T06:34:04.612+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19009 2019-09-04T06:34:04.612+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578844, 3), t: 1 }({ ts: Timestamp(1567578844, 3), t: 1 }) 2019-09-04T06:34:04.612+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578844, 3) 2019-09-04T06:34:04.612+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19010 2019-09-04T06:34:04.612+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578844, 3) } } ] } sort: {} projection: {} 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578844, 3) Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578844, 3) || First: notFirst: full path: ts 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578844, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578844, 3) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578844, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:34:04.612+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578844, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:04.612+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19010
2019-09-04T06:34:04.612+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:04.612+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:04.612+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578844, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578844602), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary
2019-09-04T06:34:04.612+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578844, 3)
2019-09-04T06:34:04.612+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 19012
2019-09-04T06:34:04.612+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "config.system.sessions" }
2019-09-04T06:34:04.613+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:34:04.613+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 19012
2019-09-04T06:34:04.613+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:04.613+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578844, 3), t: 1 }({ ts: Timestamp(1567578844, 3), t: 1 })
2019-09-04T06:34:04.613+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578844, 3)
2019-09-04T06:34:04.613+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19011
2019-09-04T06:34:04.613+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:34:04.613+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:04.613+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:34:04.613+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:04.613+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:04.613+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:34:04.613+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19011
2019-09-04T06:34:04.613+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578844, 3)
2019-09-04T06:34:04.613+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19015
2019-09-04T06:34:04.613+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:04.613+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19015
2019-09-04T06:34:04.613+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578844, 3), t: 1 }({ ts: Timestamp(1567578844, 3), t: 1 })
2019-09-04T06:34:04.613+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 1), t: 1 }, durableWallTime: new Date(1567578844498), appliedOpTime: { ts: Timestamp(1567578844, 3), t: 1 }, appliedWallTime: new Date(1567578844602), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 2), t: 1 }, lastCommittedWall: new Date(1567578844592), lastOpVisible: { ts: Timestamp(1567578844, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:04.613+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1290 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:34.613+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 1), t: 1 }, durableWallTime: new Date(1567578844498), appliedOpTime: { ts: Timestamp(1567578844, 3), t: 1 }, appliedWallTime: new Date(1567578844602), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 2), t: 1 }, lastCommittedWall: new Date(1567578844592), lastOpVisible: { ts: Timestamp(1567578844, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:04.613+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.613+0000
2019-09-04T06:34:04.613+0000 D2 ASIO [RS] Request 1290 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 2), t: 1 }, lastCommittedWall: new Date(1567578844592), lastOpVisible: { ts: Timestamp(1567578844, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 2), $clusterTime: { clusterTime: Timestamp(1567578844, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 3) }
2019-09-04T06:34:04.613+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 2), t: 1 }, lastCommittedWall: new Date(1567578844592), lastOpVisible: { ts: Timestamp(1567578844, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 2), $clusterTime: { clusterTime: Timestamp(1567578844, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:04.613+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:04.613+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.613+0000
2019-09-04T06:34:04.614+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578844, 3), t: 1 }
2019-09-04T06:34:04.614+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1291 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:14.614+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578844, 2), t: 1 } }
2019-09-04T06:34:04.614+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.613+0000
2019-09-04T06:34:04.614+0000 D2 ASIO [RS] Request 1291 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 3), t: 1 }, lastCommittedWall: new Date(1567578844602), lastOpVisible: { ts: Timestamp(1567578844, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 3), t: 1 }, lastCommittedWall: new Date(1567578844602), lastOpApplied: { ts: Timestamp(1567578844, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 3), $clusterTime: { clusterTime: Timestamp(1567578844, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 3) }
2019-09-04T06:34:04.614+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 3), t: 1 }, lastCommittedWall: new Date(1567578844602), lastOpVisible: { ts: Timestamp(1567578844, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 3), t: 1 }, lastCommittedWall: new Date(1567578844602), lastOpApplied: { ts: Timestamp(1567578844, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 3), $clusterTime: { clusterTime: Timestamp(1567578844, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:04.614+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:04.614+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:34:04.614+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.614+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.614+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578839, 3)
2019-09-04T06:34:04.614+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:15.358+0000
2019-09-04T06:34:04.614+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:14.825+0000
2019-09-04T06:34:04.614+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:04.614+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1292 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:14.614+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578844, 3), t: 1 } }
2019-09-04T06:34:04.614+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000
2019-09-04T06:34:04.614+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.613+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.614+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578844, 3), t: 1 }, 2019-09-04T06:34:04.602+0000
2019-09-04T06:34:04.615+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000
2019-09-04T06:34:04.615+0000 D2 ASIO [RS] Request 1292 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578844, 4), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578844614), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 3), t: 1 }, lastCommittedWall: new Date(1567578844602), lastOpVisible: { ts: Timestamp(1567578844, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 3), t: 1 }, lastCommittedWall: new Date(1567578844602), lastOpApplied: { ts: Timestamp(1567578844, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 3), $clusterTime: { clusterTime: Timestamp(1567578844, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 4) }
2019-09-04T06:34:04.616+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578844, 4), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578844614), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 3), t: 1 }, lastCommittedWall: new Date(1567578844602), lastOpVisible: { ts: Timestamp(1567578844, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 3), t: 1 }, lastCommittedWall: new Date(1567578844602), lastOpApplied: { ts: Timestamp(1567578844, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 3), $clusterTime: { clusterTime: Timestamp(1567578844, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 4) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:04.616+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:04.616+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578844, 4) and ending at ts: Timestamp(1567578844, 4)
2019-09-04T06:34:04.616+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:14.825+0000
2019-09-04T06:34:04.616+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:14.966+0000
2019-09-04T06:34:04.616+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578844, 4), t: 1 }
2019-09-04T06:34:04.616+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:04.616+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000
2019-09-04T06:34:04.616+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:04.616+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:04.616+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 3)
2019-09-04T06:34:04.616+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19018
2019-09-04T06:34:04.616+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:04.616+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:04.616+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19018
2019-09-04T06:34:04.616+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:04.616+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:04.616+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:34:04.616+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 3)
2019-09-04T06:34:04.616+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19021
2019-09-04T06:34:04.616+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578844, 4) }
2019-09-04T06:34:04.616+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:04.616+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:04.616+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19021
2019-09-04T06:34:04.616+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19016
2019-09-04T06:34:04.616+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19016
2019-09-04T06:34:04.616+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19024
2019-09-04T06:34:04.616+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19024
2019-09-04T06:34:04.616+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:04.616+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 19026
2019-09-04T06:34:04.616+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578844, 4)
2019-09-04T06:34:04.616+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578844, 4)
2019-09-04T06:34:04.616+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 19026
2019-09-04T06:34:04.616+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:04.616+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:34:04.616+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19025
2019-09-04T06:34:04.616+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19025
2019-09-04T06:34:04.616+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19028
2019-09-04T06:34:04.616+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19028
2019-09-04T06:34:04.616+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578844, 4), t: 1 }({ ts: Timestamp(1567578844, 4), t: 1 })
2019-09-04T06:34:04.616+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578844, 4)
2019-09-04T06:34:04.616+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19029
2019-09-04T06:34:04.616+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578844, 4) } } ] } sort: {} projection: {}
2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578844, 4) Sort: {} Proj: {} =============================
2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578844, 4) || First: notFirst: full path: ts
2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578844, 4) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578844, 4) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578844, 4) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:04.616+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578844, 4) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.616+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19029 2019-09-04T06:34:04.616+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:04.616+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:04.616+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578844, 4), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578844614), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary 2019-09-04T06:34:04.616+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578844, 4) 2019-09-04T06:34:04.616+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 19031 2019-09-04T06:34:04.616+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "config" } 2019-09-04T06:34:04.616+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:04.616+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 19031 2019-09-04T06:34:04.616+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:04.616+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578844, 4), t: 1 }({ ts: Timestamp(1567578844, 4), t: 1 }) 2019-09-04T06:34:04.617+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578844, 4) 2019-09-04T06:34:04.617+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19030 2019-09-04T06:34:04.617+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.617+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.617+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:04.617+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:04.617+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.617+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:04.617+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19030 2019-09-04T06:34:04.617+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578844, 4) 2019-09-04T06:34:04.617+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19034 2019-09-04T06:34:04.617+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19034 2019-09-04T06:34:04.617+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578844, 4), t: 1 }({ ts: Timestamp(1567578844, 4), t: 1 }) 2019-09-04T06:34:04.617+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:04.617+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 1), t: 1 }, durableWallTime: new Date(1567578844498), appliedOpTime: { ts: Timestamp(1567578844, 4), t: 1 }, appliedWallTime: new Date(1567578844614), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 3), t: 1 }, lastCommittedWall: new Date(1567578844602), lastOpVisible: { ts: Timestamp(1567578844, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:04.617+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1293 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:34.617+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 1), t: 1 }, durableWallTime: new Date(1567578844498), appliedOpTime: { ts: Timestamp(1567578844, 4), t: 1 }, appliedWallTime: new Date(1567578844614), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } 
], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 3), t: 1 }, lastCommittedWall: new Date(1567578844602), lastOpVisible: { ts: Timestamp(1567578844, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:04.617+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.617+0000 2019-09-04T06:34:04.617+0000 D2 ASIO [RS] Request 1293 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 3), t: 1 }, lastCommittedWall: new Date(1567578844602), lastOpVisible: { ts: Timestamp(1567578844, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 3), $clusterTime: { clusterTime: Timestamp(1567578844, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 4) } 2019-09-04T06:34:04.617+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 3), t: 1 }, lastCommittedWall: new Date(1567578844602), lastOpVisible: { ts: Timestamp(1567578844, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 3), $clusterTime: { clusterTime: Timestamp(1567578844, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 4) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:04.617+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:04.617+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.617+0000 2019-09-04T06:34:04.618+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578844, 4), t: 1 } 2019-09-04T06:34:04.618+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1294 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:14.618+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578844, 3), t: 1 } } 2019-09-04T06:34:04.618+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.617+0000 2019-09-04T06:34:04.618+0000 D2 ASIO [RS] Request 1294 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 4), t: 1 }, lastCommittedWall: new Date(1567578844614), lastOpVisible: { ts: Timestamp(1567578844, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 4), t: 1 }, lastCommittedWall: new Date(1567578844614), lastOpApplied: { ts: Timestamp(1567578844, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') 
}, lastCommittedOpTime: Timestamp(1567578844, 4), $clusterTime: { clusterTime: Timestamp(1567578844, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 4) } 2019-09-04T06:34:04.618+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 4), t: 1 }, lastCommittedWall: new Date(1567578844614), lastOpVisible: { ts: Timestamp(1567578844, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 4), t: 1 }, lastCommittedWall: new Date(1567578844614), lastOpApplied: { ts: Timestamp(1567578844, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 4), $clusterTime: { clusterTime: Timestamp(1567578844, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 4) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:04.618+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:04.618+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:04.618+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.618+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.618+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578839, 4) 2019-09-04T06:34:04.618+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:14.966+0000 2019-09-04T06:34:04.618+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:14.895+0000 2019-09-04T06:34:04.618+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:04.618+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000 2019-09-04T06:34:04.618+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1295 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:14.618+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578844, 4), t: 1 } } 2019-09-04T06:34:04.618+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:04.618+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.617+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: 
Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000 2019-09-04T06:34:04.618+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.619+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000 2019-09-04T06:34:04.619+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.619+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000 2019-09-04T06:34:04.619+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: 
Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.619+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:04.619+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.619+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000 2019-09-04T06:34:04.619+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.619+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000 2019-09-04T06:34:04.619+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578844, 4), t: 1 }, 2019-09-04T06:34:04.614+0000 2019-09-04T06:34:04.619+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000 2019-09-04T06:34:04.624+0000 D4 STORAGE [WTOplogJournalThread] flushed journal 2019-09-04T06:34:04.624+0000 D2 ASIO [RS] Request 1295 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578844, 5), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578844619), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5adcac9313827bca5f13'), when: new Date(1567578844618), who: "ConfigServer:conn10279" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 4), t: 1 }, lastCommittedWall: new Date(1567578844614), lastOpVisible: { ts: Timestamp(1567578844, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 4), t: 1 }, lastCommittedWall: new Date(1567578844614), lastOpApplied: { ts: Timestamp(1567578844, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 4), $clusterTime: { clusterTime: Timestamp(1567578844, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 5) } 2019-09-04T06:34:04.624+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578844, 5), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578844619), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5adcac9313827bca5f13'), when: new Date(1567578844618), who: "ConfigServer:conn10279" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 4), t: 1 }, lastCommittedWall: new Date(1567578844614), lastOpVisible: { ts: Timestamp(1567578844, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 4), t: 1 }, lastCommittedWall: new Date(1567578844614), lastOpApplied: { ts: Timestamp(1567578844, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: 
Timestamp(1567578844, 4), $clusterTime: { clusterTime: Timestamp(1567578844, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 5) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:04.624+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:04.624+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578844, 5) and ending at ts: Timestamp(1567578844, 5) 2019-09-04T06:34:04.624+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:14.895+0000 2019-09-04T06:34:04.624+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:15.385+0000 2019-09-04T06:34:04.624+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578844, 5), t: 1 } 2019-09-04T06:34:04.624+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:04.624+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000 2019-09-04T06:34:04.624+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:04.624+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:04.624+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 4) 2019-09-04T06:34:04.624+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19037 2019-09-04T06:34:04.624+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:04.624+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:04.624+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19037 2019-09-04T06:34:04.624+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:04.624+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:34:04.624+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:04.624+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578844, 5) } 2019-09-04T06:34:04.624+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 4) 2019-09-04T06:34:04.624+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19040 2019-09-04T06:34:04.624+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:04.624+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:04.624+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19040 
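[Editor's note] The fetcher/batcher entries above show the secondary pulling one oplog document at a time from its sync source and tracking _lastOpTimeFetched. A minimal client-side sketch of the same idea, tailing local.oplog.rs with PyMongo; the connection URI reuses the sync-source hostname from the log and is an assumption, not part of mongod:

    from pymongo import MongoClient, CursorType, DESCENDING

    # Assumption: the sync source named in the log; any replica-set member
    # with a local.oplog.rs would do.
    client = MongoClient("mongodb://cmodb804.togewa.com:27019")
    oplog = client.local["oplog.rs"]

    # Start after the newest entry -- a driver-side analogue of the
    # fetcher's _lastOpTimeFetched bookkeeping above.
    newest = next(oplog.find().sort("$natural", DESCENDING).limit(1))
    cursor = oplog.find({"ts": {"$gt": newest["ts"]}},
                        cursor_type=CursorType.TAILABLE_AWAIT)
    for entry in cursor:
        # Each document carries the fields quoted in the batches above:
        # ts/t, op ("u" for update), ns, and the o / $set payload.
        print(entry["ts"], entry["op"], entry["ns"])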
2019-09-04T06:34:04.624+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19035 2019-09-04T06:34:04.624+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19035 2019-09-04T06:34:04.624+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19043 2019-09-04T06:34:04.624+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19043 2019-09-04T06:34:04.624+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:04.624+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 19045 2019-09-04T06:34:04.624+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578844, 5) 2019-09-04T06:34:04.624+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578844, 5) 2019-09-04T06:34:04.624+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 19045 2019-09-04T06:34:04.624+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:04.624+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:04.624+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19044 2019-09-04T06:34:04.624+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19044 2019-09-04T06:34:04.624+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19047 2019-09-04T06:34:04.624+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19047 2019-09-04T06:34:04.624+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578844, 5), t: 1 }({ ts: Timestamp(1567578844, 5), t: 1 }) 2019-09-04T06:34:04.624+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578844, 5) 2019-09-04T06:34:04.624+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19048 2019-09-04T06:34:04.624+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578844, 5) } } ] } sort: {} projection: {} 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578844, 5) Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578844, 5) || First: notFirst: full path: ts 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
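[Editor's note] The rsSync-0 entries above persist recovery markers in the local database: the oplog truncate-after point is set before the batch is written and reset to Timestamp(0, 0) afterwards, and minvalid records how far replication must reach before the node is consistent. A read-only sketch for inspecting those documents; local.replset.minvalid appears verbatim in the log, while the truncate-point collection name is my assumption for this server version:

    from pymongo import MongoClient

    # Assumption: connecting to the member whose log this is.
    client = MongoClient("mongodb://localhost:27019")
    local = client.local
    print(local["replset.minvalid"].find_one())                # { ts, t } as logged
    print(local["replset.oplogTruncateAfterPoint"].find_one()) # assumed collection name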
2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578844, 5) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578844, 5) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578844, 5) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
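[Editor's note] The planner passes around this point canonicalise the minvalid update's $or into two children, rate each against the only index present (_id), output zero indexed solutions, and fall back to a collection scan. A hedged way to reproduce that decision from a client is explain() on the same filter; illustrative only:

    from bson.timestamp import Timestamp
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")  # assumed member
    # The same $or the subplanner handles above; with only { _id: 1 }
    # available, neither branch can use an index.
    filt = {"$or": [{"t": {"$lt": 1}},
                    {"t": 1, "ts": {"$lt": Timestamp(1567578844, 5)}}]}
    plan = client.local["replset.minvalid"].find(filt).explain()
    print(plan["queryPlanner"]["winningPlan"]["stage"])  # expect "COLLSCAN"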
2019-09-04T06:34:04.624+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578844, 5) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.624+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19048 2019-09-04T06:34:04.625+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:04.625+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:04.625+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578844, 5), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578844619), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5adcac9313827bca5f13'), when: new Date(1567578844618), who: "ConfigServer:conn10279" } } }, oplog application mode: Secondary 2019-09-04T06:34:04.625+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578844, 5) 2019-09-04T06:34:04.625+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 19050 2019-09-04T06:34:04.625+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "config" } 2019-09-04T06:34:04.625+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:04.625+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 19050 2019-09-04T06:34:04.625+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:04.625+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578844, 5), t: 1 }({ ts: Timestamp(1567578844, 5), t: 1 }) 2019-09-04T06:34:04.625+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578844, 5) 2019-09-04T06:34:04.625+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19049 2019-09-04T06:34:04.625+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.625+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.625+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:04.625+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:04.625+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.625+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:04.625+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19049 2019-09-04T06:34:04.625+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578844, 5) 2019-09-04T06:34:04.625+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19053 2019-09-04T06:34:04.625+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19053 2019-09-04T06:34:04.625+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578844, 5), t: 1 }({ ts: Timestamp(1567578844, 5), t: 1 }) 2019-09-04T06:34:04.625+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:04.625+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 1), t: 1 }, durableWallTime: new Date(1567578844498), appliedOpTime: { ts: Timestamp(1567578844, 5), t: 1 }, appliedWallTime: new Date(1567578844619), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 4), t: 1 }, lastCommittedWall: new Date(1567578844614), lastOpVisible: { ts: Timestamp(1567578844, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:04.625+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1296 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:34.625+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 1), t: 1 }, durableWallTime: new Date(1567578844498), appliedOpTime: { ts: Timestamp(1567578844, 5), t: 1 }, appliedWallTime: new Date(1567578844619), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 4), t: 1 }, lastCommittedWall: new Date(1567578844614), lastOpVisible: { ts: Timestamp(1567578844, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:04.625+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.625+0000 2019-09-04T06:34:04.625+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:04.625+0000 D2 ASIO [RS] Request 1296 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 4), t: 1 }, lastCommittedWall: new Date(1567578844614), lastOpVisible: { ts: Timestamp(1567578844, 4), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 4), $clusterTime: { clusterTime: Timestamp(1567578844, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 5) } 2019-09-04T06:34:04.625+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 4), t: 1 }, lastCommittedWall: new Date(1567578844614), lastOpVisible: { ts: Timestamp(1567578844, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 4), $clusterTime: { clusterTime: Timestamp(1567578844, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 5) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:04.625+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:04.625+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 4), t: 1 }, durableWallTime: new Date(1567578844614), appliedOpTime: { ts: Timestamp(1567578844, 5), t: 1 }, appliedWallTime: new Date(1567578844619), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 4), t: 1 }, lastCommittedWall: new Date(1567578844614), lastOpVisible: { ts: Timestamp(1567578844, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:04.625+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1297 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:34.625+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 4), t: 1 }, durableWallTime: new Date(1567578844614), appliedOpTime: { ts: Timestamp(1567578844, 5), t: 1 }, appliedWallTime: new Date(1567578844619), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 4), t: 1 }, lastCommittedWall: new Date(1567578844614), lastOpVisible: { ts: Timestamp(1567578844, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), 
primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:04.626+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.625+0000 2019-09-04T06:34:04.626+0000 D2 ASIO [RS] Request 1297 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 5), $clusterTime: { clusterTime: Timestamp(1567578844, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 5) } 2019-09-04T06:34:04.626+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 5), $clusterTime: { clusterTime: Timestamp(1567578844, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 5) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:04.626+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:04.626+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.626+0000 2019-09-04T06:34:04.626+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578844, 5), t: 1 } 2019-09-04T06:34:04.626+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1298 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:14.626+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578844, 4), t: 1 } } 2019-09-04T06:34:04.626+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.626+0000 2019-09-04T06:34:04.626+0000 D2 ASIO [RS] Request 1298 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpApplied: { ts: Timestamp(1567578844, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 5), $clusterTime: { clusterTime: Timestamp(1567578844, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 5) } 
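[Editor's note] The replSetUpdatePosition round trips above (requests 1296-1298) are how this secondary reports its durable and applied optimes upstream, which in turn advances lastCommittedOpTime and the stable optime. The aggregate view those reports produce is visible to any client via the replSetGetStatus admin command:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb804.togewa.com:27019")  # hostname from the log
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # name/stateStr/optimeDate reflect the same durable/applied optimes
        # carried in the replSetUpdatePosition commands above.
        print(member["name"], member["stateStr"], member.get("optimeDate"))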
2019-09-04T06:34:04.626+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpApplied: { ts: Timestamp(1567578844, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 5), $clusterTime: { clusterTime: Timestamp(1567578844, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 5) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:04.626+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:04.626+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:04.626+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.626+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.626+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578839, 5) 2019-09-04T06:34:04.626+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:15.385+0000 2019-09-04T06:34:04.626+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:15.084+0000 2019-09-04T06:34:04.626+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:04.626+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1299 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:14.626+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578844, 5), t: 1 } } 2019-09-04T06:34:04.626+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.626+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000 2019-09-04T06:34:04.626+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.626+0000 2019-09-04T06:34:04.626+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.626+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000 2019-09-04T06:34:04.626+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000 2019-09-04T06:34:04.626+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.626+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000 2019-09-04T06:34:04.626+0000 D3 REPL [conn412] Got notified 
of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.626+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000 2019-09-04T06:34:04.626+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.626+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000 2019-09-04T06:34:04.626+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.626+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:04.626+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.626+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn404] Got notified of new snapshot: { 
ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578844, 5), t: 1 }, 2019-09-04T06:34:04.619+0000 2019-09-04T06:34:04.627+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000 2019-09-04T06:34:04.627+0000 D2 ASIO [RS] Request 1299 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578844, 6), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578844626), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5adcac9313827bca5f19'), when: new Date(1567578844626), who: "ConfigServer:conn10279" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpApplied: { ts: Timestamp(1567578844, 6), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 5), $clusterTime: { clusterTime: Timestamp(1567578844, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 6) } 2019-09-04T06:34:04.627+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578844, 6), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578844626), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5adcac9313827bca5f19'), when: new Date(1567578844626), who: "ConfigServer:conn10279" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpApplied: { ts: Timestamp(1567578844, 6), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 5), $clusterTime: { clusterTime: Timestamp(1567578844, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 6) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:04.627+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of 
pool replication 2019-09-04T06:34:04.627+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578844, 6) and ending at ts: Timestamp(1567578844, 6) 2019-09-04T06:34:04.627+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:15.084+0000 2019-09-04T06:34:04.627+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:16.101+0000 2019-09-04T06:34:04.627+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578844, 6), t: 1 } 2019-09-04T06:34:04.627+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:04.627+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000 2019-09-04T06:34:04.627+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:04.627+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:04.627+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 5) 2019-09-04T06:34:04.627+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19057 2019-09-04T06:34:04.627+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:04.627+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:04.627+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19057 2019-09-04T06:34:04.628+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:34:04.628+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:04.628+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578844, 6) } 2019-09-04T06:34:04.628+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:04.628+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 5) 2019-09-04T06:34:04.628+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19060 2019-09-04T06:34:04.628+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19054 2019-09-04T06:34:04.628+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:04.628+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:04.628+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19060 2019-09-04T06:34:04.628+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19054 2019-09-04T06:34:04.628+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19063 2019-09-04T06:34:04.628+0000 D3 STORAGE [rsSync-0] 
WT rollback_transaction for snapshot id 19063 2019-09-04T06:34:04.628+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:04.628+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 19065 2019-09-04T06:34:04.628+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578844, 6) 2019-09-04T06:34:04.628+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578844, 6) 2019-09-04T06:34:04.628+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 19065 2019-09-04T06:34:04.628+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:04.628+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:04.628+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19064 2019-09-04T06:34:04.628+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19064 2019-09-04T06:34:04.628+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19067 2019-09-04T06:34:04.628+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19067 2019-09-04T06:34:04.628+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578844, 6), t: 1 }({ ts: Timestamp(1567578844, 6), t: 1 }) 2019-09-04T06:34:04.628+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578844, 6) 2019-09-04T06:34:04.628+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19068 2019-09-04T06:34:04.628+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578844, 6) } } ] } sort: {} projection: {} 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578844, 6) Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578844, 6) || First: notFirst: full path: ts 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578844, 6) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578844, 6) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578844, 6) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
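[Editor's note] The conn4xx threads a few entries above ("Got notified of new snapshot" / "waitUntilOpTime: waiting for a new snapshot until ...") are client reads blocked until the node's snapshot reaches a requested optime, typically because the read carries afterClusterTime from a causally consistent session. A minimal sketch of the client-side pattern that produces such waits; the _id values are taken from the logged oplog entries, the rest is illustrative:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb804.togewa.com:27019")  # any member
    with client.start_session(causal_consistency=True) as session:
        # First read pins the session's operationTime/clusterTime.
        client.config.locks.find_one({"_id": "config"}, session=session)
        # A later read routed to a lagging member blocks in waitUntilOpTime,
        # as logged above, until a snapshot at or past that clusterTime exists.
        client.config.locks.find_one({"_id": "config.system.sessions"},
                                     session=session)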
2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578844, 6) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.628+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19068 2019-09-04T06:34:04.628+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:04.628+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:04.628+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578844, 6), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578844626), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5adcac9313827bca5f19'), when: new Date(1567578844626), who: "ConfigServer:conn10279" } } }, oplog application mode: Secondary 2019-09-04T06:34:04.628+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578844, 6) 2019-09-04T06:34:04.628+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 19070 2019-09-04T06:34:04.628+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "config.system.sessions" } 2019-09-04T06:34:04.628+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:04.628+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 19070 2019-09-04T06:34:04.628+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:04.628+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578844, 6), t: 1 }({ ts: Timestamp(1567578844, 6), t: 1 }) 2019-09-04T06:34:04.628+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578844, 6) 2019-09-04T06:34:04.628+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19069 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:04.628+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.628+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:04.628+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19069 2019-09-04T06:34:04.628+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578844, 6) 2019-09-04T06:34:04.628+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19073 2019-09-04T06:34:04.628+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19073 2019-09-04T06:34:04.628+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578844, 6), t: 1 }({ ts: Timestamp(1567578844, 6), t: 1 }) 2019-09-04T06:34:04.628+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:04.629+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 4), t: 1 }, durableWallTime: new Date(1567578844614), appliedOpTime: { ts: Timestamp(1567578844, 6), t: 1 }, appliedWallTime: new Date(1567578844626), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:04.629+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1300 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:34.629+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 4), t: 1 }, durableWallTime: new Date(1567578844614), appliedOpTime: { ts: Timestamp(1567578844, 6), t: 1 }, appliedWallTime: new Date(1567578844626), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:04.629+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.628+0000 2019-09-04T06:34:04.629+0000 D2 ASIO [RS] Request 1300 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 5), $clusterTime: { clusterTime: Timestamp(1567578844, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 6) } 2019-09-04T06:34:04.629+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 5), $clusterTime: { clusterTime: Timestamp(1567578844, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 6) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:04.629+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:04.629+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.629+0000 2019-09-04T06:34:04.629+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:04.629+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:04.629+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 5), t: 1 }, durableWallTime: new Date(1567578844619), appliedOpTime: { ts: Timestamp(1567578844, 6), t: 1 }, appliedWallTime: new Date(1567578844626), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:04.629+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1301 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:34.629+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 5), t: 1 }, durableWallTime: new Date(1567578844619), appliedOpTime: { ts: Timestamp(1567578844, 6), t: 1 }, appliedWallTime: new Date(1567578844626), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: 
{ term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:04.629+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.629+0000 2019-09-04T06:34:04.629+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578844, 6), t: 1 } 2019-09-04T06:34:04.629+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1302 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:14.629+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578844, 5), t: 1 } } 2019-09-04T06:34:04.629+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.629+0000 2019-09-04T06:34:04.630+0000 D2 ASIO [RS] Request 1301 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 5), $clusterTime: { clusterTime: Timestamp(1567578844, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 6) } 2019-09-04T06:34:04.630+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 5), $clusterTime: { clusterTime: Timestamp(1567578844, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 6) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:04.630+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:04.630+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.629+0000 2019-09-04T06:34:04.633+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:04.633+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:04.633+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 6), t: 1 }, durableWallTime: new Date(1567578844626), appliedOpTime: { ts: Timestamp(1567578844, 6), t: 1 }, appliedWallTime: new 
Date(1567578844626), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:04.633+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1303 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:34.633+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 6), t: 1 }, durableWallTime: new Date(1567578844626), appliedOpTime: { ts: Timestamp(1567578844, 6), t: 1 }, appliedWallTime: new Date(1567578844626), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:04.633+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.629+0000 2019-09-04T06:34:04.634+0000 D2 ASIO [RS] Request 1303 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 5), $clusterTime: { clusterTime: Timestamp(1567578844, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 6) } 2019-09-04T06:34:04.634+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 5), t: 1 }, lastCommittedWall: new Date(1567578844619), lastOpVisible: { ts: Timestamp(1567578844, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 5), $clusterTime: { clusterTime: Timestamp(1567578844, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 6) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:04.634+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:04.634+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement 
date is 2019-09-04T06:34:34.629+0000 2019-09-04T06:34:04.634+0000 D2 ASIO [RS] Request 1302 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 6), t: 1 }, lastCommittedWall: new Date(1567578844626), lastOpVisible: { ts: Timestamp(1567578844, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 6), t: 1 }, lastCommittedWall: new Date(1567578844626), lastOpApplied: { ts: Timestamp(1567578844, 6), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 6), $clusterTime: { clusterTime: Timestamp(1567578844, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 6) } 2019-09-04T06:34:04.634+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 6), t: 1 }, lastCommittedWall: new Date(1567578844626), lastOpVisible: { ts: Timestamp(1567578844, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 6), t: 1 }, lastCommittedWall: new Date(1567578844626), lastOpApplied: { ts: Timestamp(1567578844, 6), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 6), $clusterTime: { clusterTime: Timestamp(1567578844, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 6) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:04.634+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:04.634+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:04.634+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.634+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.634+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578839, 6) 2019-09-04T06:34:04.634+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:16.101+0000 2019-09-04T06:34:04.634+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:15.496+0000 2019-09-04T06:34:04.634+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:04.634+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1304 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:14.634+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578844, 6), t: 1 } } 2019-09-04T06:34:04.634+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 
1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.634+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000 2019-09-04T06:34:04.634+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.634+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000 2019-09-04T06:34:04.634+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000 2019-09-04T06:34:04.634+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.634+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000 2019-09-04T06:34:04.634+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.634+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000 2019-09-04T06:34:04.634+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.634+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000 2019-09-04T06:34:04.634+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.634+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000 2019-09-04T06:34:04.634+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.629+0000 2019-09-04T06:34:04.634+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.634+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000 2019-09-04T06:34:04.634+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.634+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000 2019-09-04T06:34:04.634+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.634+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 
2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578844, 6), t: 1 }, 2019-09-04T06:34:04.626+0000 2019-09-04T06:34:04.635+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000 2019-09-04T06:34:04.637+0000 D2 ASIO [RS] Request 1304 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578844, 7), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578844635), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 6), t: 1 }, lastCommittedWall: new Date(1567578844626), lastOpVisible: { ts: Timestamp(1567578844, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 6), t: 1 }, lastCommittedWall: new Date(1567578844626), lastOpApplied: { ts: Timestamp(1567578844, 7), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 6), $clusterTime: { clusterTime: Timestamp(1567578844, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 7) } 2019-09-04T06:34:04.637+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578844, 7), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578844635), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 6), t: 1 }, lastCommittedWall: new 
Date(1567578844626), lastOpVisible: { ts: Timestamp(1567578844, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 6), t: 1 }, lastCommittedWall: new Date(1567578844626), lastOpApplied: { ts: Timestamp(1567578844, 7), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 6), $clusterTime: { clusterTime: Timestamp(1567578844, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 7) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:04.637+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:04.637+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578844, 7) and ending at ts: Timestamp(1567578844, 7) 2019-09-04T06:34:04.637+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:15.496+0000 2019-09-04T06:34:04.637+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:15.602+0000 2019-09-04T06:34:04.637+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578844, 7), t: 1 } 2019-09-04T06:34:04.637+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:04.637+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000 2019-09-04T06:34:04.637+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:04.637+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:04.637+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 6) 2019-09-04T06:34:04.637+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19077 2019-09-04T06:34:04.637+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:04.637+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:04.637+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19077 2019-09-04T06:34:04.637+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:34:04.637+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:04.637+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578844, 7) } 2019-09-04T06:34:04.637+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:04.637+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 6) 2019-09-04T06:34:04.637+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19074 2019-09-04T06:34:04.637+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19080 2019-09-04T06:34:04.637+0000 D3 STORAGE [ReplBatcher] 
fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:04.637+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:04.637+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19080 2019-09-04T06:34:04.637+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19074 2019-09-04T06:34:04.637+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19083 2019-09-04T06:34:04.637+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19083 2019-09-04T06:34:04.637+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:04.637+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 19085 2019-09-04T06:34:04.637+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578844, 7) 2019-09-04T06:34:04.637+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578844, 7) 2019-09-04T06:34:04.637+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 19085 2019-09-04T06:34:04.637+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:04.637+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:04.637+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19084 2019-09-04T06:34:04.637+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19084 2019-09-04T06:34:04.637+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19087 2019-09-04T06:34:04.637+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19087 2019-09-04T06:34:04.637+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578844, 7), t: 1 }({ ts: Timestamp(1567578844, 7), t: 1 }) 2019-09-04T06:34:04.637+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578844, 7) 2019-09-04T06:34:04.637+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19088 2019-09-04T06:34:04.638+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578844, 7) } } ] } sort: {} projection: {} 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578844, 7) Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578844, 7) || First: notFirst: full path: ts 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578844, 7) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578844, 7) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578844, 7) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
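
The D5 QUERY records above show the subplanner handling the rooted $or on local.replset.minvalid: each branch of { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: ... } } ] } is planned independently, and because the collection carries only the default _id index, every branch rates zero indexed solutions and the query falls back to a collection scan (the COLLSCAN output follows just below). A minimal mongosh sketch to reproduce that plan decision when connected directly to this node; the Timestamp literal is illustrative, taken from the batch above:

    // Plan the same rooted-$or query the rsSync-0 thread runs against
    // local.replset.minvalid and print the winning plan.
    const minvalid = db.getSiblingDB("local").getCollection("replset.minvalid");
    const explained = minvalid.find({
      $or: [
        { t: { $lt: 1 } },
        { t: 1, ts: { $lt: Timestamp(1567578844, 7) } }
      ]
    }).explain("queryPlanner");
    // With only the _id index on this single-document collection, neither $or
    // branch yields an indexed solution, so the query ends up as a collection
    // scan, matching the "outputted 0 indexed solutions" / COLLSCAN lines here.
    printjson(explained.queryPlanner.winningPlan);
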
2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578844, 7) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.638+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19088 2019-09-04T06:34:04.638+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:04.638+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:04.638+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578844, 7), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578844635), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary 2019-09-04T06:34:04.638+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578844, 7) 2019-09-04T06:34:04.638+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 19090 2019-09-04T06:34:04.638+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "config.system.sessions" } 2019-09-04T06:34:04.638+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:04.638+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 19090 2019-09-04T06:34:04.638+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:04.638+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578844, 7), t: 1 }({ ts: Timestamp(1567578844, 7), t: 1 }) 2019-09-04T06:34:04.638+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578844, 7) 2019-09-04T06:34:04.638+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19089 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:04.638+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.638+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:04.638+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19089 2019-09-04T06:34:04.638+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578844, 7) 2019-09-04T06:34:04.638+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19094 2019-09-04T06:34:04.638+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19094 2019-09-04T06:34:04.638+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578844, 7), t: 1 }({ ts: Timestamp(1567578844, 7), t: 1 }) 2019-09-04T06:34:04.638+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:04.638+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 6), t: 1 }, durableWallTime: new Date(1567578844626), appliedOpTime: { ts: Timestamp(1567578844, 7), t: 1 }, appliedWallTime: new Date(1567578844635), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 6), t: 1 }, lastCommittedWall: new Date(1567578844626), lastOpVisible: { ts: Timestamp(1567578844, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:04.638+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1305 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:34.638+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 6), t: 1 }, durableWallTime: new Date(1567578844626), appliedOpTime: { ts: Timestamp(1567578844, 7), t: 1 }, appliedWallTime: new Date(1567578844635), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 6), t: 1 }, lastCommittedWall: new Date(1567578844626), lastOpVisible: { ts: Timestamp(1567578844, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:04.638+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.638+0000 2019-09-04T06:34:04.639+0000 D2 ASIO [RS] Request 1305 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 6), t: 1 }, lastCommittedWall: new Date(1567578844626), lastOpVisible: { ts: Timestamp(1567578844, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 6), $clusterTime: { clusterTime: Timestamp(1567578844, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 7) } 2019-09-04T06:34:04.639+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 6), t: 1 }, lastCommittedWall: new Date(1567578844626), lastOpVisible: { ts: Timestamp(1567578844, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 6), $clusterTime: { clusterTime: Timestamp(1567578844, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 7) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:04.639+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:04.639+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.639+0000 2019-09-04T06:34:04.639+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578844, 7), t: 1 } 2019-09-04T06:34:04.639+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1306 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:14.639+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578844, 6), t: 1 } } 2019-09-04T06:34:04.639+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.639+0000 2019-09-04T06:34:04.639+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:04.639+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:04.639+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 7), t: 1 }, durableWallTime: new Date(1567578844635), appliedOpTime: { ts: Timestamp(1567578844, 7), t: 1 }, appliedWallTime: new Date(1567578844635), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 6), t: 1 }, lastCommittedWall: new Date(1567578844626), lastOpVisible: { ts: Timestamp(1567578844, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:04.639+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1307 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:34.639+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { 
ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 7), t: 1 }, durableWallTime: new Date(1567578844635), appliedOpTime: { ts: Timestamp(1567578844, 7), t: 1 }, appliedWallTime: new Date(1567578844635), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 6), t: 1 }, lastCommittedWall: new Date(1567578844626), lastOpVisible: { ts: Timestamp(1567578844, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:04.639+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.639+0000 2019-09-04T06:34:04.639+0000 D2 ASIO [RS] Request 1306 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 7), t: 1 }, lastCommittedWall: new Date(1567578844635), lastOpVisible: { ts: Timestamp(1567578844, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 7), t: 1 }, lastCommittedWall: new Date(1567578844635), lastOpApplied: { ts: Timestamp(1567578844, 7), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 7), $clusterTime: { clusterTime: Timestamp(1567578844, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 7) } 2019-09-04T06:34:04.639+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 7), t: 1 }, lastCommittedWall: new Date(1567578844635), lastOpVisible: { ts: Timestamp(1567578844, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 7), t: 1 }, lastCommittedWall: new Date(1567578844635), lastOpApplied: { ts: Timestamp(1567578844, 7), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 7), $clusterTime: { clusterTime: Timestamp(1567578844, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 7) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:04.639+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:04.640+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:04.640+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D2 REPL [replication-1] 
Setting replication's stable optime to { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578839, 7) 2019-09-04T06:34:04.640+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:15.602+0000 2019-09-04T06:34:04.640+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:14.714+0000 2019-09-04T06:34:04.640+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1308 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:14.640+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578844, 7), t: 1 } } 2019-09-04T06:34:04.640+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000 2019-09-04T06:34:04.640+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:04.640+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.639+0000 2019-09-04T06:34:04.640+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000 2019-09-04T06:34:04.640+0000 D2 ASIO [RS] Request 1307 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 7), t: 1 }, lastCommittedWall: new Date(1567578844635), lastOpVisible: { ts: Timestamp(1567578844, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 7), $clusterTime: { clusterTime: Timestamp(1567578844, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 7) } 2019-09-04T06:34:04.640+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 7), t: 1 }, lastCommittedWall: new Date(1567578844635), lastOpVisible: { ts: Timestamp(1567578844, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 7), $clusterTime: { clusterTime: Timestamp(1567578844, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 7) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:04.640+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:04.640+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.639+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), 
t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 
2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578844, 7), t: 1 }, 2019-09-04T06:34:04.635+0000 2019-09-04T06:34:04.640+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000 2019-09-04T06:34:04.640+0000 D2 ASIO [RS] Request 1308 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578844, 8), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578844640), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 7), t: 1 }, lastCommittedWall: new Date(1567578844635), lastOpVisible: { ts: Timestamp(1567578844, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 7), t: 1 }, lastCommittedWall: new Date(1567578844635), lastOpApplied: { ts: Timestamp(1567578844, 8), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 7), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) } 2019-09-04T06:34:04.641+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578844, 8), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578844640), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 7), t: 1 }, lastCommittedWall: new Date(1567578844635), lastOpVisible: { ts: Timestamp(1567578844, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 7), t: 1 }, lastCommittedWall: new Date(1567578844635), lastOpApplied: { ts: Timestamp(1567578844, 8), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 7), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:04.641+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool 
replication 2019-09-04T06:34:04.641+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578844, 8) and ending at ts: Timestamp(1567578844, 8) 2019-09-04T06:34:04.641+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:14.714+0000 2019-09-04T06:34:04.641+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:14.829+0000 2019-09-04T06:34:04.641+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:04.641+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578844, 8), t: 1 } 2019-09-04T06:34:04.641+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000 2019-09-04T06:34:04.641+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:04.641+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:04.641+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 7) 2019-09-04T06:34:04.641+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19097 2019-09-04T06:34:04.641+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:04.641+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:04.641+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19097 2019-09-04T06:34:04.641+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:04.641+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:34:04.641+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:04.641+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578844, 8) } 2019-09-04T06:34:04.641+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 7) 2019-09-04T06:34:04.641+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19100 2019-09-04T06:34:04.641+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19095 2019-09-04T06:34:04.641+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:04.641+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:04.641+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19100 2019-09-04T06:34:04.641+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19095 2019-09-04T06:34:04.641+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19103 2019-09-04T06:34:04.641+0000 D3 STORAGE [rsSync-0] WT 
rollback_transaction for snapshot id 19103 2019-09-04T06:34:04.641+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:04.641+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 19105 2019-09-04T06:34:04.641+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578844, 8) 2019-09-04T06:34:04.641+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578844, 8) 2019-09-04T06:34:04.641+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 19105 2019-09-04T06:34:04.641+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:04.641+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:04.641+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19104 2019-09-04T06:34:04.641+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19104 2019-09-04T06:34:04.641+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19107 2019-09-04T06:34:04.641+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19107 2019-09-04T06:34:04.641+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578844, 8), t: 1 }({ ts: Timestamp(1567578844, 8), t: 1 }) 2019-09-04T06:34:04.641+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578844, 8) 2019-09-04T06:34:04.641+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19108 2019-09-04T06:34:04.641+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578844, 8) } } ] } sort: {} projection: {} 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578844, 8) Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578844, 8) || First: notFirst: full path: ts 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578844, 8) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578844, 8) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578844, 8) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
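
Around this planner output, the rsSync-0 and repl-writer records trace the secondary's per-batch durability bookkeeping: the oplog truncate-after point is set to the batch's last optime before the oplog entry is written, reset to { : Timestamp(0, 0) } once the write is durable, and minvalid/appliedThrough are advanced around the actual op application. A minimal mongosh sketch for inspecting that bookkeeping on this node, assuming the collection names MongoDB 4.2 uses for it (local.replset.minvalid and local.replset.oplogTruncateAfterPoint):

    // Read the replication bookkeeping documents referenced in this batch.
    // Run while connected directly to the secondary.
    const local = db.getSiblingDB("local");
    // minvalid also carries appliedThrough; per the log it is being advanced
    // toward { ts: Timestamp(1567578844, 8), t: 1 }.
    printjson(local.getCollection("replset.minvalid").findOne());
    // The truncate-after point is non-zero only while a batch's oplog entries
    // are in flight, so a crash mid-batch can trim partially written entries on
    // restart; the log shows it reset to Timestamp(0, 0) once the write is durable.
    printjson(local.getCollection("replset.oplogTruncateAfterPoint").findOne());
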
2019-09-04T06:34:04.641+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578844, 8) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.641+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19108 2019-09-04T06:34:04.641+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:04.641+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:04.641+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578844, 8), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578844640), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary 2019-09-04T06:34:04.641+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578844, 8) 2019-09-04T06:34:04.641+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 19110 2019-09-04T06:34:04.641+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "config" } 2019-09-04T06:34:04.642+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:04.642+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 19110 2019-09-04T06:34:04.642+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:04.642+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578844, 8), t: 1 }({ ts: Timestamp(1567578844, 8), t: 1 }) 2019-09-04T06:34:04.642+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578844, 8) 2019-09-04T06:34:04.642+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19109 2019-09-04T06:34:04.642+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:04.642+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:04.642+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:04.642+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:04.642+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:04.642+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:34:04.642+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19109
2019-09-04T06:34:04.642+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578844, 8)
2019-09-04T06:34:04.642+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19113
2019-09-04T06:34:04.642+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19113
2019-09-04T06:34:04.642+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578844, 8), t: 1 }({ ts: Timestamp(1567578844, 8), t: 1 })
2019-09-04T06:34:04.642+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:04.642+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 7), t: 1 }, durableWallTime: new Date(1567578844635), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 7), t: 1 }, lastCommittedWall: new Date(1567578844635), lastOpVisible: { ts: Timestamp(1567578844, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:04.642+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1309 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:34.642+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 7), t: 1 }, durableWallTime: new Date(1567578844635), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 7), t: 1 }, lastCommittedWall: new Date(1567578844635), lastOpVisible: { ts: Timestamp(1567578844, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:04.642+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.642+0000
2019-09-04T06:34:04.642+0000 D2 ASIO [RS] Request 1309 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 7), t: 1 }, lastCommittedWall: new Date(1567578844635), lastOpVisible: { ts: Timestamp(1567578844, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 7), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) }
2019-09-04T06:34:04.642+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 7), t: 1 }, lastCommittedWall: new Date(1567578844635), lastOpVisible: { ts: Timestamp(1567578844, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 7), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:04.642+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:04.642+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.642+0000
2019-09-04T06:34:04.643+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:34:04.643+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:04.643+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 7), t: 1 }, lastCommittedWall: new Date(1567578844635), lastOpVisible: { ts: Timestamp(1567578844, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
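
The two "Reporter sending slave oplog progress" entries above show this node (memberId 1) pushing its appliedOpTime/durableOpTime pairs upstream to its sync source cmodb804, which relays them toward the primary; the second report advances member 1's durableOpTime to Timestamp(1567578844, 8) once the journal flush completes. replSetUpdatePosition itself is internal, but the same optimes are visible through replSetGetStatus. A minimal sketch in Python (pymongo assumed installed; host and port taken from this log and may differ in your deployment):

    from pymongo import MongoClient

    # Connect directly to the member that wrote this log.
    client = MongoClient("cmodb803.togewa.com", 27019)
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        # optime / optimeDurable mirror the appliedOpTime / durableOpTime
        # pairs carried in the replSetUpdatePosition payloads above.
        print(m["_id"], m["stateStr"], m.get("optime"), m.get("optimeDurable"))
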
2019-09-04T06:34:04.643+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1310 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:34.643+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, durableWallTime: new Date(1567578840363), appliedOpTime: { ts: Timestamp(1567578840, 3), t: 1 }, appliedWallTime: new Date(1567578840363), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 7), t: 1 }, lastCommittedWall: new Date(1567578844635), lastOpVisible: { ts: Timestamp(1567578844, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:04.643+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578844, 8), t: 1 }
2019-09-04T06:34:04.643+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1311 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:14.643+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578844, 7), t: 1 } }
2019-09-04T06:34:04.643+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.643+0000
2019-09-04T06:34:04.643+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.643+0000
2019-09-04T06:34:04.643+0000 D2 ASIO [RS] Request 1310 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) }
2019-09-04T06:34:04.643+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:04.643+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:04.643+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.643+0000
2019-09-04T06:34:04.643+0000 D2 ASIO [RS] Request 1311 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpApplied: { ts: Timestamp(1567578844, 8), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) }
2019-09-04T06:34:04.643+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpApplied: { ts: Timestamp(1567578844, 8), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:04.643+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:04.643+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:34:04.643+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.643+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.643+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578839, 8)
2019-09-04T06:34:04.643+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:14.829+0000
2019-09-04T06:34:04.643+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:15.064+0000
2019-09-04T06:34:04.643+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.643+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000
2019-09-04T06:34:04.643+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.643+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1312 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:14.643+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578844, 8), t: 1 } }
2019-09-04T06:34:04.643+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:04.643+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.643+0000
2019-09-04T06:34:04.643+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.643+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000
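
Requests 1311 and 1312 are the oplog fetcher's awaitable getMore calls against the sync source's local.oplog.rs (note maxTimeMS: 5000, the await timeout). Even an empty nextBatch carries $replData/$oplogQueryData metadata, which is what drives the "Updating _lastCommittedOpTimeAndWallTime", stable-optime, and oldest_timestamp entries above. A minimal sketch of tailing the oplog the same way, assuming direct access to a member (host from this log) and pymongo:

    from pymongo import MongoClient, CursorType

    client = MongoClient("cmodb804.togewa.com", 27019)
    oplog = client.local["oplog.rs"]
    # Start just past the newest entry, like the fetcher's last fetched optime.
    last = oplog.find().sort("$natural", -1).limit(1).next()
    cursor = oplog.find({"ts": {"$gt": last["ts"]}},
                        cursor_type=CursorType.TAILABLE_AWAIT)
    for entry in cursor:  # each getMore blocks up to the server await timeout
        print(entry["ts"], entry["op"], entry.get("ns"))
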
2019-09-04T06:34:04.643+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.643+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000
2019-09-04T06:34:04.643+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.643+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:04.643+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000
2019-09-04T06:34:04.643+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000
2019-09-04T06:34:04.643+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.643+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000
2019-09-04T06:34:04.643+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.643+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578844, 8), t: 1 }, 2019-09-04T06:34:04.640+0000
2019-09-04T06:34:04.644+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:04.650+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:04.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:04.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:04.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:04.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:04.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:04.693+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:04.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:04.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:04.723+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578844, 8)
2019-09-04T06:34:04.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:04.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:04.793+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:04.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:04.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:04.838+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:04.838+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1313) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:04.838+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1313 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:14.838+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
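
Each "Got notified of new snapshot ... waitUntilOpTime" pair above is a parked client operation (conn380 through conn422) waiting for this node's committed snapshot to reach the optime its read demanded, each with its own deadline. One common way application code lands in that path is a causally consistent session, where a read carries afterClusterTime from a prior write. A minimal sketch, assuming pymongo and hypothetical database/collection names (config servers like this one would not normally take application writes):

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")
    with client.start_session(causal_consistency=True) as s:
        coll = client.testdb.testcoll  # hypothetical namespace
        coll.insert_one({"x": 1}, session=s)
        # This read carries afterClusterTime from the insert; on a lagging
        # member it blocks until a snapshot at or past that time is ready.
        print(coll.find_one({"x": 1}, session=s))
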
2019-09-04T06:34:04.838+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:32.839+0000
2019-09-04T06:34:04.838+0000 D2 ASIO [Replication] Request 1313 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), opTime: { ts: Timestamp(1567578844, 8), t: 1 }, wallTime: new Date(1567578844640), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) }
2019-09-04T06:34:04.838+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), opTime: { ts: Timestamp(1567578844, 8), t: 1 }, wallTime: new Date(1567578844640), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:04.838+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:04.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1313) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), opTime: { ts: Timestamp(1567578844, 8), t: 1 }, wallTime: new Date(1567578844640), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) }
2019-09-04T06:34:04.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:04.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:34:04.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:06.839Z
2019-09-04T06:34:04.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.839+0000
2019-09-04T06:34:04.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1314) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:04.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1314 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:14.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:04.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.839+0000
2019-09-04T06:34:04.839+0000 D2 ASIO [Replication] Request 1314 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), opTime: { ts: Timestamp(1567578844, 8), t: 1 }, wallTime: new Date(1567578844640), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) }
2019-09-04T06:34:04.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), opTime: { ts: Timestamp(1567578844, 8), t: 1 }, wallTime: new Date(1567578844640), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) } target: cmodb802.togewa.com:27019
2019-09-04T06:34:04.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
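
Requests 1313 and 1314 make up one heartbeat round: this node heartbeats both peers, learns that cmodb804 is SECONDARY (state: 2, syncingTo cmodb802) and cmodb802 is PRIMARY (state: 1, with an electionTime and a real electionId), and re-arms its election timeout because the primary answered. The 2-second cadence and the roughly 10-second timeout (plus per-node jitter, which is why the callback lands at 15.613 rather than a round number) come from the replica-set settings. A minimal sketch for reading them, assuming pymongo and access to any configrs member:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    cfg = client.admin.command("replSetGetConfig")["config"]
    settings = cfg.get("settings", {})
    # Defaults shown as fallbacks; this set appears to use the defaults.
    print("heartbeatIntervalMillis:", settings.get("heartbeatIntervalMillis", 2000))
    print("electionTimeoutMillis:", settings.get("electionTimeoutMillis", 10000))
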
2019-09-04T06:34:04.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1314) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), opTime: { ts: Timestamp(1567578844, 8), t: 1 }, wallTime: new Date(1567578844640), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) }
2019-09-04T06:34:04.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:34:04.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:34:15.064+0000
2019-09-04T06:34:04.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:34:15.613+0000
2019-09-04T06:34:04.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:04.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:34:04.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.839+0000
2019-09-04T06:34:04.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:06.839Z
2019-09-04T06:34:04.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.839+0000
2019-09-04T06:34:04.893+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:04.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:04.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:04.993+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:04.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:04.999+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:05.063+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:05.063+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:34:04.839+0000
2019-09-04T06:34:05.063+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:34:04.839+0000
2019-09-04T06:34:05.063+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:34:04.839+0000
2019-09-04T06:34:05.063+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:34:14.839+0000
2019-09-04T06:34:05.063+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:34.839+0000
2019-09-04T06:34:05.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0596E69D0DEB4339F189884638210F22C05132B2), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:05.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:34:05.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0596E69D0DEB4339F189884638210F22C05132B2), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:05.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0596E69D0DEB4339F189884638210F22C05132B2), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:05.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), opTime: { ts: Timestamp(1567578844, 8), t: 1 }, wallTime: new Date(1567578844640) }
2019-09-04T06:34:05.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0596E69D0DEB4339F189884638210F22C05132B2), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.093+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:05.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.151+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.194+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:05.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:05.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:05.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.294+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:05.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.394+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:05.434+0000 I NETWORK [listener] connection accepted from 10.108.2.15:39234 #428 (86 connections now open)
2019-09-04T06:34:05.434+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:34:05.434+0000 D2 COMMAND [conn428] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:34:05.434+0000 I NETWORK [conn428] received client metadata from 10.108.2.15:39234 conn428: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:34:05.434+0000 I COMMAND [conn428] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:34:05.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.494+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:05.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.499+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.594+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:05.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
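
conn428 shows the wire handshake in full: the first isMaster on a new socket carries client metadata (echoed at "received client metadata"), and the internalClient and saslSupportedMechs: "local.__system" fields mark it as another 4.2 cluster node rather than an application. Ordinary drivers send the same metadata and can add an application name that shows up in this log and in currentOp. A minimal sketch, assuming pymongo (the appname value is arbitrary):

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019, appname="inventory-report")
    client.admin.command("ping")  # the metadata handshake happens on first use
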
2019-09-04T06:34:05.641+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:05.641+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:05.641+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 8)
2019-09-04T06:34:05.641+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19140
2019-09-04T06:34:05.641+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:05.641+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:05.641+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19140
2019-09-04T06:34:05.642+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19143
2019-09-04T06:34:05.642+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19143
2019-09-04T06:34:05.642+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578844, 8), t: 1 }({ ts: Timestamp(1567578844, 8), t: 1 })
2019-09-04T06:34:05.651+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.695+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:05.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.795+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:05.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:05.895+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:05.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
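
Once per idle batch the ReplBatcher re-reads the catalog entry for local.oplog.rs inside a throwaway WiredTiger transaction; the metadata shows the oplog as a capped, index-less collection of 1073741824 bytes, i.e. the oplogSizeMB: 1024 this server was started with. The same options are visible from a client. A minimal sketch, assuming pymongo:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    opts = client.local["oplog.rs"].options()
    print(opts)  # expect something like {'capped': True, 'size': 1073741824, ...}
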
2019-09-04T06:34:05.995+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:05.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:05.999+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:06.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.095+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:06.104+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.104+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.151+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.196+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:06.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.213+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0596E69D0DEB4339F189884638210F22C05132B2), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:06.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:34:06.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0596E69D0DEB4339F189884638210F22C05132B2), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:06.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0596E69D0DEB4339F189884638210F22C05132B2), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:06.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), opTime: { ts: Timestamp(1567578844, 8), t: 1 }, wallTime: new Date(1567578844640) }
2019-09-04T06:34:06.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:06.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:06.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0596E69D0DEB4339F189884638210F22C05132B2), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.296+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:06.317+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.317+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.360+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.360+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.396+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:06.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.496+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:06.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.596+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:06.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
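
conn28 is the inbound heartbeat from the other secondary (fromId: 2), and the FlowControlRefresher entries are the once-a-second flow-control tick: with the majority commit point keeping up, the ticket pool stays pinned at its 1000000000 no-throttle ceiling. Flow control is new in 4.2 and, if memory serves, is surfaced in serverStatus; a minimal sketch under that assumption, using pymongo:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    fc = client.admin.command("serverStatus").get("flowControl", {})
    # isLagged flipping true would mean write throttling is kicking in.
    print(fc.get("enabled"), fc.get("targetRateLimit"), fc.get("isLagged"))
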
2019-09-04T06:34:06.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.641+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:06.641+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:06.642+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 8)
2019-09-04T06:34:06.642+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19172
2019-09-04T06:34:06.642+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:06.642+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:06.642+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19172
2019-09-04T06:34:06.642+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19175
2019-09-04T06:34:06.642+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19175
2019-09-04T06:34:06.642+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578844, 8), t: 1 }({ ts: Timestamp(1567578844, 8), t: 1 })
2019-09-04T06:34:06.651+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.697+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:06.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.797+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:06.817+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.817+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:06.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:06.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:06.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:06.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1315) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:06.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1315 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:16.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:06.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:36.839+0000
2019-09-04T06:34:06.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1316) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:06.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1316 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:16.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:06.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:36.839+0000
2019-09-04T06:34:06.839+0000 D2 ASIO [Replication] Request 1315 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), opTime: { ts: Timestamp(1567578844, 8), t: 1 }, wallTime: new Date(1567578844640), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) }
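
At 06:34:06.839 the next heartbeat round (requests 1315 and 1316) fires exactly two seconds after the last, and the responses again carry every member's durable and applied optimes. Comparing those optimes across members is the usual way to estimate replication lag; a minimal sketch, assuming pymongo and replSetGetStatus access:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    status = client.admin.command("replSetGetStatus")
    primary = next(m for m in status["members"] if m["stateStr"] == "PRIMARY")
    for m in status["members"]:
        # optimeDate is a wall-clock datetime; the difference is the lag.
        lag = primary["optimeDate"] - m["optimeDate"]
        print(m["name"], m["stateStr"], "lag:", lag)
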
Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:06.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:06.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1315) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), opTime: { ts: Timestamp(1567578844, 8), t: 1 }, wallTime: new Date(1567578844640), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) } 2019-09-04T06:34:06.839+0000 D2 ASIO [Replication] Request 1316 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), opTime: { ts: Timestamp(1567578844, 8), t: 1 }, wallTime: new Date(1567578844640), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) } 2019-09-04T06:34:06.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:06.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), opTime: { ts: Timestamp(1567578844, 8), t: 1 }, wallTime: new Date(1567578844640), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, 
operationTime: Timestamp(1567578844, 8) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:06.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:08.839Z 2019-09-04T06:34:06.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:06.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1316) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), opTime: { ts: Timestamp(1567578844, 8), t: 1 }, wallTime: new Date(1567578844640), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578844, 8) } 2019-09-04T06:34:06.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:06.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:34:15.613+0000 2019-09-04T06:34:06.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:34:17.558+0000 2019-09-04T06:34:06.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:06.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:08.839Z 2019-09-04T06:34:06.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:06.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:36.839+0000 2019-09-04T06:34:06.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:36.839+0000 2019-09-04T06:34:06.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:06.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:06.897+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:06.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:06.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:06.997+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:06.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:06.999+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:07.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:07.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 
2019-09-04T06:34:07.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:34:07.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0596E69D0DEB4339F189884638210F22C05132B2), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:07.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0596E69D0DEB4339F189884638210F22C05132B2), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:07.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), opTime: { ts: Timestamp(1567578844, 8), t: 1 }, wallTime: new Date(1567578844640) }
2019-09-04T06:34:07.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0596E69D0DEB4339F189884638210F22C05132B2), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.097+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:07.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.151+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.197+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:07.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:07.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:07.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.298+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:07.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.398+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:07.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.498+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:07.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.598+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:07.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.642+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:07.642+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:07.642+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 8)
2019-09-04T06:34:07.642+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19206
2019-09-04T06:34:07.642+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:07.642+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:07.642+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19206
2019-09-04T06:34:07.642+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19209
2019-09-04T06:34:07.642+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19209
2019-09-04T06:34:07.642+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578844, 8), t: 1 }({ ts: Timestamp(1567578844, 8), t: 1 })
2019-09-04T06:34:07.651+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.698+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:07.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.784+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.784+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.798+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:07.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.899+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:07.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:07.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:07.999+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:08.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:08.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:08.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:08.099+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:08.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:08.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:08.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:08.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:08.151+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:08.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:08.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:08.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:08.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:08.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:08.199+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:08.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:08.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:08.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0596E69D0DEB4339F189884638210F22C05132B2), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:08.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:34:08.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0596E69D0DEB4339F189884638210F22C05132B2), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:08.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0596E69D0DEB4339F189884638210F22C05132B2), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:08.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), opTime: { ts: Timestamp(1567578844, 8), t: 1 }, wallTime: new Date(1567578844640) }
2019-09-04T06:34:08.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578844, 8), signature: { hash: BinData(0, 0596E69D0DEB4339F189884638210F22C05132B2), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:08.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:08.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:08.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:08.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:08.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:08.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:08.299+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:08.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:08.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:08.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:08.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:08.373+0000 D2 ASIO [RS] Request 1312 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578848, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578848370), o: { $v: 1, $set: { ping: new Date(1567578848369) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpApplied: { ts: Timestamp(1567578848, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 1) }
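Request 1312 is the oplog fetcher's getMore against the sync source returning one new entry (an update to config.lockpings). A rough, hypothetical stand-in for that wire traffic from Python, using a tailable await cursor on local.oplog.rs (pymongo 3.x API; the host and start optime are taken from the log):

    # Rough stand-in for the oplog fetcher's find/getMore loop.
    from pymongo import CursorType, MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("cmodb804.togewa.com", 27019)  # sync source in the log
    last_fetched = Timestamp(1567578844, 8)              # last fetched optime
    cursor = client.local["oplog.rs"].find(
        {"ts": {"$gt": last_fetched}},
        cursor_type=CursorType.TAILABLE_AWAIT,  # blocks server-side, like maxTimeMS: 5000
        oplog_replay=True,                      # hint for the ts-range scan
    )
    for op in cursor:  # e.g. the config.lockpings update in nextBatch above
        print(op["ts"], op["op"], op["ns"])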
2019-09-04T06:34:08.373+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578848, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578848370), o: { $v: 1, $set: { ping: new Date(1567578848369) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpApplied: { ts: Timestamp(1567578848, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578844, 8), $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:08.373+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:08.373+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578848, 1) and ending at ts: Timestamp(1567578848, 1)
2019-09-04T06:34:08.373+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:17.558+0000
2019-09-04T06:34:08.373+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:19.368+0000
2019-09-04T06:34:08.373+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:08.373+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:36.839+0000
2019-09-04T06:34:08.373+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578848, 1), t: 1 }
2019-09-04T06:34:08.373+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:08.373+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:08.373+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 8)
2019-09-04T06:34:08.373+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19236
2019-09-04T06:34:08.373+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:08.373+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:08.373+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19236
2019-09-04T06:34:08.373+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
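Every optime in these entries pairs a BSON Timestamp (seconds since the epoch plus a per-second counter) with an election term, while the wall-clock fields such as new Date(1567578848370) carry milliseconds. A small sketch decoding the values seen in this batch, using the bson package that ships with pymongo:

    # Decoding the optimes in this batch: Timestamp(sec, inc) vs. wall-clock ms.
    import datetime
    from bson.timestamp import Timestamp

    ts = Timestamp(1567578848, 1)   # the { ts: ... } from the fetched oplog entry
    print(ts.time, ts.inc)          # -> 1567578848 1
    print(ts.as_datetime())         # -> 2019-09-04 06:34:08+00:00
    wall = datetime.datetime.utcfromtimestamp(1567578848370 / 1000.0)
    print(wall)                     # new Date(1567578848370), i.e. ...08.370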
2019-09-04T06:34:08.373+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:34:08.373+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:08.373+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578844, 8)
2019-09-04T06:34:08.373+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578848, 1) }
2019-09-04T06:34:08.373+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19239
2019-09-04T06:34:08.373+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:08.373+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:08.373+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19239
2019-09-04T06:34:08.373+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19210
2019-09-04T06:34:08.373+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19210
2019-09-04T06:34:08.373+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19242
2019-09-04T06:34:08.373+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19242
2019-09-04T06:34:08.373+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:08.373+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 19244
2019-09-04T06:34:08.373+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578848, 1)
2019-09-04T06:34:08.373+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578848, 1)
2019-09-04T06:34:08.373+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 19244
2019-09-04T06:34:08.373+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:08.373+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:34:08.374+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19243
2019-09-04T06:34:08.374+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19243
2019-09-04T06:34:08.374+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19246
2019-09-04T06:34:08.374+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19246
2019-09-04T06:34:08.374+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578848, 1), t: 1 }({ ts: Timestamp(1567578848, 1), t: 1 })
2019-09-04T06:34:08.374+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578848, 1)
2019-09-04T06:34:08.374+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19247
2019-09-04T06:34:08.374+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578848, 1) } } ] } sort: {} projection: {}
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578848, 1) Sort: {} Proj: {} =============================
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578848, 1) || First: notFirst: full path: ts
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578848, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578848, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578848, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578848, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:08.374+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19247
2019-09-04T06:34:08.374+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:08.374+0000 D3 STORAGE [repl-writer-worker-1] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:08.374+0000 D3 REPL [repl-writer-worker-1] applying op: { ts: Timestamp(1567578848, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578848370), o: { $v: 1, $set: { ping: new Date(1567578848369) } } }, oplog application mode: Secondary
2019-09-04T06:34:08.374+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578848, 1)
2019-09-04T06:34:08.374+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 19249
2019-09-04T06:34:08.374+0000 D2 QUERY [repl-writer-worker-1] Using idhack: { _id: "ConfigServer" }
2019-09-04T06:34:08.374+0000 D4 WRITE [repl-writer-worker-1] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:34:08.374+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 19249
2019-09-04T06:34:08.374+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:08.374+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578848, 1), t: 1 }({ ts: Timestamp(1567578848, 1), t: 1 })
2019-09-04T06:34:08.374+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578848, 1)
2019-09-04T06:34:08.374+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19248
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:08.374+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:08.374+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
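The D5 QUERY trace above shows the subplanner rating each $or branch against the only available index (_id_) and falling back to a collection scan for all of them. The same planner decision can be surfaced interactively with explain(); a sketch against local.replset.minvalid, reusing the filter from the log (this assumes direct access to a member, since local is not normally queried by applications):

    # Sketch: reproduce the planner's COLLSCAN choice with explain().
    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("cmodb803.togewa.com", 27019)
    minvalid = client.local["replset.minvalid"]
    flt = {"$or": [{"t": {"$lt": 1}},
                   {"t": 1, "ts": {"$lt": Timestamp(1567578848, 1)}}]}
    plan = minvalid.find(flt).explain()
    # With only the _id_ index available, the winning plan is a collection scan.
    print(plan["queryPlanner"]["winningPlan"])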
2019-09-04T06:34:08.374+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19248
2019-09-04T06:34:08.374+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578848, 1)
2019-09-04T06:34:08.374+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:08.374+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19252
2019-09-04T06:34:08.374+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19252
2019-09-04T06:34:08.374+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578848, 1), t: 1 }, appliedWallTime: new Date(1567578848370), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:08.374+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1317 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:38.374+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578848, 1), t: 1 }, appliedWallTime: new Date(1567578848370), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:08.374+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.374+0000
2019-09-04T06:34:08.374+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578848, 1), t: 1 }({ ts: Timestamp(1567578848, 1), t: 1 })
2019-09-04T06:34:08.375+0000 D2 ASIO [RS] Request 1317 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 1), t: 1 }, lastCommittedWall: new Date(1567578848370), lastOpVisible: { ts: Timestamp(1567578848, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 1), $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 1) }
2019-09-04T06:34:08.375+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 1), t: 1 }, lastCommittedWall: new Date(1567578848370), lastOpVisible: { ts: Timestamp(1567578848, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 1), $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:08.375+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:08.375+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.375+0000
2019-09-04T06:34:08.375+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578848, 1), t: 1 }
2019-09-04T06:34:08.375+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1318 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:18.375+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578844, 8), t: 1 } }
2019-09-04T06:34:08.375+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.375+0000
2019-09-04T06:34:08.375+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:34:08.375+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:08.375+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578848, 1), t: 1 }, durableWallTime: new Date(1567578848370), appliedOpTime: { ts: Timestamp(1567578848, 1), t: 1 }, appliedWallTime: new Date(1567578848370), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:08.375+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1319 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:38.375+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578848, 1), t: 1 }, durableWallTime: new Date(1567578848370), appliedOpTime: { ts: Timestamp(1567578848, 1), t: 1 }, appliedWallTime: new Date(1567578848370), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578844, 8), t: 1 }, lastCommittedWall: new Date(1567578844640), lastOpVisible: { ts: Timestamp(1567578844, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:08.375+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.375+0000
2019-09-04T06:34:08.375+0000 D2 ASIO [RS] Request 1318 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 1), t: 1 }, lastCommittedWall: new Date(1567578848370), lastOpVisible: { ts: Timestamp(1567578848, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578848, 1), t: 1 }, lastCommittedWall: new Date(1567578848370), lastOpApplied: { ts: Timestamp(1567578848, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 1), $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 1) }
2019-09-04T06:34:08.375+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 1), t: 1 }, lastCommittedWall: new Date(1567578848370), lastOpVisible: { ts: Timestamp(1567578848, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578848, 1), t: 1 }, lastCommittedWall: new Date(1567578848370), lastOpApplied: { ts: Timestamp(1567578848, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 1), $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:08.375+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:08.375+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:34:08.375+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.375+0000 D2 ASIO [RS] Request 1319 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 1), t: 1 }, lastCommittedWall: new Date(1567578848370), lastOpVisible: { ts: Timestamp(1567578848, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 1), $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 1) }
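replSetUpdatePosition is internal traffic, but the per-member durable/applied optimes it carries are the same ones replSetGetStatus reports, which is the usual way to check how far each member trails the primary. A sketch deriving member lag from those fields; the member document keys are standard, though the set must be reachable for this to run:

    # Sketch: derive replication lag from the optimes replSetGetStatus exposes.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    status = client.admin.command("replSetGetStatus")
    primary = next(m for m in status["members"] if m["stateStr"] == "PRIMARY")
    for member in status["members"]:
        lag = primary["optimeDate"] - member["optimeDate"]
        print(member["name"], member["stateStr"], lag)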
2019-09-04T06:34:08.375+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.375+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 1), t: 1 }, lastCommittedWall: new Date(1567578848370), lastOpVisible: { ts: Timestamp(1567578848, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 1), $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:08.375+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578843, 1)
2019-09-04T06:34:08.375+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:19.368+0000
2019-09-04T06:34:08.375+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:18.813+0000
2019-09-04T06:34:08.375+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:08.376+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:08.376+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1320 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:18.376+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578848, 1), t: 1 } }
2019-09-04T06:34:08.376+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000
2019-09-04T06:34:08.376+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:08.376+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.375+0000
2019-09-04T06:34:08.376+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:36.839+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000
2019-09-04T06:34:08.376+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.375+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578848, 1), t: 1 }, 2019-09-04T06:34:08.370+0000
2019-09-04T06:34:08.376+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:08.399+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:08.473+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578848, 1)
2019-09-04T06:34:08.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:08.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:08.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:08.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:08.499+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:08.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:08.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:08.600+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:08.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:08.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:08.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:08.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:08.635+0000 D2 ASIO [RS] Request 1320 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578848, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578848633), o: { $v: 1, $set: { ping: new Date(1567578848633) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 1), t: 1 }, lastCommittedWall: new Date(1567578848370), lastOpVisible: { ts: Timestamp(1567578848, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578848, 1), t: 1 }, lastCommittedWall: new Date(1567578848370), lastOpApplied: { ts: Timestamp(1567578848, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 1), $clusterTime: { clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 2) }
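As captured here, many entries were wrapped onto shared physical lines. Since every entry in a 4.2 plain-text log opens with an ISO-8601 timestamp followed by a severity code, the capture can be re-split mechanically; a sketch, assuming the raw text is in a file named mongod.log and Python 3.7+ (re.split on zero-width matches):

    # Sketch: re-split a wrapped capture into one log entry per line.
    import re

    ENTRY_START = re.compile(
        r"(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}\+0000 [IDWEF])")

    def split_entries(raw):
        # Zero-width lookahead keeps each timestamp with the entry it opens;
        # the severity guard avoids splitting on timestamps quoted inside entries.
        return [c.strip() for c in ENTRY_START.split(raw) if c.strip()]

    with open("mongod.log") as fh:
        for entry in split_entries(fh.read()):
            print(entry)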
2019-09-04T06:34:08.635+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578848, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578848633), o: { $v: 1, $set: { ping: new Date(1567578848633) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 1), t: 1 }, lastCommittedWall: new Date(1567578848370), lastOpVisible: { ts: Timestamp(1567578848, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578848, 1), t: 1 }, lastCommittedWall: new Date(1567578848370), lastOpApplied: { ts: Timestamp(1567578848, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 1), $clusterTime: { clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:08.635+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:08.635+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578848, 2) and ending at ts: Timestamp(1567578848, 2)
2019-09-04T06:34:08.635+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:18.813+0000
2019-09-04T06:34:08.635+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:19.199+0000
2019-09-04T06:34:08.635+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:08.635+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:36.839+0000
2019-09-04T06:34:08.635+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578848, 2), t: 1 }
2019-09-04T06:34:08.635+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:08.635+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:08.635+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578848, 1)
2019-09-04T06:34:08.635+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19261
2019-09-04T06:34:08.635+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:08.635+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:08.635+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19261
2019-09-04T06:34:08.635+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:08.635+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:34:08.635+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:08.635+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578848, 1)
2019-09-04T06:34:08.636+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19264
2019-09-04T06:34:08.635+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578848, 2) }
2019-09-04T06:34:08.636+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:08.636+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:08.636+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19264
2019-09-04T06:34:08.636+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19253
2019-09-04T06:34:08.636+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19253
2019-09-04T06:34:08.636+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19267
2019-09-04T06:34:08.636+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19267
2019-09-04T06:34:08.636+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:08.636+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 19269
2019-09-04T06:34:08.636+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578848, 2)
2019-09-04T06:34:08.636+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578848, 2)
2019-09-04T06:34:08.636+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 19269
2019-09-04T06:34:08.636+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:08.636+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:34:08.636+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19268
2019-09-04T06:34:08.636+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19268
2019-09-04T06:34:08.636+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19271
2019-09-04T06:34:08.636+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19271
2019-09-04T06:34:08.636+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578848, 2), t: 1 }({ ts: Timestamp(1567578848, 2), t: 1 })
2019-09-04T06:34:08.636+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578848, 2)
2019-09-04T06:34:08.636+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19272
2019-09-04T06:34:08.636+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578848, 2) } } ] } sort: {} projection: {}
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578848, 2) Sort: {} Proj: {} =============================
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578848, 2) || First: notFirst: full path: ts
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578848, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578848, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578848, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:08.636+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578848, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:08.636+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19272
2019-09-04T06:34:08.636+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:08.636+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:08.636+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578848, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578848633), o: { $v: 1, $set: { ping: new Date(1567578848633) } } }, oplog application mode: Secondary
2019-09-04T06:34:08.636+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578848, 2)
2019-09-04T06:34:08.636+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 19274
2019-09-04T06:34:08.636+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }
2019-09-04T06:34:08.637+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:34:08.637+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 19274
2019-09-04T06:34:08.637+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:08.637+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578848, 2), t: 1 }({ ts: Timestamp(1567578848, 2), t: 1 })
2019-09-04T06:34:08.637+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578848, 2)
2019-09-04T06:34:08.637+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19273
2019-09-04T06:34:08.637+0000 D5 QUERY [rsSync-0] Beginning planning...
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:08.637+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:08.637+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:08.637+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:08.637+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:08.637+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:08.637+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19273 2019-09-04T06:34:08.637+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578848, 2) 2019-09-04T06:34:08.637+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19277 2019-09-04T06:34:08.637+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19277 2019-09-04T06:34:08.637+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578848, 2), t: 1 }({ ts: Timestamp(1567578848, 2), t: 1 }) 2019-09-04T06:34:08.637+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:08.637+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578848, 1), t: 1 }, durableWallTime: new Date(1567578848370), appliedOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, appliedWallTime: new Date(1567578848633), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 1), t: 1 }, lastCommittedWall: new Date(1567578848370), lastOpVisible: { ts: Timestamp(1567578848, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:08.637+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1321 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:38.637+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578848, 1), t: 1 }, durableWallTime: new Date(1567578848370), appliedOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, appliedWallTime: new Date(1567578848633), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 2, cfgver: 2 } 
], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 1), t: 1 }, lastCommittedWall: new Date(1567578848370), lastOpVisible: { ts: Timestamp(1567578848, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:08.637+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.637+0000 2019-09-04T06:34:08.637+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578848, 2), t: 1 } 2019-09-04T06:34:08.637+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1322 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:18.637+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578848, 1), t: 1 } } 2019-09-04T06:34:08.637+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.637+0000 2019-09-04T06:34:08.638+0000 D2 ASIO [RS] Request 1321 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 2), $clusterTime: { clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 2) } 2019-09-04T06:34:08.638+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 2), $clusterTime: { clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:08.638+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:08.638+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.637+0000 2019-09-04T06:34:08.638+0000 D2 ASIO [RS] Request 1322 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpApplied: { ts: Timestamp(1567578848, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') 
}, lastCommittedOpTime: Timestamp(1567578848, 2), $clusterTime: { clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 2) } 2019-09-04T06:34:08.638+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpApplied: { ts: Timestamp(1567578848, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 2), $clusterTime: { clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:08.638+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:08.638+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:08.638+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578843, 2) 2019-09-04T06:34:08.638+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:19.199+0000 2019-09-04T06:34:08.638+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:18.873+0000 2019-09-04T06:34:08.638+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:08.638+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1323 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:18.638+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578848, 2), t: 1 } } 2019-09-04T06:34:08.638+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:36.839+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000 2019-09-04T06:34:08.638+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.637+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: 
Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: 
Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578848, 2), t: 1 }, 2019-09-04T06:34:08.633+0000 2019-09-04T06:34:08.638+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:08.651+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:08.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:08.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:08.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:08.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:08.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:08.701+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:08.701+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:08.701+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:08.701+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), appliedOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, appliedWallTime: new Date(1567578848633), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:08.701+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1324 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:38.701+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), 
appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), appliedOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, appliedWallTime: new Date(1567578848633), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, durableWallTime: new Date(1567578844640), appliedOpTime: { ts: Timestamp(1567578844, 8), t: 1 }, appliedWallTime: new Date(1567578844640), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:08.702+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.637+0000 2019-09-04T06:34:08.702+0000 D2 ASIO [RS] Request 1324 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 2), $clusterTime: { clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 2) } 2019-09-04T06:34:08.702+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 2), $clusterTime: { clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:08.702+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:08.702+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.637+0000 2019-09-04T06:34:08.736+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578848, 2) 2019-09-04T06:34:08.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:08.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:08.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:08.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:08.802+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:08.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 
2019-09-04T06:34:08.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:08.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:08.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1325) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:08.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:08.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1325 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:18.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:08.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.839+0000 2019-09-04T06:34:08.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1326) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:08.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1326 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:18.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:08.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.839+0000 2019-09-04T06:34:08.839+0000 D2 ASIO [Replication] Request 1325 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), opTime: { ts: Timestamp(1567578848, 2), t: 1 }, wallTime: new Date(1567578848633), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 2), $clusterTime: { clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 2) } 2019-09-04T06:34:08.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), opTime: { ts: Timestamp(1567578848, 2), t: 1 }, wallTime: new Date(1567578848633), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 2), $clusterTime: { 
clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:08.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:08.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1325) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), opTime: { ts: Timestamp(1567578848, 2), t: 1 }, wallTime: new Date(1567578848633), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 2), $clusterTime: { clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 2) } 2019-09-04T06:34:08.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:08.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:10.839Z 2019-09-04T06:34:08.839+0000 D2 ASIO [Replication] Request 1326 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), opTime: { ts: Timestamp(1567578848, 2), t: 1 }, wallTime: new Date(1567578848633), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578848, 2), $clusterTime: { clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 2) } 2019-09-04T06:34:08.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.839+0000 2019-09-04T06:34:08.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), opTime: { ts: Timestamp(1567578848, 2), t: 1 }, wallTime: new Date(1567578848633), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578848, 2), $clusterTime: { 
clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:08.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:08.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1326) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), opTime: { ts: Timestamp(1567578848, 2), t: 1 }, wallTime: new Date(1567578848633), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578848, 2), $clusterTime: { clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578848, 2) } 2019-09-04T06:34:08.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:08.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:34:18.873+0000 2019-09-04T06:34:08.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:34:20.124+0000 2019-09-04T06:34:08.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:08.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:10.839Z 2019-09-04T06:34:08.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:08.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.839+0000 2019-09-04T06:34:08.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.839+0000 2019-09-04T06:34:08.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:08.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:08.902+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:08.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:08.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:08.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:08.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:09.002+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:09.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, 
$replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, ABD1F029F94CF5D4E6D96CCCACA4C6ABE5F84BF0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:09.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:09.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, ABD1F029F94CF5D4E6D96CCCACA4C6ABE5F84BF0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:09.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, ABD1F029F94CF5D4E6D96CCCACA4C6ABE5F84BF0), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:09.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), opTime: { ts: Timestamp(1567578848, 2), t: 1 }, wallTime: new Date(1567578848633) } 2019-09-04T06:34:09.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 2), signature: { hash: BinData(0, ABD1F029F94CF5D4E6D96CCCACA4C6ABE5F84BF0), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.102+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:09.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.151+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, 
$db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.202+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:09.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:09.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:09.255+0000 D2 ASIO [RS] Request 1323 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578849, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578849254), o: { $v: 1, $set: { ping: new Date(1567578849251), up: 20 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpApplied: { ts: Timestamp(1567578849, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 2), $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:09.255+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578849, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578849254), o: { $v: 1, $set: { ping: new Date(1567578849251), up: 20 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpApplied: { ts: Timestamp(1567578849, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578848, 2), $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:09.255+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:09.255+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578849, 1) and ending at ts: Timestamp(1567578849, 1) 2019-09-04T06:34:09.255+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:20.124+0000 2019-09-04T06:34:09.255+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:19.837+0000 2019-09-04T06:34:09.255+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of 
pool replexec 2019-09-04T06:34:09.255+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.839+0000 2019-09-04T06:34:09.256+0000 D2 REPL [replication-1] oplog buffer has 0 bytes 2019-09-04T06:34:09.256+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578849, 1), t: 1 } 2019-09-04T06:34:09.256+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:09.256+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:09.256+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578848, 2) 2019-09-04T06:34:09.256+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19299 2019-09-04T06:34:09.256+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:09.256+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:09.256+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19299 2019-09-04T06:34:09.256+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:09.256+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:09.256+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578848, 2) 2019-09-04T06:34:09.256+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19302 2019-09-04T06:34:09.256+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:09.256+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:09.256+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19302 2019-09-04T06:34:09.256+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:34:09.256+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578849, 1) } 2019-09-04T06:34:09.256+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19278 2019-09-04T06:34:09.256+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19278 2019-09-04T06:34:09.256+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19305 2019-09-04T06:34:09.256+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19305 2019-09-04T06:34:09.256+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:09.256+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 19307 2019-09-04T06:34:09.256+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578849, 1) 2019-09-04T06:34:09.256+0000 D3 STORAGE 
[repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578849, 1) 2019-09-04T06:34:09.256+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 19307 2019-09-04T06:34:09.256+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:09.256+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:09.256+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19306 2019-09-04T06:34:09.256+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19306 2019-09-04T06:34:09.256+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19309 2019-09-04T06:34:09.256+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19309 2019-09-04T06:34:09.256+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578849, 1), t: 1 }({ ts: Timestamp(1567578849, 1), t: 1 }) 2019-09-04T06:34:09.256+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578849, 1) 2019-09-04T06:34:09.256+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19310 2019-09-04T06:34:09.256+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578849, 1) } } ] } sort: {} projection: {} 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578849, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578849, 1) || First: notFirst: full path: ts 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578849, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578849, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578849, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:34:09.256+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578849, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:09.256+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19310 2019-09-04T06:34:09.256+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:09.256+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:09.256+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578849, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578849254), o: { $v: 1, $set: { ping: new Date(1567578849251), up: 20 } } }, oplog application mode: Secondary 2019-09-04T06:34:09.257+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578849, 1) 2019-09-04T06:34:09.257+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 19312 2019-09-04T06:34:09.257+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:34:09.257+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:09.257+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 19312 2019-09-04T06:34:09.257+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:09.257+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578849, 1), t: 1 }({ ts: Timestamp(1567578849, 1), t: 1 }) 2019-09-04T06:34:09.257+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578849, 1) 2019-09-04T06:34:09.257+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19311 2019-09-04T06:34:09.257+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:09.257+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:09.257+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:09.257+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:09.257+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:09.257+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:09.257+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19311 2019-09-04T06:34:09.257+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578849, 1) 2019-09-04T06:34:09.257+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19316 2019-09-04T06:34:09.257+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19316 2019-09-04T06:34:09.257+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578849, 1), t: 1 }({ ts: Timestamp(1567578849, 1), t: 1 }) 2019-09-04T06:34:09.257+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:09.257+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), appliedOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, appliedWallTime: new Date(1567578848633), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), appliedOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, appliedWallTime: new Date(1567578848633), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:09.257+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1327 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:39.257+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), appliedOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, appliedWallTime: new Date(1567578848633), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), appliedOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, appliedWallTime: new Date(1567578848633), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:09.257+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:39.257+0000 2019-09-04T06:34:09.257+0000 D2 ASIO [RS] Request 1327 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:09.257+0000 D2 COMMAND [conn413] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5ae10f8f28dab2b56d3b'), operName: "", parentOperId: "5d6f5ae10f8f28dab2b56d38" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $db: "config" } 2019-09-04T06:34:09.257+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:09.257+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:09.257+0000 D1 TRACKING [conn413] Cmd: find, TrackingId: 5d6f5ae10f8f28dab2b56d38|5d6f5ae10f8f28dab2b56d3b 2019-09-04T06:34:09.257+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:39.257+0000 2019-09-04T06:34:09.257+0000 D1 REPL [conn413] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1567578849, 1), t: 1 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578848, 2), t: 1 } 2019-09-04T06:34:09.257+0000 D3 REPL [conn413] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:39.267+0000 2019-09-04T06:34:09.258+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:09.258+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:09.258+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), appliedOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, appliedWallTime: new Date(1567578848633), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), appliedOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, appliedWallTime: new Date(1567578848633), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, 
lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:09.258+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1328 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:39.258+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), appliedOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, appliedWallTime: new Date(1567578848633), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, durableWallTime: new Date(1567578848633), appliedOpTime: { ts: Timestamp(1567578848, 2), t: 1 }, appliedWallTime: new Date(1567578848633), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578848, 2), t: 1 }, lastCommittedWall: new Date(1567578848633), lastOpVisible: { ts: Timestamp(1567578848, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:09.258+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578849, 1), t: 1 } 2019-09-04T06:34:09.258+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1329 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:19.258+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578848, 2), t: 1 } } 2019-09-04T06:34:09.258+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:39.258+0000 2019-09-04T06:34:09.258+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:39.258+0000 2019-09-04T06:34:09.258+0000 D2 ASIO [RS] Request 1328 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:09.258+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:09.258+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:09.258+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:39.258+0000 2019-09-04T06:34:09.258+0000 D2 ASIO [RS] Request 1329 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpApplied: { ts: Timestamp(1567578849, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:09.258+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpApplied: { ts: Timestamp(1567578849, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:09.258+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:09.258+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:09.258+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.258+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.258+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578844, 1) 2019-09-04T06:34:09.258+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:19.837+0000 2019-09-04T06:34:09.258+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:19.764+0000 2019-09-04T06:34:09.258+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.258+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot 
until 2019-09-04T06:34:21.660+0000 2019-09-04T06:34:09.258+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.258+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000 2019-09-04T06:34:09.258+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1330 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:19.258+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578849, 1), t: 1 } } 2019-09-04T06:34:09.258+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.258+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000 2019-09-04T06:34:09.258+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:39.258+0000 2019-09-04T06:34:09.258+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.258+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000 2019-09-04T06:34:09.258+0000 D3 REPL [conn407] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.258+0000 D3 REPL [conn407] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.691+0000 2019-09-04T06:34:09.258+0000 D3 REPL [conn404] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.258+0000 D3 REPL [conn404] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.073+0000 2019-09-04T06:34:09.258+0000 D3 REPL [conn413] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.258+0000 D1 COMMAND [conn413] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } } } 2019-09-04T06:34:09.258+0000 D3 REPL [conn380] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.258+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:09.258+0000 D3 REPL [conn380] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:14.390+0000 2019-09-04T06:34:09.258+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:38.839+0000 2019-09-04T06:34:09.258+0000 D3 STORAGE [conn413] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:09.258+0000 D3 REPL [conn390] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.258+0000 D1 COMMAND [conn413] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5ae10f8f28dab2b56d3b'), operName: "", parentOperId: "5d6f5ae10f8f28dab2b56d38" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, 
BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578849, 1) 2019-09-04T06:34:09.258+0000 D3 REPL [conn390] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.077+0000 2019-09-04T06:34:09.258+0000 D2 QUERY [conn413] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:34:09.259+0000 I COMMAND [conn413] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5ae10f8f28dab2b56d3b'), operName: "", parentOperId: "5d6f5ae10f8f28dab2b56d38" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 1ms 2019-09-04T06:34:09.259+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn406] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 
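
Note: the conn413 trace above shows the full lifecycle of a readConcern { level: "majority", afterOpTime: ... } read on this secondary: the command blocks in waitUntilOpTime until the majority-committed snapshot advances past the requested optime, switches to the 'committed' timestamp read source, and then answers with an EOF plan because config.settings does not exist here. A minimal pymongo sketch of the same kind of majority read follows; the direct connection and absence of credentials are assumptions for illustration, and drivers do not expose afterOpTime directly, so only the readConcern "majority" part is shown.

    # Minimal sketch, assuming this config server is reachable and open.
    from pymongo import MongoClient, ReadPreference
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")

    settings = client["config"].get_collection(
        "settings",
        read_concern=ReadConcern("majority"),      # triggers the 'committed' snapshot wait
        read_preference=ReadPreference.NEAREST,    # matches the $readPreference in the log
    )

    # The server blocks until the read optime is majority-committed, then
    # serves the find; here it returns None (EOF plan, collection absent).
    print(settings.find_one({"_id": "chunksize"}))
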
2019-09-04T06:34:09.259+0000 D3 REPL [conn406] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.496+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000 2019-09-04T06:34:09.259+0000 D2 COMMAND [conn413] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5ae10f8f28dab2b56d3c'), operName: "", parentOperId: "5d6f5ae10f8f28dab2b56d38" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $db: "config" } 2019-09-04T06:34:09.259+0000 D3 REPL [conn405] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn405] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:13.416+0000 2019-09-04T06:34:09.259+0000 D1 TRACKING [conn413] Cmd: find, TrackingId: 5d6f5ae10f8f28dab2b56d38|5d6f5ae10f8f28dab2b56d3c 2019-09-04T06:34:09.259+0000 D1 COMMAND [conn413] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } } } 2019-09-04T06:34:09.259+0000 D3 STORAGE [conn413] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:09.259+0000 D1 COMMAND [conn413] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5ae10f8f28dab2b56d3c'), operName: "", parentOperId: "5d6f5ae10f8f28dab2b56d38" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578849, 1) 2019-09-04T06:34:09.259+0000 D2 QUERY [conn413] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:34:09.259+0000 I COMMAND [conn413] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5ae10f8f28dab2b56d3c'), operName: "", parentOperId: "5d6f5ae10f8f28dab2b56d38" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:34:09.259+0000 D3 REPL [conn393] Got notified of new snapshot: { ts: Timestamp(1567578849, 1), t: 1 }, 2019-09-04T06:34:09.254+0000 2019-09-04T06:34:09.259+0000 D3 REPL [conn393] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:12.081+0000 2019-09-04T06:34:09.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.302+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:09.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.356+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578849, 1) 2019-09-04T06:34:09.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.402+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:09.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.503+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:09.538+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } }, limit: 1, 
maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $db: "config" } 2019-09-04T06:34:09.538+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } } } 2019-09-04T06:34:09.538+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:09.538+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578849, 1) 2019-09-04T06:34:09.538+0000 D2 QUERY [conn61] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:34:09.538+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:34:09.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.603+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:09.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.651+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.652+0000 I 
COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.703+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:09.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.803+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:09.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.903+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:09.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:09.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:09.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:10.002+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:34:10.002+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:34:10.002+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:34:10.003+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:10.014+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:34:10.014+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 
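
Note: the steady drip of isMaster commands on conn14 through conn61 above is routine driver heartbeating, while conn90 begins a SCRAM-SHA-1 handshake (saslStart, then saslContinue round trips that complete in the next entries) before running its monitoring commands. A sketch of a client that would produce this pattern; only the user name dba_root and the SCRAM-SHA-1 mechanism come from the log, and the password is a placeholder since the log masks SCRAM payloads as "xxx".

    # Hypothetical reconstruction of the monitoring client's connection.
    from pymongo import MongoClient

    client = MongoClient(
        host="cmodb803.togewa.com",
        port=27019,
        username="dba_root",
        password="REDACTED",          # placeholder; payloads are masked in the log
        authSource="admin",
        authMechanism="SCRAM-SHA-1",  # produces the saslStart/saslContinue exchange
    )

    # Drivers send isMaster periodically to track the topology; that is the
    # recurring "command: isMaster" traffic seen above.
    print(client.admin.command("isMaster")["ismaster"])
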
2019-09-04T06:34:10.014+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:34:10.014+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:34:10.014+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:34:10.014+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:34:10.014+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:10.015+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:10.015+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:10.015+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:34:10.015+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:34:10.015+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:34:10.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:10.016+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunks Tree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:34:10.016+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:34:10.016+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:34:10.016+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:34:10.016+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:34:10.016+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:34:10.016+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:34:10.016+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
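
Note: none of the four config.chunks indexes enumerated above (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1, _id_) leads on the jumbo field, so the planner rates zero indexed solutions and, as the next entry shows, falls back to a collection scan. Reproducing the count from a client is one call; this reuses the hypothetical client from the sketches above.

    # The same count conn90 issued; with no index covering "jumbo" the only
    # available plan is the COLLSCAN logged below (docsExamined:1 shows the
    # scan touched a single chunk document).
    result = client["config"].command("count", "chunks", query={"jumbo": True})
    print(result["n"])  # number of chunks currently flagged jumbo
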
2019-09-04T06:34:10.016+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:10.016+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:10.016+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578849, 1) 2019-09-04T06:34:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19347 2019-09-04T06:34:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19347 2019-09-04T06:34:10.016+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:10.016+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:10.016+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:34:10.016+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:34:10.016+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:10.016+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1 Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:34:10.016+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:34:10.016+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:34:10.016+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578849, 1) 2019-09-04T06:34:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19350 2019-09-04T06:34:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19350 2019-09-04T06:34:10.016+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:10.016+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:34:10.016+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:10.016+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1 Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:34:10.016+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:34:10.016+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:34:10.016+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578849, 1) 2019-09-04T06:34:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19352 2019-09-04T06:34:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19352 2019-09-04T06:34:10.016+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:549 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:10.017+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:34:10.017+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist.
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:34:10.017+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:34:10.017+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:10.017+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19355 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19355 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19356 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19356 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19357 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19357 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19358 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19358 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19359 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19359 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19360 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
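
Note: the listDatabases command on conn90 drives the catalog walk above and below: for every collection the server opens a short-lived WiredTiger snapshot (the WT begin_transaction / rollback_transaction pairs), reads the catalog entry (CCE metadata) out of _mdb_catalog, and moves on to the next RecordId. From the client side the whole walk is a single command; this again assumes the hypothetical client from the earlier sketches.

    # The one command behind the per-collection metadata lookups traced here;
    # each database's on-disk size is aggregated from the catalog walk.
    for db_info in client.admin.command("listDatabases")["databases"]:
        print(db_info["name"], db_info["sizeOnDisk"])
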
2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19360 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19361 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19361 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19362 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19362 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19363 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19363 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19364 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19364 
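
Note: each "fetched CCE metadata" entry carries the collection's options, including its UUID, plus its full index specs and the idents that map them to WiredTiger files. The client-visible projection of those catalog entries is listCollections; a sketch for the config database being walked here, reusing the same hypothetical client:

    # listCollections surfaces the options and UUID stored in the catalog
    # entries above (config.version, config.collections, and so on).
    for coll in client["config"].list_collections():
        print(coll["name"], coll["info"]["uuid"], coll.get("options", {}))
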
2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19365 2019-09-04T06:34:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
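
config.chunks is the only collection in this scan carrying three unique compound indexes (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1); together they let the config server address a chunk by range, by owning shard, or by version. mongod builds these catalog indexes itself during config server initialization, so the following is purely an illustration of the equivalent client-side declaration (pymongo assumed, hypothetical scratch deployment):

# Illustration only: mongod creates these catalog indexes itself; this is
# what one of them would look like declared from a client.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017/")   # hypothetical host
demo = client.test.chunks_demo                       # hypothetical collection
demo.create_index([("ns", ASCENDING), ("min", ASCENDING)],
                  name="ns_1_min_1", unique=True)
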
2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19365 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19366 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19366 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19367 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19367 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19368 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19368 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19369 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
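
Each catalog record pairs the namespace with a stable UUID and an "ident", the on-disk WiredTiger table name (the config/collection/ and config/index/ paths seen throughout these entries). The UUIDs, though not the idents, are exposed to clients through listCollections; a minimal sketch, pymongo assumed:

# Minimal sketch (pymongo assumed): the collection UUIDs recorded above
# are visible to clients via listCollections.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb802.togewa.com:27019,"
                     "cmodb803.togewa.com:27019,"
                     "cmodb804.togewa.com:27019/?replicaSet=configrs")
for coll in client.config.list_collections():
    print(coll["name"], coll["info"]["uuid"])
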
2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19369 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19370 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19370 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19371 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19371 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19372 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19372 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19373 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 19373 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19374 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19374 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19375 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19375 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19376 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:34:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19376 2019-09-04T06:34:10.018+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:34:10.024+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:10.024+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19378 2019-09-04T06:34:10.024+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19378 2019-09-04T06:34:10.024+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19379 2019-09-04T06:34:10.024+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19379 2019-09-04T06:34:10.024+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19380 2019-09-04T06:34:10.024+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19380 2019-09-04T06:34:10.024+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:10.046+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.046+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.046+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19383 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19383 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19384 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19384 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19385 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19385 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19386 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19386 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19387 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19387 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19388 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 19388 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19389 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19389 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19390 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19390 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19391 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19391 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19392 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19392 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19393 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19393 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19394 2019-09-04T06:34:10.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19394 2019-09-04T06:34:10.046+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:10.050+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:34:10.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19396 2019-09-04T06:34:10.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19396 2019-09-04T06:34:10.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19397 2019-09-04T06:34:10.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19397 2019-09-04T06:34:10.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19398 2019-09-04T06:34:10.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19398 2019-09-04T06:34:10.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19399 2019-09-04T06:34:10.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19399 2019-09-04T06:34:10.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19400 2019-09-04T06:34:10.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19400 2019-09-04T06:34:10.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19401 2019-09-04T06:34:10.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19401 2019-09-04T06:34:10.051+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:10.055+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.056+0000 I COMMAND 
[conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.103+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:10.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.151+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.202+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.202+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.203+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:10.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:10.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:10.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $db: 
"admin" } 2019-09-04T06:34:10.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:10.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254) } 2019-09-04T06:34:10.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:10.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:10.256+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:10.256+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:10.256+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578849, 1) 2019-09-04T06:34:10.256+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19415 2019-09-04T06:34:10.256+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:10.256+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:10.256+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19415 2019-09-04T06:34:10.257+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19418 2019-09-04T06:34:10.257+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19418 2019-09-04T06:34:10.257+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578849, 1), t: 1 }({ ts: Timestamp(1567578849, 1), t: 1 }) 2019-09-04T06:34:10.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.304+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:10.327+0000 D2 
COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.404+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:10.464+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578827, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578846, 1), signature: { hash: BinData(0, 358D0502B4FCC9C400B6C699DBC90579933C5B63), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578827, 2), t: 1 } }, $db: "config" } 2019-09-04T06:34:10.465+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578827, 2), t: 1 } } } 2019-09-04T06:34:10.465+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:10.465+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578827, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578846, 1), signature: { hash: BinData(0, 358D0502B4FCC9C400B6C699DBC90579933C5B63), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578827, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578849, 1) 2019-09-04T06:34:10.465+0000 D2 QUERY [conn71] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:34:10.465+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578827, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578846, 1), signature: { hash: BinData(0, 358D0502B4FCC9C400B6C699DBC90579933C5B63), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578827, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:34:10.465+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $db: "config" } 2019-09-04T06:34:10.465+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } } } 2019-09-04T06:34:10.465+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:10.465+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578849, 1) 2019-09-04T06:34:10.465+0000 D2 QUERY [conn71] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:34:10.465+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:34:10.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.499+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.504+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:10.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.604+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:10.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.651+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
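
Both settings lookups above follow the same pattern: wait until a 'committed' (majority) snapshot at least as recent as afterOpTime is available, pin the read to that snapshot (the readTs in the log), and return nothing via an EOF plan because config.settings does not exist yet. afterOpTime itself is internal to the config-server protocol; the nearest driver-level analogue is a causally consistent session combined with majority read concern. A sketch, pymongo assumed:

# Sketch (pymongo assumed): a majority read of config.settings comparable
# to the finds above. afterOpTime is internal to the sharding protocol;
# causal consistency is the driver-level analogue.
from pymongo import MongoClient, ReadPreference
from pymongo.read_concern import ReadConcern

client = MongoClient("mongodb://cmodb802.togewa.com:27019,"
                     "cmodb803.togewa.com:27019,"
                     "cmodb804.togewa.com:27019/?replicaSet=configrs")
config = client.get_database("config",
                             read_concern=ReadConcern("majority"),
                             read_preference=ReadPreference.NEAREST)
with client.start_session(causal_consistency=True) as session:
    # Returns None here: the log shows config.settings does not exist yet.
    print(config.settings.find_one({"_id": "chunksize"}, session=session))
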
2019-09-04T06:34:10.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.704+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:10.721+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $db: "config" } 2019-09-04T06:34:10.722+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } } } 2019-09-04T06:34:10.722+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:10.722+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578849, 1) 2019-09-04T06:34:10.722+0000 D2 QUERY [conn72] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:34:10.722+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578849, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:34:10.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.804+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:10.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:10.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:10.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1331) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:10.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1331 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:20.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:10.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:40.839+0000 2019-09-04T06:34:10.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1332) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:10.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1332 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:20.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 
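
This is one complete heartbeat round from this node's point of view: requests 1331 and 1332 go to the other two configrs members, each with an expiry ten seconds out (expDate 06:34:20.839 for a send at 06:34:10.839), and the responses below feed setUpValues before the next round is scheduled two seconds later, matching the default 2000 ms heartbeat interval. The member states this exchange maintains are what replSetGetStatus summarizes; a sketch, pymongo assumed:

# Sketch (pymongo assumed): summarize the member states that the
# heartbeat exchange in these entries maintains.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
status = client.admin.command("replSetGetStatus")
for m in status["members"]:
    print(m["name"], m["stateStr"], m.get("syncingTo", ""))
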
2019-09-04T06:34:10.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:40.839+0000 2019-09-04T06:34:10.839+0000 D2 ASIO [Replication] Request 1332 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:10.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:10.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:10.839+0000 D2 ASIO [Replication] Request 1331 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:10.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new 
Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:10.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1332) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:10.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:10.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:34:19.764+0000 2019-09-04T06:34:10.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:34:20.959+0000 2019-09-04T06:34:10.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:10.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:12.839Z 2019-09-04T06:34:10.839+0000 D1 EXECUTOR [replexec-4] starting thread in pool replexec 2019-09-04T06:34:10.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:10.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:10.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1331) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: 
Timestamp(1567578849, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:10.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:40.839+0000 2019-09-04T06:34:10.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:10.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:40.839+0000 2019-09-04T06:34:10.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:12.839Z 2019-09-04T06:34:10.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:40.839+0000 2019-09-04T06:34:10.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.904+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:10.944+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.945+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:10.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:10.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:11.005+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:11.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:11.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:11.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:11.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 
1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:11.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254) } 2019-09-04T06:34:11.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.105+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:11.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.151+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.205+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:11.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:34:11.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:11.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:11.256+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:11.256+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:11.256+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578849, 1) 2019-09-04T06:34:11.256+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19459 2019-09-04T06:34:11.256+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:11.256+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:11.257+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19459 2019-09-04T06:34:11.257+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19462 2019-09-04T06:34:11.257+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19462 2019-09-04T06:34:11.257+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578849, 1), t: 1 }({ ts: Timestamp(1567578849, 1), t: 1 }) 2019-09-04T06:34:11.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.273+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.305+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:11.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.369+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.405+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:11.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.505+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:11.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.572+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36804 #429 (87 connections now open) 2019-09-04T06:34:11.573+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:11.573+0000 D2 COMMAND [conn429] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:11.573+0000 I NETWORK [conn429] received client metadata from 10.108.2.55:36804 conn429: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:11.573+0000 I COMMAND [conn429] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:11.573+0000 D2 COMMAND [conn429] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578841, 1), signature: { hash: BinData(0, 76AD270FB0503A5D458AD13D48279C4C7DEE0538), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:11.573+0000 D1 REPL [conn429] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: 
{ ts: Timestamp(1567578849, 1), t: 1 } 2019-09-04T06:34:11.573+0000 D3 REPL [conn429] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:41.583+0000 2019-09-04T06:34:11.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.606+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:11.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.651+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.706+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:11.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.796+0000 D2 COMMAND [conn81] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.796+0000 I COMMAND [conn81] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.797+0000 D2 COMMAND [conn81] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578835, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578835, 1), t: 1 } }, $db: "config" } 2019-09-04T06:34:11.797+0000 D1 COMMAND [conn81] 
Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578835, 1), t: 1 } } } 2019-09-04T06:34:11.797+0000 D3 STORAGE [conn81] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:11.797+0000 D1 COMMAND [conn81] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578835, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578835, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578849, 1) 2019-09-04T06:34:11.797+0000 D5 QUERY [conn81] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:11.797+0000 D5 QUERY [conn81] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:34:11.797+0000 D5 QUERY [conn81] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:34:11.797+0000 D5 QUERY [conn81] Rated tree: $and 2019-09-04T06:34:11.797+0000 D5 QUERY [conn81] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:11.797+0000 D5 QUERY [conn81] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:11.797+0000 D2 QUERY [conn81] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:11.797+0000 D3 STORAGE [conn81] WT begin_transaction for snapshot id 19487 2019-09-04T06:34:11.797+0000 D3 STORAGE [conn81] WT rollback_transaction for snapshot id 19487 2019-09-04T06:34:11.797+0000 I COMMAND [conn81] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578835, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578835, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:879 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:34:11.806+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:11.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.906+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:11.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:11.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:11.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:12.006+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:12.062+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52292 #430 (88 connections now open) 2019-09-04T06:34:12.062+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:12.062+0000 D2 COMMAND [conn430] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:12.062+0000 I NETWORK [conn430] received client metadata 
from 10.108.2.73:52292 conn430: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:12.062+0000 I COMMAND [conn430] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:12.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.076+0000 I COMMAND [conn404] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578812, 1), signature: { hash: BinData(0, 4B378BB31368CDD862D6FBF154A78A3408447D9E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:34:12.076+0000 D1 - [conn404] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:12.076+0000 W - [conn404] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:12.080+0000 I COMMAND [conn390] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, C4CFA6E23777FD680628F991FF985CB66BF77E05), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:12.080+0000 D1 - [conn390] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:12.080+0000 W - [conn390] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:12.084+0000 I COMMAND [conn393] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578812, 1), signature: { hash: BinData(0, 4B378BB31368CDD862D6FBF154A78A3408447D9E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:12.084+0000 D1 - [conn393] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:12.084+0000 W - [conn393] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:12.093+0000 I - [conn404] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:12.093+0000 D1 COMMAND [conn404] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578812, 1), signature: { hash: BinData(0, 4B378BB31368CDD862D6FBF154A78A3408447D9E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:12.093+0000 D1 - [conn404] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:12.093+0000 W - [conn404] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:12.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.106+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:12.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.119+0000 I - [conn390] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:12.119+0000 D1 COMMAND [conn390] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, C4CFA6E23777FD680628F991FF985CB66BF77E05), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:12.119+0000 D1 - [conn390] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:12.120+0000 W - [conn390] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:12.141+0000 I - [conn404] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:12.141+0000 W COMMAND [conn404] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:12.141+0000 I COMMAND [conn404] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578812, 1), signature: { hash: BinData(0, 4B378BB31368CDD862D6FBF154A78A3408447D9E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:34:12.141+0000 D2 NETWORK [conn404] Session from 10.108.2.73:52276 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:12.142+0000 I NETWORK [conn404] end connection 10.108.2.73:52276 (87 connections now open) 2019-09-04T06:34:12.151+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.171+0000 I - [conn393] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:12.171+0000 I - [conn390] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:12.171+0000 W COMMAND [conn390] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:12.171+0000 D1 COMMAND [conn393] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: 
{ level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578812, 1), signature: { hash: BinData(0, 4B378BB31368CDD862D6FBF154A78A3408447D9E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:12.171+0000 I COMMAND [conn390] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, C4CFA6E23777FD680628F991FF985CB66BF77E05), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30052ms 2019-09-04T06:34:12.171+0000 D1 - [conn393] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:12.171+0000 W - [conn393] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:12.171+0000 D2 NETWORK [conn390] Session from 10.108.2.55:36768 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:12.171+0000 I NETWORK [conn390] end connection 10.108.2.55:36768 (86 connections now open) 2019-09-04T06:34:12.193+0000 I - [conn393] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:12.193+0000 W COMMAND [conn393] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:12.193+0000 I COMMAND [conn393] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578812, 1), signature: { hash: BinData(0, 4B378BB31368CDD862D6FBF154A78A3408447D9E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30099ms 2019-09-04T06:34:12.193+0000 D2 NETWORK [conn393] Session from 10.108.2.72:45856 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:12.193+0000 I NETWORK [conn393] end connection 10.108.2.72:45856 (85 connections now open) 2019-09-04T06:34:12.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.206+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:12.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 2), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:12.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:12.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 2), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:12.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 2), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:12.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254) } 2019-09-04T06:34:12.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 2), signature: { hash: BinData(0, BFEC1F9852D290106ED3CD2F9B5904EBF0F4559F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.235+0000 D4 STORAGE 
[FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:12.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:12.257+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:12.257+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:12.257+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578849, 1) 2019-09-04T06:34:12.257+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19507 2019-09-04T06:34:12.257+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:12.257+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:12.257+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19507 2019-09-04T06:34:12.257+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19510 2019-09-04T06:34:12.257+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19510 2019-09-04T06:34:12.257+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578849, 1), t: 1 }({ ts: Timestamp(1567578849, 1), t: 1 }) 2019-09-04T06:34:12.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.266+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42246 #431 (86 connections now open) 2019-09-04T06:34:12.266+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:12.266+0000 D2 COMMAND [conn431] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:12.266+0000 I NETWORK [conn431] received client metadata from 10.108.2.48:42246 conn431: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:12.266+0000 I COMMAND [conn431] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:12.266+0000 D2 COMMAND [conn431] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578850, 1), signature: { hash: BinData(0, DF7BE7C881CF5FD2AF2EDB3FBEF3BF57179C23BB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:12.266+0000 D1 REPL [conn431] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578849, 1), t: 1 } 2019-09-04T06:34:12.266+0000 D3 REPL [conn431] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.276+0000 2019-09-04T06:34:12.267+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.267+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.272+0000 D2 COMMAND [conn415] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, DE6CD7CA3AD84D3D5B5CC231FF3233432495E5FA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:12.272+0000 D1 REPL [conn415] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578849, 1), t: 1 } 2019-09-04T06:34:12.272+0000 D3 REPL [conn415] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.282+0000 2019-09-04T06:34:12.279+0000 D2 COMMAND [conn402] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:12.279+0000 D1 REPL [conn402] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578849, 1), t: 1 } 2019-09-04T06:34:12.279+0000 D3 REPL [conn402] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.289+0000 2019-09-04T06:34:12.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.307+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:12.318+0000 D2 COMMAND [conn409] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { 
clusterTime: Timestamp(1567578842, 1), signature: { hash: BinData(0, 6E364D23EB11B87900F4FB61D074B82CCAB19665), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:12.318+0000 D1 REPL [conn409] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578849, 1), t: 1 } 2019-09-04T06:34:12.318+0000 D3 REPL [conn409] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.328+0000 2019-09-04T06:34:12.325+0000 D2 COMMAND [conn427] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578842, 1), signature: { hash: BinData(0, 6E364D23EB11B87900F4FB61D074B82CCAB19665), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:12.325+0000 D1 REPL [conn427] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578849, 1), t: 1 } 2019-09-04T06:34:12.325+0000 D3 REPL [conn427] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.335+0000 2019-09-04T06:34:12.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.329+0000 I NETWORK [listener] connection accepted from 10.108.2.61:38062 #432 (87 connections now open) 2019-09-04T06:34:12.329+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:12.329+0000 D2 COMMAND [conn432] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:12.329+0000 I NETWORK [conn432] received client metadata from 10.108.2.61:38062 conn432: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:12.329+0000 I COMMAND [conn432] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:12.333+0000 D2 COMMAND [conn432] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, DE6CD7CA3AD84D3D5B5CC231FF3233432495E5FA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:12.333+0000 D1 REPL [conn432] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578849, 1), t: 1 } 2019-09-04T06:34:12.333+0000 D3 REPL [conn432] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.343+0000 2019-09-04T06:34:12.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.407+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:12.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.499+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.507+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:12.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.607+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:12.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.651+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, 
$db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.672+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42248 #433 (88 connections now open) 2019-09-04T06:34:12.672+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:12.672+0000 D2 COMMAND [conn433] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:12.672+0000 I NETWORK [conn433] received client metadata from 10.108.2.48:42248 conn433: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:12.672+0000 I COMMAND [conn433] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:12.673+0000 D2 COMMAND [conn433] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578850, 1), signature: { hash: BinData(0, DF7BE7C881CF5FD2AF2EDB3FBEF3BF57179C23BB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:12.673+0000 D1 REPL [conn433] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578849, 1), t: 1 } 2019-09-04T06:34:12.673+0000 D3 REPL [conn433] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.683+0000 2019-09-04T06:34:12.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.707+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:12.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.765+0000 I COMMAND [conn25] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.766+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.766+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.807+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:12.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:12.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:12.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1333) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:12.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1333 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:22.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:12.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:40.839+0000 2019-09-04T06:34:12.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1334) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:12.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1334 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:22.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:12.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:40.839+0000 2019-09-04T06:34:12.839+0000 D2 ASIO [Replication] Request 1333 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 
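The heartbeat exchange above is the healthy part of this picture: cmodb803 (fromId: 1, a SECONDARY) sends replSetHeartbeat to cmodb802 (the primary, state: 1) and cmodb804 (SECONDARY, state: 2), and both answer ok: 1.0 with durableOpTime and opTime at { ts: Timestamp(1567578849, 1), t: 1 }, so replication inside the configrs set is current even while client reads stall. A minimal sketch of checking the same member states from a client, assuming pymongo and network access to one of the config servers named in the log (the host name is taken from the log; everything else is illustrative):

    from pymongo import MongoClient

    # Connect to a single member; directConnection avoids replica-set discovery.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)
    status = client.admin.command("replSetGetStatus")

    for member in status["members"]:
        # stateStr is PRIMARY/SECONDARY/...; "optime" carries the same
        # { ts, t } pair reported in the heartbeat responses logged above.
        print(member["name"], member["stateStr"], member["optime"])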
2019-09-04T06:34:12.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:12.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:12.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1333) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:12.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:12.839+0000 D2 ASIO [Replication] Request 1334 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:12.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:34:20.959+0000 2019-09-04T06:34:12.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", 
syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:12.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:12.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:34:23.022+0000 2019-09-04T06:34:12.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:12.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:14.839Z 2019-09-04T06:34:12.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:12.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:42.839+0000 2019-09-04T06:34:12.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:42.839+0000 2019-09-04T06:34:12.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1334) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:12.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:12.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:14.839Z 2019-09-04T06:34:12.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:42.839+0000 2019-09-04T06:34:12.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.868+0000 I COMMAND [conn22] 
command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.907+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:12.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:12.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:12.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:13.007+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:13.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, C9F2C5BF1DF343B1D0393757115BA418AF5CFF04), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:13.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:13.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, C9F2C5BF1DF343B1D0393757115BA418AF5CFF04), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:13.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, C9F2C5BF1DF343B1D0393757115BA418AF5CFF04), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:13.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254) } 2019-09-04T06:34:13.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, C9F2C5BF1DF343B1D0393757115BA418AF5CFF04), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:34:13.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.108+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:13.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.151+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.208+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:13.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:13.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:34:13.257+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:13.257+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:13.257+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578849, 1) 2019-09-04T06:34:13.257+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19559 2019-09-04T06:34:13.257+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:13.257+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:13.257+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19559 2019-09-04T06:34:13.257+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19562 2019-09-04T06:34:13.258+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19562 2019-09-04T06:34:13.258+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578849, 1), t: 1 }({ ts: Timestamp(1567578849, 1), t: 1 }) 2019-09-04T06:34:13.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.308+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:13.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.402+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36672 #434 (89 connections now open) 2019-09-04T06:34:13.402+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:13.402+0000 D2 COMMAND [conn434] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, 
saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:13.402+0000 I NETWORK [conn434] received client metadata from 10.108.2.45:36672 conn434: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:13.402+0000 I COMMAND [conn434] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:13.408+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:13.420+0000 I COMMAND [conn405] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 1), signature: { hash: BinData(0, 9C9EF00EE4F407A7E772C97AEC68CC0A05914703), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:13.420+0000 D1 - [conn405] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:13.420+0000 W - [conn405] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:13.438+0000 I - [conn405] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:13.438+0000 D1 COMMAND [conn405] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 1), signature: { hash: BinData(0, 9C9EF00EE4F407A7E772C97AEC68CC0A05914703), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:13.438+0000 D1 - [conn405] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:13.438+0000 W - [conn405] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:13.461+0000 I - [conn405] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 
0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { 
"b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : 
"/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:13.461+0000 W COMMAND [conn405] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:13.461+0000 I COMMAND [conn405] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578818, 1), signature: { hash: BinData(0, 9C9EF00EE4F407A7E772C97AEC68CC0A05914703), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:34:13.461+0000 D2 NETWORK [conn405] Session from 10.108.2.45:36658 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:13.461+0000 I NETWORK [conn405] end connection 10.108.2.45:36658 (88 connections now open) 2019-09-04T06:34:13.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.500+0000 I COMMAND [conn406] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, C4CFA6E23777FD680628F991FF985CB66BF77E05), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:34:13.500+0000 D1 - [conn406] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:13.500+0000 W - [conn406] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:13.508+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:13.517+0000 I - [conn406] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:13.517+0000 D1 COMMAND [conn406] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, C4CFA6E23777FD680628F991FF985CB66BF77E05), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:13.517+0000 D1 - [conn406] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:13.517+0000 W - [conn406] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:13.537+0000 I - [conn406] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:13.537+0000 W COMMAND [conn406] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:13.537+0000 I COMMAND [conn406] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, C4CFA6E23777FD680628F991FF985CB66BF77E05), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:34:13.537+0000 D2 NETWORK [conn406] Session from 10.108.2.59:48482 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:13.537+0000 I NETWORK [conn406] end connection 10.108.2.59:48482 (87 connections now open) 2019-09-04T06:34:13.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.608+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:13.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.651+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.696+0000 I COMMAND [conn407] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 2206AF5AF7A132B84E6DD84FBEAA8BC020142021), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:13.696+0000 D1 - [conn407] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:13.696+0000 W - [conn407] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:13.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.708+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:13.713+0000 I - [conn407] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMess
ageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : 
"/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:13.713+0000 D1 COMMAND [conn407] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 2206AF5AF7A132B84E6DD84FBEAA8BC020142021), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:13.713+0000 D1 - [conn407] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:13.713+0000 W - [conn407] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:13.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.733+0000 I - [conn407] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:13.733+0000 W COMMAND [conn407] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:13.733+0000 I COMMAND [conn407] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 2206AF5AF7A132B84E6DD84FBEAA8BC020142021), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms 2019-09-04T06:34:13.733+0000 D2 NETWORK [conn407] Session from 10.108.2.52:47304 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:13.733+0000 I NETWORK [conn407] end connection 10.108.2.52:47304 (86 connections now open) 2019-09-04T06:34:13.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.808+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:13.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.899+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49330 #435 (87 connections now open) 2019-09-04T06:34:13.899+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:13.899+0000 D2 COMMAND [conn435] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:13.899+0000 I NETWORK [conn435] received client metadata from 10.108.2.54:49330 conn435: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:13.899+0000 I COMMAND [conn435] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" 
], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:13.899+0000 D2 COMMAND [conn435] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:13.899+0000 D1 REPL [conn435] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578849, 1), t: 1 } 2019-09-04T06:34:13.899+0000 D3 REPL [conn435] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:43.909+0000 2019-09-04T06:34:13.909+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:13.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:13.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:13.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:14.009+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:14.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.109+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:14.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.151+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" 
} numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.209+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:14.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 109B87471B5B3B35184B42A71C275BF0A5E40193), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:14.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:14.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 109B87471B5B3B35184B42A71C275BF0A5E40193), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:14.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 109B87471B5B3B35184B42A71C275BF0A5E40193), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:14.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254) } 2019-09-04T06:34:14.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 109B87471B5B3B35184B42A71C275BF0A5E40193), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:14.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:34:14.257+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:14.257+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:14.257+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578849, 1) 2019-09-04T06:34:14.257+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19602 2019-09-04T06:34:14.257+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:14.257+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:14.257+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19602 2019-09-04T06:34:14.258+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:14.258+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:14.258+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1335 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:44.258+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, 
syncSourceIndex: 2 } } 2019-09-04T06:34:14.258+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:39.258+0000 2019-09-04T06:34:14.258+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19605 2019-09-04T06:34:14.258+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19605 2019-09-04T06:34:14.258+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578849, 1), t: 1 }({ ts: Timestamp(1567578849, 1), t: 1 }) 2019-09-04T06:34:14.258+0000 D2 ASIO [RS] Request 1335 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:14.258+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:14.258+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:14.258+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:39.258+0000 2019-09-04T06:34:14.258+0000 D2 ASIO [RS] Request 1330 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpApplied: { ts: Timestamp(1567578849, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:14.258+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 
1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpApplied: { ts: Timestamp(1567578849, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:14.258+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:14.258+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:14.258+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:23.022+0000 2019-09-04T06:34:14.258+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:24.320+0000 2019-09-04T06:34:14.258+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:14.258+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1336 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:24.258+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578849, 1), t: 1 } } 2019-09-04T06:34:14.258+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:42.839+0000 2019-09-04T06:34:14.258+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:39.258+0000 2019-09-04T06:34:14.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.309+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:14.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.384+0000 I NETWORK [listener] connection accepted from 10.108.2.62:53574 #436 (88 connections now open) 2019-09-04T06:34:14.384+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 
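----- BEGIN ANNOTATION (editor's note, not mongod output) -----
The failing finds against admin.system.keys above all carry readConcern { level: "majority", afterOpTime: { ts: ..., t: 92 } }, while this replica set is currently in term 1 (see the conn435 entry above: "waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578849, 1), t: 1 }"). Replication OpTimes order by term before timestamp, so an afterOpTime stamped with the stale term 92 can never be satisfied by a set whose term is 1, even though the current snapshot's timestamp is newer; the wait can only end when the 30000 ms maxTimeMS budget expires, which is exactly the MaxTimeMSExpired/errCode:50 pattern repeating in this log. A minimal sketch of that ordering, assuming the term-major comparison of mongo/db/repl/optime.h in 4.2 (the dict layout below is illustrative, not a server type):

    # Illustrative only: OpTime ordering with term taking precedence.
    def optime_reached(target, committed):
        """True once the majority-committed OpTime has caught up to target."""
        return (committed["t"], committed["ts"]) >= (target["t"], target["ts"])

    target    = {"ts": 1566459168, "t": 92}  # afterOpTime sent by the client
    committed = {"ts": 1567578849, "t": 1}   # current majority snapshot in this log

    # False: term 1 < term 92, so waitUntilOpTime blocks until maxTimeMS fires.
    print(optime_reached(target, committed))

The term-92 optime arrives via the clients' $configServerState, suggesting they still hold an optime from a previous incarnation of this config server replica set.
----- END ANNOTATION -----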
2019-09-04T06:34:14.384+0000 D2 COMMAND [conn436] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:14.384+0000 I NETWORK [conn436] received client metadata from 10.108.2.62:53574 conn436: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:14.384+0000 I COMMAND [conn436] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:14.396+0000 I COMMAND [conn380] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 2206AF5AF7A132B84E6DD84FBEAA8BC020142021), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:14.396+0000 D1 - [conn380] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:14.396+0000 W - [conn380] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:14.409+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:14.413+0000 I - [conn380] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:14.413+0000 D1 COMMAND [conn380] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 2206AF5AF7A132B84E6DD84FBEAA8BC020142021), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:14.413+0000 D1 - [conn380] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:14.413+0000 W - [conn380] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:14.433+0000 I - [conn380] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:14.434+0000 W COMMAND [conn380] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:14.434+0000 I COMMAND [conn380] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 2206AF5AF7A132B84E6DD84FBEAA8BC020142021), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms 2019-09-04T06:34:14.434+0000 D2 NETWORK [conn380] Session from 10.108.2.62:53534 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:14.434+0000 I NETWORK [conn380] end connection 10.108.2.62:53534 (87 connections now open) 2019-09-04T06:34:14.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.509+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:14.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.609+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:14.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.651+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 
1, $db: "admin" } 2019-09-04T06:34:14.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.710+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:14.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.810+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:14.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:14.839+0000 D3 REPL [replexec-4] memberData lastupdate is: 2019-09-04T06:34:13.063+0000 2019-09-04T06:34:14.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:14.839+0000 D3 REPL [replexec-4] memberData lastupdate is: 2019-09-04T06:34:14.234+0000 2019-09-04T06:34:14.839+0000 D3 REPL [replexec-4] stalest member MemberId(0) date: 2019-09-04T06:34:13.063+0000 2019-09-04T06:34:14.839+0000 D3 REPL [replexec-4] scheduling next check at 2019-09-04T06:34:23.063+0000 2019-09-04T06:34:14.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:14.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:14.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1337) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:14.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1337 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:24.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:14.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:14.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1338) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:14.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1338 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:24.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:14.839+0000 D3 EXECUTOR [replexec-3] Not reaping 
because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:14.839+0000 D2 ASIO [Replication] Request 1337 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:14.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:14.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:14.839+0000 D2 ASIO [Replication] Request 1338 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:14.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1337) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: 
Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:14.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:14.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:14.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:16.839Z 2019-09-04T06:34:14.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:14.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1338) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:14.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:14.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:34:24.320+0000 2019-09-04T06:34:14.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:34:25.406+0000 2019-09-04T06:34:14.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for 
member _id:MemberId(0) 2019-09-04T06:34:14.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:16.839Z 2019-09-04T06:34:14.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:14.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:14.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:14.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:14.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.910+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:14.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:14.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:14.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:15.010+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:15.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 109B87471B5B3B35184B42A71C275BF0A5E40193), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:15.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:15.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 109B87471B5B3B35184B42A71C275BF0A5E40193), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:15.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 109B87471B5B3B35184B42A71C275BF0A5E40193), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:15.063+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", 
term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254) } 2019-09-04T06:34:15.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 109B87471B5B3B35184B42A71C275BF0A5E40193), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.063+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.110+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:15.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.151+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.151+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.167+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.167+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.210+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:15.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:15.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:34:15.258+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:15.258+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:15.258+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578849, 1) 2019-09-04T06:34:15.258+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19644 2019-09-04T06:34:15.258+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:15.258+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:15.258+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19644 2019-09-04T06:34:15.258+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19647 2019-09-04T06:34:15.258+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19647 2019-09-04T06:34:15.258+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578849, 1), t: 1 }({ ts: Timestamp(1567578849, 1), t: 1 }) 2019-09-04T06:34:15.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.284+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.311+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:15.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.411+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:15.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.511+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:34:15.563+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.563+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.611+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:15.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.650+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.651+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.667+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.667+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.711+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:15.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.811+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:15.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.859+0000 D2 COMMAND [conn46] run command 
admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.911+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:15.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:15.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:15.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:16.012+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:16.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.112+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:16.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.212+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:16.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 109B87471B5B3B35184B42A71C275BF0A5E40193), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:16.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:16.234+0000 D2 REPL_HB [conn28] Received heartbeat 
request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 109B87471B5B3B35184B42A71C275BF0A5E40193), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:16.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 109B87471B5B3B35184B42A71C275BF0A5E40193), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:16.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254) } 2019-09-04T06:34:16.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 109B87471B5B3B35184B42A71C275BF0A5E40193), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:16.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:16.258+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:16.258+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:16.258+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578849, 1) 2019-09-04T06:34:16.258+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19682 2019-09-04T06:34:16.258+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:16.258+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:16.258+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19682 2019-09-04T06:34:16.258+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19685 2019-09-04T06:34:16.258+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19685 2019-09-04T06:34:16.258+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578849, 1), t: 1 }({ ts: Timestamp(1567578849, 1), t: 1 }) 2019-09-04T06:34:16.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.283+0000 D2 COMMAND 
[conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.312+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:16.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.376+0000 D2 COMMAND [conn125] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:34:16.376+0000 I COMMAND [conn125] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.386+0000 D2 COMMAND [conn178] run command config.$cmd { ping: 1.0, lsid: { id: UUID("4ca3bc30-0f16-4335-a15f-3e7d48b5566e") }, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:34:16.387+0000 I COMMAND [conn178] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("4ca3bc30-0f16-4335-a15f-3e7d48b5566e") }, $clusterTime: { clusterTime: Timestamp(1567578795, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.402+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34388 #437 (88 connections now open) 2019-09-04T06:34:16.402+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:16.402+0000 D2 COMMAND [conn437] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:16.402+0000 I NETWORK [conn437] received client metadata from 10.108.2.57:34388 conn437: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:16.402+0000 I COMMAND [conn437] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], 
internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:16.406+0000 D2 COMMAND [conn437] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:16.406+0000 D1 REPL [conn437] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578849, 1), t: 1 } 2019-09-04T06:34:16.406+0000 D3 REPL [conn437] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:46.416+0000 2019-09-04T06:34:16.412+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:16.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.512+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:16.553+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.553+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.612+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:16.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.713+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:16.732+0000 D2 COMMAND [conn51] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.813+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:16.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:16.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:16.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1339) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:16.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1339 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:26.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:16.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:16.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1340) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:16.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1340 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:26.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:16.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:16.839+0000 D2 ASIO [Replication] Request 1339 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:16.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:16.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:16.839+0000 D2 ASIO [Replication] Request 1340 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:16.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1339) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:16.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, 
durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:16.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:16.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:16.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:18.839Z 2019-09-04T06:34:16.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:16.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1340) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 1) } 2019-09-04T06:34:16.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:16.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:34:25.406+0000 2019-09-04T06:34:16.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:34:26.863+0000 2019-09-04T06:34:16.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:16.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:16.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:16.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:18.839Z 2019-09-04T06:34:16.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:16.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.868+0000 
D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.913+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:16.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:16.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:16.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:17.013+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:17.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 109B87471B5B3B35184B42A71C275BF0A5E40193), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:17.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:17.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 109B87471B5B3B35184B42A71C275BF0A5E40193), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:17.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 109B87471B5B3B35184B42A71C275BF0A5E40193), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:17.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), opTime: { ts: Timestamp(1567578849, 1), t: 1 }, wallTime: new Date(1567578849254) } 2019-09-04T06:34:17.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 109B87471B5B3B35184B42A71C275BF0A5E40193), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.070+0000 I 
COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.113+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:17.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.197+0000 D2 ASIO [RS] Request 1336 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578857, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567578827:-2504617745590980237" }, wall: new Date(1567578857195), o: { $v: 1, $set: { ping: new Date(1567578857192) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpApplied: { ts: Timestamp(1567578857, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578857, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 1) } 2019-09-04T06:34:17.197+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578857, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567578827:-2504617745590980237" }, wall: new Date(1567578857195), o: { $v: 1, $set: { ping: new Date(1567578857192) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpApplied: { ts: Timestamp(1567578857, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578857, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:17.197+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:17.197+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578857, 1) and ending at ts: Timestamp(1567578857, 1) 2019-09-04T06:34:17.197+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:26.863+0000 2019-09-04T06:34:17.197+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:27.375+0000 2019-09-04T06:34:17.197+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:17.197+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:17.197+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578857, 1), t: 1 } 2019-09-04T06:34:17.197+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:17.197+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:17.197+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578849, 1) 2019-09-04T06:34:17.197+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19720 2019-09-04T06:34:17.197+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:17.197+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:17.197+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19720 2019-09-04T06:34:17.197+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:17.197+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:34:17.197+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:17.197+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578849, 1) 2019-09-04T06:34:17.197+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19723 2019-09-04T06:34:17.197+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578857, 1) } 2019-09-04T06:34:17.197+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:17.197+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:17.197+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19723 2019-09-04T06:34:17.197+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19686 
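The oplog-fetcher and ReplBatcher entries above are one pass of the secondary's replication pipeline: the fetcher reads a batch from the sync source's local.oplog.rs (here a single update to config.lockpings), the batcher stages it under a point-in-time read, and the rsSync/repl-writer threads then write the entry locally and apply it. A minimal pymongo sketch of the same tailing pattern the fetcher uses; the connection string and resume timestamp are illustrative assumptions, not values taken from a live system:

    # Tail local.oplog.rs with a tailable, awaitable cursor, resuming after a known optime.
    from pymongo import MongoClient, CursorType
    from bson.timestamp import Timestamp

    # directConnection requires pymongo >= 3.12; adjust host/port to the member being inspected.
    client = MongoClient("mongodb://localhost:27019", directConnection=True)
    oplog = client.local["oplog.rs"]

    last_ts = Timestamp(1567578857, 1)  # e.g. the last applied ts visible in the log above
    cursor = oplog.find(
        {"ts": {"$gt": last_ts}},
        cursor_type=CursorType.TAILABLE_AWAIT,  # blocks server-side, like the getMore ... maxTimeMS: 5000 above
        oplog_replay=True,  # hint for the ts predicate; ignored by MongoDB 4.4+
    )
    for op in cursor:
        # Each document has the shape seen above: { ts, t, op: "u", ns: "config.lockpings", o2, o, ... }
        print(op["ts"], op["op"], op["ns"])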
2019-09-04T06:34:17.197+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19686
2019-09-04T06:34:17.197+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19726
2019-09-04T06:34:17.197+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19726
2019-09-04T06:34:17.197+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:17.197+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 19728
2019-09-04T06:34:17.197+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578857, 1)
2019-09-04T06:34:17.197+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578857, 1)
2019-09-04T06:34:17.197+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 19728
2019-09-04T06:34:17.197+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:17.197+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:34:17.197+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19727
2019-09-04T06:34:17.197+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19727
2019-09-04T06:34:17.197+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19730
2019-09-04T06:34:17.197+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19730
2019-09-04T06:34:17.197+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578857, 1), t: 1 }({ ts: Timestamp(1567578857, 1), t: 1 })
2019-09-04T06:34:17.197+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578857, 1)
2019-09-04T06:34:17.197+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19731
2019-09-04T06:34:17.197+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578857, 1) } } ] } sort: {} projection: {}
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578857, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Rated tree:
$and
    t $eq 1  || First: notFirst: full path: t
    ts $lt Timestamp(1567578857, 1)  || First: notFirst: full path: ts
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
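The "Running query as sub-queries" entry shows how the optime comparison against replset.minvalid is phrased: optimes order first by term t, then by timestamp ts within an equal term, and the $or is exactly that lexicographic "less than". A hedged Python equivalent of the predicate, purely illustrative:

    # (term, ts) ordering, matching { $or: [ { t: { $lt: T } }, { t: T, ts: { $lt: TS } } ] }.
    from bson.timestamp import Timestamp

    def optime_lt(a, b):
        """True when optime a is strictly older than optime b; a and b are (t, ts) pairs."""
        return a < b  # tuple comparison: term first, then Timestamp (bson Timestamps are ordered)

    assert optime_lt((1, Timestamp(1567578849, 1)), (1, Timestamp(1567578857, 1)))
    assert not optime_lt((1, Timestamp(1567578857, 1)), (1, Timestamp(1567578857, 1)))
    assert optime_lt((1, Timestamp(1567578857, 1)), (92, Timestamp(1566459168, 1)))  # term dominates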
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578857, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Rated tree:
t $lt 1  || First: notFirst: full path: t
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578857, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:17.197+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:17.198+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:17.198+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:17.198+0000 D5 QUERY [rsSync-0] Rated tree:
$or
    $and
        t $eq 1  || First: notFirst: full path: t
        ts $lt Timestamp(1567578857, 1)  || First: notFirst: full path: ts
    t $lt 1  || First: notFirst: full path: t
2019-09-04T06:34:17.198+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
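"outputted 0 indexed solutions" is expected here: local.replset.minvalid carries only the implicit { _id: 1 } index and the predicates are over t and ts, so the subplanner can only produce collection scans, which is harmless on an internal collection that effectively holds a single status document. A sketch of reproducing the decision with the explain command through pymongo (host, port and connection options are assumptions):

    # Ask the server to plan the same $or; expect a collection-scan-rooted winning plan.
    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://localhost:27019", directConnection=True)
    plan = client.local.command(
        "explain",
        {
            "find": "replset.minvalid",
            "filter": {"$or": [
                {"t": {"$lt": 1}},
                {"t": 1, "ts": {"$lt": Timestamp(1567578857, 1)}},
            ]},
        },
        verbosity="queryPlanner",
    )
    print(plan["queryPlanner"]["winningPlan"])  # a COLLSCAN-based plan, as in the log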
2019-09-04T06:34:17.198+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578857, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:17.198+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19731
2019-09-04T06:34:17.198+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:17.198+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:17.198+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578857, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567578827:-2504617745590980237" }, wall: new Date(1567578857195), o: { $v: 1, $set: { ping: new Date(1567578857192) } } }, oplog application mode: Secondary
2019-09-04T06:34:17.198+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578857, 1)
2019-09-04T06:34:17.198+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 19733
2019-09-04T06:34:17.198+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb801.togewa.com:27017:1567578827:-2504617745590980237" }
2019-09-04T06:34:17.198+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:34:17.198+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 19733
2019-09-04T06:34:17.198+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:17.198+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578857, 1), t: 1 }({ ts: Timestamp(1567578857, 1), t: 1 })
2019-09-04T06:34:17.198+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578857, 1)
2019-09-04T06:34:17.198+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19732
2019-09-04T06:34:17.198+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:17.198+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:17.198+0000 D5 QUERY [rsSync-0] Rated tree:
$and
2019-09-04T06:34:17.198+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:17.198+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:17.198+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:17.198+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19732 2019-09-04T06:34:17.198+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578857, 1) 2019-09-04T06:34:17.198+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19736 2019-09-04T06:34:17.198+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19736 2019-09-04T06:34:17.198+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578857, 1), t: 1 }({ ts: Timestamp(1567578857, 1), t: 1 }) 2019-09-04T06:34:17.198+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:17.198+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578857, 1), t: 1 }, appliedWallTime: new Date(1567578857195), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:17.198+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1341 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:47.198+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578857, 1), t: 1 }, appliedWallTime: new Date(1567578857195), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:17.198+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.198+0000 2019-09-04T06:34:17.198+0000 D2 ASIO [RS] Request 1341 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578857, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 1) } 2019-09-04T06:34:17.198+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578849, 1), t: 1 }, lastCommittedWall: new Date(1567578849254), lastOpVisible: { ts: Timestamp(1567578849, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 1), $clusterTime: { clusterTime: Timestamp(1567578857, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:17.198+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:17.198+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.198+0000 2019-09-04T06:34:17.199+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578857, 1), t: 1 } 2019-09-04T06:34:17.199+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1342 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:27.199+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578849, 1), t: 1 } } 2019-09-04T06:34:17.199+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.198+0000 2019-09-04T06:34:17.200+0000 D2 ASIO [RS] Request 1342 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpVisible: { ts: Timestamp(1567578857, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpApplied: { ts: Timestamp(1567578857, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 1), $clusterTime: { clusterTime: Timestamp(1567578857, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 1) } 2019-09-04T06:34:17.200+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpVisible: { ts: Timestamp(1567578857, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new 
Date(1567578857195), lastOpApplied: { ts: Timestamp(1567578857, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 1), $clusterTime: { clusterTime: Timestamp(1567578857, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:17.200+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:17.200+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:17.200+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.200+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.200+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578852, 1) 2019-09-04T06:34:17.200+0000 D3 REPL [conn435] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.200+0000 D3 REPL [conn435] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:43.909+0000 2019-09-04T06:34:17.200+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.200+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000 2019-09-04T06:34:17.200+0000 D3 REPL [conn412] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.200+0000 D3 REPL [conn412] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.318+0000 2019-09-04T06:34:17.200+0000 D3 REPL [conn429] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.200+0000 D3 REPL [conn429] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:41.583+0000 2019-09-04T06:34:17.200+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.200+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000 2019-09-04T06:34:17.200+0000 D3 REPL [conn402] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.200+0000 D3 REPL [conn402] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.289+0000 2019-09-04T06:34:17.200+0000 D3 REPL [conn409] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.200+0000 D3 REPL [conn409] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.328+0000 2019-09-04T06:34:17.200+0000 D3 REPL [conn427] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.200+0000 D3 REPL [conn427] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.335+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn433] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn433] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:34:42.683+0000 2019-09-04T06:34:17.201+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:27.375+0000 2019-09-04T06:34:17.201+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:28.367+0000 2019-09-04T06:34:17.201+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:17.201+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1343 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:27.201+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578857, 1), t: 1 } } 2019-09-04T06:34:17.201+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:17.201+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.198+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn437] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn437] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:46.416+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn431] Got notified of new snapshot: { ts: 
Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn431] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.276+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn415] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn415] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.282+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn432] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn432] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.343+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn382] Got notified of new snapshot: { ts: Timestamp(1567578857, 1), t: 1 }, 2019-09-04T06:34:17.195+0000 2019-09-04T06:34:17.201+0000 D3 REPL [conn382] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:17.223+0000 2019-09-04T06:34:17.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.203+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:17.203+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:17.203+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 1), t: 1 }, durableWallTime: new Date(1567578857195), appliedOpTime: { ts: Timestamp(1567578857, 1), t: 1 }, appliedWallTime: new Date(1567578857195), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpVisible: { ts: Timestamp(1567578857, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:17.203+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1344 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:47.203+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 1), t: 1 }, durableWallTime: new Date(1567578857195), appliedOpTime: { ts: Timestamp(1567578857, 1), t: 1 }, appliedWallTime: new Date(1567578857195), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: 
Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpVisible: { ts: Timestamp(1567578857, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:17.203+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.198+0000 2019-09-04T06:34:17.203+0000 D2 ASIO [RS] Request 1344 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpVisible: { ts: Timestamp(1567578857, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 1), $clusterTime: { clusterTime: Timestamp(1567578857, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 1) } 2019-09-04T06:34:17.203+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpVisible: { ts: Timestamp(1567578857, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 1), $clusterTime: { clusterTime: Timestamp(1567578857, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:17.203+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:17.203+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.198+0000 2019-09-04T06:34:17.213+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52288 #438 (89 connections now open) 2019-09-04T06:34:17.213+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:17.213+0000 D2 COMMAND [conn438] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:17.213+0000 I NETWORK [conn438] received client metadata from 10.108.2.58:52288 conn438: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:17.213+0000 I COMMAND [conn438] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: 
"local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:17.213+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:17.223+0000 I COMMAND [conn382] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578819, 1), signature: { hash: BinData(0, F48F10C5414C16EB9E237EDDB5359A70016AB5D8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:17.223+0000 D1 - [conn382] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:17.223+0000 W - [conn382] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:17.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:17.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:17.240+0000 I - [conn382] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:17.240+0000 D1 COMMAND [conn382] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578819, 1), signature: { hash: BinData(0, F48F10C5414C16EB9E237EDDB5359A70016AB5D8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:17.240+0000 D1 - [conn382] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:17.240+0000 W - [conn382] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:17.260+0000 I - [conn382] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:17.260+0000 W COMMAND [conn382] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:17.260+0000 I COMMAND [conn382] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578819, 1), signature: { hash: BinData(0, F48F10C5414C16EB9E237EDDB5359A70016AB5D8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30026ms 2019-09-04T06:34:17.260+0000 D2 NETWORK [conn382] Session from 10.108.2.51:59254 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:17.260+0000 I NETWORK [conn382] end connection 10.108.2.51:59254 (88 connections now open) 2019-09-04T06:34:17.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.297+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578857, 1) 2019-09-04T06:34:17.313+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:17.319+0000 I COMMAND [conn412] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578823, 1), signature: { hash: BinData(0, 0AD6AB951BD56BA6078970F05FAF7F8D9E5E1F3F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:17.319+0000 D1 - [conn412] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:17.319+0000 W - [conn412] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:17.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.335+0000 I - [conn412] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:17.335+0000 D1 COMMAND [conn412] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578823, 1), signature: { hash: BinData(0, 0AD6AB951BD56BA6078970F05FAF7F8D9E5E1F3F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:17.335+0000 D1 - [conn412] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:17.335+0000 W - [conn412] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:17.355+0000 I - [conn412] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:17.355+0000 W COMMAND [conn412] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:17.355+0000 I COMMAND [conn412] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578823, 1), signature: { hash: BinData(0, 0AD6AB951BD56BA6078970F05FAF7F8D9E5E1F3F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30026ms 2019-09-04T06:34:17.355+0000 D2 NETWORK [conn412] Session from 10.108.2.58:52270 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:17.355+0000 I NETWORK [conn412] end connection 10.108.2.58:52270 (87 connections now open) 2019-09-04T06:34:17.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.413+0000 D2 COMMAND [conn420] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, E539C2FA8BCC8C9A0D94E22CA2ADA62100E7CF8D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:17.413+0000 D1 REPL [conn420] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578857, 1), t: 1 } 2019-09-04T06:34:17.413+0000 D3 REPL [conn420] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.423+0000 2019-09-04T06:34:17.413+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:17.427+0000 D2 COMMAND [conn424] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578855, 1), signature: { hash: BinData(0, 1F4340BC981E2B9D0D58976FE30DE1B22C151008), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:17.427+0000 D1 REPL [conn424] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578857, 1), t: 1 } 2019-09-04T06:34:17.427+0000 D3 REPL [conn424] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.437+0000 2019-09-04T06:34:17.433+0000 I NETWORK [listener] connection accepted from 10.108.2.49:53512 #439 (88 connections now open) 2019-09-04T06:34:17.433+0000 D3 EXECUTOR [listener] 
Starting new executor thread in passthrough mode 2019-09-04T06:34:17.433+0000 D2 COMMAND [conn439] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:17.433+0000 I NETWORK [conn439] received client metadata from 10.108.2.49:53512 conn439: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:17.433+0000 I COMMAND [conn439] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:17.435+0000 D2 COMMAND [conn410] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578855, 1), signature: { hash: BinData(0, 1F4340BC981E2B9D0D58976FE30DE1B22C151008), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:17.435+0000 D1 REPL [conn410] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578857, 1), t: 1 } 2019-09-04T06:34:17.435+0000 D3 REPL [conn410] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000 2019-09-04T06:34:17.435+0000 D2 COMMAND [conn434] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:17.435+0000 D1 REPL [conn434] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578857, 1), t: 1 } 2019-09-04T06:34:17.435+0000 D3 REPL [conn434] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000 2019-09-04T06:34:17.437+0000 D2 COMMAND [conn439] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578850, 1), signature: { hash: BinData(0, DF7BE7C881CF5FD2AF2EDB3FBEF3BF57179C23BB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:17.437+0000 D1 REPL [conn439] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578857, 1), t: 1 } 2019-09-04T06:34:17.437+0000 D3 REPL [conn439] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.447+0000 2019-09-04T06:34:17.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.507+0000 D2 COMMAND [conn426] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:17.507+0000 D1 REPL [conn426] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578857, 1), t: 1 } 2019-09-04T06:34:17.507+0000 D3 REPL [conn426] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.517+0000 2019-09-04T06:34:17.513+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:17.523+0000 I NETWORK [listener] connection accepted from 10.108.2.53:50844 #440 (89 connections now open) 2019-09-04T06:34:17.523+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:17.523+0000 D2 COMMAND [conn440] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:17.523+0000 I NETWORK [conn440] received client metadata from 10.108.2.53:50844 conn440: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:17.523+0000 I COMMAND [conn440] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { 
minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:17.528+0000 D2 COMMAND [conn440] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 695EC7A4F3A174023E82B47C0B1F2FFF237676A8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:17.528+0000 D1 REPL [conn440] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578857, 1), t: 1 } 2019-09-04T06:34:17.528+0000 D3 REPL [conn440] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.538+0000 2019-09-04T06:34:17.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.614+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:17.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.662+0000 D2 ASIO [RS] Request 1343 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578857, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578857656), o: { $v: 1, $set: { ping: new Date(1567578857655) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpVisible: { ts: Timestamp(1567578857, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpApplied: { ts: Timestamp(1567578857, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 1), $clusterTime: { clusterTime: Timestamp(1567578857, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 2) } 2019-09-04T06:34:17.662+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578857, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578857656), o: { $v: 1, $set: { ping: new Date(1567578857655) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpVisible: { ts: Timestamp(1567578857, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpApplied: { ts: Timestamp(1567578857, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 1), $clusterTime: { clusterTime: Timestamp(1567578857, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:17.662+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:17.662+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578857, 2) and ending at ts: Timestamp(1567578857, 2) 2019-09-04T06:34:17.662+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:28.367+0000 2019-09-04T06:34:17.662+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:27.717+0000 2019-09-04T06:34:17.662+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:17.662+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578857, 2), t: 1 } 2019-09-04T06:34:17.662+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:17.662+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:17.662+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:17.662+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578857, 1) 2019-09-04T06:34:17.662+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19765 2019-09-04T06:34:17.662+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:17.662+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:17.662+0000 
D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19765 2019-09-04T06:34:17.662+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:17.662+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:17.662+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:34:17.662+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578857, 1) 2019-09-04T06:34:17.662+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19768 2019-09-04T06:34:17.662+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578857, 2) } 2019-09-04T06:34:17.662+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:17.662+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:17.662+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19768 2019-09-04T06:34:17.662+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19737 2019-09-04T06:34:17.662+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19737 2019-09-04T06:34:17.662+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19771 2019-09-04T06:34:17.662+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19771 2019-09-04T06:34:17.662+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:17.662+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 19773 2019-09-04T06:34:17.662+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578857, 2) 2019-09-04T06:34:17.662+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578857, 2) 2019-09-04T06:34:17.662+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 19773 2019-09-04T06:34:17.662+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:17.662+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:17.663+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19772 2019-09-04T06:34:17.663+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19772 2019-09-04T06:34:17.663+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19775 2019-09-04T06:34:17.663+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19775 2019-09-04T06:34:17.663+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578857, 2), t: 1 }({ ts: Timestamp(1567578857, 2), t: 1 }) 2019-09-04T06:34:17.663+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578857, 2) 2019-09-04T06:34:17.663+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19776 2019-09-04T06:34:17.663+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578857, 2) } } ] } sort: {} projection: {} 
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578857, 2) Sort: {} Proj: {} =============================
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578857, 2) || First: notFirst: full path: ts
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578857, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578857, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578857, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578857, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:17.663+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19776
2019-09-04T06:34:17.663+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:17.663+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:17.663+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578857, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578857656), o: { $v: 1, $set: { ping: new Date(1567578857655) } } }, oplog application mode: Secondary
2019-09-04T06:34:17.663+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578857, 2)
2019-09-04T06:34:17.663+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 19778
2019-09-04T06:34:17.663+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }
2019-09-04T06:34:17.663+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:34:17.663+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 19778
2019-09-04T06:34:17.663+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:17.663+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578857, 2), t: 1 }({ ts: Timestamp(1567578857, 2), t: 1 })
2019-09-04T06:34:17.663+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578857, 2)
2019-09-04T06:34:17.663+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19777
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:17.663+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:17.663+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:34:17.663+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19777
2019-09-04T06:34:17.663+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578857, 2)
2019-09-04T06:34:17.663+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19781
2019-09-04T06:34:17.663+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19781
2019-09-04T06:34:17.663+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578857, 2), t: 1 }({ ts: Timestamp(1567578857, 2), t: 1 })
2019-09-04T06:34:17.663+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:17.663+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 1), t: 1 }, durableWallTime: new Date(1567578857195), appliedOpTime: { ts: Timestamp(1567578857, 2), t: 1 }, appliedWallTime: new Date(1567578857656), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpVisible: { ts: Timestamp(1567578857, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:17.663+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1345 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:47.663+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 1), t: 1 }, durableWallTime: new Date(1567578857195), appliedOpTime: { ts: Timestamp(1567578857, 2), t: 1 }, appliedWallTime: new Date(1567578857656), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpVisible: { ts: Timestamp(1567578857, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:17.663+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.663+0000
2019-09-04T06:34:17.664+0000 D2 ASIO [RS] Request 1345 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpVisible: { ts: Timestamp(1567578857, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 1), $clusterTime: { clusterTime: Timestamp(1567578857, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 2) }
2019-09-04T06:34:17.664+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpVisible: { ts: Timestamp(1567578857, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 1), $clusterTime: { clusterTime: Timestamp(1567578857, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:17.664+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:17.664+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.664+0000
2019-09-04T06:34:17.664+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578857, 2), t: 1 }
2019-09-04T06:34:17.664+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1346 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:27.664+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578857, 1), t: 1 } }
2019-09-04T06:34:17.664+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.664+0000
2019-09-04T06:34:17.679+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:34:17.679+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:17.679+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 2), t: 1 }, durableWallTime: new Date(1567578857656), appliedOpTime: { ts: Timestamp(1567578857, 2), t: 1 }, appliedWallTime: new Date(1567578857656), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpVisible: { ts: Timestamp(1567578857, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:17.679+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1347 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:47.679+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 2), t: 1 }, durableWallTime: new Date(1567578857656), appliedOpTime: { ts: Timestamp(1567578857, 2), t: 1 }, appliedWallTime: new Date(1567578857656), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpVisible: { ts: Timestamp(1567578857, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:17.679+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.664+0000
2019-09-04T06:34:17.679+0000 D2 ASIO [RS] Request 1347 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpVisible: { ts: Timestamp(1567578857, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 1), $clusterTime: { clusterTime: Timestamp(1567578857, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 2) }
2019-09-04T06:34:17.679+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 1), t: 1 }, lastCommittedWall: new Date(1567578857195), lastOpVisible: { ts: Timestamp(1567578857, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 1), $clusterTime: { clusterTime: Timestamp(1567578857, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:17.679+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:17.679+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.664+0000
2019-09-04T06:34:17.680+0000 D2 ASIO [RS] Request 1346 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 2), t: 1 }, lastCommittedWall: new Date(1567578857656), lastOpVisible: { ts: Timestamp(1567578857, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578857, 2), t: 1 }, lastCommittedWall: new Date(1567578857656), lastOpApplied: { ts: Timestamp(1567578857, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 2), $clusterTime: { clusterTime: Timestamp(1567578857, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 2) }
2019-09-04T06:34:17.680+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 2), t: 1 }, lastCommittedWall: new Date(1567578857656), lastOpVisible: { ts: Timestamp(1567578857, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578857, 2), t: 1 }, lastCommittedWall: new Date(1567578857656), lastOpApplied: { ts: Timestamp(1567578857, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 2), $clusterTime: { clusterTime: Timestamp(1567578857, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:17.680+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:17.680+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:34:17.680+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578852, 2)
2019-09-04T06:34:17.680+0000 D3 REPL [conn435] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn435] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:43.909+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn433] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn433] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.683+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn415] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn415] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.282+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn431] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn431] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.276+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn432] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn432] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.343+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn420] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn420] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.423+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn410] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn410] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn434] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn434] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn426] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn426] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.517+0000
2019-09-04T06:34:17.680+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:27.717+0000
2019-09-04T06:34:17.680+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:29.050+0000
2019-09-04T06:34:17.680+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:17.680+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn429] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn429] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:41.583+0000
2019-09-04T06:34:17.680+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1348 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:27.680+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578857, 2), t: 1 } }
2019-09-04T06:34:17.680+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn409] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn409] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.328+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn402] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn402] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.289+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000
2019-09-04T06:34:17.680+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.664+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn424] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn424] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.437+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn437] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn437] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:46.416+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn427] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn427] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.335+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn439] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn439] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.447+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn440] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn440] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.538+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578857, 2), t: 1 }, 2019-09-04T06:34:17.656+0000
2019-09-04T06:34:17.680+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000
2019-09-04T06:34:17.687+0000 D2 COMMAND [conn411] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:17.687+0000 I COMMAND [conn411] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:17.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:17.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:17.714+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:17.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:17.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:17.762+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578857, 2)
2019-09-04T06:34:17.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:17.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:17.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:17.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:17.814+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:17.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:17.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:17.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:17.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:17.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:17.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:17.914+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:17.933+0000 D2 ASIO [RS] Request 1348 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578857, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578857929), o: { $v: 1, $set: { ping: new Date(1567578857923) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 2), t: 1 }, lastCommittedWall: new Date(1567578857656), lastOpVisible: { ts: Timestamp(1567578857, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578857, 2), t: 1 }, lastCommittedWall: new Date(1567578857656), lastOpApplied: { ts: Timestamp(1567578857, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 2), $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 3) }
2019-09-04T06:34:17.933+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578857, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578857929), o: { $v: 1, $set: { ping: new Date(1567578857923) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 2), t: 1 }, lastCommittedWall: new Date(1567578857656), lastOpVisible: { ts: Timestamp(1567578857, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578857, 2), t: 1 }, lastCommittedWall: new Date(1567578857656), lastOpApplied: { ts: Timestamp(1567578857, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 2), $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:17.933+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:17.934+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578857, 3) and ending at ts: Timestamp(1567578857, 3)
2019-09-04T06:34:17.934+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:29.050+0000
2019-09-04T06:34:17.934+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:28.840+0000
2019-09-04T06:34:17.934+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578857, 3), t: 1 }
2019-09-04T06:34:17.934+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:17.934+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:17.934+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:17.934+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:17.934+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578857, 2)
2019-09-04T06:34:17.934+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19793
2019-09-04T06:34:17.934+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:17.934+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:17.934+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19793
2019-09-04T06:34:17.934+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:17.934+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:17.934+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:34:17.934+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578857, 2)
2019-09-04T06:34:17.934+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19796
2019-09-04T06:34:17.934+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578857, 3) }
2019-09-04T06:34:17.934+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:17.934+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:17.934+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19796
2019-09-04T06:34:17.934+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19782
2019-09-04T06:34:17.934+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19782
2019-09-04T06:34:17.934+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19799
2019-09-04T06:34:17.934+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19799
2019-09-04T06:34:17.934+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:17.934+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 19801
2019-09-04T06:34:17.934+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578857, 3)
2019-09-04T06:34:17.934+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578857, 3)
2019-09-04T06:34:17.934+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 19801
2019-09-04T06:34:17.934+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:17.934+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:34:17.934+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19800
2019-09-04T06:34:17.934+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19800
2019-09-04T06:34:17.934+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19803
2019-09-04T06:34:17.934+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19803
2019-09-04T06:34:17.934+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578857, 3), t: 1 }({ ts: Timestamp(1567578857, 3), t: 1 })
2019-09-04T06:34:17.934+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578857, 3)
2019-09-04T06:34:17.934+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19804
2019-09-04T06:34:17.934+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578857, 3) } } ] } sort: {} projection: {}
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578857, 3) Sort: {} Proj: {} =============================
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578857, 3) || First: notFirst: full path: ts
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578857, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578857, 3) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578857, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:17.934+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578857, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:17.934+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19804
2019-09-04T06:34:17.934+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:17.934+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:17.934+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578857, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578857929), o: { $v: 1, $set: { ping: new Date(1567578857923) } } }, oplog application mode: Secondary
2019-09-04T06:34:17.934+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578857, 3)
2019-09-04T06:34:17.934+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 19806
2019-09-04T06:34:17.934+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }
2019-09-04T06:34:17.935+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:34:17.935+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 19806
2019-09-04T06:34:17.935+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:17.935+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578857, 3), t: 1 }({ ts: Timestamp(1567578857, 3), t: 1 })
2019-09-04T06:34:17.935+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578857, 3)
2019-09-04T06:34:17.935+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19805
2019-09-04T06:34:17.935+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:34:17.935+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:17.935+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:34:17.935+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:17.935+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:17.935+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:34:17.935+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19805
2019-09-04T06:34:17.935+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578857, 3)
2019-09-04T06:34:17.935+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19809
2019-09-04T06:34:17.935+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19809
2019-09-04T06:34:17.935+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578857, 3), t: 1 }({ ts: Timestamp(1567578857, 3), t: 1 })
2019-09-04T06:34:17.935+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:17.935+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 2), t: 1 }, durableWallTime: new Date(1567578857656), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 2), t: 1 }, lastCommittedWall: new Date(1567578857656), lastOpVisible: { ts: Timestamp(1567578857, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:17.935+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1349 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:47.935+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 2), t: 1 }, durableWallTime: new Date(1567578857656), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 2), t: 1 }, lastCommittedWall: new Date(1567578857656), lastOpVisible: { ts: Timestamp(1567578857, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:17.935+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.935+0000
2019-09-04T06:34:17.935+0000 D2 ASIO [RS] Request 1349 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 3) }
2019-09-04T06:34:17.935+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:17.935+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:17.935+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.935+0000
2019-09-04T06:34:17.936+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578857, 3), t: 1 }
2019-09-04T06:34:17.936+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1350 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:27.936+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578857, 2), t: 1 } }
2019-09-04T06:34:17.936+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.935+0000
2019-09-04T06:34:17.936+0000 D2 ASIO [RS] Request 1350 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpApplied: { ts: Timestamp(1567578857, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 3) }
2019-09-04T06:34:17.936+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpApplied: { ts: Timestamp(1567578857, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:17.936+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:17.936+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:34:17.936+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.936+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.936+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:34:17.936+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578852, 3)
2019-09-04T06:34:17.936+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:28.840+0000
2019-09-04T06:34:17.936+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:29.092+0000
2019-09-04T06:34:17.936+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:34:17.936+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1351 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:27.936+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578857, 3), t: 1 } }
2019-09-04T06:34:17.936+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn415] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn415] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.282+0000
2019-09-04T06:34:17.936+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.935+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn431] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn431] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.276+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn420] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn420] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.423+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn434] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn434] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000
2019-09-04T06:34:17.936+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:17.936+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn439] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn439] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.447+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn435] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn435] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:43.909+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn433] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.936+0000 D3 REPL [conn433] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.683+0000
2019-09-04T06:34:17.937+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.937+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000
2019-09-04T06:34:17.937+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.937+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000
2019-09-04T06:34:17.937+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.937+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000
2019-09-04T06:34:17.937+0000 D3 REPL [conn432] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.937+0000 D3 REPL [conn432] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.343+0000
2019-09-04T06:34:17.937+0000 D3 REPL [conn410] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.937+0000 D3 REPL [conn410] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000
2019-09-04T06:34:17.937+0000 D3 REPL [conn426] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000
2019-09-04T06:34:17.937+0000 D3 REPL [conn426] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.517+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn429] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn429] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:41.583+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn409] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn409] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.328+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn402] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn402] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.289+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn424] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn424] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.437+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn427] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn427] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.335+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn437] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn437] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:46.416+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn440] Got notified of new snapshot: { ts: Timestamp(1567578857, 3), t: 1 }, 2019-09-04T06:34:17.929+0000 2019-09-04T06:34:17.937+0000 D3 REPL [conn440] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.538+0000 2019-09-04T06:34:17.937+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), 
lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:17.937+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1352 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:47.937+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, durableWallTime: new Date(1567578849254), appliedOpTime: { ts: Timestamp(1567578849, 1), t: 1 }, appliedWallTime: new Date(1567578849254), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:17.937+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.935+0000 2019-09-04T06:34:17.937+0000 D2 ASIO [RS] Request 1352 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 3) } 2019-09-04T06:34:17.937+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:17.937+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:17.937+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:47.935+0000 2019-09-04T06:34:17.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:17.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:17.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, 
$db: "admin" } 2019-09-04T06:34:17.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:18.014+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:18.034+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578857, 3) 2019-09-04T06:34:18.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.114+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:18.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.214+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:18.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 5965CB04AE087F3D605727BE0BFDE44683B1CDEF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:18.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:18.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 5965CB04AE087F3D605727BE0BFDE44683B1CDEF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:18.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, 
from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 5965CB04AE087F3D605727BE0BFDE44683B1CDEF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:18.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), opTime: { ts: Timestamp(1567578857, 3), t: 1 }, wallTime: new Date(1567578857929) } 2019-09-04T06:34:18.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 5965CB04AE087F3D605727BE0BFDE44683B1CDEF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:18.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:18.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.314+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:18.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.415+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:18.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.515+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:18.520+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:34:18.520+0000 I COMMAND [conn90] command 
admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:34:18.520+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:18.520+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:34:18.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.615+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:18.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.715+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:18.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.815+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:18.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:18.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat 
(requestId: 1353) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:18.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:18.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1353 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:28.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:18.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:18.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1354) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:18.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1354 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:28.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:18.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:18.839+0000 D2 ASIO [Replication] Request 1353 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), opTime: { ts: Timestamp(1567578857, 3), t: 1 }, wallTime: new Date(1567578857929), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 3) } 2019-09-04T06:34:18.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), opTime: { ts: Timestamp(1567578857, 3), t: 1 }, wallTime: new Date(1567578857929), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:18.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:18.839+0000 D2 ASIO 
[Replication] Request 1354 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), opTime: { ts: Timestamp(1567578857, 3), t: 1 }, wallTime: new Date(1567578857929), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 3) } 2019-09-04T06:34:18.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), opTime: { ts: Timestamp(1567578857, 3), t: 1 }, wallTime: new Date(1567578857929), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:18.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1353) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), opTime: { ts: Timestamp(1567578857, 3), t: 1 }, wallTime: new Date(1567578857929), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 3) } 2019-09-04T06:34:18.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:18.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:18.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:20.839Z 2019-09-04T06:34:18.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:18.839+0000 D2 REPL_HB 
[replexec-3] Received response to heartbeat (requestId: 1354) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), opTime: { ts: Timestamp(1567578857, 3), t: 1 }, wallTime: new Date(1567578857929), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 3) } 2019-09-04T06:34:18.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:18.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:34:29.092+0000 2019-09-04T06:34:18.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:34:30.246+0000 2019-09-04T06:34:18.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:18.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:18.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:18.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:20.839Z 2019-09-04T06:34:18.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:18.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.915+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:18.934+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:18.934+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:18.934+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578857, 3) 2019-09-04T06:34:18.934+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19845 2019-09-04T06:34:18.934+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:18.934+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: 
false }, indexes: [], prefix: -1 } 2019-09-04T06:34:18.934+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19845 2019-09-04T06:34:18.934+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/42--6194257481163143499 -> { numRecords: 2, dataSize: 308 } 2019-09-04T06:34:18.934+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/28--6194257481163143499 -> { numRecords: 23, dataSize: 1914 } 2019-09-04T06:34:18.934+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:local/collection/16--6194257481163143499 -> { numRecords: 1560, dataSize: 351664 } 2019-09-04T06:34:18.934+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer flush took 49 µs 2019-09-04T06:34:18.935+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19848 2019-09-04T06:34:18.935+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19848 2019-09-04T06:34:18.935+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578857, 3), t: 1 }({ ts: Timestamp(1567578857, 3), t: 1 }) 2019-09-04T06:34:18.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.976+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:18.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:18.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:19.025+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:19.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 5965CB04AE087F3D605727BE0BFDE44683B1CDEF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:19.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:19.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 5965CB04AE087F3D605727BE0BFDE44683B1CDEF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:19.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 5965CB04AE087F3D605727BE0BFDE44683B1CDEF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:19.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, 
durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), opTime: { ts: Timestamp(1567578857, 3), t: 1 }, wallTime: new Date(1567578857929) } 2019-09-04T06:34:19.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578857, 3), signature: { hash: BinData(0, 5965CB04AE087F3D605727BE0BFDE44683B1CDEF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.125+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:19.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.225+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:19.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:19.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:34:19.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.263+0000 D2 ASIO [RS] Request 1351 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578859, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578859260), o: { $v: 1, $set: { ping: new Date(1567578859257), up: 30 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpApplied: { ts: Timestamp(1567578859, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578859, 1) } 2019-09-04T06:34:19.263+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578859, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578859260), o: { $v: 1, $set: { ping: new Date(1567578859257), up: 30 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpApplied: { ts: Timestamp(1567578859, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578859, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:19.263+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:19.263+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578859, 1) and ending at ts: Timestamp(1567578859, 1) 2019-09-04T06:34:19.263+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:30.246+0000 2019-09-04T06:34:19.263+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:29.377+0000 2019-09-04T06:34:19.263+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 
2019-09-04T06:34:19.263+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:19.263+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578859, 1), t: 1 } 2019-09-04T06:34:19.263+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:19.263+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:19.263+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578857, 3) 2019-09-04T06:34:19.263+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19864 2019-09-04T06:34:19.263+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:19.263+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:19.263+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19864 2019-09-04T06:34:19.263+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:19.263+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:19.263+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:34:19.263+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578857, 3) 2019-09-04T06:34:19.263+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19867 2019-09-04T06:34:19.263+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:19.263+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:19.263+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19867 2019-09-04T06:34:19.263+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578859, 1) } 2019-09-04T06:34:19.263+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19849 2019-09-04T06:34:19.263+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19849 2019-09-04T06:34:19.263+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19870 2019-09-04T06:34:19.263+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19870 2019-09-04T06:34:19.263+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:19.263+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 19872 2019-09-04T06:34:19.263+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578859, 1) 2019-09-04T06:34:19.263+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578859, 1) 
2019-09-04T06:34:19.263+0000 D2 STORAGE [repl-writer-worker-6] WiredTigerSizeStorer::store Marking table:local/collection/16--6194257481163143499 dirty, numRecords: 1561, dataSize: 351878, use_count: 3 2019-09-04T06:34:19.264+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 19872 2019-09-04T06:34:19.264+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:19.264+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:19.264+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19871 2019-09-04T06:34:19.264+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19871 2019-09-04T06:34:19.264+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19874 2019-09-04T06:34:19.264+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19874 2019-09-04T06:34:19.264+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578859, 1), t: 1 }({ ts: Timestamp(1567578859, 1), t: 1 }) 2019-09-04T06:34:19.264+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578859, 1) 2019-09-04T06:34:19.264+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19875 2019-09-04T06:34:19.264+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578859, 1) } } ] } sort: {} projection: {} 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578859, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578859, 1) || First: notFirst: full path: ts 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578859, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578859, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578859, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578859, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:19.264+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19875 2019-09-04T06:34:19.264+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:19.264+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:19.264+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578859, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578859260), o: { $v: 1, $set: { ping: new Date(1567578859257), up: 30 } } }, oplog application mode: Secondary 2019-09-04T06:34:19.264+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578859, 1) 2019-09-04T06:34:19.264+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 19877 2019-09-04T06:34:19.264+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:34:19.264+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:19.264+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 19877 2019-09-04T06:34:19.264+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:19.264+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578859, 1), t: 1 }({ ts: Timestamp(1567578859, 1), t: 1 }) 2019-09-04T06:34:19.264+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578859, 1) 2019-09-04T06:34:19.264+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19876 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:19.264+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:19.264+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:19.264+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19876 2019-09-04T06:34:19.264+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578859, 1) 2019-09-04T06:34:19.264+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19881 2019-09-04T06:34:19.264+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19881 2019-09-04T06:34:19.264+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578859, 1), t: 1 }({ ts: Timestamp(1567578859, 1), t: 1 }) 2019-09-04T06:34:19.264+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:19.264+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578859, 1), t: 1 }, appliedWallTime: new Date(1567578859260), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:19.264+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1355 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:49.264+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578859, 1), t: 1 }, appliedWallTime: new Date(1567578859260), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:19.265+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:49.264+0000 2019-09-04T06:34:19.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.265+0000 D2 ASIO [RS] Request 1355 finished with response: { ok: 1.0, $replData: { term: 1, 
lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578859, 1) } 2019-09-04T06:34:19.265+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578859, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:19.265+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:19.265+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:49.265+0000 2019-09-04T06:34:19.265+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578859, 1), t: 1 } 2019-09-04T06:34:19.265+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1356 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:29.265+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578857, 3), t: 1 } } 2019-09-04T06:34:19.265+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:49.265+0000 2019-09-04T06:34:19.265+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:19.265+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:19.265+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578859, 1), t: 1 }, durableWallTime: new Date(1567578859260), appliedOpTime: { ts: Timestamp(1567578859, 1), t: 1 }, appliedWallTime: new Date(1567578859260), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, 
syncSourceIndex: 2 } } 2019-09-04T06:34:19.265+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1357 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:49.265+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578859, 1), t: 1 }, durableWallTime: new Date(1567578859260), appliedOpTime: { ts: Timestamp(1567578859, 1), t: 1 }, appliedWallTime: new Date(1567578859260), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:19.265+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:49.265+0000 2019-09-04T06:34:19.266+0000 D2 ASIO [RS] Request 1357 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578859, 1) } 2019-09-04T06:34:19.266+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578857, 3), t: 1 }, lastCommittedWall: new Date(1567578857929), lastOpVisible: { ts: Timestamp(1567578857, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 3), $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578859, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:19.266+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:19.266+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:49.265+0000 2019-09-04T06:34:19.266+0000 D2 ASIO [RS] Request 1356 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: 
Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpApplied: { ts: Timestamp(1567578859, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578859, 1), $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578859, 1) } 2019-09-04T06:34:19.266+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpApplied: { ts: Timestamp(1567578859, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578859, 1), $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578859, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:19.266+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:19.266+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:19.266+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.266+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.266+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578854, 1) 2019-09-04T06:34:19.266+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:29.377+0000 2019-09-04T06:34:19.266+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:30.282+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn429] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn429] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:41.583+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn402] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn402] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.289+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn433] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn433] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.683+0000 
2019-09-04T06:34:19.266+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn432] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn432] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.343+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn426] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn426] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.517+0000 2019-09-04T06:34:19.266+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:19.266+0000 D3 REPL [conn409] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn409] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.328+0000 2019-09-04T06:34:19.266+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn440] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn440] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.538+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn427] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn427] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.335+0000 2019-09-04T06:34:19.266+0000 D3 REPL [conn437] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn437] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:46.416+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn415] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn415] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.282+0000 2019-09-04T06:34:19.267+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1358 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:29.267+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578859, 1), t: 1 } } 2019-09-04T06:34:19.267+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000 2019-09-04T06:34:19.267+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:49.265+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn414] Got notified of new snapshot: { ts: 
Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn414] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:19.849+0000 2019-09-04T06:34:19.267+0000 D2 COMMAND [conn413] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578859, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5aeb0f8f28dab2b56d47'), operName: "", parentOperId: "5d6f5aeb0f8f28dab2b56d43" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, B5A34ABFF108486ED9CAB818ADD9788411D2FD04), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578859, 1), t: 1 } }, $db: "config" } 2019-09-04T06:34:19.267+0000 D1 TRACKING [conn413] Cmd: find, TrackingId: 5d6f5aeb0f8f28dab2b56d43|5d6f5aeb0f8f28dab2b56d47 2019-09-04T06:34:19.267+0000 D1 COMMAND [conn413] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578859, 1), t: 1 } } } 2019-09-04T06:34:19.267+0000 D3 STORAGE [conn413] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:19.267+0000 D1 COMMAND [conn413] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578859, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5aeb0f8f28dab2b56d47'), operName: "", parentOperId: "5d6f5aeb0f8f28dab2b56d43" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, B5A34ABFF108486ED9CAB818ADD9788411D2FD04), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578859, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578859, 1) 2019-09-04T06:34:19.267+0000 D2 QUERY [conn413] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:34:19.267+0000 I COMMAND [conn413] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578859, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5aeb0f8f28dab2b56d47'), operName: "", parentOperId: "5d6f5aeb0f8f28dab2b56d43" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, B5A34ABFF108486ED9CAB818ADD9788411D2FD04), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578859, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:34:19.267+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn424] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn424] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.437+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn410] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn410] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn434] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn434] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn431] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn431] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.276+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn420] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn420] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.423+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000 
2019-09-04T06:34:19.267+0000 D3 REPL [conn435] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn435] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:43.909+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn439] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn439] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.447+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578859, 1), t: 1 }, 2019-09-04T06:34:19.260+0000 2019-09-04T06:34:19.267+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000 2019-09-04T06:34:19.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.317+0000 D3 STORAGE [FreeMonProcessor] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:19.321+0000 D3 INDEX [TTLMonitor] thread awake 2019-09-04T06:34:19.322+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms 2019-09-04T06:34:19.322+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms 2019-09-04T06:34:19.322+0000 D2 - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager 2019-09-04T06:34:19.322+0000 D3 COMMAND [PeriodicTaskRunner] task: UnusedLockCleaner took: 0ms 2019-09-04T06:34:19.325+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:19.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions. 2019-09-04T06:34:19.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.363+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578859, 1) 2019-09-04T06:34:19.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry 2019-09-04T06:34:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:19.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } 2019-09-04T06:34:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:19.370+0000 D5 QUERY [shard-registry-reload] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:19.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:34:19.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:34:19.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and 2019-09-04T06:34:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:19.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:19.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578859, 1) 2019-09-04T06:34:19.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 19894 2019-09-04T06:34:19.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 19894 2019-09-04T06:34:19.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:3 cursorExhausted:1 numYields:0 nreturned:3 reslen:646 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:34:19.370+0000 D1 SHARDING [shard-registry-reload] found 3 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578859, 1), t: 1 } 2019-09-04T06:34:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:34:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:34:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:34:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:34:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:34:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:34:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS 2019-09-04T06:34:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002 2019-09-04T06:34:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1361 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:34:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:34:19.385+0000 D3 EXECUTOR 
[ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1362 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:34:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:34:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000 2019-09-04T06:34:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1363 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:34:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:34:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1364 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:34:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:34:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001 2019-09-04T06:34:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1365 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:34:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:34:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1366 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:34:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:34:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1361 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578849, 2), t: 1 }, lastWriteDate: new Date(1567578849000), majorityOpTime: { ts: Timestamp(1567578849, 2), t: 1 }, majorityWriteDate: new Date(1567578849000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578859386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578849, 2), $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578853, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 2) } 2019-09-04T06:34:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578849, 2), t: 1 }, lastWriteDate: new Date(1567578849000), majorityOpTime: { ts: Timestamp(1567578849, 2), t: 1 }, majorityWriteDate: new Date(1567578849000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578859386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", 
"zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578849, 2), $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578853, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 2) } target: cmodb810.togewa.com:27018 2019-09-04T06:34:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1364 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578853, 2), t: 1 }, lastWriteDate: new Date(1567578853000), majorityOpTime: { ts: Timestamp(1567578853, 2), t: 1 }, majorityWriteDate: new Date(1567578853000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578859386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578853, 2), $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578853, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578853, 2) } 2019-09-04T06:34:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578853, 2), t: 1 }, lastWriteDate: new Date(1567578853000), majorityOpTime: { ts: Timestamp(1567578853, 2), t: 1 }, majorityWriteDate: new Date(1567578853000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578859386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578853, 2), $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578853, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578853, 2) } target: cmodb807.togewa.com:27018 2019-09-04T06:34:19.386+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1363 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578853, 2), t: 1 }, lastWriteDate: new Date(1567578853000), majorityOpTime: { ts: 
Timestamp(1567578853, 2), t: 1 }, majorityWriteDate: new Date(1567578853000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578859385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578853, 2), $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578853, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578853, 2) } 2019-09-04T06:34:19.386+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578853, 2), t: 1 }, lastWriteDate: new Date(1567578853000), majorityOpTime: { ts: Timestamp(1567578853, 2), t: 1 }, majorityWriteDate: new Date(1567578853000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578859385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578853, 2), $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578853, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578853, 2) } target: cmodb806.togewa.com:27018 2019-09-04T06:34:19.386+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms 2019-09-04T06:34:19.386+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1366 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578857, 1), t: 1 }, lastWriteDate: new Date(1567578857000), majorityOpTime: { ts: Timestamp(1567578857, 1), t: 1 }, majorityWriteDate: new Date(1567578857000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578859386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 1), $configServerState: { opTime: { ts: Timestamp(1567578840, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578857, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 1) } 2019-09-04T06:34:19.386+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] 
Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578857, 1), t: 1 }, lastWriteDate: new Date(1567578857000), majorityOpTime: { ts: Timestamp(1567578857, 1), t: 1 }, majorityWriteDate: new Date(1567578857000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578859386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578857, 1), $configServerState: { opTime: { ts: Timestamp(1567578840, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578857, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 1) } target: cmodb809.togewa.com:27018 2019-09-04T06:34:19.386+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1365 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578857, 1), t: 1 }, lastWriteDate: new Date(1567578857000), majorityOpTime: { ts: Timestamp(1567578857, 1), t: 1 }, majorityWriteDate: new Date(1567578857000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578859386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578857, 1), $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578857, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 1) } 2019-09-04T06:34:19.386+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578857, 1), t: 1 }, lastWriteDate: new Date(1567578857000), majorityOpTime: { ts: Timestamp(1567578857, 1), t: 1 }, majorityWriteDate: new Date(1567578857000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578859386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, 
lastCommittedOpTime: Timestamp(1567578857, 1), $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578857, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578857, 1) } target: cmodb808.togewa.com:27018 2019-09-04T06:34:19.386+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 1ms 2019-09-04T06:34:19.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1362 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578849, 2), t: 1 }, lastWriteDate: new Date(1567578849000), majorityOpTime: { ts: Timestamp(1567578849, 2), t: 1 }, majorityWriteDate: new Date(1567578849000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578859386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 2), $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578853, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 2) } 2019-09-04T06:34:19.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578849, 2), t: 1 }, lastWriteDate: new Date(1567578849000), majorityOpTime: { ts: Timestamp(1567578849, 2), t: 1 }, majorityWriteDate: new Date(1567578849000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578859386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578849, 2), $configServerState: { opTime: { ts: Timestamp(1567578849, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578853, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578849, 2) } target: cmodb811.togewa.com:27018 2019-09-04T06:34:19.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms 2019-09-04T06:34:19.425+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:19.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: 
"admin" } 2019-09-04T06:34:19.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.526+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:19.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.623+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578859623) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } 2019-09-04T06:34:19.623+0000 D4 - [replSetDistLockPinger] Taking ticket. Available: 1000000000 2019-09-04T06:34:19.623+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178 2019-09-04T06:34:19.623+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:34:19.626+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:19.643+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : 
"EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : 
"9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xADB043) [0x561749a63043] mongod(+0x13B2606) [0x56174a33a606] mongod(+0x13B3A55) [0x56174a33ba55] mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894] mongod(+0x10FA899) [0x56174a082899] mongod(+0x10FBF53) [0x56174a083f53] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(+0x1FBD2EE) [0x56174af452ee] mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa] mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2] mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b] mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e] mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc] mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1] mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:19.643+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. 
OpTime: { ts: Timestamp(1567578859, 1), t: 1 }, write concern: { w: "majority", wtimeout: 15000 } 2019-09-04T06:34:19.643+0000 D4 STORAGE [replSetDistLockPinger] flushed journal 2019-09-04T06:34:19.643+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578859623) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:34:19.643+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578859623) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 19ms 2019-09-04T06:34:19.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.663+0000 D3 STORAGE [WTCheckpointThread] setting timestamp read source: 6, provided timestamp: Timestamp(1567578859, 1) 2019-09-04T06:34:19.663+0000 D3 STORAGE [WTCheckpointThread] WT begin_transaction for snapshot id 19904 2019-09-04T06:34:19.663+0000 D3 STORAGE [WTCheckpointThread] WT rollback_transaction for snapshot id 19904 2019-09-04T06:34:19.663+0000 D2 RECOVERY [WTCheckpointThread] Performing stable checkpoint. 
StableTimestamp: Timestamp(1567578859, 1), OplogNeededForRollback: Timestamp(1567578859, 1) 2019-09-04T06:34:19.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.728+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:19.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.828+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:19.833+0000 I NETWORK [listener] connection accepted from 10.108.2.51:59288 #441 (90 connections now open) 2019-09-04T06:34:19.833+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:19.833+0000 D2 COMMAND [conn441] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:19.833+0000 I NETWORK [conn441] received client metadata from 10.108.2.51:59288 conn441: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:19.833+0000 I COMMAND [conn441] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:19.849+0000 I COMMAND [conn414] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 6FA9AC81673D99E3CEF9E3D81A3010F2FE222A78), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:19.849+0000 D1 - [conn414] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:19.849+0000 W - [conn414] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:19.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.866+0000 I - [conn414] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"5617
48F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:19.866+0000 D1 COMMAND [conn414] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 6FA9AC81673D99E3CEF9E3D81A3010F2FE222A78), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:19.866+0000 D1 - [conn414] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:19.866+0000 W - [conn414] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:19.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:19.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:19.886+0000 I - [conn414] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b"
:"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : 
"E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:34:19.886+0000 W COMMAND [conn414] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:34:19.886+0000 I COMMAND [conn414] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578829, 1), signature: { hash: BinData(0, 6FA9AC81673D99E3CEF9E3D81A3010F2FE222A78), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms
2019-09-04T06:34:19.886+0000 D2 NETWORK [conn414] Session from 10.108.2.51:59272 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:34:19.887+0000 I NETWORK [conn414] end connection 10.108.2.51:59272 (89 connections now open)
2019-09-04T06:34:19.928+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:19.976+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:19.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:19.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:19.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:20.002+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:34:20.002+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:34:20.002+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms
2019-09-04T06:34:20.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:34:20.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:34:20.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:34:20.010+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:34:20.010+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456
2019-09-04T06:34:20.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:34:20.011+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:34:20.011+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:34:20.012+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:34:20.012+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:34:20.012+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:34:20.014+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:34:20.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:20.014+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:20.014+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:34:20.014+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:34:20.014+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:34:20.014+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:34:20.014+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:34:20.014+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:34:20.014+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:20.014+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:20.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:34:20.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578859, 1)
2019-09-04T06:34:20.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19922
2019-09-04T06:34:20.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19922
2019-09-04T06:34:20.014+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:34:20.014+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:34:20.014+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:34:20.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:34:20.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:20.014+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:34:20.014+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:34:20.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:34:20.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578859, 1)
2019-09-04T06:34:20.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19925
2019-09-04T06:34:20.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19925
2019-09-04T06:34:20.014+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:34:20.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:34:20.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:20.014+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:34:20.014+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:34:20.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:34:20.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578859, 1) 2019-09-04T06:34:20.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19927 2019-09-04T06:34:20.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19927 2019-09-04T06:34:20.015+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:549 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:20.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:34:20.015+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:34:20.015+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:34:20.015+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:20.015+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19930 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:34:20.015+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19930 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19931 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19931 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19932 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19932 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19933 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19933 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19934 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 
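
The surrounding D3 STORAGE entries show the listDatabases command issued on conn90 at 06:34:20.015 walking the durable catalog: each collection's metadata is looked up by RecordId inside a short-lived WiredTiger transaction that is rolled back once the sizes have been read. From a driver the same walk is triggered by a single command; a minimal sketch in Python (the connection URI is a placeholder, not taken from this log):

    # Sketch only: issue the same listDatabases command from a client.
    # The URI below is a placeholder, not recoverable from this log.
    from pymongo import MongoClient

    client = MongoClient("mongodb://config-server.example.net:27019")
    result = client.admin.command("listDatabases")  # { listDatabases: 1 }
    for d in result["databases"]:
        print(d["name"], d["sizeOnDisk"])
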
2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19934 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19935 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19935 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19936 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19936 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19937 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:20.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19937 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19938 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19938 
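
The fetched/returning CCE metadata dumps around here carry the same information a client can retrieve with listCollections and listIndexes: the collection UUID recorded in the options and the full index specs (ns_1_min_1, ns_1_lastmod_1, and so on). A hedged sketch of reading it from the driver side, again with placeholder connection details:

    # Sketch only: view collection UUIDs and index specs as a client.
    # Placeholder URI; the collection names are taken from the log.
    from pymongo import MongoClient

    client = MongoClient("mongodb://config-server.example.net:27019")
    config_db = client["config"]
    for coll in config_db.list_collections():
        # listCollections reports the same UUID seen in the catalog dump.
        print(coll["name"], coll.get("info", {}).get("uuid"))
    # listIndexes view of the specs logged for config.chunks.
    print(config_db["chunks"].index_information())
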
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19939 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19939 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19940 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: 
BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19940 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19941 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19941 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19942 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: 
true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19942 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19943 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19943 
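
The config.shards metadata just above belongs to the same collection whose read timed out on conn414 at the top of this section: that find carried readConcern { level: "majority", afterOpTime: ... } plus maxTimeMS: 30000 and was killed after 30027ms with errCode:50 (MaxTimeMSExpired). A rough client-side equivalent, as a sketch only (placeholder URI; the cluster-internal afterOpTime and $replData fields that mongos adds are omitted):

    # Sketch only: a majority read of config.shards with a 30s
    # server-side time limit, mirroring the conn414 query.
    from pymongo import MongoClient, ReadPreference
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://config-server.example.net:27019")
    shards = client.get_database("config").get_collection(
        "shards",
        read_concern=ReadConcern("majority"),
        read_preference=ReadPreference.NEAREST,
    )
    try:
        docs = list(shards.find({}).max_time_ms(30000))  # maxTimeMS: 30000
    except ExecutionTimeout:
        # The driver raises ExecutionTimeout for errCode:50
        # (MaxTimeMSExpired), the failure logged for conn414.
        print("operation exceeded time limit")
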
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19944
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19944
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19945
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19945 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19946 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19946 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19947 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19947 
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19948
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19948
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19949
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19949
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19950
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19950
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19951
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:34:20.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19951
2019-09-04T06:34:20.017+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:34:20.017+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19953
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19953
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19954
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19954
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19955
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19955
2019-09-04T06:34:20.017+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:34:20.017+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19957
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19957
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19958
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19958
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19959
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19959
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19960
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19960
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19961
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19961
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19962
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19962
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19963
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19963
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19964
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19964
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19965
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19965
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19966
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19966
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19967
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19967
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19968
2019-09-04T06:34:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19968
2019-09-04T06:34:20.017+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:34:20.018+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:34:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19970
2019-09-04T06:34:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19970
2019-09-04T06:34:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19971
2019-09-04T06:34:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19971
2019-09-04T06:34:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19972
2019-09-04T06:34:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19972
2019-09-04T06:34:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19973
2019-09-04T06:34:20.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19973
2019-09-04T06:34:20.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19974
2019-09-04T06:34:20.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19974
2019-09-04T06:34:20.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 19975
2019-09-04T06:34:20.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 19975
2019-09-04T06:34:20.019+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:34:20.029+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:20.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.130+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:20.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.230+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:20.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, B5A34ABFF108486ED9CAB818ADD9788411D2FD04), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:20.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:34:20.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, B5A34ABFF108486ED9CAB818ADD9788411D2FD04), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:20.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, B5A34ABFF108486ED9CAB818ADD9788411D2FD04), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:20.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578859, 1), t: 1 }, durableWallTime: new Date(1567578859260), opTime: { ts: Timestamp(1567578859, 1), t: 1 }, wallTime: new Date(1567578859260) }
2019-09-04T06:34:20.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, B5A34ABFF108486ED9CAB818ADD9788411D2FD04), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:20.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999999 Now: 1000000000
2019-09-04T06:34:20.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.239+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578859, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 1), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578859, 1), t: 1 } }, $db: "config" }
2019-09-04T06:34:20.239+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578859, 1), t: 1 } } }
2019-09-04T06:34:20.239+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:34:20.239+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578859, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 1), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578859, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578859, 1)
2019-09-04T06:34:20.239+0000 D2 QUERY [conn71] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1
2019-09-04T06:34:20.239+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578859, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 1), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578859, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:34:20.239+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578859, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 1), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578859, 1), t: 1 } }, $db: "config" }
2019-09-04T06:34:20.239+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578859, 1), t: 1 } } }
2019-09-04T06:34:20.239+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:34:20.239+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578859, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 1), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578859, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578859, 1)
2019-09-04T06:34:20.239+0000 D2 QUERY [conn71] Collection config.settings does not exist. Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1
2019-09-04T06:34:20.239+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578859, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 1), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578859, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:34:20.245+0000 D2 ASIO [RS] Request 1358 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578860, 1), t: 1, h: 0, v: 2, op: "i", ns: "config.shards", ui: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a"), wall: new Date(1567578860241), o: { _id: "shard0003", host: "shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018", state: 1 } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpApplied: { ts: Timestamp(1567578860, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578859, 1), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 1) }
2019-09-04T06:34:20.245+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578860, 1), t: 1, h: 0, v: 2, op: "i", ns: "config.shards", ui: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a"), wall: new Date(1567578860241), o: { _id: "shard0003", host: "shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018", state: 1 } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpApplied: { ts: Timestamp(1567578860, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578859, 1), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:20.245+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:20.245+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578860, 1) and ending at ts: Timestamp(1567578860, 1)
2019-09-04T06:34:20.245+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:30.282+0000
2019-09-04T06:34:20.245+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:30.581+0000
2019-09-04T06:34:20.245+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:20.245+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:20.245+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578860, 1), t: 1 }
2019-09-04T06:34:20.245+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:20.245+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:20.245+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578859, 1)
2019-09-04T06:34:20.245+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19988
2019-09-04T06:34:20.245+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:20.245+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:20.245+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19988
2019-09-04T06:34:20.245+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:20.245+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:20.245+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:34:20.245+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578859, 1)
2019-09-04T06:34:20.245+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578860, 1) }
2019-09-04T06:34:20.245+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 19991
2019-09-04T06:34:20.245+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:20.245+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:20.245+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 19991
2019-09-04T06:34:20.245+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19882
2019-09-04T06:34:20.245+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19882
2019-09-04T06:34:20.245+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19994
2019-09-04T06:34:20.245+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19994
2019-09-04T06:34:20.245+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:20.245+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 19996
2019-09-04T06:34:20.245+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578860, 1)
2019-09-04T06:34:20.245+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578860, 1)
2019-09-04T06:34:20.246+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 19996
2019-09-04T06:34:20.246+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:20.246+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:34:20.246+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19995
2019-09-04T06:34:20.246+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19995
2019-09-04T06:34:20.246+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19998
2019-09-04T06:34:20.246+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 19998
2019-09-04T06:34:20.246+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578860, 1), t: 1 }({ ts: Timestamp(1567578860, 1), t: 1 })
2019-09-04T06:34:20.246+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578860, 1)
2019-09-04T06:34:20.246+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 19999
2019-09-04T06:34:20.246+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578860, 1) } } ] } sort: {} projection: {}
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578860, 1) Sort: {} Proj: {} =============================
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578860, 1) || First: notFirst: full path: ts
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578860, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578860, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578860, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:20.246+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578860, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:20.246+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 19999
2019-09-04T06:34:20.246+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:20.246+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:20.246+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578860, 1), t: 1, h: 0, v: 2, op: "i", ns: "config.shards", ui: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a"), wall: new Date(1567578860241), o: { _id: "shard0003", host: "shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018", state: 1 } }, oplog application mode: Secondary
2019-09-04T06:34:20.246+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 20001
2019-09-04T06:34:20.246+0000 D4 STORAGE [repl-writer-worker-0] inserting record with timestamp Timestamp(1567578860, 1)
2019-09-04T06:34:20.246+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578860, 1)
2019-09-04T06:34:20.246+0000 D2 STORAGE [repl-writer-worker-0] WiredTigerSizeStorer::store Marking table:config/collection/82--6194257481163143499 dirty, numRecords: 4, dataSize: 428, use_count: 3
2019-09-04T06:34:20.246+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578860, 1)
2019-09-04T06:34:20.247+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578860, 1)
2019-09-04T06:34:20.247+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 20001
2019-09-04T06:34:20.247+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:20.247+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578860, 1), t: 1 }({ ts: Timestamp(1567578860, 1), t: 1 })
2019-09-04T06:34:20.247+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578860, 1)
2019-09-04T06:34:20.247+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20000
2019-09-04T06:34:20.247+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:34:20.247+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:20.247+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:34:20.247+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578860, 1), t: 1 }
2019-09-04T06:34:20.247+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:20.247+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1368 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:30.247+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578859, 1), t: 1 } }
2019-09-04T06:34:20.247+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:20.247+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:34:20.247+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:49.265+0000
2019-09-04T06:34:20.247+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20000
2019-09-04T06:34:20.247+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578860, 1)
2019-09-04T06:34:20.247+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20005
2019-09-04T06:34:20.247+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20005
2019-09-04T06:34:20.247+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578860, 1), t: 1 }({ ts: Timestamp(1567578860, 1), t: 1 })
2019-09-04T06:34:20.247+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:20.247+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578859, 1), t: 1 }, durableWallTime: new Date(1567578859260), appliedOpTime: { ts: Timestamp(1567578860, 1), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:20.247+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1369 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:50.247+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578859, 1), t: 1 }, durableWallTime: new Date(1567578859260), appliedOpTime: { ts: Timestamp(1567578860, 1), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:20.247+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:49.265+0000
2019-09-04T06:34:20.247+0000 D2 ASIO [RS] Request 1368 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578860, 2), t: 1, h: 0, v: 2, op: "i", ns: "config.changelog", ui: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), wall: new Date(1567578860241), o: { _id: "cmodb802.togewa.com:27019-2019-09-04T06:34:20.241+0000-5d6f5aecac9313827bca6177", server: "cmodb802.togewa.com:27019", shard: "config", clientAddr: "10.108.2.15:43156", time: new Date(1567578860241), what: "addShard", ns: "", details: { name: "shard0003", host: "shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpApplied: { ts: Timestamp(1567578860, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578859, 1), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) }
2019-09-04T06:34:20.247+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578860, 2), t: 1, h: 0, v: 2, op: "i", ns: "config.changelog", ui: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), wall: new Date(1567578860241), o: { _id: "cmodb802.togewa.com:27019-2019-09-04T06:34:20.241+0000-5d6f5aecac9313827bca6177", server: "cmodb802.togewa.com:27019", shard: "config", clientAddr: "10.108.2.15:43156", time: new Date(1567578860241), what: "addShard", ns: "", details: { name: "shard0003", host: "shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpApplied: { ts: Timestamp(1567578860, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578859, 1), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:20.247+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:20.247+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578860, 2) and ending at ts: Timestamp(1567578860, 2)
2019-09-04T06:34:20.247+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:30.581+0000
2019-09-04T06:34:20.248+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:30.952+0000
2019-09-04T06:34:20.248+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:34:20.248+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:20.248+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578860, 2), t: 1 }
2019-09-04T06:34:20.248+0000 D2 ASIO [RS] Request 1369 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578859, 1), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) }
2019-09-04T06:34:20.248+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578859, 1), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:20.248+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:20.248+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:20.248+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578860, 1)
2019-09-04T06:34:20.248+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20008
2019-09-04T06:34:20.248+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:20.248+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:20.248+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20008
2019-09-04T06:34:20.248+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:34:20.248+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:20.248+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578860, 2) }
2019-09-04T06:34:20.248+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:50.248+0000
2019-09-04T06:34:20.248+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:20.248+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:20.248+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578860, 1)
2019-09-04T06:34:20.248+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20006
2019-09-04T06:34:20.248+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20011
2019-09-04T06:34:20.248+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:20.248+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:20.248+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20011
2019-09-04T06:34:20.248+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20006
2019-09-04T06:34:20.248+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20014
2019-09-04T06:34:20.248+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20014
2019-09-04T06:34:20.248+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:20.248+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 20016
2019-09-04T06:34:20.248+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578860, 2)
2019-09-04T06:34:20.248+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578860, 2)
2019-09-04T06:34:20.248+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 20016
2019-09-04T06:34:20.248+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:20.248+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:34:20.248+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20015
2019-09-04T06:34:20.248+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20015
2019-09-04T06:34:20.248+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20018
2019-09-04T06:34:20.248+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20018
2019-09-04T06:34:20.248+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578860, 2), t: 1 }({ ts: Timestamp(1567578860, 2), t: 1 })
2019-09-04T06:34:20.248+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578860, 2)
2019-09-04T06:34:20.248+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20019
2019-09-04T06:34:20.248+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578860, 2) } } ] } sort: {} projection: {}
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578860, 2) Sort: {} Proj: {} =============================
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578860, 2) || First: notFirst: full path: ts
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578860, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578860, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578860, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:20.248+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578860, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:20.248+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20019
2019-09-04T06:34:20.248+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:20.249+0000 D3 STORAGE [repl-writer-worker-1] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:20.249+0000 I SHARDING [repl-writer-worker-1] Marking collection config.changelog as collection version: <unsharded>
2019-09-04T06:34:20.249+0000 D3 REPL [repl-writer-worker-1] applying op: { ts: Timestamp(1567578860, 2), t: 1, h: 0, v: 2, op: "i", ns: "config.changelog", ui: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), wall: new Date(1567578860241), o: { _id: "cmodb802.togewa.com:27019-2019-09-04T06:34:20.241+0000-5d6f5aecac9313827bca6177", server: "cmodb802.togewa.com:27019", shard: "config", clientAddr: "10.108.2.15:43156", time: new Date(1567578860241), what: "addShard", ns: "", details: { name: "shard0003", host: "shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018" } } }, oplog application mode: Secondary
2019-09-04T06:34:20.249+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 20021
2019-09-04T06:34:20.249+0000 D4 STORAGE [repl-writer-worker-1] inserting record with timestamp Timestamp(1567578860, 2)
2019-09-04T06:34:20.249+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578860, 2)
2019-09-04T06:34:20.249+0000 D2 STORAGE [repl-writer-worker-1] WiredTigerSizeStorer::store Marking table:config/collection/26--6194257481163143499 dirty, numRecords: 9, dataSize: 3031, use_count: 3
2019-09-04T06:34:20.249+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578860, 2)
2019-09-04T06:34:20.249+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 20021
2019-09-04T06:34:20.249+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:20.249+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578860, 2), t: 1 }({ ts: Timestamp(1567578860, 2), t: 1 })
2019-09-04T06:34:20.249+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578860, 2)
2019-09-04T06:34:20.249+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20020
2019-09-04T06:34:20.249+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:34:20.249+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:20.249+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:34:20.249+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:20.249+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:20.249+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:34:20.249+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20020
2019-09-04T06:34:20.249+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578860, 2)
2019-09-04T06:34:20.249+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:20.249+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20024
2019-09-04T06:34:20.249+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578859, 1), t: 1 }, durableWallTime: new Date(1567578859260), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:20.249+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20024
2019-09-04T06:34:20.249+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578860, 2), t: 1 }({ ts: Timestamp(1567578860, 2), t: 1 })
2019-09-04T06:34:20.249+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1370 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:50.249+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578859, 1), t: 1 }, durableWallTime: new Date(1567578859260), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:20.249+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:50.249+0000
2019-09-04T06:34:20.250+0000 D2 ASIO [RS] Request 1370 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578859, 1), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) }
2019-09-04T06:34:20.250+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578859, 1), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:20.250+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:20.250+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:50.250+0000
2019-09-04T06:34:20.250+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578860, 2), t: 1 }
2019-09-04T06:34:20.250+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1371 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:30.250+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578859, 1), t: 1 } }
2019-09-04T06:34:20.250+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:50.250+0000
2019-09-04T06:34:20.256+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:34:20.257+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:20.257+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578860, 1), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:20.257+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1372 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:50.257+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578860, 1), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:20.257+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:50.250+0000
2019-09-04T06:34:20.257+0000 D2 ASIO [RS] Request 1372 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578859, 1), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) }
2019-09-04T06:34:20.257+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578859, 1), t: 1 }, lastCommittedWall: new Date(1567578859260), lastOpVisible: { ts: Timestamp(1567578859, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578859, 1), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:20.257+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:20.257+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:50.250+0000
2019-09-04T06:34:20.257+0000 D2 ASIO [RS] Request 1371 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 1), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578860, 1), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpApplied: { ts: Timestamp(1567578860, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 1), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) }
2019-09-04T06:34:20.257+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 1), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578860, 1), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpApplied: { ts: Timestamp(1567578860, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 1), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:20.257+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:20.257+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:34:20.257+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.257+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578855, 1)
2019-09-04T06:34:20.258+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:30.952+0000
2019-09-04T06:34:20.258+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:30.847+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn429] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn429] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:41.583+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn432] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn432] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.343+0000
2019-09-04T06:34:20.258+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1373 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:30.258+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578860, 1), t: 1 } }
2019-09-04T06:34:20.258+0000 D3 REPL [conn426] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn426] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.517+0000
2019-09-04T06:34:20.258+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:50.250+0000
2019-09-04T06:34:20.258+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:20.258+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn409] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn409] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.328+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn437] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn437] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:46.416+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn424] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn424] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.437+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn410] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn410] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn434] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn434] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn420] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn420] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.423+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn435] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn435] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:43.909+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn402] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn402] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.289+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn433] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn433] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.683+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn427] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn427] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.335+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn440] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn440] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.538+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn415] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn415] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.282+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn431] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn431] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.276+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn439] Got notified of new snapshot: { ts: Timestamp(1567578860, 1), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.258+0000 D3 REPL [conn439] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.447+0000
2019-09-04T06:34:20.259+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:34:20.259+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:20.259+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 1), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:20.259+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1374 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:50.259+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, durableWallTime: new Date(1567578857929), appliedOpTime: { ts: Timestamp(1567578857, 3), t: 1 }, appliedWallTime: new Date(1567578857929), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 1), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:20.259+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:50.250+0000
2019-09-04T06:34:20.259+0000 D2 ASIO [RS] Request 1374 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 1), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 1), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) }
2019-09-04T06:34:20.259+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 1), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 1), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:20.259+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
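The batch above is this secondary's steady-state replication loop: apply the fetched op (the config.changelog "addShard" insert), advance minvalid and appliedThrough, report progress upstream with replSetUpdatePosition (RemoteCommands 1370/1372/1374), and keep a blocking getMore open against the sync source's local.oplog.rs (RemoteCommands 1371/1373, both returning empty nextBatch arrays here). The sketch below shows the same oplog-tailing pattern from a driver's point of view; pymongo and the connection URI are assumptions for illustration, while the host and namespace come from the log.

```python
# Sketch only: tail local.oplog.rs roughly the way the oplog fetcher's
# getMore loop above does. pymongo and the URI are assumed, not from the log.
from pymongo import CursorType, MongoClient

client = MongoClient("mongodb://cmodb804.togewa.com:27019")  # sync source in the log
oplog = client.local["oplog.rs"]

# Resume from the newest entry, then issue blocking awaitData getMores,
# analogous to RemoteCommand 1371/1373 waiting up to maxTimeMS for new ops.
last = oplog.find_one(sort=[("$natural", -1)])
cursor = oplog.find({"ts": {"$gt": last["ts"]}},
                    cursor_type=CursorType.TAILABLE_AWAIT)
for entry in cursor:
    print(entry["ts"], entry.get("op"), entry.get("ns"))
```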
2019-09-04T06:34:20.259+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:50.250+0000
2019-09-04T06:34:20.259+0000 D2 ASIO [RS] Request 1373 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpApplied: { ts: Timestamp(1567578860, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) }
2019-09-04T06:34:20.259+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpApplied: { ts: Timestamp(1567578860, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:20.259+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:20.259+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:34:20.259+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.259+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.259+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578855, 2)
2019-09-04T06:34:20.259+0000 D3 REPL [conn410] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.259+0000 D3 REPL [conn410] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000
2019-09-04T06:34:20.259+0000 D3 REPL [conn418] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.259+0000 D3 REPL [conn418] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.754+0000
2019-09-04T06:34:20.259+0000 D3 REPL [conn417] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.259+0000 D3 REPL [conn417] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:20.259+0000 D3 REPL [conn440] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.259+0000 D3 REPL [conn440] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.538+0000
2019-09-04T06:34:20.259+0000 D3 REPL [conn431] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.259+0000 D3 REPL [conn431] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.276+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn424] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn424] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.437+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn415] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn415] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.282+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn419] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn419] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.767+0000
2019-09-04T06:34:20.260+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:30.847+0000
2019-09-04T06:34:20.260+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:30.483+0000
2019-09-04T06:34:20.260+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1375 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:30.260+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578860, 2), t: 1 } }
2019-09-04T06:34:20.260+0000 D2 COMMAND [conn413] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578860, 2), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5aec0f8f28dab2b56d4a'), operName: "", parentOperId: "5d6f5aec0f8f28dab2b56d48" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578860, 2), t: 1 } }, $db: "config" }
2019-09-04T06:34:20.260+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:50.250+0000
2019-09-04T06:34:20.260+0000 D1 TRACKING [conn413] Cmd: find, TrackingId: 5d6f5aec0f8f28dab2b56d48|5d6f5aec0f8f28dab2b56d4a
2019-09-04T06:34:20.260+0000 D1 COMMAND [conn413] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578860, 2), t: 1 } } }
2019-09-04T06:34:20.260+0000 D3 STORAGE [conn413] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:34:20.260+0000 D1 COMMAND [conn413] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578860, 2), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5aec0f8f28dab2b56d4a'), operName: "", parentOperId: "5d6f5aec0f8f28dab2b56d48" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578860, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578860, 2)
2019-09-04T06:34:20.260+0000 D5 QUERY [conn413] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:34:20.260+0000 D5 QUERY [conn413] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:34:20.260+0000 D5 QUERY [conn413] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:34:20.260+0000 D5 QUERY [conn413] Rated tree: $and
2019-09-04T06:34:20.260+0000 D5 QUERY [conn413] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:20.260+0000 D5 QUERY [conn413] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:20.260+0000 D2 QUERY [conn413] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:34:20.260+0000 D3 STORAGE [conn413] WT begin_transaction for snapshot id 20028
2019-09-04T06:34:20.260+0000 D3 STORAGE [conn413] WT rollback_transaction for snapshot id 20028
2019-09-04T06:34:20.260+0000 I COMMAND [conn413] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578860, 2), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5aec0f8f28dab2b56d4a'), operName: "", parentOperId: "5d6f5aec0f8f28dab2b56d48" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578860, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:34:20.260+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:20.260+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn409] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn409] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.328+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn439] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn439] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.447+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn432] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn432] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.343+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn422] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn422] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:25.060+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn416] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn416] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.661+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn437] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn437] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:46.416+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn434] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn434] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn420] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn420] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.423+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn402] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn402] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.289+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn421] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn421] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:24.153+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn433] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn433] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.683+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn435] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn435] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:43.909+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn429] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn429] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:41.583+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn427] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn427] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.335+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn401] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn401] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:21.660+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn426] Got notified of new snapshot: { ts: Timestamp(1567578860, 2), t: 1 }, 2019-09-04T06:34:20.241+0000
2019-09-04T06:34:20.260+0000 D3 REPL [conn426] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.517+0000
2019-09-04T06:34:20.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.330+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:20.346+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578860, 2)
2019-09-04T06:34:20.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.430+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:20.476+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.476+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.530+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:20.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.630+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:20.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.730+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:20.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.831+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:20.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:20.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:34:20.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1379) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:20.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1379 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:30.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:20.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:20.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1380) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:20.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1380 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:30.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:20.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:20.839+0000 D2 ASIO [Replication] Request 1379 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) }
2019-09-04T06:34:20.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:20.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:20.839+0000 D2 ASIO [Replication] Request 1380 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) }
2019-09-04T06:34:20.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:34:20.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:20.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1379) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) }
2019-09-04T06:34:20.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:34:20.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:22.839Z
2019-09-04T06:34:20.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:20.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1380) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) }
2019-09-04T06:34:20.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:34:20.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:34:30.483+0000
2019-09-04T06:34:20.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:34:31.446+0000
2019-09-04T06:34:20.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:34:20.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:34:20.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:22.839Z
2019-09-04T06:34:20.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:20.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:20.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:20.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:20.931+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:21.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:21.031+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:21.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:21.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:21.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:34:21.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:21.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:21.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241) }
2019-09-04T06:34:21.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:21.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:21.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:21.118+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35836 #442 (90 connections now open) 2019-09-04T06:34:21.118+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:21.118+0000 D2 COMMAND [conn442] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:21.118+0000 I NETWORK [conn442] received client metadata from 10.108.2.56:35836 conn442: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:21.118+0000 I COMMAND [conn442] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:21.123+0000 D2 COMMAND [conn442] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, DE6CD7CA3AD84D3D5B5CC231FF3233432495E5FA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:21.123+0000 D1 REPL [conn442] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578860, 2), t: 1 } 2019-09-04T06:34:21.123+0000 D3 REPL [conn442] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.133+0000 2019-09-04T06:34:21.131+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:21.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:21.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:21.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:21.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:21.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:21.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:34:21.231+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:21.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:21.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:21.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:21.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:21.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.248+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:21.248+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:21.248+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578860, 2)
2019-09-04T06:34:21.248+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20058
2019-09-04T06:34:21.248+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:21.248+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:21.248+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20058
2019-09-04T06:34:21.249+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20061
2019-09-04T06:34:21.249+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20061
2019-09-04T06:34:21.249+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578860, 2), t: 1 }({ ts: Timestamp(1567578860, 2), t: 1 })
2019-09-04T06:34:21.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:21.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:21.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.331+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:21.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:21.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:21.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.431+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:21.532+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:21.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:21.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:21.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:21.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.632+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:21.632+0000 D2 COMMAND [conn126] run command admin.$cmd { ping: 1, $db: "admin" }
2019-09-04T06:34:21.632+0000 I COMMAND [conn126] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.635+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38834 #443 (91 connections now open)
2019-09-04T06:34:21.635+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:34:21.635+0000 D2 COMMAND [conn443] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:34:21.635+0000 I NETWORK [conn443] received client metadata from 10.108.2.44:38834 conn443: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:34:21.635+0000 I COMMAND [conn443] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:34:21.635+0000 D2 COMMAND [conn443] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578858, 1), signature: { hash: BinData(0, B0947E8BACC12B932E38FDD8F3A31C0CEDCAD63A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:34:21.635+0000 D1 REPL [conn443] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578860, 2), t: 1 }
2019-09-04T06:34:21.635+0000 D3 REPL [conn443] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.645+0000
2019-09-04T06:34:21.644+0000 D2 COMMAND [conn182] run command config.$cmd { ping: 1.0, lsid: { id: UUID("02492cc9-cb3a-4cd4-9c2e-0d7430e82ce2") }, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:34:21.644+0000 I COMMAND [conn182] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("02492cc9-cb3a-4cd4-9c2e-0d7430e82ce2") }, $clusterTime: { clusterTime: Timestamp(1567578799, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.650+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45890 #444 (92 connections now open)
2019-09-04T06:34:21.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:34:21.650+0000 D2 COMMAND [conn444] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:34:21.650+0000 I NETWORK [conn444] received client metadata from 10.108.2.72:45890 conn444: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:34:21.650+0000 I COMMAND [conn444] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:34:21.650+0000 D2 COMMAND [conn438] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578853, 1), signature: { hash: BinData(0, E25495A303323F3A37C8BA9965010F6640AA1AE5), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:34:21.650+0000 D1 REPL [conn438] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578860, 2), t: 1 }
2019-09-04T06:34:21.651+0000 D3 REPL [conn438] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.660+0000
2019-09-04T06:34:21.652+0000 D2 COMMAND [conn430] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 695EC7A4F3A174023E82B47C0B1F2FFF237676A8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:34:21.652+0000 D1 REPL [conn430] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578860, 2), t: 1 }
2019-09-04T06:34:21.652+0000 D3 REPL [conn430] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.662+0000
2019-09-04T06:34:21.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:21.662+0000 I COMMAND [conn401] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 5FF0CCAFC787DA30065B8DDE1CC8B095FDBF43F0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:34:21.662+0000 D1 - [conn401] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:34:21.662+0000 W - [conn401] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:21.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.663+0000 I COMMAND [conn417] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578828, 1), signature: { hash: BinData(0, 12856C2B1973243F9842A98B6AAFA0F9B961DA7C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:34:21.663+0000 D1 - [conn417] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:34:21.663+0000 W - [conn417] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:21.663+0000 I COMMAND [conn416] Command on database config timed out waiting for read concern to be satisfied.
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 2206AF5AF7A132B84E6DD84FBEAA8BC020142021), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:21.663+0000 D1 - [conn416] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:21.663+0000 W - [conn416] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:21.679+0000 I - [conn401] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:21.679+0000 D1 COMMAND [conn401] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 5FF0CCAFC787DA30065B8DDE1CC8B095FDBF43F0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:21.679+0000 D1 - [conn401] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:34:21.679+0000 W - [conn401] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:21.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:21.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.696+0000 I - [conn417] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceS
tateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : 
"45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] 
mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:34:21.696+0000 D1 COMMAND [conn417] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578828, 1), signature: { hash: BinData(0, 12856C2B1973243F9842A98B6AAFA0F9B961DA7C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:21.696+0000 D1 - [conn417] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:34:21.696+0000 W - [conn417] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:21.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:21.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.716+0000 I - [conn401] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:21.716+0000 W COMMAND [conn401] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:21.716+0000 I COMMAND [conn401] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 5FF0CCAFC787DA30065B8DDE1CC8B095FDBF43F0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms
2019-09-04T06:34:21.716+0000 D2 NETWORK [conn401] Session from 10.108.2.48:42220 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:34:21.716+0000 I NETWORK [conn401] end connection 10.108.2.48:42220 (91 connections now open)
2019-09-04T06:34:21.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:21.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.732+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:21.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:21.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:21.742+0000 I - [conn417] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:21.742+0000 W COMMAND [conn417] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:21.742+0000 I COMMAND [conn417] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578828, 1), signature: { hash: BinData(0, 12856C2B1973243F9842A98B6AAFA0F9B961DA7C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30045ms 2019-09-04T06:34:21.742+0000 D2 NETWORK [conn417] Session from 10.108.2.54:49318 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:21.742+0000 I NETWORK [conn417] end connection 10.108.2.54:49318 (90 connections now open) 2019-09-04T06:34:21.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47328 #445 (91 connections now open) 2019-09-04T06:34:21.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:21.743+0000 D2 COMMAND [conn445] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:21.743+0000 I NETWORK [conn445] received client metadata from 10.108.2.52:47328 conn445: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:21.743+0000 I COMMAND [conn445] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:21.754+0000 I - [conn416] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:21.754+0000 D1 COMMAND [conn416] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 2206AF5AF7A132B84E6DD84FBEAA8BC020142021), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:21.754+0000 D1 - [conn416] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:21.754+0000 W - [conn416] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:21.756+0000 I COMMAND [conn418] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 2206AF5AF7A132B84E6DD84FBEAA8BC020142021), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:21.756+0000 D1 - [conn418] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:21.756+0000 W - [conn418] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:21.756+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48506 #446 (92 connections now open) 2019-09-04T06:34:21.756+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:21.756+0000 D2 COMMAND [conn446] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:21.756+0000 I NETWORK [conn446] received client metadata from 10.108.2.59:48506 conn446: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:21.756+0000 I COMMAND [conn446] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:21.768+0000 I COMMAND [conn419] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, C4CFA6E23777FD680628F991FF985CB66BF77E05), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:34:21.768+0000 D1 - [conn419] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:21.768+0000 W - [conn419] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:21.774+0000 I - [conn418] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:21.774+0000 D1 COMMAND [conn418] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 2206AF5AF7A132B84E6DD84FBEAA8BC020142021), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:21.774+0000 D1 - [conn418] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:21.774+0000 W - [conn418] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:21.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:21.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:21.791+0000 I - [conn419] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceS
tateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : 
"45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] 
mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:21.791+0000 D1 COMMAND [conn419] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, C4CFA6E23777FD680628F991FF985CB66BF77E05), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:21.791+0000 D1 - [conn419] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:21.791+0000 W - [conn419] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:21.811+0000 I - [conn418] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26Serv
iceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : 
"/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:21.811+0000 W COMMAND [conn418] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:21.811+0000 I COMMAND [conn418] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 2206AF5AF7A132B84E6DD84FBEAA8BC020142021), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:34:21.811+0000 D2 NETWORK [conn418] Session from 10.108.2.52:47310 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:21.811+0000 I NETWORK [conn418] end connection 10.108.2.52:47310 (91 connections now open) 2019-09-04T06:34:21.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:21.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:21.832+0000 I - [conn419] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:21.832+0000 W COMMAND [conn419] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:21.832+0000 I COMMAND [conn419] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, 
2019-09-04T06:34:21.832+0000 D2 NETWORK [conn419] Session from 10.108.2.59:48490 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:21.832+0000 I NETWORK [conn419] end connection 10.108.2.59:48490 (90 connections now open) 2019-09-04T06:34:21.832+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:21.848+0000 I - [conn416] [frame addresses identical to conn418 above] ----- BEGIN BACKTRACE ----- [backtrace JSON, processInfo, somap and symbolized frames identical to the conn418 backtrace above] ----- END BACKTRACE ----- 2019-09-04T06:34:21.848+0000 W COMMAND [conn416] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:34:21.848+0000 I COMMAND [conn416] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578822, 1), signature: { hash: BinData(0, 2206AF5AF7A132B84E6DD84FBEAA8BC020142021), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30103ms 2019-09-04T06:34:21.848+0000 D2 NETWORK [conn416] Session from 10.108.2.72:45870 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:21.848+0000 I NETWORK [conn416] end connection 10.108.2.72:45870 (89 connections now open) 2019-09-04T06:34:21.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:21.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:21.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:21.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:21.932+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:22.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:22.032+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:22.043+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50268 #447 (90 connections now open) 2019-09-04T06:34:22.043+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:22.043+0000 D2 COMMAND [conn447] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:22.043+0000 I NETWORK [conn447] received client metadata from 
10.108.2.50:50268 conn447: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:22.043+0000 I COMMAND [conn447] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:22.044+0000 D2 COMMAND [conn447] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, 890D13BC1F7341B693C197CA647BC05E3FDD2B2E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:22.044+0000 D1 REPL [conn447] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578860, 2), t: 1 } 2019-09-04T06:34:22.044+0000 D3 REPL [conn447] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.054+0000 2019-09-04T06:34:22.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.132+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:22.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.233+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:22.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:22.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:22.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:22.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:22.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241) } 2019-09-04T06:34:22.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:22.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:34:22.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.248+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:22.248+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:22.248+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578860, 2) 2019-09-04T06:34:22.248+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20101 2019-09-04T06:34:22.248+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:22.248+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:22.248+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20101 2019-09-04T06:34:22.250+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20104 2019-09-04T06:34:22.250+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20104 2019-09-04T06:34:22.250+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578860, 2), t: 1 }({ ts: Timestamp(1567578860, 2), t: 1 }) 2019-09-04T06:34:22.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.326+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.333+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:22.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.433+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:22.533+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:22.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 
reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.584+0000 D2 COMMAND [conn425] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578855, 1), signature: { hash: BinData(0, 1F4340BC981E2B9D0D58976FE30DE1B22C151008), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:34:22.584+0000 D1 REPL [conn425] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578860, 2), t: 1 } 2019-09-04T06:34:22.584+0000 D3 REPL [conn425] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.594+0000 2019-09-04T06:34:22.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.633+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:22.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.733+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:22.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.833+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:22.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:22.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:22.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat 
(requestId: 1381) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:22.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1381 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:32.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:22.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:22.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1382) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:22.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1382 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:32.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:22.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:22.839+0000 D2 ASIO [Replication] Request 1381 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } 2019-09-04T06:34:22.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:22.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:22.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1381) from cmodb804.togewa.com:27019, { ok: 
1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } 2019-09-04T06:34:22.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:22.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:24.839Z 2019-09-04T06:34:22.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:22.839+0000 D2 ASIO [Replication] Request 1382 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } 2019-09-04T06:34:22.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:22.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:22.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1382) from cmodb802.togewa.com:27019, { ok: 
1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } 2019-09-04T06:34:22.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:22.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:34:31.446+0000 2019-09-04T06:34:22.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:34:33.684+0000 2019-09-04T06:34:22.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:22.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:24.839Z 2019-09-04T06:34:22.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:22.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:22.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:22.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:22.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:22.933+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:23.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:23.034+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:23.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.063+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:23.063+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:34:22.839+0000 2019-09-04T06:34:23.063+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:34:22.839+0000 2019-09-04T06:34:23.063+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:34:22.839+0000 2019-09-04T06:34:23.063+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:34:32.839+0000 2019-09-04T06:34:23.063+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date 
is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:23.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:23.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:23.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:23.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:23.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241) } 2019-09-04T06:34:23.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.134+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:23.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:34:23.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.234+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:23.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:23.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:23.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.249+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:23.249+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:23.249+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578860, 2) 2019-09-04T06:34:23.249+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20134 2019-09-04T06:34:23.249+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:23.249+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:23.249+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20134 2019-09-04T06:34:23.250+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20137 2019-09-04T06:34:23.250+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20137 2019-09-04T06:34:23.250+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578860, 2), t: 1 }({ ts: Timestamp(1567578860, 2), t: 1 }) 2019-09-04T06:34:23.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.334+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:23.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.434+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:23.534+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:34:23.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.634+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:23.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.735+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:23.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.835+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:23.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:23.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:23.935+0000 D4 STORAGE [WTJournalFlusher] flushed 
journal 2019-09-04T06:34:24.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:24.035+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:24.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.135+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:24.141+0000 I NETWORK [listener] connection accepted from 10.108.2.46:41134 #448 (91 connections now open) 2019-09-04T06:34:24.142+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:24.142+0000 D2 COMMAND [conn448] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:24.142+0000 I NETWORK [conn448] received client metadata from 10.108.2.46:41134 conn448: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:24.142+0000 I COMMAND [conn448] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:24.154+0000 I COMMAND [conn421] Command on database config timed out waiting for read concern to be satisfied. 
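NOTE: This record, continued by the "Command:" payload below, is the first failure in this excerpt. conn421 belongs to a cluster-internal client at 10.108.2.46 (compare the internalClient metadata on conn448 just above), and its find on config.settings for the balancer document carries readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }. As the backtrace that follows shows, waitForReadConcern blocks until this node's majority-committed opTime reaches the requested afterOpTime; the heartbeats elsewhere in this log put the set at term 1, so an opTime at term 92 appears out of reach, the wait runs out the command's maxTimeMS (30000 ms), and the server surfaces MaxTimeMSExpired (code 50). A minimal sketch of the same read issued from a driver, assuming pymongo and a hypothetical connection URI; afterOpTime is an internal field with no public driver equivalent, so it is omitted here:

# Sketch: reissue the failing balancer-settings read against this config server.
from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout

client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")  # hypothetical URI
try:
    reply = client["config"].command({
        "find": "settings",
        "filter": {"_id": "balancer"},
        "limit": 1,
        "readConcern": {"level": "majority"},  # without afterOpTime this returns promptly
        "maxTimeMS": 30000,
    })
    print(reply["cursor"]["firstBatch"])
except ExecutionTimeout:
    # pymongo raises ExecutionTimeout for server error code 50 (MaxTimeMSExpired),
    # the same errName/errCode pair logged for conn421 further down.
    print("operation exceeded time limit")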
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578825, 1), signature: { hash: BinData(0, 3537EBC8F1D4EC0EF230F13791772B6E3C891595), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:34:24.154+0000 D1 - [conn421] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:24.154+0000 W - [conn421] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:24.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.170+0000 I - [conn421] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b"
:"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : 
"/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] 
mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:24.170+0000 D1 COMMAND [conn421] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578825, 1), signature: { hash: BinData(0, 3537EBC8F1D4EC0EF230F13791772B6E3C891595), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:24.171+0000 D1 - [conn421] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:24.171+0000 W - [conn421] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:24.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.190+0000 I - [conn421] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:24.190+0000 W COMMAND [conn421] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:24.190+0000 I COMMAND [conn421] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578825, 1), signature: { hash: BinData(0, 3537EBC8F1D4EC0EF230F13791772B6E3C891595), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:34:24.190+0000 D2 NETWORK [conn421] Session from 10.108.2.46:41116 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:24.191+0000 I NETWORK [conn421] end connection 10.108.2.46:41116 (90 connections now open) 2019-09-04T06:34:24.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:24.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:24.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:24.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:24.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241) } 2019-09-04T06:34:24.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:24.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
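NOTE: The conn421 incident closes just above: the slow-op summary reports ok:0 errName:MaxTimeMSExpired errCode:50 after 30027ms (the 30000 ms budget plus dispatch overhead), and the peer then drops the session (HostUnreachable on SourceMessage, followed by "end connection"). The rest of this stretch is steady-state background traffic: each monitoring connection reissues isMaster about every 500 ms, WTJournalFlusher logs a flush roughly every 100 ms (the default journal commit interval), and FlowControlRefresher keeps the ticket pool at 1000000000 (the Before/Now counts continuing below), meaning flow control is idle and not throttling writers. A small sketch for separating that noise from real events in a log like this one, assuming plain Python and a hypothetical file name:

# Sketch: scan a mongod 4.2 text log for journal-flush cadence and slow commands.
import re
from datetime import datetime

REC = re.compile(r"^(\S+) (\S+) +(\S+) +\[([^\]]+)\] (.*)$")  # ts, severity, component, context, message
MS = re.compile(r" (\d+)ms$")

flushes, slow = [], []
with open("mongod.log") as fh:  # hypothetical path
    for line in fh:
        m = REC.match(line.rstrip("\n"))
        if not m:
            continue
        ts, sev, comp, ctx, msg = m.groups()
        if ctx == "WTJournalFlusher" and msg == "flushed journal":
            flushes.append(datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z"))
        d = MS.search(msg)
        if comp == "COMMAND" and d and int(d.group(1)) >= 1000:  # anything over one second
            slow.append((ts, int(d.group(1)), msg[:100]))

gaps = [(b - a).total_seconds() for a, b in zip(flushes, flushes[1:])]
if gaps:
    print("journal flushes: %d, mean gap %.3fs" % (len(flushes), sum(gaps) / len(gaps)))
for ts, ms, head in slow:
    print("SLOW %6dms %s %s" % (ms, ts, head))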
Before: 1000000000 Now: 1000000000 2019-09-04T06:34:24.235+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:24.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.249+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:24.249+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:24.249+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578860, 2) 2019-09-04T06:34:24.249+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20168 2019-09-04T06:34:24.249+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:24.249+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:24.249+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20168 2019-09-04T06:34:24.250+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20171 2019-09-04T06:34:24.250+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20171 2019-09-04T06:34:24.250+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578860, 2), t: 1 }({ ts: Timestamp(1567578860, 2), t: 1 }) 2019-09-04T06:34:24.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.335+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:24.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.435+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:24.536+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:24.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.570+0000 I COMMAND [conn52] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.636+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:24.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.736+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:24.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.836+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:24.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:24.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:24.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1383) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:24.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1383 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:34.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:24.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:24.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1384) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:24.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1384 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:34.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:24.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:24.839+0000 D2 ASIO [Replication] Request 1383 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } 2019-09-04T06:34:24.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:24.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:24.839+0000 D2 ASIO [Replication] Request 1384 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } 2019-09-04T06:34:24.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:24.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1383) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } 2019-09-04T06:34:24.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:24.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:24.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:26.839Z 2019-09-04T06:34:24.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:24.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1384) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: 
Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } 2019-09-04T06:34:24.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:24.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:34:33.684+0000 2019-09-04T06:34:24.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:34:35.535+0000 2019-09-04T06:34:24.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:24.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:24.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:24.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:26.839Z 2019-09-04T06:34:24.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:24.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:24.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:24.936+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:25.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:25.036+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:25.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:25.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:25.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:25.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:25.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to 
cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), opTime: { ts: Timestamp(1567578860, 2), t: 1 }, wallTime: new Date(1567578860241) } 2019-09-04T06:34:25.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 10FABDB484354AA90AD9FE9F773AE6164E7F8C75), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.065+0000 I COMMAND [conn422] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578831, 1), signature: { hash: BinData(0, 5BA3C2225B23EDB083DE9DCD7C1516D49AA24879), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:25.065+0000 D1 - [conn422] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:25.065+0000 W - [conn422] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:25.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.083+0000 I - [conn422] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:25.083+0000 D1 COMMAND [conn422] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578831, 1), signature: { hash: BinData(0, 5BA3C2225B23EDB083DE9DCD7C1516D49AA24879), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:25.083+0000 D1 - [conn422] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:25.083+0000 W - [conn422] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:25.103+0000 I - [conn422] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:25.103+0000 W COMMAND [conn422] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:25.103+0000 I COMMAND [conn422] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578831, 1), signature: { hash: BinData(0, 5BA3C2225B23EDB083DE9DCD7C1516D49AA24879), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30033ms 2019-09-04T06:34:25.103+0000 D2 NETWORK [conn422] Session from 10.108.2.55:36786 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:25.103+0000 I NETWORK [conn422] end connection 10.108.2.55:36786 (89 connections now open) 2019-09-04T06:34:25.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.136+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:25.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:25.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:34:25.237+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:25.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.249+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:25.249+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:25.249+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578860, 2) 2019-09-04T06:34:25.249+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20201 2019-09-04T06:34:25.249+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:25.249+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:25.249+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20201 2019-09-04T06:34:25.250+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20204 2019-09-04T06:34:25.250+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20204 2019-09-04T06:34:25.250+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578860, 2), t: 1 }({ ts: Timestamp(1567578860, 2), t: 1 }) 2019-09-04T06:34:25.259+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:25.259+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:25.259+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1385 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:55.259+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: 
Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:25.259+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:50.250+0000 2019-09-04T06:34:25.259+0000 D2 ASIO [RS] Request 1385 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578861, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } 2019-09-04T06:34:25.259+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578861, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:25.259+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:25.259+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:50.250+0000 2019-09-04T06:34:25.259+0000 D2 ASIO [RS] Request 1375 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpApplied: { ts: Timestamp(1567578860, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578861, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } 
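
The Request 1375 exchange above is this secondary's oplog fetcher tailing local.oplog.rs on its sync source cmodb804.togewa.com:27019: an open tailable cursor, polled via getMore (batchSize 13981010, maxTimeMS 5000), returns an empty nextBatch whenever no new operations have arrived. A minimal mongo-shell sketch of the same exchange, run directly against the sync source's local database; the filter timestamp is copied from the log, and the replication-internal fields (term, lastKnownCommittedOpTime) are omitted since plain clients do not send them:

    // Open a tailable, awaitData cursor over the oplog, as the fetcher's
    // initial find does.
    var local = db.getSiblingDB("local");
    var res = local.runCommand({
        find: "oplog.rs",
        filter: { ts: { $gte: Timestamp(1567578860, 2) } },
        tailable: true,
        awaitData: true,
        maxTimeMS: 60000
    });
    // Pull the next batch the same way the logged getMore requests do; an
    // empty nextBatch (as in the Request 1375 response above) means no new ops.
    var more = local.runCommand({
        getMore: res.cursor.id,
        collection: "oplog.rs",
        batchSize: 13981010,
        maxTimeMS: 5000
    });
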
2019-09-04T06:34:25.259+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpApplied: { ts: Timestamp(1567578860, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578861, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578860, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:25.259+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:25.259+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:25.259+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:35.535+0000 2019-09-04T06:34:25.259+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:36.527+0000 2019-09-04T06:34:25.259+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:25.259+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:25.259+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1386 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:35.259+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578860, 2), t: 1 } } 2019-09-04T06:34:25.259+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:50.250+0000 2019-09-04T06:34:25.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.337+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:25.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.437+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:25.537+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
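
The conn422 failure earlier in this log is a balancer-settings read (find on config.settings for _id "balancer") that exhausted its 30-second maxTimeMS while queued on the global lock. A minimal way to re-issue the same read from the shell, assuming a connection to this config server; the limit, time budget, and read concern level mirror the logged command:

    // Re-run the balancer-settings read that timed out on conn422.
    var cfg = db.getSiblingDB("config");
    var res = cfg.runCommand({
        find: "settings",
        filter: { _id: "balancer" },
        limit: 1,
        maxTimeMS: 30000,
        readConcern: { level: "majority" }
    });
    // While the global lock is congested this returns the same failure the
    // log records: { ok: 0, code: 50, codeName: "MaxTimeMSExpired" }.
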
2019-09-04T06:34:25.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.637+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:25.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.737+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:25.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.767+0000 D2 ASIO [RS] Request 1386 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578865, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578865763), o: { $v: 1, $set: { ping: new Date(1567578865763) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpApplied: { ts: Timestamp(1567578865, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, 
operationTime: Timestamp(1567578865, 1) } 2019-09-04T06:34:25.767+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578865, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578865763), o: { $v: 1, $set: { ping: new Date(1567578865763) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpApplied: { ts: Timestamp(1567578865, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578860, 2), $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:25.767+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:25.767+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578865, 1) and ending at ts: Timestamp(1567578865, 1) 2019-09-04T06:34:25.767+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:36.527+0000 2019-09-04T06:34:25.767+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:36.173+0000 2019-09-04T06:34:25.767+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:25.767+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:25.767+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:25.767+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:25.767+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578860, 2) 2019-09-04T06:34:25.767+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20220 2019-09-04T06:34:25.767+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:25.767+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:25.767+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20220 2019-09-04T06:34:25.767+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:34:25.767+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:25.767+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ 
RecordId(10) 2019-09-04T06:34:25.767+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578860, 2) 2019-09-04T06:34:25.767+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578865, 1) } 2019-09-04T06:34:25.767+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578865, 1), t: 1 } 2019-09-04T06:34:25.767+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20223 2019-09-04T06:34:25.767+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:25.767+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:25.767+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20223 2019-09-04T06:34:25.767+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20205 2019-09-04T06:34:25.767+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20205 2019-09-04T06:34:25.767+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20226 2019-09-04T06:34:25.767+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20226 2019-09-04T06:34:25.767+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:25.767+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 20228 2019-09-04T06:34:25.767+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578865, 1) 2019-09-04T06:34:25.767+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578865, 1) 2019-09-04T06:34:25.767+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 20228 2019-09-04T06:34:25.767+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:25.767+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:25.767+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20227 2019-09-04T06:34:25.767+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20227 2019-09-04T06:34:25.767+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20230 2019-09-04T06:34:25.767+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20230 2019-09-04T06:34:25.767+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578865, 1), t: 1 }({ ts: Timestamp(1567578865, 1), t: 1 }) 2019-09-04T06:34:25.767+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578865, 1) 2019-09-04T06:34:25.767+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20231 2019-09-04T06:34:25.768+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578865, 1) } } ] } sort: {} projection: {} 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] 
Subplanner: planning child 0 of 2 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578865, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578865, 1) || First: notFirst: full path: ts 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578865, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578865, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578865, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
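
The D5 QUERY lines above show the subplanner decomposing the $or predicate on local.replset.minvalid into its two branches; with only the _id index available, each branch rates zero indexed solutions and the planner falls back to a collection scan. A sketch of the same predicate run through explain(), assuming shell access to this node; the timestamp literal is copied from the log:

    // Ask the planner to explain the same $or it plans above; with only the
    // _id_ index present, the winning plan should come back as COLLSCAN.
    db.getSiblingDB("local")
      .getCollection("replset.minvalid")
      .find({ $or: [
          { t: { $lt: 1 } },
          { t: 1, ts: { $lt: Timestamp(1567578865, 1) } }
      ] })
      .explain("queryPlanner");
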
2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578865, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:25.768+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20231 2019-09-04T06:34:25.768+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:25.768+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:25.768+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578865, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578865763), o: { $v: 1, $set: { ping: new Date(1567578865763) } } }, oplog application mode: Secondary 2019-09-04T06:34:25.768+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578865, 1) 2019-09-04T06:34:25.768+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 20233 2019-09-04T06:34:25.768+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" } 2019-09-04T06:34:25.768+0000 D2 STORAGE [repl-writer-worker-9] WiredTigerSizeStorer::store Marking table:config/collection/28--6194257481163143499 dirty, numRecords: 23, dataSize: 1914, use_count: 3 2019-09-04T06:34:25.768+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:25.768+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 20233 2019-09-04T06:34:25.768+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:25.768+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578865, 1), t: 1 }({ ts: Timestamp(1567578865, 1), t: 1 }) 2019-09-04T06:34:25.768+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578865, 1) 2019-09-04T06:34:25.768+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20232 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:25.768+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:25.768+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:25.768+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20232 2019-09-04T06:34:25.768+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578865, 1) 2019-09-04T06:34:25.768+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:25.768+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20236 2019-09-04T06:34:25.768+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20236 2019-09-04T06:34:25.768+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:25.768+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1387 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:55.768+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:25.768+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:55.768+0000 2019-09-04T06:34:25.768+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578865, 1), t: 1 }({ ts: Timestamp(1567578865, 1), t: 1 }) 2019-09-04T06:34:25.769+0000 D2 ASIO [RS] Request 1387 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) } 2019-09-04T06:34:25.769+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:25.769+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:25.769+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:25.769+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578865, 1), t: 1 } 2019-09-04T06:34:25.769+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1388 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:35.769+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578860, 2), t: 1 } } 2019-09-04T06:34:25.769+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:25.769+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:25.769+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:25.769+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:25.769+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1389 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:55.769+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { 
ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, durableWallTime: new Date(1567578860241), appliedOpTime: { ts: Timestamp(1567578860, 2), t: 1 }, appliedWallTime: new Date(1567578860241), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578860, 2), t: 1 }, lastCommittedWall: new Date(1567578860241), lastOpVisible: { ts: Timestamp(1567578860, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:25.769+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:25.770+0000 D2 ASIO [RS] Request 1388 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpApplied: { ts: Timestamp(1567578865, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) } 2019-09-04T06:34:25.770+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpApplied: { ts: Timestamp(1567578865, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:25.770+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:25.770+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:25.770+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D2 REPL [replication-1] 
Setting replication's stable optime to { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578860, 1) 2019-09-04T06:34:25.770+0000 D3 REPL [conn430] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn430] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.662+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn439] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn439] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.447+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn437] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn437] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:46.416+0000 2019-09-04T06:34:25.770+0000 D2 ASIO [RS] Request 1389 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) } 2019-09-04T06:34:25.770+0000 D3 REPL [conn420] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn420] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.423+0000 2019-09-04T06:34:25.770+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:25.770+0000 D3 REPL [conn402] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn402] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.289+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn424] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn424] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.437+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn443] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 
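
The replSetUpdatePosition traffic and the snapshot notifications above are this secondary reporting its durable and applied optimes upstream, then waking waitUntilOpTime readers as the commit point advances. The same per-member optimes are visible through the public status command on any configrs member, for example:

    // Print each member's state and last applied optime; these correspond to
    // the appliedOpTime/durableOpTime entries in the reporter payloads above.
    var s = db.adminCommand({ replSetGetStatus: 1 });
    s.members.forEach(function (m) {
        print(m.name, m.stateStr, tojson(m.optime));
    });
    // rs.printSlaveReplicationInfo() gives the equivalent lag view interactively.
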
2019-09-04T06:34:25.770+0000 D3 REPL [conn443] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.645+0000 2019-09-04T06:34:25.770+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:55.770+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn432] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn432] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.343+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn434] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn434] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn433] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn433] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.683+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn429] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn429] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:41.583+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn442] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn442] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.133+0000 2019-09-04T06:34:25.770+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:36.173+0000 2019-09-04T06:34:25.770+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:36.108+0000 2019-09-04T06:34:25.770+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:25.770+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1390 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:35.770+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578865, 1), t: 1 } } 2019-09-04T06:34:25.770+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:25.770+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:55.770+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn447] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn447] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.054+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn425] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn425] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.594+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn440] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 
2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn440] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.538+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn410] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn410] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn431] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn431] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.276+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn403] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn403] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:27.566+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn435] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn435] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:43.909+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn427] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn427] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.335+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn426] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn426] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.517+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn409] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn409] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.328+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn438] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn438] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.660+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn415] Got notified of new snapshot: { ts: Timestamp(1567578865, 1), t: 1 }, 2019-09-04T06:34:25.763+0000 2019-09-04T06:34:25.770+0000 D3 REPL [conn415] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.282+0000 2019-09-04T06:34:25.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.826+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.837+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:25.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:25.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:25.867+0000 D2 STORAGE [WTOplogJournalThread] No new 
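Note: the steady drumbeat of isMaster commands above (conn33, conn42, conn46, ...) is routine topology monitoring; each connected router or driver re-checks this node roughly every half second. A minimal sketch of the same check from a client, assuming pymongo is available and using the host/port taken from this log:

    # Illustrative only: issue the same isMaster the monitoring connections send.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
    reply = client.admin.command("isMaster")
    # A secondary answers ismaster: false, secondary: true, setName: "configrs"
    print(reply["ismaster"], reply.get("secondary"), reply.get("setName"))
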
2019-09-04T06:34:25.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:25.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:25.937+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:26.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:26.038+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:26.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.103+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.103+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.138+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:26.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 8C9932C637D8584A911B0A94B48C9D02F41B847B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:26.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:34:26.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 8C9932C637D8584A911B0A94B48C9D02F41B847B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:26.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 8C9932C637D8584A911B0A94B48C9D02F41B847B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:26.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), opTime: { ts: Timestamp(1567578865, 1), t: 1 }, wallTime: new Date(1567578865763) }
2019-09-04T06:34:26.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 8C9932C637D8584A911B0A94B48C9D02F41B847B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:26.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:26.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.238+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:26.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.326+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.338+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:26.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.438+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:26.538+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:26.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.560+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:34:26.561+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:34:26.561+0000 D2 COMMAND [conn90] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:34:26.561+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms
2019-09-04T06:34:26.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.603+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.603+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.638+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:26.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.695+0000 D2 COMMAND [conn49] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.695+0000 I COMMAND [conn49] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.738+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:26.767+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:26.767+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:26.767+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578865, 1)
2019-09-04T06:34:26.767+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20269
2019-09-04T06:34:26.767+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:26.767+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
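Note: conn90's ismaster carries $readPreference: { mode: "secondaryPreferred" }, which is how a client advertises where it wants reads routed; that is why this secondary serves it over the legacy op_query protocol. A hedged pymongo sketch of a client configured the same way (hostnames come from this log; the target collection is just an example):

    # Illustrative only: route reads to a secondary when one is available.
    from pymongo import MongoClient, ReadPreference

    client = MongoClient(
        "mongodb://cmodb802.togewa.com:27019,cmodb803.togewa.com:27019,cmodb804.togewa.com:27019",
        replicaSet="configrs",
    )
    coll = client.config.get_collection(
        "shards", read_preference=ReadPreference.SECONDARY_PREFERRED
    )
    print(list(coll.find()))  # served by a secondary such as this node when healthy
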
2019-09-04T06:34:26.767+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20269
2019-09-04T06:34:26.768+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20272
2019-09-04T06:34:26.768+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20272
2019-09-04T06:34:26.769+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578865, 1), t: 1 }({ ts: Timestamp(1567578865, 1), t: 1 })
2019-09-04T06:34:26.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.826+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.839+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:26.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:26.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:34:26.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1391) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:26.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1391 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:36.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:26.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:26.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1392) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:26.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1392 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:36.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:26.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:26.839+0000 D2 ASIO [Replication] Request 1391 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), opTime: { ts: Timestamp(1567578865, 1), t: 1 }, wallTime: new Date(1567578865763), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) }
2019-09-04T06:34:26.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), opTime: { ts: Timestamp(1567578865, 1), t: 1 }, wallTime: new Date(1567578865763), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:26.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:26.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1391) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), opTime: { ts: Timestamp(1567578865, 1), t: 1 }, wallTime: new Date(1567578865763), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) }
2019-09-04T06:34:26.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:34:26.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:28.839Z
2019-09-04T06:34:26.839+0000 D2 ASIO [Replication] Request 1392 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), opTime: { ts: Timestamp(1567578865, 1), t: 1 }, wallTime: new Date(1567578865763), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) }
2019-09-04T06:34:26.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:26.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), opTime: { ts: Timestamp(1567578865, 1), t: 1 }, wallTime: new Date(1567578865763), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:34:26.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:26.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1392) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), opTime: { ts: Timestamp(1567578865, 1), t: 1 }, wallTime: new Date(1567578865763), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) }
2019-09-04T06:34:26.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:34:26.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:34:36.108+0000
2019-09-04T06:34:26.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:34:38.188+0000
2019-09-04T06:34:26.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:34:26.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:28.839Z
2019-09-04T06:34:26.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:26.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:26.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:26.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
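Note: the heartbeat round-trips above (requests 1391 to cmodb804 and 1392 to cmodb802) plus "Postponing election timeout due to heartbeat from primary" are the replica set's failure detector working normally. The same heartbeat-derived view can be read with the real replSetGetStatus command; a sketch assuming no authentication is required (authorization is disabled on this deployment):

    # Illustrative only: inspect member state and replication lag.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        # stateStr is PRIMARY/SECONDARY; diverging optimeDate values indicate lag
        print(m["name"], m["stateStr"], m.get("optimeDate"))
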
2019-09-04T06:34:26.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:26.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:26.939+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:27.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:27.039+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:27.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 8C9932C637D8584A911B0A94B48C9D02F41B847B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:27.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:34:27.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 8C9932C637D8584A911B0A94B48C9D02F41B847B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:27.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 8C9932C637D8584A911B0A94B48C9D02F41B847B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:27.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), opTime: { ts: Timestamp(1567578865, 1), t: 1 }, wallTime: new Date(1567578865763) }
2019-09-04T06:34:27.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 8C9932C637D8584A911B0A94B48C9D02F41B847B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.139+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:27.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:27.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:27.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.239+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:27.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.339+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:27.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.439+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:27.539+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:27.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.568+0000 I COMMAND [conn403] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578831, 1), signature: { hash: BinData(0, 5BA3C2225B23EDB083DE9DCD7C1516D49AA24879), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
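Note: the timeout above is the key event in this excerpt. The router's find on config.shards asked for readConcern majority with afterOpTime in term 92, while this set is running in term 1, so the requested opTime likely can never become majority-committed (a stale opTime apparently cached from an earlier incarnation of the config server replica set) and the wait runs until maxTimeMS: 30000 expires. A hedged repro sketch in Python; the afterOpTime values are copied from the log and deliberately unsatisfiable:

    # Illustrative only: re-issue the same find with the stale afterOpTime.
    from bson import Timestamp
    from pymongo import MongoClient, ReadPreference
    from pymongo.errors import ExecutionTimeout

    client = MongoClient("mongodb://cmodb803.togewa.com:27019", directConnection=True)
    cmd = {
        "find": "shards",
        "readConcern": {
            "level": "majority",
            # opTime from term 92; the set is in term 1, so the wait never completes
            "afterOpTime": {"ts": Timestamp(1566459168, 1), "t": 92},
        },
        "maxTimeMS": 30000,
    }
    try:
        client.get_database("config").command(
            cmd, read_preference=ReadPreference.SECONDARY_PREFERRED
        )
    except ExecutionTimeout as exc:  # MaxTimeMSExpired (code 50) after ~30s
        print("timed out:", exc)
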
2019-09-04T06:34:27.568+0000 D1 - [conn403] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:34:27.568+0000 W - [conn403] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:27.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.585+0000 I - [conn403] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
 mongod(+0x10FBF24) [0x56174a083f24]
 mongod(+0x10FCE0E) [0x56174a084e0e]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:34:27.585+0000 D1 COMMAND [conn403] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578831, 1), signature: { hash: BinData(0, 5BA3C2225B23EDB083DE9DCD7C1516D49AA24879), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:27.585+0000 D1 - [conn403] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:34:27.585+0000 W - [conn403] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:27.605+0000 I - [conn403] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(+0xC90D34) [0x561749c18d34]
 mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
 mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
 mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
 mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
 mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:34:27.605+0000 W COMMAND [conn403] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:34:27.605+0000 I COMMAND [conn403] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578831, 1), signature: { hash: BinData(0, 5BA3C2225B23EDB083DE9DCD7C1516D49AA24879), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms
2019-09-04T06:34:27.605+0000 D2 NETWORK [conn403] Session from 10.108.2.61:38044 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:34:27.605+0000 I NETWORK [conn403] end connection 10.108.2.61:38044 (88 connections now open)
2019-09-04T06:34:27.639+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:27.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.702+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.740+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:27.768+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:27.768+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:27.768+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578865, 1)
2019-09-04T06:34:27.768+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20300
2019-09-04T06:34:27.768+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:27.768+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:27.768+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20300
2019-09-04T06:34:27.769+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20303
2019-09-04T06:34:27.769+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20303
2019-09-04T06:34:27.769+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578865, 1), t: 1 }({ ts: Timestamp(1567578865, 1), t: 1 })
2019-09-04T06:34:27.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.840+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:27.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:27.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:27.940+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:28.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:28.022+0000 D2 COMMAND [conn50] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:28.022+0000 I COMMAND [conn50] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:28.040+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:28.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:28.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:28.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:28.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:28.140+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:28.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:28.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
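Note: the slow-operation line above records the failed find at 30028ms against its maxTimeMS of 30000, and the preceding lock_state.cpp backtrace shows the follow-on assertion: while logging the slow operation, CurOp::completeAndLogOperation could not acquire the global lock in time to gather storage statistics, hence the "Unable to gather storage statistics" warning (the "aquire" spelling is the server's own message text). When sifting verbose logs like this one, a small filter for slow command lines helps; a sketch assuming the 4.2 plain-text log format shown here:

    # Illustrative helper: print command log entries slower than a threshold.
    import re
    import sys

    # Matches the trailing "<n>ms" duration on 4.2-style command log lines.
    SLOW = re.compile(r"protocol:op_\w+ (\d+)ms$")

    def slow_ops(path, threshold_ms=1000):
        with open(path, errors="replace") as fh:
            for line in fh:
                m = SLOW.search(line.rstrip())
                if m and int(m.group(1)) >= threshold_ms:
                    yield line.rstrip()

    if __name__ == "__main__":
        for entry in slow_ops(sys.argv[1], threshold_ms=10000):
            print(entry)
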
protocol:op_msg 0ms 2019-09-04T06:34:28.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:28.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:28.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:28.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:28.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:28.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:28.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578867, 1), signature: { hash: BinData(0, 34E6B6CE5B6ADAAA9071904D0208EEDEFE61DF85), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:28.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:28.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578867, 1), signature: { hash: BinData(0, 34E6B6CE5B6ADAAA9071904D0208EEDEFE61DF85), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:28.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578867, 1), signature: { hash: BinData(0, 34E6B6CE5B6ADAAA9071904D0208EEDEFE61DF85), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:28.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), opTime: { ts: Timestamp(1567578865, 1), t: 1 }, wallTime: new Date(1567578865763) } 2019-09-04T06:34:28.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578867, 1), signature: { hash: BinData(0, 34E6B6CE5B6ADAAA9071904D0208EEDEFE61DF85), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:28.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:28.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
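
The conn28 records above are one round of the steady replSetHeartbeat chatter between the three configrs members. The same state, optime, and sync-source information those messages carry can be inspected on demand; a small shell sketch (illustrative, not part of the log):

  // Summarize what the heartbeat traffic above reports: member
  // states, applied optimes, and who syncs from whom.
  var st = db.getSiblingDB("admin").runCommand({ replSetGetStatus: 1 });
  st.members.forEach(function (m) {
    print(m.name, m.stateStr, tojson(m.optime), m.syncingTo || "");
  });
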
2019-09-04T06:34:28.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:28.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:28.240+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:28.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:28.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:28.340+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:28.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:28.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:28.441+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:28.541+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:28.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:28.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:28.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:28.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:28.641+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:28.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:28.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:28.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:28.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:28.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:28.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:28.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:28.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:28.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:28.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:28.741+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:28.745+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46774 #449 (89 connections now open)
2019-09-04T06:34:28.745+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:34:28.745+0000 D2 COMMAND [conn449] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:34:28.745+0000 I NETWORK [conn449] received client metadata from 10.108.2.64:46774 conn449: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:34:28.745+0000 I COMMAND [conn449] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:34:28.749+0000 D2 COMMAND [conn449] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 664A6946CA5A257A3A9BCE39541DDEC5F5F9603B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:34:28.749+0000 D1 REPL [conn449] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578865, 1), t: 1 }
2019-09-04T06:34:28.749+0000 D3 REPL [conn449] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:58.759+0000
2019-09-04T06:34:28.768+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:28.768+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:28.768+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578865, 1)
2019-09-04T06:34:28.768+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20329
2019-09-04T06:34:28.768+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:28.768+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:28.768+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20329
2019-09-04T06:34:28.769+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20332
2019-09-04T06:34:28.769+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20332
2019-09-04T06:34:28.769+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578865, 1), t: 1 }({ ts: Timestamp(1567578865, 1), t: 1 })
2019-09-04T06:34:28.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
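
conn449's first command above is the connection handshake: an isMaster carrying client metadata (driver "NetworkInterfaceTL", OS, compression, internalClient), which the server logs once per connection; the metadata marks this peer as another cluster node rather than an application driver. Any client can observe the reply side of that handshake directly; a trivial shell sketch (illustrative only):

  // The handshake reply carries the topology data the internal client
  // needs: ismaster/secondary flags, hosts, and wire versions.
  db.adminCommand({ isMaster: 1 })
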
2019-09-04T06:34:28.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:28.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1393) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:28.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1393 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:38.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:28.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:28.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1394) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:28.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1394 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:38.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:28.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:28.839+0000 D2 ASIO [Replication] Request 1393 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), opTime: { ts: Timestamp(1567578865, 1), t: 1 }, wallTime: new Date(1567578865763), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578867, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) }
2019-09-04T06:34:28.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), opTime: { ts: Timestamp(1567578865, 1), t: 1 }, wallTime: new Date(1567578865763), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578867, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:28.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:34:28.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1393) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), opTime: { ts: Timestamp(1567578865, 1), t: 1 }, wallTime: new Date(1567578865763), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578867, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) }
2019-09-04T06:34:28.839+0000 D2 ASIO [Replication] Request 1394 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), opTime: { ts: Timestamp(1567578865, 1), t: 1 }, wallTime: new Date(1567578865763), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578867, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) }
2019-09-04T06:34:28.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:34:28.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), opTime: { ts: Timestamp(1567578865, 1), t: 1 }, wallTime: new Date(1567578865763), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578867, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:34:28.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:30.839Z
2019-09-04T06:34:28.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:34:28.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:28.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1394) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), opTime: { ts: Timestamp(1567578865, 1), t: 1 }, wallTime: new Date(1567578865763), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578867, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578865, 1) }
2019-09-04T06:34:28.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary
2019-09-04T06:34:28.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:34:38.188+0000
2019-09-04T06:34:28.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:34:40.255+0000
2019-09-04T06:34:28.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:34:28.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:30.839Z
2019-09-04T06:34:28.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:34:28.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:28.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:28.841+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:28.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:28.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:28.941+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:29.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:29.041+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:29.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:29.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:29.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578867, 1), signature: { hash: BinData(0, 34E6B6CE5B6ADAAA9071904D0208EEDEFE61DF85), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:29.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:34:29.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578867, 1), signature: { hash: BinData(0, 34E6B6CE5B6ADAAA9071904D0208EEDEFE61DF85), keyId: 6727891476899954718 } }, $db: "admin" }
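
The ELECTION records above show this secondary pushing its election deadline forward each time a heartbeat arrives from the primary (cmodb802); the deadline derives from the set's configured election timeout plus a randomized offset, and heartbeats recur on the configured interval. Where those knobs live, as a shell sketch (illustrative):

  // Election and heartbeat tuning live in the replica set config.
  var cfg = rs.conf();
  print(cfg.settings.electionTimeoutMillis);   // default 10000
  print(cfg.settings.heartbeatIntervalMillis); // default 2000
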
2019-09-04T06:34:29.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578867, 1), signature: { hash: BinData(0, 34E6B6CE5B6ADAAA9071904D0208EEDEFE61DF85), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:29.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), opTime: { ts: Timestamp(1567578865, 1), t: 1 }, wallTime: new Date(1567578865763) }
2019-09-04T06:34:29.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578867, 1), signature: { hash: BinData(0, 34E6B6CE5B6ADAAA9071904D0208EEDEFE61DF85), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:29.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:29.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:29.142+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:29.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:29.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:29.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:29.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:29.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:29.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:29.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:29.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:29.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:29.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:29.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:29.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:29.242+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:29.278+0000 D2 ASIO [RS] Request 1390 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578869, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578869268), o: { $v: 1, $set: { ping: new Date(1567578869265), up: 40 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpApplied: { ts: Timestamp(1567578869, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578869, 1) }
2019-09-04T06:34:29.278+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578869, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578869268), o: { $v: 1, $set: { ping: new Date(1567578869265), up: 40 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpApplied: { ts: Timestamp(1567578869, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578869, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:29.278+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:29.278+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578869, 1) and ending at ts: Timestamp(1567578869, 1)
2019-09-04T06:34:29.278+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:40.255+0000
2019-09-04T06:34:29.278+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:39.763+0000
2019-09-04T06:34:29.278+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
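
Request 1390 above is the oplog fetcher's cursor batch from the sync source (cmodb804); the single fetched entry is a mongos ping update on config.mongos. The same entry can be read straight out of an oplog from the shell; a sketch (illustrative, not part of the log):

  rs.slaveOk()  // allow reads on a secondary in the 4.2 shell
  // Most recent config.mongos entry; $natural: -1 walks the capped
  // oplog backwards from the newest record.
  db.getSiblingDB("local").oplog.rs
    .find({ ns: "config.mongos" })
    .sort({ $natural: -1 })
    .limit(1)
    .pretty()
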
2019-09-04T06:34:29.278+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:29.278+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578869, 1), t: 1 }
2019-09-04T06:34:29.278+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:29.278+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:29.278+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578865, 1)
2019-09-04T06:34:29.278+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20346
2019-09-04T06:34:29.278+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:29.278+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:29.278+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20346
2019-09-04T06:34:29.278+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:29.278+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:34:29.278+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:29.278+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578865, 1)
2019-09-04T06:34:29.278+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20349
2019-09-04T06:34:29.278+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578869, 1) }
2019-09-04T06:34:29.278+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:29.278+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:29.278+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20349
2019-09-04T06:34:29.278+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20333
2019-09-04T06:34:29.278+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20333
2019-09-04T06:34:29.278+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20352
2019-09-04T06:34:29.278+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20352
2019-09-04T06:34:29.278+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:29.279+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 20354
2019-09-04T06:34:29.279+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578869, 1)
2019-09-04T06:34:29.279+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578869, 1)
2019-09-04T06:34:29.279+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 20354
2019-09-04T06:34:29.279+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:29.279+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:34:29.279+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20353
2019-09-04T06:34:29.279+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20353
2019-09-04T06:34:29.279+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20356
2019-09-04T06:34:29.279+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20356
2019-09-04T06:34:29.279+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578869, 1), t: 1 }({ ts: Timestamp(1567578869, 1), t: 1 })
2019-09-04T06:34:29.279+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578869, 1)
2019-09-04T06:34:29.279+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20357
2019-09-04T06:34:29.279+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578869, 1) } } ] } sort: {} projection: {}
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578869, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578869, 1) || First: notFirst: full path: ts
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578869, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578869, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578869, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578869, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:29.279+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20357
2019-09-04T06:34:29.279+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:29.279+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:29.279+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578869, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578869268), o: { $v: 1, $set: { ping: new Date(1567578869265), up: 40 } } }, oplog application mode: Secondary
2019-09-04T06:34:29.279+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578869, 1)
2019-09-04T06:34:29.279+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 20359
2019-09-04T06:34:29.279+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:34:29.279+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:34:29.279+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 20359
2019-09-04T06:34:29.279+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:29.279+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578869, 1), t: 1 }({ ts: Timestamp(1567578869, 1), t: 1 })
2019-09-04T06:34:29.279+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578869, 1)
2019-09-04T06:34:29.279+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20358
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:29.279+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:29.279+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
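
The D5 QUERY traces above are the subplanner handling the minvalid bound check: each $or branch is planned separately, only the _id index exists on local.replset.minvalid, so every branch falls back to a COLLSCAN (harmless on this one-document collection). The same plan can be requested explicitly; a shell sketch (illustrative, not part of the log):

  // Ask the planner for the same predicate it solved above; with only
  // the _id index present, winningPlan comes back as a COLLSCAN.
  db.getSiblingDB("local").replset.minvalid
    .find({ $or: [ { t: { $lt: 1 } },
                   { t: 1, ts: { $lt: Timestamp(1567578869, 1) } } ] })
    .explain("queryPlanner")
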
2019-09-04T06:34:29.279+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20358
2019-09-04T06:34:29.279+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578869, 1)
2019-09-04T06:34:29.279+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:29.279+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20362
2019-09-04T06:34:29.279+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578869, 1), t: 1 }, appliedWallTime: new Date(1567578869268), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:29.279+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1395 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:59.279+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578869, 1), t: 1 }, appliedWallTime: new Date(1567578869268), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:29.279+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:59.279+0000
2019-09-04T06:34:29.279+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20362
2019-09-04T06:34:29.279+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578869, 1), t: 1 }({ ts: Timestamp(1567578869, 1), t: 1 })
2019-09-04T06:34:29.280+0000 D2 ASIO [RS] Request 1395 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578869, 1) }
2019-09-04T06:34:29.280+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578865, 1), t: 1 }, lastCommittedWall: new Date(1567578865763), lastOpVisible: { ts: Timestamp(1567578865, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578865, 1), $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578869, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:29.280+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:29.280+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:59.280+0000
2019-09-04T06:34:29.280+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578869, 1), t: 1 }
2019-09-04T06:34:29.280+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1396 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:39.280+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578865, 1), t: 1 } }
2019-09-04T06:34:29.280+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:59.280+0000
2019-09-04T06:34:29.283+0000 D2 ASIO [RS] Request 1396 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, lastCommittedWall: new Date(1567578869268), lastOpVisible: { ts: Timestamp(1567578869, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, lastCommittedWall: new Date(1567578869268), lastOpApplied: { ts: Timestamp(1567578869, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578869, 1), $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578869, 1) }
2019-09-04T06:34:29.283+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, lastCommittedWall: new Date(1567578869268), lastOpVisible: { ts: Timestamp(1567578869, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, lastCommittedWall: new Date(1567578869268), lastOpApplied: { ts: Timestamp(1567578869, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578869, 1), $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578869, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:29.283+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:29.283+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:34:29.283+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.283+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.283+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578864, 1)
2019-09-04T06:34:29.283+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:39.763+0000
2019-09-04T06:34:29.283+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:40.067+0000
2019-09-04T06:34:29.283+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:34:29.283+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:29.283+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1397 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:39.283+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578869, 1), t: 1 } }
2019-09-04T06:34:29.283+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:59.280+0000
2019-09-04T06:34:29.283+0000 D2 COMMAND [conn413] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578869, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5af50f8f28dab2b56d66'), operName: "", parentOperId: "5d6f5af50f8f28dab2b56d63" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 35A2A179D2A99348B19F323C47CEFCA244AFC53E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578869, 1), t: 1 } }, $db: "config" }
2019-09-04T06:34:29.283+0000 D3 REPL [conn424] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.283+0000 D3 REPL [conn424] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.437+0000
2019-09-04T06:34:29.283+0000 D3 REPL [conn427] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.283+0000 D3 REPL [conn427] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.335+0000
2019-09-04T06:34:29.283+0000 D3 REPL [conn409] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.283+0000 D3 REPL [conn409] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.328+0000
2019-09-04T06:34:29.283+0000 D3 REPL [conn429] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.283+0000 D3 REPL [conn429] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:41.583+0000
2019-09-04T06:34:29.283+0000 D3 REPL [conn447] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.283+0000 D3 REPL [conn447] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.054+0000
2019-09-04T06:34:29.283+0000 D3 REPL [conn440] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn440] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.538+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn431] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn431] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.276+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn435] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn435] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:43.909+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn426] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn426] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.517+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn438] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn438] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.660+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn449] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn449] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:58.759+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn430] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn430] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.662+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn439] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn439] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.447+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn437] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn437] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:46.416+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn420] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn420] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.423+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn402] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000
2019-09-04T06:34:29.284+0000 D3 REPL [conn402] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.289+0000
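
Each conn4xx record above is a routed request parked in waitUntilOpTime until a committed snapshot reaches its requested afterOpTime; the notifications fire as new snapshots land. While parked, such operations remain visible to diagnostics; a shell sketch using the $currentOp aggregation stage (illustrative, not part of the log):

  // List active majority reads, like the parked connections above.
  db.getSiblingDB("admin").aggregate([
    { $currentOp: {} },
    { $match: { active: true, "command.readConcern.level": "majority" } },
    { $project: { opid: 1, secs_running: 1, "command.find": 1 } }
  ])
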
2019-09-04T06:34:42.289+0000 2019-09-04T06:34:29.284+0000 D3 REPL [conn443] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000 2019-09-04T06:34:29.284+0000 D3 REPL [conn443] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.645+0000 2019-09-04T06:34:29.284+0000 D3 REPL [conn434] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000 2019-09-04T06:34:29.284+0000 D3 REPL [conn434] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000 2019-09-04T06:34:29.284+0000 D3 REPL [conn433] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000 2019-09-04T06:34:29.284+0000 D3 REPL [conn433] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.683+0000 2019-09-04T06:34:29.284+0000 D3 REPL [conn442] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000 2019-09-04T06:34:29.284+0000 D3 REPL [conn442] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.133+0000 2019-09-04T06:34:29.284+0000 D3 REPL [conn425] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000 2019-09-04T06:34:29.284+0000 D1 TRACKING [conn413] Cmd: find, TrackingId: 5d6f5af50f8f28dab2b56d63|5d6f5af50f8f28dab2b56d66 2019-09-04T06:34:29.284+0000 D3 REPL [conn425] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.594+0000 2019-09-04T06:34:29.284+0000 D3 REPL [conn410] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000 2019-09-04T06:34:29.284+0000 D3 REPL [conn410] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000 2019-09-04T06:34:29.284+0000 D3 REPL [conn432] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000 2019-09-04T06:34:29.284+0000 D3 REPL [conn432] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.343+0000 2019-09-04T06:34:29.284+0000 D3 REPL [conn408] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000 2019-09-04T06:34:29.284+0000 D3 REPL [conn408] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:29.739+0000 2019-09-04T06:34:29.284+0000 D3 REPL [conn415] Got notified of new snapshot: { ts: Timestamp(1567578869, 1), t: 1 }, 2019-09-04T06:34:29.268+0000 2019-09-04T06:34:29.284+0000 D3 REPL [conn415] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.282+0000 2019-09-04T06:34:29.284+0000 D1 COMMAND [conn413] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578869, 1), t: 1 } } } 2019-09-04T06:34:29.284+0000 D3 STORAGE [conn413] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:29.284+0000 D1 COMMAND [conn413] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578869, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5af50f8f28dab2b56d66'), operName: "", parentOperId: "5d6f5af50f8f28dab2b56d63" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 35A2A179D2A99348B19F323C47CEFCA244AFC53E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578869, 1), 
t: 1 } }, $db: "config" } with readTs: Timestamp(1567578869, 1) 2019-09-04T06:34:29.284+0000 D2 QUERY [conn413] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:34:29.284+0000 I COMMAND [conn413] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578869, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5af50f8f28dab2b56d66'), operName: "", parentOperId: "5d6f5af50f8f28dab2b56d63" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 35A2A179D2A99348B19F323C47CEFCA244AFC53E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578869, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:34:29.285+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:29.285+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:29.285+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578869, 1), t: 1 }, durableWallTime: new Date(1567578869268), appliedOpTime: { ts: Timestamp(1567578869, 1), t: 1 }, appliedWallTime: new Date(1567578869268), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, lastCommittedWall: new Date(1567578869268), lastOpVisible: { ts: Timestamp(1567578869, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:29.285+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1398 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:59.285+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578869, 1), t: 1 }, durableWallTime: new Date(1567578869268), appliedOpTime: { ts: Timestamp(1567578869, 1), t: 1 }, appliedWallTime: new Date(1567578869268), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, 
lastCommittedWall: new Date(1567578869268), lastOpVisible: { ts: Timestamp(1567578869, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:29.285+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:59.280+0000 2019-09-04T06:34:29.285+0000 D2 ASIO [RS] Request 1398 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, lastCommittedWall: new Date(1567578869268), lastOpVisible: { ts: Timestamp(1567578869, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578869, 1), $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578869, 1) } 2019-09-04T06:34:29.285+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, lastCommittedWall: new Date(1567578869268), lastOpVisible: { ts: Timestamp(1567578869, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578869, 1), $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578869, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:29.285+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:29.285+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:34:59.280+0000 2019-09-04T06:34:29.342+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:29.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:29.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:29.378+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578869, 1) 2019-09-04T06:34:29.442+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:29.542+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:29.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:29.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:29.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:29.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:29.642+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:29.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:29.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
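Annotation: the records above show the majority-read path on this config server secondary. Each waitUntilOpTime entry is a mongos-originated read with readConcern { level: "majority", afterOpTime: ... } parked until the committed snapshot advances past the requested optime; once the connections are notified of snapshot { ts: Timestamp(1567578869, 1), t: 1 }, conn413's find on config.settings proceeds with that readTs (and returns nothing, via an EOF plan, because config.settings does not exist here), while the replication executor reports progress upstream with replSetUpdatePosition. As a driver-level approximation — afterOpTime is an internal field injected by mongos, not exposed to applications — a majority read bounded by the same 30s limit looks like this minimal pymongo sketch; host and port are illustrative:

    # Minimal sketch (pymongo), assuming direct access to this node.
    # Only the read-concern level and the maxTimeMS: 30000 bound from
    # the log are reproduced; afterOpTime stays server-internal.
    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    client = MongoClient("cmodb803.togewa.com", 27019)
    settings = client.get_database(
        "config", read_concern=ReadConcern("majority")).settings
    # Mirrors: find "settings" filter { _id: "chunksize" } limit 1
    doc = settings.find_one({"_id": "chunksize"}, max_time_ms=30000)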
2019-09-04T06:34:29.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:29.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:29.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:29.701+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:29.732+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:29.732+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:29.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:29.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:29.740+0000 I COMMAND [conn408] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 5FF0CCAFC787DA30065B8DDE1CC8B095FDBF43F0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:29.740+0000 D1 - [conn408] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:29.740+0000 W - [conn408] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:29.742+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:29.757+0000 I - [conn408] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:29.758+0000 D1 COMMAND [conn408] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 5FF0CCAFC787DA30065B8DDE1CC8B095FDBF43F0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:29.758+0000 D1 - [conn408] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:29.758+0000 W - [conn408] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:29.778+0000 I - [conn408] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 
0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { 
"b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : 
"/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:29.778+0000 W COMMAND [conn408] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:29.778+0000 I COMMAND [conn408] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 5FF0CCAFC787DA30065B8DDE1CC8B095FDBF43F0), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:34:29.778+0000 D2 NETWORK [conn408] Session from 10.108.2.49:53492 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:29.778+0000 I NETWORK [conn408] end connection 10.108.2.49:53492 (88 connections now open) 2019-09-04T06:34:29.843+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:29.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:29.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:29.933+0000 D2 COMMAND [conn444] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578862, 1), signature: { hash: BinData(0, B957C0F67BE372F4946F6CB8E2B141AAD7F04C25), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:29.933+0000 D1 REPL [conn444] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578869, 1), t: 1 } 2019-09-04T06:34:29.933+0000 D3 REPL [conn444] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:59.943+0000 2019-09-04T06:34:29.943+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:30.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:30.000+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:34:30.000+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:34:30.000+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:34:30.008+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:34:30.008+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:34:30.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:34:30.010+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:34:30.010+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:34:30.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 
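Annotation: the failure above is not a slow query. conn408's find on config.shards asked to wait for afterOpTime { ts: Timestamp(1566459168, 1), t: 92 }, but this replica set is on term 1 (current snapshot { ts: Timestamp(1567578869, 1), t: 1 }); an optime from term 92 can never become majority-committed in a set now on term 1 — consistent with the requesting mongos holding a cached $configServerState from before the config replica set was re-initialized — so the read-concern wait runs out the full maxTimeMS (30028ms logged) and fails with MaxTimeMSExpired, after which the client simply retries (conn444 immediately repeats the identical find). From a driver this surfaces as an ExecutionTimeout; a hedged sketch, with illustrative connection details:

    # Sketch of how the server-side MaxTimeMSExpired above surfaces in
    # pymongo (error code 50 maps to ExecutionTimeout). The stale
    # afterOpTime is injected by mongos and cannot be set from here.
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout

    client = MongoClient("cmodb803.togewa.com", 27019)
    try:
        list(client.config.shards.find(max_time_ms=30000))
    except ExecutionTimeout:
        # Logged server-side as errName:MaxTimeMSExpired errCode:50
        print("read-concern wait exceeded maxTimeMS")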
2019-09-04T06:34:30.010+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:30.011+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:30.011+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:30.011+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:34:30.011+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:34:30.011+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:34:30.011+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:30.012+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:34:30.012+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:34:30.012+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:34:30.012+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:34:30.012+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:34:30.012+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:34:30.012+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:34:30.012+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:30.012+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:30.012+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:30.012+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578869, 1) 2019-09-04T06:34:30.012+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20383 2019-09-04T06:34:30.012+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20383 2019-09-04T06:34:30.012+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:30.012+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:30.012+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:34:30.012+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:34:30.012+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:30.012+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:34:30.012+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:34:30.012+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:34:30.012+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578869, 1) 2019-09-04T06:34:30.012+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20386 2019-09-04T06:34:30.012+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20386 2019-09-04T06:34:30.012+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:30.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:30.014+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:34:30.014+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:34:30.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578869, 1) 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20388 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20388 2019-09-04T06:34:30.014+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:549 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:30.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:34:30.014+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. 
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:34:30.014+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:34:30.014+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:30.014+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20391 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20391 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20392 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20392 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20393 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20393 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20394 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20394 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20395 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20395 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20396 2019-09-04T06:34:30.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
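Annotation: conn90 above is a monitoring probe — it authenticates as dba_root via SCRAM-SHA-1, then runs serverStatus, replSetGetStatus, a COLLSCAN count of jumbo chunks, shardConnPoolStats, and reads the first and last oplog entries with forced $natural scans, before issuing listDatabases. That listDatabases call is what drives this long run of per-namespace "fetched CCE metadata" lookups against the durable catalog (_mdb_catalog.wt), one short-lived WT snapshot per namespace. A rough pymongo equivalent of the probe sequence, with illustrative connection details:

    # Approximation of the conn90 monitoring sequence seen above.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    status = client.admin.command("serverStatus")
    rs = client.admin.command("replSetGetStatus")
    jumbo = client.config.chunks.count_documents({"jumbo": True})
    # First/last oplog entries; the hinted $natural sort forces the
    # same table scan the planner logs above.
    oplog = client.local["oplog.rs"]
    first = oplog.find_one(sort=[("$natural", 1)])
    last = oplog.find_one(sort=[("$natural", -1)])
    dbs = client.admin.command("listDatabases")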
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20396 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20397 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20397 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20398 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20398 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20399 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20399 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20400 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20400 
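Annotation: each "fetched CCE metadata" record pairs a namespace with the UUID and WiredTiger idents recorded in the durable catalog (e.g. config.version -> ident config/collection/50--6194257481163143499), and the begin_transaction/rollback_transaction pairs around them are ordinary read-only WT snapshots being opened and discarded. The logical half of that mapping — namespace to UUID — is visible from any client via listCollections; a minimal sketch (idents themselves stay server-internal):

    # Minimal sketch: surface the same namespace -> UUID pairs the
    # catalog lookups above are reading.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    for ci in client.config.list_collections():
        print(ci["name"], ci.get("info", {}).get("uuid"))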
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20401
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
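
The config.chunks metadata above lists the three unique indexes a config server maintains on the chunks collection (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1) alongside the default _id_ index. The same specs can be read back through the ordinary index API; a minimal sketch from a mongo shell (hypothetical session):

    // List the index specs that the catalog metadata record above describes
    db.getSiblingDB("config").chunks.getIndexes()
    // Expected names, per the idxIdent map in the log record:
    //   ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1, _id_
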
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20401
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20402
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20402
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20403
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20403
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20404
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20404
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20405
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
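
Each "ident" in these records maps a namespace to the WiredTiger table backing it under the dbpath (for example config.shards to config/collection/82--6194257481163143499, with its indexes under config/index/). That mapping can also be resolved without reading the log; a sketch assuming a mongo shell on this node (the uri value shown is taken from the record above, and the exact field layout may vary by version):

    // Resolve a namespace to its on-disk WiredTiger table via collStats
    var stats = db.getSiblingDB("config").shards.stats()
    stats.wiredTiger.uri  // e.g. "statistics:table:config/collection/82--6194257481163143499"
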
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20405
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20406
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20406
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20407
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20407
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20408
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20408
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20409
2019-09-04T06:34:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20409
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20410
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20410
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20411
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20411
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20412
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20412
2019-09-04T06:34:30.016+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:34:30.016+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20414
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20414
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20415
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20415
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20416
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20416
2019-09-04T06:34:30.016+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:34:30.016+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20418
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20418
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20419
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20419
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20420
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20420
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20421
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20421
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20422
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20422
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20423
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20423
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20424
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20424
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20425
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20425
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20426
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20426
2019-09-04T06:34:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20427
2019-09-04T06:34:30.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20427
2019-09-04T06:34:30.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20428
2019-09-04T06:34:30.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20428
2019-09-04T06:34:30.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20429
2019-09-04T06:34:30.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20429
2019-09-04T06:34:30.017+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:34:30.017+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:34:30.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20431
2019-09-04T06:34:30.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20431
2019-09-04T06:34:30.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20432
2019-09-04T06:34:30.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20432
2019-09-04T06:34:30.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20433
2019-09-04T06:34:30.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20433
2019-09-04T06:34:30.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20434
2019-09-04T06:34:30.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20434
2019-09-04T06:34:30.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20435
2019-09-04T06:34:30.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20435
2019-09-04T06:34:30.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20436
2019-09-04T06:34:30.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20436
2019-09-04T06:34:30.017+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:34:30.043+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:30.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:30.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:30.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:30.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:30.143+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:30.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:30.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:30.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:30.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:30.201+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:30.201+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:30.232+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:30.232+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:30.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 35A2A179D2A99348B19F323C47CEFCA244AFC53E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:30.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:34:30.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 35A2A179D2A99348B19F323C47CEFCA244AFC53E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:30.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 35A2A179D2A99348B19F323C47CEFCA244AFC53E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:30.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578869, 1), t: 1 }, durableWallTime: new Date(1567578869268), opTime: { ts: Timestamp(1567578869, 1), t: 1 }, wallTime: new Date(1567578869268) }
2019-09-04T06:34:30.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 35A2A179D2A99348B19F323C47CEFCA244AFC53E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:30.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:30.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:30.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:30.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:30.243+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:30.250+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:30.250+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:30.251+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49342 #450 (89 connections now open)
2019-09-04T06:34:30.251+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:34:30.251+0000 D2 COMMAND [conn450] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:34:30.251+0000 I NETWORK [conn450] received client metadata from 10.108.2.54:49342 conn450: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:34:30.251+0000 I COMMAND [conn450] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:34:30.251+0000 D2 COMMAND [conn450] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578868, 1), signature: { hash: BinData(0, 8B61E7B5973B908A587EE744417ACD62ADE50C6C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:34:30.251+0000 D1 REPL [conn450] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578869, 1), t: 1 }
2019-09-04T06:34:30.251+0000 D3 REPL [conn450] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.261+0000
2019-09-04T06:34:30.278+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:30.279+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:30.279+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578869, 1)
2019-09-04T06:34:30.279+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20450
2019-09-04T06:34:30.279+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:30.279+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:30.279+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20450
2019-09-04T06:34:30.280+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20453
2019-09-04T06:34:30.280+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20453
2019-09-04T06:34:30.280+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578869, 1), t: 1 }({ ts: Timestamp(1567578869, 1), t: 1 })
2019-09-04T06:34:30.343+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:30.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:30.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:30.407+0000 D2 ASIO [RS] Request 1397 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578870, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578870383), o: { $v: 1, $set: { ping: new Date(1567578870383) } } }, { ts: Timestamp(1567578870, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578870384), o: { $v: 1, $set: { ping: new Date(1567578870383) } } }, { ts: Timestamp(1567578870, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578870383), o: { $v: 1, $set: { ping: new Date(1567578870382) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, lastCommittedWall: new Date(1567578869268), lastOpVisible: { ts: Timestamp(1567578869, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, lastCommittedWall: new Date(1567578869268), lastOpApplied: { ts: Timestamp(1567578870, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578869, 1), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) }
2019-09-04T06:34:30.407+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578870, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578870383), o: { $v: 1, $set: { ping: new Date(1567578870383) } } }, { ts: Timestamp(1567578870, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578870384), o: { $v: 1, $set: { ping: new Date(1567578870383) } } }, { ts: Timestamp(1567578870, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578870383), o: { $v: 1, $set: { ping: new Date(1567578870382) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, lastCommittedWall: new Date(1567578869268), lastOpVisible: { ts: Timestamp(1567578869, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, lastCommittedWall: new Date(1567578869268), lastOpApplied: { ts: Timestamp(1567578870, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578869, 1), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:30.407+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:30.408+0000 D2 REPL [replication-0] oplog fetcher read 3 operations from remote oplog starting at ts: Timestamp(1567578870, 1) and ending at ts: Timestamp(1567578870, 3)
2019-09-04T06:34:30.408+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:40.067+0000
2019-09-04T06:34:30.408+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:40.921+0000
2019-09-04T06:34:30.408+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:30.408+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000
2019-09-04T06:34:30.408+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578870, 3), t: 1 }
2019-09-04T06:34:30.408+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:30.408+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:30.408+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578869, 1)
2019-09-04T06:34:30.408+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20457
2019-09-04T06:34:30.408+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:30.408+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:30.408+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20457
2019-09-04T06:34:30.408+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:30.408+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:30.408+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578869, 1)
2019-09-04T06:34:30.408+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20460
2019-09-04T06:34:30.408+0000 D2 REPL [rsSync-0] replication batch size is 3
2019-09-04T06:34:30.408+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:30.408+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:30.408+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20460
2019-09-04T06:34:30.408+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578870, 1) }
2019-09-04T06:34:30.408+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20454
2019-09-04T06:34:30.408+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20454
2019-09-04T06:34:30.408+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20463
2019-09-04T06:34:30.408+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20463
2019-09-04T06:34:30.408+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:30.408+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 20465
2019-09-04T06:34:30.408+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578870, 1)
2019-09-04T06:34:30.408+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578870, 1)
2019-09-04T06:34:30.408+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578870, 2)
2019-09-04T06:34:30.408+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578870, 2)
2019-09-04T06:34:30.408+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578870, 3)
2019-09-04T06:34:30.408+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578870, 3)
2019-09-04T06:34:30.408+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 20465
2019-09-04T06:34:30.408+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:30.408+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:34:30.408+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20464
2019-09-04T06:34:30.408+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20464
2019-09-04T06:34:30.408+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20467
2019-09-04T06:34:30.408+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20467
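
The batch above shows the secondary applying three replicated updates to config.lockpings: rsSync-0 first advances the oplogTruncateAfterPoint to the first op's timestamp, repl-writer-worker-13 inserts the three oplog entries, the truncate point is reset to Timestamp(0, 0) once they are durable, and workers 3, 12, and 14 then apply the ops in parallel as _id lookups ("idhack"). Each op is roughly equivalent to the following shell update, a sketch built from the fields of the op at Timestamp(1567578870, 1), not a command that appears in this log:

    // What the op at Timestamp(1567578870, 1) does when applied on this node
    db.getSiblingDB("config").lockpings.updateOne(
      { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" },
      { $set: { ping: new Date(1567578870383) } }
    )
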
2019-09-04T06:34:30.408+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578870, 3), t: 1 }({ ts: Timestamp(1567578870, 3), t: 1 })
2019-09-04T06:34:30.408+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578870, 3)
2019-09-04T06:34:30.408+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20468
2019-09-04T06:34:30.408+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578870, 3) } } ] } sort: {} projection: {}
2019-09-04T06:34:30.408+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:30.408+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578870, 3)
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578870, 3) || First: notFirst: full path: ts
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578870, 3)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578870, 3)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578870, 3) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578870, 3)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:30.409+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20468
2019-09-04T06:34:30.409+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:30.409+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:30.409+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:30.409+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:30.409+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:30.409+0000 D3 STORAGE [repl-writer-worker-3] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:30.409+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578870, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578870383), o: { $v: 1, $set: { ping: new Date(1567578870382) } } }, oplog application mode: Secondary
2019-09-04T06:34:30.409+0000 D3 REPL [repl-writer-worker-3] applying op: { ts: Timestamp(1567578870, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578870383), o: { $v: 1, $set: { ping: new Date(1567578870383) } } }, oplog application mode: Secondary
2019-09-04T06:34:30.409+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578870, 3)
2019-09-04T06:34:30.409+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578870, 1)
2019-09-04T06:34:30.409+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 20471
2019-09-04T06:34:30.409+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 20472
2019-09-04T06:34:30.409+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }
2019-09-04T06:34:30.409+0000 D2 QUERY [repl-writer-worker-3] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }
2019-09-04T06:34:30.409+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:34:30.409+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 20471
2019-09-04T06:34:30.409+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578870, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578870384), o: { $v: 1, $set: { ping: new Date(1567578870383) } } }, oplog application mode: Secondary
2019-09-04T06:34:30.409+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578870, 2)
2019-09-04T06:34:30.409+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:30.409+0000 D4 WRITE [repl-writer-worker-3] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:34:30.409+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 20472
2019-09-04T06:34:30.409+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:30.409+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 20470
2019-09-04T06:34:30.409+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }
2019-09-04T06:34:30.409+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:34:30.409+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 20470
2019-09-04T06:34:30.409+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:30.409+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578870, 3), t: 1 }({ ts: Timestamp(1567578870, 3), t: 1 })
2019-09-04T06:34:30.409+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578870, 3)
2019-09-04T06:34:30.409+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20469
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:30.409+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:30.409+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:34:30.409+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20469
2019-09-04T06:34:30.409+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578870, 3)
2019-09-04T06:34:30.409+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20477
2019-09-04T06:34:30.409+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20477
2019-09-04T06:34:30.409+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578870, 3), t: 1 }({ ts: Timestamp(1567578870, 3), t: 1 })
2019-09-04T06:34:30.409+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:30.409+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578869, 1), t: 1 }, durableWallTime: new Date(1567578869268), appliedOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, appliedWallTime: new Date(1567578870383), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, lastCommittedWall: new Date(1567578869268), lastOpVisible: { ts: Timestamp(1567578869, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:30.409+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1399 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:00.409+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578869, 1), t: 1 }, durableWallTime: new Date(1567578869268), appliedOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, appliedWallTime: new Date(1567578870383), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, lastCommittedWall: new Date(1567578869268), lastOpVisible: { ts: Timestamp(1567578869, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:30.409+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:00.409+0000
2019-09-04T06:34:30.410+0000 D2 ASIO [RS] Request 1399 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, lastCommittedWall: new Date(1567578869268), lastOpVisible: { ts: Timestamp(1567578869, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578869, 1), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) }
2019-09-04T06:34:30.410+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, lastCommittedWall: new Date(1567578869268), lastOpVisible: { ts: Timestamp(1567578869, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578869, 1), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:30.410+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:30.410+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:00.410+0000
2019-09-04T06:34:30.410+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578870, 3), t: 1 }
2019-09-04T06:34:30.410+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1400 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:40.410+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578869, 1), t: 1 } }
2019-09-04T06:34:30.410+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:00.410+0000
2019-09-04T06:34:30.423+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:34:30.423+0000 D2 ASIO [RS] Request 1400 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpApplied: { ts: Timestamp(1567578870, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) }
2019-09-04T06:34:30.423+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpApplied: { ts: Timestamp(1567578870, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:30.423+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:30.423+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), appliedOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, appliedWallTime: new Date(1567578870383), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, lastCommittedWall: new Date(1567578869268), lastOpVisible: { ts: Timestamp(1567578869, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:30.423+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:30.423+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1401 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:00.423+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), appliedOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, appliedWallTime: new Date(1567578870383), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, durableWallTime: new Date(1567578865763), appliedOpTime: { ts: Timestamp(1567578865, 1), t: 1 }, appliedWallTime: new Date(1567578865763), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578869, 1), t: 1 }, lastCommittedWall: new Date(1567578869268), lastOpVisible: { ts: Timestamp(1567578869, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:30.423+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:34:30.423+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:00.423+0000
2019-09-04T06:34:30.423+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000
2019-09-04T06:34:30.423+0000 D2 REPL
[replication-0] Setting replication's stable optime to { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42266 #451 (90 connections now open) 2019-09-04T06:34:30.423+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:30.423+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578865, 3) 2019-09-04T06:34:30.423+0000 D3 REPL [conn437] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn437] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:46.416+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn409] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn409] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.328+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn429] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn429] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:41.583+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn426] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn426] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.517+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn449] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn449] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:58.759+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn439] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn439] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.447+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn420] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 D2 COMMAND [conn451] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:30.423+0000 I NETWORK [conn451] received client metadata from 10.108.2.48:42266 conn451: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:30.423+0000 D3 REPL [conn420] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.423+0000 2019-09-04T06:34:30.423+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:40.921+0000 2019-09-04T06:34:30.423+0000 D2 ASIO [RS] Request 1401 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, 
replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) } 2019-09-04T06:34:30.423+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:41.121+0000 2019-09-04T06:34:30.423+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:30.423+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:30.423+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:30.423+0000 D3 REPL [conn424] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn424] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.437+0000 2019-09-04T06:34:30.423+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:30.423+0000 D3 REPL [conn427] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn427] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.335+0000 2019-09-04T06:34:30.423+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:00.423+0000 2019-09-04T06:34:30.423+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1402 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:40.423+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578870, 3), t: 1 } } 2019-09-04T06:34:30.423+0000 D3 REPL [conn402] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn402] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.289+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn434] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:00.423+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn434] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn442] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 D3 REPL 
[conn442] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.133+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn415] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn415] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.282+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn447] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn447] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.054+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn431] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn431] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.276+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn440] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn440] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.538+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn435] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn435] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:43.909+0000 2019-09-04T06:34:30.423+0000 D3 REPL [conn438] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.424+0000 D3 REPL [conn438] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.660+0000 2019-09-04T06:34:30.424+0000 D3 REPL [conn450] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.424+0000 D3 REPL [conn450] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.261+0000 2019-09-04T06:34:30.424+0000 D3 REPL [conn443] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.424+0000 D3 REPL [conn443] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.645+0000 2019-09-04T06:34:30.424+0000 D3 REPL [conn433] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.424+0000 D3 REPL [conn433] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.683+0000 2019-09-04T06:34:30.424+0000 D3 REPL [conn425] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.424+0000 D3 REPL [conn425] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.594+0000 2019-09-04T06:34:30.424+0000 D3 REPL [conn410] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.424+0000 D3 REPL [conn410] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000 2019-09-04T06:34:30.424+0000 D3 REPL [conn432] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.424+0000 D3 REPL [conn432] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.343+0000 2019-09-04T06:34:30.424+0000 D3 REPL [conn444] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.424+0000 D3 REPL [conn444] 
waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:59.943+0000 2019-09-04T06:34:30.424+0000 D3 REPL [conn430] Got notified of new snapshot: { ts: Timestamp(1567578870, 3), t: 1 }, 2019-09-04T06:34:30.383+0000 2019-09-04T06:34:30.424+0000 D3 REPL [conn430] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.662+0000 2019-09-04T06:34:30.424+0000 I COMMAND [conn451] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:30.424+0000 D2 COMMAND [conn451] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578870, 1), signature: { hash: BinData(0, 6920513E637F8F9FA85BDF131C416F82F176C928), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:30.424+0000 D1 REPL [conn451] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578870, 3), t: 1 } 2019-09-04T06:34:30.424+0000 D3 REPL [conn451] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.434+0000 2019-09-04T06:34:30.444+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:30.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:30.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:30.508+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578870, 3) 2019-09-04T06:34:30.544+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:30.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:30.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:30.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:30.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:30.644+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:30.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:30.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:30.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:30.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:30.701+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:30.701+0000 
I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:30.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:30.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:30.742+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:30.742+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:30.744+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:30.750+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:30.750+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:30.753+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50272 #452 (91 connections now open) 2019-09-04T06:34:30.753+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:30.753+0000 D2 COMMAND [conn452] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:30.753+0000 I NETWORK [conn452] received client metadata from 10.108.2.50:50272 conn452: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:30.753+0000 I COMMAND [conn452] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:30.753+0000 D2 COMMAND [conn452] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 22521EC28CCB56AA7FEB5EF8031A493EC328EE30), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:30.753+0000 D1 REPL [conn452] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578870, 3), t: 1 } 2019-09-04T06:34:30.753+0000 D3 REPL [conn452] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.763+0000 2019-09-04T06:34:30.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:30.839+0000 D3 EXECUTOR [replexec-3] 
Executing a task on behalf of pool replexec 2019-09-04T06:34:30.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1403) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:30.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1403 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:40.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:30.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:30.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1404) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:30.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1404 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:40.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:30.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:30.839+0000 D2 ASIO [Replication] Request 1403 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), opTime: { ts: Timestamp(1567578870, 3), t: 1 }, wallTime: new Date(1567578870383), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) } 2019-09-04T06:34:30.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), opTime: { ts: Timestamp(1567578870, 3), t: 1 }, wallTime: new Date(1567578870383), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:30.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:30.839+0000 
D2 ASIO [Replication] Request 1404 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), opTime: { ts: Timestamp(1567578870, 3), t: 1 }, wallTime: new Date(1567578870383), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) } 2019-09-04T06:34:30.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1403) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), opTime: { ts: Timestamp(1567578870, 3), t: 1 }, wallTime: new Date(1567578870383), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) } 2019-09-04T06:34:30.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), opTime: { ts: Timestamp(1567578870, 3), t: 1 }, wallTime: new Date(1567578870383), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:30.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:30.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:30.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:32.839Z 2019-09-04T06:34:30.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:30.839+0000 D2 
REPL_HB [replexec-4] Received response to heartbeat (requestId: 1404) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), opTime: { ts: Timestamp(1567578870, 3), t: 1 }, wallTime: new Date(1567578870383), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) } 2019-09-04T06:34:30.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:30.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:34:41.121+0000 2019-09-04T06:34:30.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:34:41.883+0000 2019-09-04T06:34:30.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:30.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:30.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:30.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:32.839Z 2019-09-04T06:34:30.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:34:44.839+0000 2019-09-04T06:34:30.844+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:30.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:30.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:30.886+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38840 #453 (92 connections now open) 2019-09-04T06:34:30.887+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:30.887+0000 D2 COMMAND [conn453] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:30.887+0000 I NETWORK [conn453] received client metadata from 10.108.2.44:38840 conn453: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:30.887+0000 I COMMAND [conn453] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 
(Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:30.887+0000 D2 COMMAND [conn453] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578868, 1), signature: { hash: BinData(0, 8B61E7B5973B908A587EE744417ACD62ADE50C6C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:30.887+0000 D1 REPL [conn453] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578870, 3), t: 1 } 2019-09-04T06:34:30.887+0000 D3 REPL [conn453] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.897+0000 2019-09-04T06:34:30.915+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52308 #454 (93 connections now open) 2019-09-04T06:34:30.915+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:30.915+0000 D2 COMMAND [conn454] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:30.915+0000 I NETWORK [conn454] received client metadata from 10.108.2.73:52308 conn454: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:30.915+0000 I COMMAND [conn454] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:30.915+0000 D2 COMMAND [conn454] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578862, 1), signature: { hash: BinData(0, B957C0F67BE372F4946F6CB8E2B141AAD7F04C25), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:34:30.915+0000 D1 REPL [conn454] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578870, 3), t: 1 } 2019-09-04T06:34:30.915+0000 D3 REPL [conn454] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.925+0000 
2019-09-04T06:34:30.944+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:30.952+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52298 #455 (94 connections now open)
2019-09-04T06:34:30.952+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:34:30.952+0000 D2 COMMAND [conn455] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:34:30.952+0000 I NETWORK [conn455] received client metadata from 10.108.2.58:52298 conn455: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:34:30.952+0000 I COMMAND [conn455] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:34:30.952+0000 D2 COMMAND [conn455] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578863, 1), signature: { hash: BinData(0, AAB852E4A9300DFDA061E8F68671FD068E007876), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:34:30.952+0000 D1 REPL [conn455] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578870, 3), t: 1 }
2019-09-04T06:34:30.952+0000 D3 REPL [conn455] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.962+0000
2019-09-04T06:34:30.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:30.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:31.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:31.044+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:31.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:31.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:31.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, ACECDD31A1A57E04F37B00C948D8C482E3FD9FE1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:31.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:34:31.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, ACECDD31A1A57E04F37B00C948D8C482E3FD9FE1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:31.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, ACECDD31A1A57E04F37B00C948D8C482E3FD9FE1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:31.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), opTime: { ts: Timestamp(1567578870, 3), t: 1 }, wallTime: new Date(1567578870383) }
2019-09-04T06:34:31.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, ACECDD31A1A57E04F37B00C948D8C482E3FD9FE1), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:31.144+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:31.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:31.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:31.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:31.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:31.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:31.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:31.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:31.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:31.242+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:31.242+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:31.245+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:31.345+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:31.368+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:31.368+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:31.408+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:31.408+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:31.408+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578870, 3)
2019-09-04T06:34:31.408+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20511
2019-09-04T06:34:31.408+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:31.408+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:31.408+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20511
2019-09-04T06:34:31.409+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20514
2019-09-04T06:34:31.409+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20514
2019-09-04T06:34:31.409+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578870, 3), t: 1 }({ ts: Timestamp(1567578870, 3), t: 1 })
2019-09-04T06:34:31.445+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:31.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:31.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:31.545+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:31.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:31.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:31.645+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:31.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:31.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:31.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:31.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:31.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:31.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:31.745+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:31.846+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:31.868+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:31.868+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:31.946+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:31.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:31.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:32.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:32.046+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:32.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:32.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:32.146+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:32.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:32.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:32.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:32.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:32.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, ACECDD31A1A57E04F37B00C948D8C482E3FD9FE1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:32.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:34:32.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, ACECDD31A1A57E04F37B00C948D8C482E3FD9FE1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:32.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, ACECDD31A1A57E04F37B00C948D8C482E3FD9FE1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:32.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), opTime: { ts: Timestamp(1567578870, 3), t: 1 }, wallTime: new Date(1567578870383) } 2019-09-04T06:34:32.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, ACECDD31A1A57E04F37B00C948D8C482E3FD9FE1), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:32.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:32.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:32.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:32.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:32.246+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:32.346+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:32.408+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:32.408+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:32.408+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578870, 3) 2019-09-04T06:34:32.408+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20530 2019-09-04T06:34:32.408+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:32.408+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:32.408+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20530 2019-09-04T06:34:32.410+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20533 2019-09-04T06:34:32.410+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20533 2019-09-04T06:34:32.410+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578870, 3), t: 1 }({ ts: Timestamp(1567578870, 3), t: 1 }) 2019-09-04T06:34:32.446+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:32.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:32.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:32.546+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:32.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:32.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 
reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:32.647+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:32.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:32.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:32.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:32.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:32.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:32.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:32.747+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:32.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:32.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:32.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1405) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:32.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:32.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1405 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:42.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:32.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:02.839+0000 2019-09-04T06:34:32.839+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:34:31.063+0000 2019-09-04T06:34:32.839+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:34:32.234+0000 2019-09-04T06:34:32.839+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:34:31.063+0000 2019-09-04T06:34:32.839+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:34:41.063+0000 2019-09-04T06:34:32.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:02.839+0000 2019-09-04T06:34:32.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1406) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:32.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1406 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:42.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:32.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:02.839+0000 2019-09-04T06:34:32.839+0000 D2 ASIO [Replication] Request 1405 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), opTime: { ts: Timestamp(1567578870, 3), t: 1 }, 
2019-09-04T06:34:32.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), opTime: { ts: Timestamp(1567578870, 3), t: 1 }, wallTime: new Date(1567578870383), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:32.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:32.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1405) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), opTime: { ts: Timestamp(1567578870, 3), t: 1 }, wallTime: new Date(1567578870383), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) }
2019-09-04T06:34:32.839+0000 D2 ASIO [Replication] Request 1406 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), opTime: { ts: Timestamp(1567578870, 3), t: 1 }, wallTime: new Date(1567578870383), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) }
2019-09-04T06:34:32.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:34:32.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), opTime: { ts: Timestamp(1567578870, 3), t: 1 }, wallTime: new Date(1567578870383), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) } target: cmodb802.togewa.com:27019
2019-09-04T06:34:32.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:34.839Z
2019-09-04T06:34:32.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:32.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:02.839+0000
2019-09-04T06:34:32.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1406) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), opTime: { ts: Timestamp(1567578870, 3), t: 1 }, wallTime: new Date(1567578870383), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578870, 3) }
2019-09-04T06:34:32.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary
2019-09-04T06:34:32.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:34:41.883+0000
2019-09-04T06:34:32.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:34:42.983+0000
2019-09-04T06:34:32.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:34:32.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:34:32.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:02.839+0000
2019-09-04T06:34:32.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:34.839Z
2019-09-04T06:34:32.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:02.839+0000
2019-09-04T06:34:32.847+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:32.947+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:32.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:32.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:33.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:33.047+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:33.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:33.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:33.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, ACECDD31A1A57E04F37B00C948D8C482E3FD9FE1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:33.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:34:33.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, ACECDD31A1A57E04F37B00C948D8C482E3FD9FE1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:33.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, ACECDD31A1A57E04F37B00C948D8C482E3FD9FE1), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:33.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), opTime: { ts: Timestamp(1567578870, 3), t: 1 }, wallTime: new Date(1567578870383) }
2019-09-04T06:34:33.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578870, 3), signature: { hash: BinData(0, ACECDD31A1A57E04F37B00C948D8C482E3FD9FE1), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:33.147+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:33.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:33.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
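Each accepted heartbeat from the primary cancels the pending election timeout callback and schedules a fresh one, as the ELECTION entries above show (41.883 replaced by 42.983, roughly ten seconds out plus a randomized offset). A minimal sketch to confirm the configured timeout, assuming the default electionTimeoutMillis of 10000 has not been changed on this set:

    # Sketch: read the election timeout that drives the callbacks above.
    # replSetGetConfig is a standard admin command; host from this log.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    config = client.admin.command("replSetGetConfig")["config"]
    print(config["settings"]["electionTimeoutMillis"])  # 10000 by default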
2019-09-04T06:34:33.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:33.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:33.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:33.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:33.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:33.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:33.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:33.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:33.248+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:33.348+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:33.409+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:33.409+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:33.409+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578870, 3)
2019-09-04T06:34:33.409+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20549
2019-09-04T06:34:33.409+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:33.409+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:33.409+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20549
2019-09-04T06:34:33.410+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20552
2019-09-04T06:34:33.410+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20552
2019-09-04T06:34:33.410+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578870, 3), t: 1 }({ ts: Timestamp(1567578870, 3), t: 1 })
2019-09-04T06:34:33.448+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:33.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:33.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:33.473+0000 I NETWORK [listener] connection accepted from 10.108.2.63:36440 #456 (95 connections now open)
2019-09-04T06:34:33.473+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:34:33.473+0000 D2 COMMAND [conn456] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:34:33.473+0000 I NETWORK [conn456] received client metadata from 10.108.2.63:36440 conn456: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:34:33.473+0000 I COMMAND [conn456] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:34:33.477+0000 D2 COMMAND [conn456] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578872, 1), signature: { hash: BinData(0, 4DBDABAE602CF810BD3D716DCE9BD15DA1394F1C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:34:33.477+0000 D1 REPL [conn456] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578870, 3), t: 1 }
2019-09-04T06:34:33.477+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000
2019-09-04T06:34:33.520+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:34:33.520+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:34:33.520+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:34:33.520+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:34:33.548+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:33.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:33.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:33.566+0000 D2 ASIO [RS] Request 1402 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578873, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578873559), o: { $v: 1, $set: { ping: new Date(1567578873554) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpApplied: { ts: Timestamp(1567578873, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) }
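conn456 above is another cluster node (the NetworkInterfaceTL internal client) reading config.shards with readConcern level "majority" and an afterOpTime; because the requested optime is not yet in this node's committed snapshot, the read parks in waitUntilOpTime until the commit point advances or the 30000 ms maxTimeMS expires. A minimal sketch of the same class of read from an application, assuming pymongo; the host and collection come from this log:

    # Sketch: a majority read of config.shards, like conn456's find above.
    # With read concern "majority" the server only returns data that is
    # majority-committed; "nearest" mirrors the logged $readPreference.
    from pymongo import MongoClient, ReadPreference
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    config_db = client.get_database("config",
                                    read_concern=ReadConcern("majority"),
                                    read_preference=ReadPreference.NEAREST)
    for shard in config_db.shards.find(max_time_ms=30000):
        print(shard)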
2019-09-04T06:34:33.566+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578873, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578873559), o: { $v: 1, $set: { ping: new Date(1567578873554) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpApplied: { ts: Timestamp(1567578873, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:33.566+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:33.566+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578873, 1) and ending at ts: Timestamp(1567578873, 1)
2019-09-04T06:34:33.566+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:42.983+0000
2019-09-04T06:34:33.566+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:45.005+0000
2019-09-04T06:34:33.566+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:33.566+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:02.839+0000
2019-09-04T06:34:33.567+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578873, 1), t: 1 }
2019-09-04T06:34:33.567+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:33.567+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:33.567+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578870, 3)
2019-09-04T06:34:33.567+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20561
2019-09-04T06:34:33.567+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:33.567+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:33.567+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20561
2019-09-04T06:34:33.567+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:33.567+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:33.567+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578870, 3)
2019-09-04T06:34:33.567+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20564
2019-09-04T06:34:33.567+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:33.567+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:33.567+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:34:33.567+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20564
2019-09-04T06:34:33.567+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578873, 1) }
2019-09-04T06:34:33.567+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20553
2019-09-04T06:34:33.567+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20553
2019-09-04T06:34:33.567+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20567
2019-09-04T06:34:33.567+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20567
2019-09-04T06:34:33.567+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:33.567+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 20569
2019-09-04T06:34:33.567+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578873, 1)
2019-09-04T06:34:33.567+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578873, 1)
2019-09-04T06:34:33.567+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 20569
2019-09-04T06:34:33.567+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:33.567+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:34:33.567+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20568
2019-09-04T06:34:33.567+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20568
2019-09-04T06:34:33.567+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20571
2019-09-04T06:34:33.567+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20571
2019-09-04T06:34:33.567+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578873, 1), t: 1 }({ ts: Timestamp(1567578873, 1), t: 1 })
2019-09-04T06:34:33.567+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578873, 1)
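The batch above is the secondary apply path in miniature: the fetcher hands one op to the ReplBatcher, rsSync-0 records an oplog truncate-after point of (1567578873, 1), a repl-writer worker inserts the oplog record at that timestamp, the truncate point is reset to (0, 0), and minvalid advances. A minimal sketch for watching such ops arrive, assuming pymongo and direct access to this node's local database; a tailable-await cursor is the standard way to follow a capped collection like oplog.rs:

    # Sketch: tail local.oplog.rs, the capped collection the batcher above
    # reads (its uuid and ident appear in this log).
    from pymongo import CursorType, MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    oplog = client.local["oplog.rs"]
    cursor = oplog.find(cursor_type=CursorType.TAILABLE_AWAIT)
    for entry in cursor:
        # ts/op/ns are the same fields visible in the nextBatch payload above
        print(entry["ts"], entry["op"], entry["ns"])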
2019-09-04T06:34:33.567+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20572
2019-09-04T06:34:33.567+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578873, 1) } } ] } sort: {} projection: {}
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and t $eq 1 ts $lt Timestamp(1567578873, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578873, 1) || First: notFirst: full path: ts
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and t $eq 1 ts $lt Timestamp(1567578873, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or $and t $eq 1 ts $lt Timestamp(1567578873, 1) t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578873, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:33.567+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or $and t $eq 1 ts $lt Timestamp(1567578873, 1) t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:33.567+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20572
2019-09-04T06:34:33.568+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:33.568+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:33.568+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578873, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578873559), o: { $v: 1, $set: { ping: new Date(1567578873554) } } }, oplog application mode: Secondary
2019-09-04T06:34:33.568+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578873, 1)
2019-09-04T06:34:33.568+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 20574
2019-09-04T06:34:33.568+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }
2019-09-04T06:34:33.568+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:34:33.568+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 20574
2019-09-04T06:34:33.568+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:33.568+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578873, 1), t: 1 }({ ts: Timestamp(1567578873, 1), t: 1 })
2019-09-04T06:34:33.568+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578873, 1)
2019-09-04T06:34:33.568+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20573
2019-09-04T06:34:33.568+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
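The D5 QUERY blocks above show the subplanner handling the $or filter on local.replset.minvalid: the only index is { _id: 1 }, neither branch is indexable on t or ts, so every branch falls back to COLLSCAN (harmless for a single-document internal collection). The same decision can be reproduced with explain; the filter below is copied from the logged query, with the BSON Timestamp built via the bson package that ships with pymongo:

    # Sketch: explain the filter rsSync-0 planned above; expect COLLSCAN.
    from bson.timestamp import Timestamp
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    explain = client.local.command(
        "explain",
        {"find": "replset.minvalid",
         "filter": {"$or": [{"t": {"$lt": 1}},
                            {"t": 1,
                             "ts": {"$lt": Timestamp(1567578873, 1)}}]}},
        verbosity="queryPlanner",
    )
    print(explain["queryPlanner"]["winningPlan"]["stage"])  # "COLLSCAN"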
2019-09-04T06:34:33.568+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:33.568+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:34:33.568+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:33.568+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:33.568+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:34:33.568+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20573
2019-09-04T06:34:33.568+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578873, 1)
2019-09-04T06:34:33.568+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20577
2019-09-04T06:34:33.568+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20577
2019-09-04T06:34:33.568+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578873, 1), t: 1 }({ ts: Timestamp(1567578873, 1), t: 1 })
2019-09-04T06:34:33.568+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:33.568+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), appliedOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, appliedWallTime: new Date(1567578870383), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), appliedOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, appliedWallTime: new Date(1567578870383), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:33.568+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1407 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:03.568+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), appliedOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, appliedWallTime: new Date(1567578870383), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), appliedOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, appliedWallTime: new Date(1567578870383), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:33.568+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:03.568+0000
2019-09-04T06:34:33.568+0000 D2 ASIO [RS] Request 1407 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) }
2019-09-04T06:34:33.568+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:33.568+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:33.568+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:03.568+0000
2019-09-04T06:34:33.569+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578873, 1), t: 1 }
2019-09-04T06:34:33.569+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1408 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:43.569+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578870, 3), t: 1 } }
2019-09-04T06:34:33.569+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:03.568+0000
2019-09-04T06:34:33.577+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:34:33.577+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:33.577+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), appliedOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, appliedWallTime: new Date(1567578870383), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), appliedOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, appliedWallTime: new Date(1567578870383), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:33.577+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1409 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:03.577+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), appliedOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, appliedWallTime: new Date(1567578870383), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, durableWallTime: new Date(1567578870383), appliedOpTime: { ts: Timestamp(1567578870, 3), t: 1 }, appliedWallTime: new Date(1567578870383), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:33.577+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:03.568+0000
2019-09-04T06:34:33.578+0000 D2 ASIO [RS] Request 1409 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) }
2019-09-04T06:34:33.578+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578870, 3), t: 1 }, lastCommittedWall: new Date(1567578870383), lastOpVisible: { ts: Timestamp(1567578870, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578870, 3), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:33.578+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:33.578+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:03.568+0000
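The Reporter above pushes per-member optimes upstream twice: once right after apply, when memberId 1's appliedOpTime advances to (1567578873, 1), and again after ApplyBatchFinalizerForJournal flushes the journal and the durableOpTime catches up. The primary uses these replSetUpdatePosition reports to advance the majority commit point. A minimal sketch, assuming pymongo, for eyeballing the same per-member progress from the command interface:

    # Sketch: derive member lag from replSetGetStatus, the command-level
    # view of the optimes reported upstream above.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    members = client.admin.command("replSetGetStatus")["members"]
    # Assumes a primary is currently elected, as in this log (term 1).
    primary = next(m for m in members if m["stateStr"] == "PRIMARY")
    for m in members:
        lag = (primary["optimeDate"] - m["optimeDate"]).total_seconds()
        print(f"{m['name']}: applied {m['optime']['ts']}, {lag:.1f}s behind")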
2019-09-04T06:34:33.578+0000 D2 ASIO [RS] Request 1408 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpApplied: { ts: Timestamp(1567578873, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) }
2019-09-04T06:34:33.578+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpApplied: { ts: Timestamp(1567578873, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:33.578+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:33.578+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:34:33.578+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.578+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.578+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578868, 1)
2019-09-04T06:34:33.578+0000 D3 REPL [conn452] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn452] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.763+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn424] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn424] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.437+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn420] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn420] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.423+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn438] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn438] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.660+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn425] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn425] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.594+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn431] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn431] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.276+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn415] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn415] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.282+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn450] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn450] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.261+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn433] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn433] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.683+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn435] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn435] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:43.909+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn444] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn444] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:59.943+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn430] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.578+0000 D3 REPL [conn430] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.662+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn451] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn451] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.434+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn410] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn410] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn409] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn409] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.328+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn426] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn426] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.517+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn449] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn449] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:58.759+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn439] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn439] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.447+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn453] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn453] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.897+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn429] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn429] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:41.583+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn437] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn437] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:46.416+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn455] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn455] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.962+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn427] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn427] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.335+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn442] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn442] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.133+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn447] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn447] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.054+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn440] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn440] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.538+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn434] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn434] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn402] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn402] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.289+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn454] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn454] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.925+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn432] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn432] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.343+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn443] Got notified of new snapshot: { ts: Timestamp(1567578873, 1), t: 1 }, 2019-09-04T06:34:33.559+0000
2019-09-04T06:34:33.579+0000 D3 REPL [conn443] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.645+0000
2019-09-04T06:34:33.579+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:45.005+0000
2019-09-04T06:34:33.579+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:44.186+0000
2019-09-04T06:34:33.579+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1410 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:43.579+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578873, 1), t: 1 } }
2019-09-04T06:34:33.579+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:03.568+0000
2019-09-04T06:34:33.579+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:34:33.579+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:02.839+0000
2019-09-04T06:34:33.648+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:33.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:33.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:33.667+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578873, 1)
2019-09-04T06:34:33.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:33.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:33.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:33.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:33.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:33.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:33.748+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:33.848+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:33.948+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:33.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:33.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
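Once the getMore response carries lastOpCommitted (1567578873, 1), this node advances its stable optime and fans the new commit point out to every parked reader, which is the long run of conn4xx "Got notified of new snapshot" pairs above. At verbosity 5 this produces a great deal of text; a small parser, assuming the "<timestamp> <severity> <component> [<context>] <message>" layout used throughout this file, helps aggregate it:

    # Sketch: split 4.2-era log lines like the ones above into fields,
    # e.g. to count which connections were waiting on the commit point.
    import re
    import sys
    from collections import Counter

    LINE = re.compile(r"^(?P<ts>\S+) +(?P<sev>\S+) +(?P<comp>\S+) +"
                      r"\[(?P<ctx>[^\]]+)\] (?P<msg>.*)$")

    waiters = Counter()
    for raw in sys.stdin:
        m = LINE.match(raw)
        if m and "Got notified of new snapshot" in m.group("msg"):
            waiters[m.group("ctx")] += 1
    print(waiters.most_common(10))

Feeding this file through the sketch on stdin should show each conn4xx waiter notified exactly once per commit-point advance.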
2019-09-04T06:34:34.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:34.049+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:34.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:34.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:34.149+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:34.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:34.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:34.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:34.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:34.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, B0F56F860A6D09A9C03368353403A89ACD68BD15), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:34.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:34:34.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, B0F56F860A6D09A9C03368353403A89ACD68BD15), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:34.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, B0F56F860A6D09A9C03368353403A89ACD68BD15), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:34.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), opTime: { ts: Timestamp(1567578873, 1), t: 1 }, wallTime: new Date(1567578873559) }
2019-09-04T06:34:34.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, B0F56F860A6D09A9C03368353403A89ACD68BD15), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:34.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:34.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:34.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:34.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:34.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:34.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:34.249+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:34.349+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:34.449+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:34.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:34.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:34.549+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:34.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:34.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:34.567+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:34.567+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:34.567+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578873, 1)
2019-09-04T06:34:34.567+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20596
2019-09-04T06:34:34.567+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:34.567+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:34.567+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20596
2019-09-04T06:34:34.568+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20599
2019-09-04T06:34:34.568+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20599
2019-09-04T06:34:34.568+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578873, 1), t: 1 }({ ts: Timestamp(1567578873, 1), t: 1 })
2019-09-04T06:34:34.650+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:34.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:34.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:34.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:34.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:34.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1,
$db: "admin" } 2019-09-04T06:34:34.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:34.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:34.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:34.750+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:34.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:34.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:34.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1411) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:34.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1411 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:44.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:34.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:02.839+0000 2019-09-04T06:34:34.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1412) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:34.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1412 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:44.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:34.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:02.839+0000 2019-09-04T06:34:34.839+0000 D2 ASIO [Replication] Request 1412 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), opTime: { ts: Timestamp(1567578873, 1), t: 1 }, wallTime: new Date(1567578873559), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) } 2019-09-04T06:34:34.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), opTime: { ts: Timestamp(1567578873, 1), t: 1 }, wallTime: new Date(1567578873559), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, 
lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:34.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:34.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1412) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), opTime: { ts: Timestamp(1567578873, 1), t: 1 }, wallTime: new Date(1567578873559), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) } 2019-09-04T06:34:34.839+0000 D2 ASIO [Replication] Request 1411 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), opTime: { ts: Timestamp(1567578873, 1), t: 1 }, wallTime: new Date(1567578873559), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) } 2019-09-04T06:34:34.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:34.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), opTime: { ts: Timestamp(1567578873, 1), t: 1 }, wallTime: new Date(1567578873559), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:34.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:34:44.186+0000 2019-09-04T06:34:34.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:34:45.447+0000 2019-09-04T06:34:34.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:34.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:34.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:36.839Z 2019-09-04T06:34:34.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:34.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:34.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:34.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1411) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), opTime: { ts: Timestamp(1567578873, 1), t: 1 }, wallTime: new Date(1567578873559), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) } 2019-09-04T06:34:34.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:34.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:36.840Z 2019-09-04T06:34:34.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:34.850+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:34.950+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:34.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:34.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:35.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:35.050+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:35.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:35.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
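
The exchange above is the normal heartbeat round: requests 1411 and 1412 go out to cmodb804 and cmodb802, the response from the primary arrives, and the member postpones its election timer, canceling the callback at 06:34:44.186 and rescheduling it at 06:34:45.447, roughly ten and a half seconds out, consistent with an election timeout of about ten seconds plus a randomized offset. To eyeball that reschedule pattern across a capture like this, here is a minimal standard-library sketch; the regex is keyed to the exact "D4 ELECTION ... Scheduling election timeout callback at ..." wording above, it tolerates several records sharing one physical line, and the file path is a placeholder:

    import re
    from datetime import datetime

    # Matches the ELECTION scheduler records seen in this capture.
    PAT = re.compile(
        r'(\S+) D4 ELECTION \[[^\]]+\] '
        r'Scheduling election timeout callback at (\S+)'
    )
    FMT = '%Y-%m-%dT%H:%M:%S.%f%z'

    def election_reschedules(text):
        """Yield (logged_at, deadline, slack_seconds) per reschedule."""
        for logged, deadline in PAT.findall(text):
            t0 = datetime.strptime(logged, FMT)
            t1 = datetime.strptime(deadline, FMT)
            yield t0, t1, (t1 - t0).total_seconds()

    with open('mongod.log') as f:   # placeholder path
        for t0, t1, slack in election_reschedules(f.read()):
            print(t0.time(), '->', t1.time(), f'({slack:.1f}s)')

A healthy member shows a steady stream of these with slack near the configured timeout; gaps or shrinking slack would point at missed primary heartbeats.
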
2019-09-04T06:34:35.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, B0F56F860A6D09A9C03368353403A89ACD68BD15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:35.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:35.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, B0F56F860A6D09A9C03368353403A89ACD68BD15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:35.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, B0F56F860A6D09A9C03368353403A89ACD68BD15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:35.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), opTime: { ts: Timestamp(1567578873, 1), t: 1 }, wallTime: new Date(1567578873559) } 2019-09-04T06:34:35.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, B0F56F860A6D09A9C03368353403A89ACD68BD15), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:35.150+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:35.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:35.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:35.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:35.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:35.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:35.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:34:35.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:35.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:35.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:35.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:35.250+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:35.351+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:35.451+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:35.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:35.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:35.551+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:35.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:35.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:35.567+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:35.567+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:35.567+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578873, 1) 2019-09-04T06:34:35.567+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20617 2019-09-04T06:34:35.567+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:35.567+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:35.567+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20617 2019-09-04T06:34:35.568+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20620 2019-09-04T06:34:35.568+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20620 2019-09-04T06:34:35.568+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578873, 1), t: 1 }({ ts: Timestamp(1567578873, 1), t: 1 }) 2019-09-04T06:34:35.651+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:35.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:35.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:35.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:35.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:35.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, 
$db: "admin" } 2019-09-04T06:34:35.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:35.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:35.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:35.751+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:35.851+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:35.952+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:35.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:35.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:36.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:36.052+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:36.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:36.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:36.152+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:36.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:36.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:36.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:36.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:36.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, B0F56F860A6D09A9C03368353403A89ACD68BD15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:36.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:36.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, B0F56F860A6D09A9C03368353403A89ACD68BD15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:36.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, B0F56F860A6D09A9C03368353403A89ACD68BD15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:36.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, 
durableWallTime: new Date(1567578873559), opTime: { ts: Timestamp(1567578873, 1), t: 1 }, wallTime: new Date(1567578873559) } 2019-09-04T06:34:36.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, B0F56F860A6D09A9C03368353403A89ACD68BD15), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:36.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:36.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:36.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:36.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:36.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:36.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:36.252+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:36.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:36.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:36.352+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:36.452+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:36.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:36.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:36.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:36.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:36.552+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:36.567+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:36.567+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:36.567+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578873, 1) 2019-09-04T06:34:36.567+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20638 2019-09-04T06:34:36.568+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:36.568+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:36.568+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20638 2019-09-04T06:34:36.569+0000 D3 
STORAGE [rsSync-0] WT begin_transaction for snapshot id 20641 2019-09-04T06:34:36.569+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20641 2019-09-04T06:34:36.569+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578873, 1), t: 1 }({ ts: Timestamp(1567578873, 1), t: 1 }) 2019-09-04T06:34:36.652+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:36.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:36.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:36.688+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:36.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:36.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:36.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:36.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:36.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:36.753+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:36.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:36.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:36.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:36.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1413) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:36.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1413 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:46.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:36.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:36.839+0000 D2 ASIO [Replication] Request 1413 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), opTime: { ts: Timestamp(1567578873, 1), t: 1 }, wallTime: new Date(1567578873559), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: 
Timestamp(1567578873, 1) } 2019-09-04T06:34:36.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), opTime: { ts: Timestamp(1567578873, 1), t: 1 }, wallTime: new Date(1567578873559), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:36.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:36.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1413) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), opTime: { ts: Timestamp(1567578873, 1), t: 1 }, wallTime: new Date(1567578873559), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) } 2019-09-04T06:34:36.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:36.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:34:45.447+0000 2019-09-04T06:34:36.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:34:47.937+0000 2019-09-04T06:34:36.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:36.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:36.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:36.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:38.839Z 2019-09-04T06:34:36.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:36.840+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:36.840+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1414) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:36.840+0000 D3 EXECUTOR 
[replexec-0] Scheduling remote command request: RemoteCommand 1414 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:46.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:36.840+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:36.840+0000 D2 ASIO [Replication] Request 1414 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), opTime: { ts: Timestamp(1567578873, 1), t: 1 }, wallTime: new Date(1567578873559), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) } 2019-09-04T06:34:36.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), opTime: { ts: Timestamp(1567578873, 1), t: 1 }, wallTime: new Date(1567578873559), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578873, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:36.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:36.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1414) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), opTime: { ts: Timestamp(1567578873, 1), t: 1 }, wallTime: new Date(1567578873559), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: 
Timestamp(1567578873, 1) } 2019-09-04T06:34:36.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:36.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:38.840Z 2019-09-04T06:34:36.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:36.853+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:36.953+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:36.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:36.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:37.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:37.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:37.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:37.053+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:37.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, B0F56F860A6D09A9C03368353403A89ACD68BD15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:37.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:37.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, B0F56F860A6D09A9C03368353403A89ACD68BD15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:37.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, B0F56F860A6D09A9C03368353403A89ACD68BD15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:37.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), opTime: { ts: Timestamp(1567578873, 1), t: 1 }, wallTime: new Date(1567578873559) } 2019-09-04T06:34:37.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, B0F56F860A6D09A9C03368353403A89ACD68BD15), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:37.153+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:37.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, 
$db: "admin" } 2019-09-04T06:34:37.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:37.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:37.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:37.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:37.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:37.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:37.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:37.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:37.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:37.253+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:37.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:37.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:37.353+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:37.454+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:37.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:37.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:37.554+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:37.568+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:37.568+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:37.568+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578873, 1) 2019-09-04T06:34:37.568+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20659 2019-09-04T06:34:37.568+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:37.568+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:37.568+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20659 2019-09-04T06:34:37.569+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20662 2019-09-04T06:34:37.569+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20662 2019-09-04T06:34:37.569+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578873, 1), t: 1 }({ ts: Timestamp(1567578873, 1), t: 1 }) 2019-09-04T06:34:37.654+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
2019-09-04T06:34:37.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:37.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:37.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:37.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:37.713+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:37.713+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:37.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:37.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:37.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:37.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:37.754+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:37.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:37.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:37.854+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:37.954+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:37.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:37.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:38.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:38.055+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:38.155+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:38.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:38.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:38.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:38.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:38.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:38.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:38.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578876, 1), signature: { hash: BinData(0, D612E3524CB16FF0584AC64F6A0A44CFD10C554A), keyId: 6727891476899954718 } }, $db: "admin" } 
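
The repeating isMaster commands above (conn5, conn13, conn26, conn58, conn59, conn60, conn75, each returning reslen:907 in 0ms) are ordinary server monitoring by connected drivers and mongos processes, with each connection polling roughly once per second. One way to confirm that cadence per connection from a capture like this is the following standard-library sketch; the regex is keyed to the "run command admin.$cmd { isMaster: 1" wording above and the file path is a placeholder:

    import re
    from collections import defaultdict
    from datetime import datetime

    PAT = re.compile(r'(\S+) D2 COMMAND \[(conn\d+)\] '
                     r'run command admin\.\$cmd \{ isMaster: 1')
    FMT = '%Y-%m-%dT%H:%M:%S.%f%z'

    def poll_intervals(text):
        """Map conn -> seconds between successive isMaster polls."""
        last, gaps = {}, defaultdict(list)
        for ts, conn in PAT.findall(text):
            t = datetime.strptime(ts, FMT)
            if conn in last:
                gaps[conn].append((t - last[conn]).total_seconds())
            last[conn] = t
        return gaps

    with open('mongod.log') as f:   # placeholder path
        for conn, gaps in sorted(poll_intervals(f.read()).items()):
            print(conn, [round(g, 3) for g in gaps])
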
2019-09-04T06:34:38.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:38.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578876, 1), signature: { hash: BinData(0, D612E3524CB16FF0584AC64F6A0A44CFD10C554A), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:38.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578876, 1), signature: { hash: BinData(0, D612E3524CB16FF0584AC64F6A0A44CFD10C554A), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:38.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), opTime: { ts: Timestamp(1567578873, 1), t: 1 }, wallTime: new Date(1567578873559) } 2019-09-04T06:34:38.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578876, 1), signature: { hash: BinData(0, D612E3524CB16FF0584AC64F6A0A44CFD10C554A), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:38.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:38.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:34:38.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:38.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:38.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:38.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:38.255+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:38.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:38.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:38.355+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:38.397+0000 D2 ASIO [RS] Request 1410 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578878, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578878376), o: { $v: 1, $set: { ping: new Date(1567578878375) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpApplied: { ts: Timestamp(1567578878, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578878, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 1) } 2019-09-04T06:34:38.397+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578878, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578878376), o: { $v: 1, $set: { ping: new Date(1567578878375) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpApplied: { ts: Timestamp(1567578878, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578878, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 1) } target: cmodb804.togewa.com:27019 
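
Request 1410 above is the oplog fetcher's getMore against the sync source's local.oplog.rs (scheduled earlier with maxTimeMS: 5000); after idling for several seconds it finally returns a batch carrying a single config.lockpings update, which the applier processes in the lines that follow. The same tailing pattern can be reproduced from a client. A PyMongo sketch follows; the hostname matches this log, while the connection details and the resume timestamp are assumptions for illustration:

    from pymongo import MongoClient, CursorType
    from bson.timestamp import Timestamp

    client = MongoClient('mongodb://cmodb803.togewa.com:27019',
                         directConnection=True)   # assumed connection
    oplog = client.local['oplog.rs']

    last_seen = Timestamp(1567578873, 1)  # resume point from the log above
    cursor = oplog.find({'ts': {'$gt': last_seen}},
                        cursor_type=CursorType.TAILABLE_AWAIT)
    for entry in cursor:                  # blocks awaiting new entries
        print(entry['ts'], entry['op'], entry['ns'])

Run against this deployment it would print the same config.lockpings "u" entry at Timestamp(1567578878, 1) that request 1410 fetched.
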
2019-09-04T06:34:38.398+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:38.398+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578878, 1) and ending at ts: Timestamp(1567578878, 1) 2019-09-04T06:34:38.398+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:47.937+0000 2019-09-04T06:34:38.398+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:49.633+0000 2019-09-04T06:34:38.398+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:38.398+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:38.398+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578878, 1), t: 1 } 2019-09-04T06:34:38.398+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:38.398+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:38.398+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578873, 1) 2019-09-04T06:34:38.398+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20681 2019-09-04T06:34:38.398+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:38.398+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:38.398+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20681 2019-09-04T06:34:38.398+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:38.398+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:34:38.398+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:38.398+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578873, 1) 2019-09-04T06:34:38.398+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578878, 1) } 2019-09-04T06:34:38.398+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20684 2019-09-04T06:34:38.398+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:38.398+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:38.398+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20684 2019-09-04T06:34:38.398+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20663 2019-09-04T06:34:38.398+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20663 2019-09-04T06:34:38.398+0000 D3 STORAGE [rsSync-0] WT 
begin_transaction for snapshot id 20687 2019-09-04T06:34:38.398+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20687 2019-09-04T06:34:38.398+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:38.398+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 20689 2019-09-04T06:34:38.398+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578878, 1) 2019-09-04T06:34:38.398+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578878, 1) 2019-09-04T06:34:38.398+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 20689 2019-09-04T06:34:38.398+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:38.398+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:38.398+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20688 2019-09-04T06:34:38.398+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20688 2019-09-04T06:34:38.398+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20691 2019-09-04T06:34:38.398+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20691 2019-09-04T06:34:38.398+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578878, 1), t: 1 }({ ts: Timestamp(1567578878, 1), t: 1 }) 2019-09-04T06:34:38.398+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578878, 1) 2019-09-04T06:34:38.398+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20692 2019-09-04T06:34:38.398+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578878, 1) } } ] } sort: {} projection: {} 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578878, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578878, 1) || First: notFirst: full path: ts 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578878, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578878, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578878, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
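
The D5 QUERY lines above trace the subplanner handling the rooted $or that maintains local.replset.minvalid: each $or child is planned separately, and because the collection carries only the _id index while the predicates are on t and ts, every pass outputs zero indexed solutions and falls back to the collection scan emitted just below. The same plan choice can be reproduced with explain(). A minimal sketch over the same hypothetical direct connection as above; the filter mirrors the logged query, and the exact explain shape (a SUBPLAN stage wrapping the COLLSCAN) is an expectation to verify against your server version:

    # Reproduce the COLLSCAN fallback the planner logs for minvalid updates.
    from bson.timestamp import Timestamp
    from pymongo import MongoClient

    client = MongoClient("cmodb804.togewa.com", 27019, directConnection=True)
    minvalid = client.local["replset.minvalid"]

    query = {"$or": [{"t": {"$lt": 1}},
                     {"t": 1, "ts": {"$lt": Timestamp(1567578878, 1)}}]}
    plan = minvalid.find(query).explain()
    # With only the _id index available, expect a SUBPLAN/COLLSCAN plan here.
    print(plan["queryPlanner"]["winningPlan"])
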
2019-09-04T06:34:38.398+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578878, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:38.398+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20692 2019-09-04T06:34:38.398+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:38.398+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:38.398+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578878, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578878376), o: { $v: 1, $set: { ping: new Date(1567578878375) } } }, oplog application mode: Secondary 2019-09-04T06:34:38.399+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578878, 1) 2019-09-04T06:34:38.399+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 20694 2019-09-04T06:34:38.399+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "ConfigServer" } 2019-09-04T06:34:38.399+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:38.399+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 20694 2019-09-04T06:34:38.399+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:38.399+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578878, 1), t: 1 }({ ts: Timestamp(1567578878, 1), t: 1 }) 2019-09-04T06:34:38.399+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578878, 1) 2019-09-04T06:34:38.399+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20693 2019-09-04T06:34:38.399+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:38.399+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:38.399+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:38.399+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:38.399+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:38.399+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:38.399+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20693 2019-09-04T06:34:38.399+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578878, 1) 2019-09-04T06:34:38.399+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20698 2019-09-04T06:34:38.399+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20698 2019-09-04T06:34:38.399+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578878, 1), t: 1 }({ ts: Timestamp(1567578878, 1), t: 1 }) 2019-09-04T06:34:38.399+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:38.399+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578878, 1), t: 1 }, appliedWallTime: new Date(1567578878376), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:38.399+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1415 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:08.399+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578878, 1), t: 1 }, appliedWallTime: new Date(1567578878376), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:38.399+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:08.399+0000 2019-09-04T06:34:38.399+0000 D2 ASIO [RS] Request 1415 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578878, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 1) } 2019-09-04T06:34:38.399+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578878, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:38.399+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:38.399+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:08.399+0000 2019-09-04T06:34:38.400+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578878, 1), t: 1 } 2019-09-04T06:34:38.400+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1416 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:48.400+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578873, 1), t: 1 } } 2019-09-04T06:34:38.400+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:08.399+0000 2019-09-04T06:34:38.400+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:38.400+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:38.400+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578878, 1), t: 1 }, durableWallTime: new Date(1567578878376), appliedOpTime: { ts: Timestamp(1567578878, 1), t: 1 }, appliedWallTime: new Date(1567578878376), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:38.400+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1417 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:08.400+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { 
ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578878, 1), t: 1 }, durableWallTime: new Date(1567578878376), appliedOpTime: { ts: Timestamp(1567578878, 1), t: 1 }, appliedWallTime: new Date(1567578878376), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:38.400+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:08.399+0000 2019-09-04T06:34:38.400+0000 D2 ASIO [RS] Request 1417 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578878, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 1) } 2019-09-04T06:34:38.400+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578873, 1), t: 1 }, lastCommittedWall: new Date(1567578873559), lastOpVisible: { ts: Timestamp(1567578873, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578873, 1), $clusterTime: { clusterTime: Timestamp(1567578878, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:38.400+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:38.400+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:08.399+0000 2019-09-04T06:34:38.401+0000 D2 ASIO [RS] Request 1416 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 1), t: 1 }, lastCommittedWall: new Date(1567578878376), lastOpVisible: { ts: Timestamp(1567578878, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578878, 1), t: 1 }, lastCommittedWall: new Date(1567578878376), lastOpApplied: { ts: Timestamp(1567578878, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578878, 1), $clusterTime: { clusterTime: Timestamp(1567578878, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 1) } 2019-09-04T06:34:38.401+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 1), t: 1 }, lastCommittedWall: new Date(1567578878376), lastOpVisible: { ts: Timestamp(1567578878, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578878, 1), t: 1 }, lastCommittedWall: new Date(1567578878376), lastOpApplied: { ts: Timestamp(1567578878, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578878, 1), $clusterTime: { clusterTime: Timestamp(1567578878, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:38.401+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:38.401+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:38.401+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578873, 1) 2019-09-04T06:34:38.401+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:49.633+0000 2019-09-04T06:34:38.401+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:48.642+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn424] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn424] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.437+0000 2019-09-04T06:34:38.401+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1418 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:48.401+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578878, 1), t: 1 } } 2019-09-04T06:34:38.401+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:08.399+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn425] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn425] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.594+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn449] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn449] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:58.759+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn453] Got notified 
of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn453] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.897+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn455] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn455] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.962+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn440] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn440] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.538+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn447] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn447] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.054+0000 2019-09-04T06:34:38.401+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:38.401+0000 D3 REPL [conn434] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn434] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn454] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn454] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.925+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn443] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn443] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.645+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn452] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn452] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.763+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn420] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn420] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.423+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn438] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn438] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.660+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn431] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn431] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.276+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn415] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 
2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn415] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.282+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn450] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn450] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.261+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn435] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn435] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:43.909+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn444] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn444] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:59.943+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn430] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn430] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.662+0000 2019-09-04T06:34:38.401+0000 D3 REPL [conn451] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn451] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.434+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn410] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn410] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn409] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn409] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.328+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn426] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn426] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.517+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn433] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn433] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.683+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn439] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn439] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.447+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn429] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn429] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:41.583+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn437] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn437] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:46.416+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn442] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 
2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn442] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.133+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn427] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn427] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.335+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn402] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn402] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.289+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn432] Got notified of new snapshot: { ts: Timestamp(1567578878, 1), t: 1 }, 2019-09-04T06:34:38.376+0000 2019-09-04T06:34:38.402+0000 D3 REPL [conn432] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.343+0000 2019-09-04T06:34:38.455+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:38.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:38.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:38.498+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578878, 1) 2019-09-04T06:34:38.555+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:38.642+0000 D2 ASIO [RS] Request 1418 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578878, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578878638), o: { $v: 1, $set: { ping: new Date(1567578878637) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 1), t: 1 }, lastCommittedWall: new Date(1567578878376), lastOpVisible: { ts: Timestamp(1567578878, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578878, 1), t: 1 }, lastCommittedWall: new Date(1567578878376), lastOpApplied: { ts: Timestamp(1567578878, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578878, 1), $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 2) } 2019-09-04T06:34:38.642+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578878, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578878638), o: { $v: 1, $set: { ping: new Date(1567578878637) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 1), t: 1 }, lastCommittedWall: new Date(1567578878376), lastOpVisible: { ts: Timestamp(1567578878, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, 
syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578878, 1), t: 1 }, lastCommittedWall: new Date(1567578878376), lastOpApplied: { ts: Timestamp(1567578878, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578878, 1), $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:38.642+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:38.642+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578878, 2) and ending at ts: Timestamp(1567578878, 2) 2019-09-04T06:34:38.642+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:48.642+0000 2019-09-04T06:34:38.642+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:50.075+0000 2019-09-04T06:34:38.642+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:38.642+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:38.642+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578878, 2), t: 1 } 2019-09-04T06:34:38.642+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:38.642+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:38.642+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578878, 1) 2019-09-04T06:34:38.642+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20702 2019-09-04T06:34:38.642+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:38.642+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:38.642+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20702 2019-09-04T06:34:38.642+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:38.642+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:34:38.642+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:38.642+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578878, 2) } 2019-09-04T06:34:38.642+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578878, 1) 2019-09-04T06:34:38.642+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20705 2019-09-04T06:34:38.642+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], 
prefix: -1 } } 2019-09-04T06:34:38.642+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:38.642+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20705 2019-09-04T06:34:38.642+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20699 2019-09-04T06:34:38.642+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20699 2019-09-04T06:34:38.642+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20708 2019-09-04T06:34:38.642+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20708 2019-09-04T06:34:38.642+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:38.642+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 20710 2019-09-04T06:34:38.642+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578878, 2) 2019-09-04T06:34:38.642+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578878, 2) 2019-09-04T06:34:38.642+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 20710 2019-09-04T06:34:38.642+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:38.643+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:38.643+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20709 2019-09-04T06:34:38.643+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20709 2019-09-04T06:34:38.643+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20712 2019-09-04T06:34:38.643+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20712 2019-09-04T06:34:38.643+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578878, 2), t: 1 }({ ts: Timestamp(1567578878, 2), t: 1 }) 2019-09-04T06:34:38.643+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578878, 2) 2019-09-04T06:34:38.643+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20713 2019-09-04T06:34:38.643+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578878, 2) } } ] } sort: {} projection: {} 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578878, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578878, 2) || First: notFirst: full path: ts 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578878, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578878, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578878, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
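
This second batch repeats the same plan-and-apply cycle, and as before the applier brackets the write with durability markers: the oplog truncate after point is raised before the oplog insert and reset to Timestamp(0, 0) afterwards (above), then minValid and appliedThrough are advanced once the op is applied (just below), and the reporter forwards the resulting optimes upstream via replSetUpdatePosition. Those markers live in small local collections that can be read directly. A minimal sketch, again over the hypothetical direct connection used above; replset.minvalid appears in this log, but the truncate-point collection name and the exact document shapes (for example appliedThrough being stored under a "begin" field) are assumptions from 4.2-era servers to verify on your build:

    # Inspect the durability bookkeeping the batch applier updates above.
    from pymongo import MongoClient

    client = MongoClient("cmodb804.togewa.com", 27019, directConnection=True)
    local = client.local

    print(local["replset.minvalid"].find_one())                 # minValid ts/t (+ appliedThrough?)
    print(local["replset.oplogTruncateAfterPoint"].find_one())  # Timestamp(0, 0) once the batch is durable

    # replSetGetStatus exposes the applied/durable optimes that the reporter
    # sends upstream via replSetUpdatePosition in the surrounding lines.
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        print(member["name"], member.get("optime"))
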
2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578878, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:38.643+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20713 2019-09-04T06:34:38.643+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:38.643+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:38.643+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578878, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578878638), o: { $v: 1, $set: { ping: new Date(1567578878637) } } }, oplog application mode: Secondary 2019-09-04T06:34:38.643+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578878, 2) 2019-09-04T06:34:38.643+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 20715 2019-09-04T06:34:38.643+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" } 2019-09-04T06:34:38.643+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:38.643+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 20715 2019-09-04T06:34:38.643+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:38.643+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578878, 2), t: 1 }({ ts: Timestamp(1567578878, 2), t: 1 }) 2019-09-04T06:34:38.643+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578878, 2) 2019-09-04T06:34:38.643+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20714 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:38.643+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:38.643+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:38.643+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20714 2019-09-04T06:34:38.643+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578878, 2) 2019-09-04T06:34:38.643+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:38.643+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20718 2019-09-04T06:34:38.643+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578878, 1), t: 1 }, durableWallTime: new Date(1567578878376), appliedOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, appliedWallTime: new Date(1567578878638), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 1), t: 1 }, lastCommittedWall: new Date(1567578878376), lastOpVisible: { ts: Timestamp(1567578878, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:38.643+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20718 2019-09-04T06:34:38.643+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1419 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:08.643+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578878, 1), t: 1 }, durableWallTime: new Date(1567578878376), appliedOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, appliedWallTime: new Date(1567578878638), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 1), t: 1 }, lastCommittedWall: new Date(1567578878376), lastOpVisible: { ts: Timestamp(1567578878, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:38.643+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578878, 2), t: 1 }({ ts: Timestamp(1567578878, 2), t: 1 }) 2019-09-04T06:34:38.643+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:08.643+0000 2019-09-04T06:34:38.644+0000 D2 ASIO [RS] Request 1419 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 1), t: 1 }, lastCommittedWall: new Date(1567578878376), lastOpVisible: { ts: Timestamp(1567578878, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578878, 1), $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 2) } 2019-09-04T06:34:38.644+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 1), t: 1 }, lastCommittedWall: new Date(1567578878376), lastOpVisible: { ts: Timestamp(1567578878, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578878, 1), $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:38.644+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:38.644+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:08.644+0000 2019-09-04T06:34:38.644+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578878, 2), t: 1 } 2019-09-04T06:34:38.644+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1420 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:48.644+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578878, 1), t: 1 } } 2019-09-04T06:34:38.644+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:08.644+0000 2019-09-04T06:34:38.644+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:38.644+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:38.644+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), appliedOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, appliedWallTime: new Date(1567578878638), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 1), t: 1 }, lastCommittedWall: new Date(1567578878376), lastOpVisible: { ts: Timestamp(1567578878, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:38.644+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1421 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:08.644+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { 
ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), appliedOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, appliedWallTime: new Date(1567578878638), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, durableWallTime: new Date(1567578873559), appliedOpTime: { ts: Timestamp(1567578873, 1), t: 1 }, appliedWallTime: new Date(1567578873559), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 1), t: 1 }, lastCommittedWall: new Date(1567578878376), lastOpVisible: { ts: Timestamp(1567578878, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:38.645+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:08.644+0000 2019-09-04T06:34:38.645+0000 D2 ASIO [RS] Request 1421 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 1), t: 1 }, lastCommittedWall: new Date(1567578878376), lastOpVisible: { ts: Timestamp(1567578878, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578878, 1), $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 2) } 2019-09-04T06:34:38.645+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 1), t: 1 }, lastCommittedWall: new Date(1567578878376), lastOpVisible: { ts: Timestamp(1567578878, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578878, 1), $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:38.645+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:38.645+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:08.644+0000 2019-09-04T06:34:38.645+0000 D2 ASIO [RS] Request 1420 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpApplied: { ts: Timestamp(1567578878, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578878, 2), $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 2) } 2019-09-04T06:34:38.645+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpApplied: { ts: Timestamp(1567578878, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578878, 2), $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:38.645+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:38.645+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:38.645+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.645+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.645+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578873, 2) 2019-09-04T06:34:38.645+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:50.075+0000 2019-09-04T06:34:38.645+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:49.816+0000 2019-09-04T06:34:38.645+0000 D3 REPL [conn449] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.645+0000 D3 REPL [conn449] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:58.759+0000 2019-09-04T06:34:38.645+0000 D3 REPL [conn432] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.645+0000 D3 REPL [conn432] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.343+0000 2019-09-04T06:34:38.645+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1422 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:48.645+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578878, 2), t: 1 } } 2019-09-04T06:34:38.645+0000 D3 REPL [conn437] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.645+0000 D3 REPL [conn437] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:46.416+0000 2019-09-04T06:34:38.645+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:08.644+0000 2019-09-04T06:34:38.645+0000 D3 REPL [conn430] Got notified 
of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.645+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:38.645+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:38.645+0000 D3 REPL [conn430] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.662+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn439] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn439] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.447+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn431] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn431] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.276+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn443] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn443] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.645+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn444] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn444] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:59.943+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn451] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn451] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.434+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn409] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn409] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.328+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn433] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn433] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.683+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn429] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn429] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:41.583+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn442] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn442] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.133+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn402] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn402] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.289+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn454] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 
2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn454] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.925+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn424] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn424] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.437+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn425] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn425] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.594+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn440] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn440] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.538+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn453] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn453] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.897+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn410] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn410] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn435] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn435] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:43.909+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn415] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn415] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.282+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn426] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn426] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.517+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn452] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn452] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.763+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn420] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn420] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.423+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn438] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn438] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.660+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn427] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn427] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.335+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn434] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 
2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn434] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn450] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn450] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.261+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn447] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn447] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.054+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn455] Got notified of new snapshot: { ts: Timestamp(1567578878, 2), t: 1 }, 2019-09-04T06:34:38.638+0000 2019-09-04T06:34:38.646+0000 D3 REPL [conn455] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.962+0000 2019-09-04T06:34:38.655+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:38.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:38.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:38.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:38.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:38.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:38.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:38.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:38.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:38.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:38.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:38.742+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578878, 2) 2019-09-04T06:34:38.755+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:38.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:38.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:38.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:38.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1423) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:38.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1423 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:48.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:38.839+0000 D3 EXECUTOR [replexec-0] 
Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:38.839+0000 D2 ASIO [Replication] Request 1423 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), opTime: { ts: Timestamp(1567578878, 2), t: 1 }, wallTime: new Date(1567578878638), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578878, 2), $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 2) } 2019-09-04T06:34:38.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), opTime: { ts: Timestamp(1567578878, 2), t: 1 }, wallTime: new Date(1567578878638), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578878, 2), $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:38.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:38.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1423) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), opTime: { ts: Timestamp(1567578878, 2), t: 1 }, wallTime: new Date(1567578878638), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578878, 2), $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 2) } 2019-09-04T06:34:38.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:38.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:34:49.816+0000 2019-09-04T06:34:38.839+0000 D4 ELECTION [replexec-3] Scheduling 
election timeout callback at 2019-09-04T06:34:50.240+0000 2019-09-04T06:34:38.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:38.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:38.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:38.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:40.839Z 2019-09-04T06:34:38.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:38.840+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:38.840+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1424) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:38.840+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1424 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:48.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:38.840+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:38.840+0000 D2 ASIO [Replication] Request 1424 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), opTime: { ts: Timestamp(1567578878, 2), t: 1 }, wallTime: new Date(1567578878638), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578878, 2), $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 2) } 2019-09-04T06:34:38.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), opTime: { ts: Timestamp(1567578878, 2), t: 1 }, wallTime: new Date(1567578878638), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578878, 2), $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:38.840+0000 D3 EXECUTOR [replexec-4] Executing a task on 
behalf of pool replexec 2019-09-04T06:34:38.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1424) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), opTime: { ts: Timestamp(1567578878, 2), t: 1 }, wallTime: new Date(1567578878638), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578878, 2), $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578878, 2) } 2019-09-04T06:34:38.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:38.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:40.840Z 2019-09-04T06:34:38.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:38.856+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:38.956+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:38.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:38.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:39.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:39.056+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:39.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, FB9DD3E9707DDE37D96D385D867A43DB24A557A9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:39.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:39.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, FB9DD3E9707DDE37D96D385D867A43DB24A557A9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:39.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, FB9DD3E9707DDE37D96D385D867A43DB24A557A9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:39.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: 
Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), opTime: { ts: Timestamp(1567578878, 2), t: 1 }, wallTime: new Date(1567578878638) } 2019-09-04T06:34:39.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578878, 2), signature: { hash: BinData(0, FB9DD3E9707DDE37D96D385D867A43DB24A557A9), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:39.156+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:39.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:39.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:39.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:39.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:39.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:39.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:39.238+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:39.238+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:39.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:39.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:39.256+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:39.298+0000 D2 ASIO [RS] Request 1422 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578879, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578879286), o: { $v: 1, $set: { ping: new Date(1567578879283), up: 50 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpApplied: { ts: Timestamp(1567578879, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578878, 2), $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:39.298+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578879, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { 
_id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578879286), o: { $v: 1, $set: { ping: new Date(1567578879283), up: 50 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpApplied: { ts: Timestamp(1567578879, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578878, 2), $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:39.298+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:39.298+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578879, 1) and ending at ts: Timestamp(1567578879, 1) 2019-09-04T06:34:39.298+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:50.240+0000 2019-09-04T06:34:39.298+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:49.874+0000 2019-09-04T06:34:39.298+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:39.298+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:39.298+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:39.298+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:39.298+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578878, 2) 2019-09-04T06:34:39.298+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20736 2019-09-04T06:34:39.298+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:39.298+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:39.298+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20736 2019-09-04T06:34:39.298+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:39.298+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:34:39.298+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:39.298+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578878, 2) 2019-09-04T06:34:39.298+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578879, 1) } 2019-09-04T06:34:39.298+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for 
snapshot id 20739 2019-09-04T06:34:39.298+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:39.298+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:39.298+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20739 2019-09-04T06:34:39.298+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578879, 1), t: 1 } 2019-09-04T06:34:39.298+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20719 2019-09-04T06:34:39.298+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20719 2019-09-04T06:34:39.298+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20742 2019-09-04T06:34:39.298+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20742 2019-09-04T06:34:39.298+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:39.298+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 20744 2019-09-04T06:34:39.298+0000 D4 STORAGE [repl-writer-worker-15] inserting record with timestamp Timestamp(1567578879, 1) 2019-09-04T06:34:39.298+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578879, 1) 2019-09-04T06:34:39.298+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 20744 2019-09-04T06:34:39.299+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:39.299+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:39.299+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20743 2019-09-04T06:34:39.299+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20743 2019-09-04T06:34:39.299+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20746 2019-09-04T06:34:39.299+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20746 2019-09-04T06:34:39.299+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578879, 1), t: 1 }({ ts: Timestamp(1567578879, 1), t: 1 }) 2019-09-04T06:34:39.299+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578879, 1) 2019-09-04T06:34:39.299+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20747 2019-09-04T06:34:39.299+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578879, 1) } } ] } sort: {} projection: {} 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578879, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578879, 1) || First: notFirst: full path: ts 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578879, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578879, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578879, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
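The rsSync-0 planner traces above show mongod resolving the minvalid lookup with a collection scan: local.replset.minvalid carries only the _id index, so each subplan outputs 0 indexed solutions and falls back to COLLSCAN. This happens right after the oplog fetcher pulled a single entry, the config.mongos ping update at ts Timestamp(1567578879, 1). For reference, both documents can be inspected from a client. This is a minimal sketch, assuming the pymongo driver and network access to the configrs members named in the surrounding heartbeat entries; it is illustrative only and not part of the captured log:

# Minimal sketch -- assumes pymongo and connectivity to the configrs members
# listed in the surrounding heartbeat entries.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://cmodb802.togewa.com:27019,cmodb803.togewa.com:27019,"
    "cmodb804.togewa.com:27019/?replicaSet=configrs"
)

# local.oplog.rs is a capped collection; a reverse $natural scan returns the
# newest entry -- here the config.mongos ping update ({ op: "u", ns: "config.mongos" }).
newest = client.local["oplog.rs"].find_one(sort=[("$natural", -1)])
print(newest["ts"], newest["op"], newest["ns"])

# local.replset.minvalid holds the single durability marker that the COLLSCAN
# plans above filter on; with one document and only the _id index, a scan is cheap.
print(client.local["replset.minvalid"].find_one())

Unlike this one-shot read, the server's own fetcher keeps a tailable cursor open on local.oplog.rs and drains it with the getMore commands visible nearby (RemoteCommand 1422 above and 1426 below, batchSize 13981010, maxTimeMS 5000).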
2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578879, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:39.299+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20747 2019-09-04T06:34:39.299+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:39.299+0000 D3 STORAGE [repl-writer-worker-1] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:39.299+0000 D3 REPL [repl-writer-worker-1] applying op: { ts: Timestamp(1567578879, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578879286), o: { $v: 1, $set: { ping: new Date(1567578879283), up: 50 } } }, oplog application mode: Secondary 2019-09-04T06:34:39.299+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578879, 1) 2019-09-04T06:34:39.299+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 20749 2019-09-04T06:34:39.299+0000 D2 QUERY [repl-writer-worker-1] Using idhack: { _id: "cmodb801.togewa.com:27017" } 2019-09-04T06:34:39.299+0000 D4 WRITE [repl-writer-worker-1] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:39.299+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 20749 2019-09-04T06:34:39.299+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:39.299+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578879, 1), t: 1 }({ ts: Timestamp(1567578879, 1), t: 1 }) 2019-09-04T06:34:39.299+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578879, 1) 2019-09-04T06:34:39.299+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20748 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:39.299+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:39.299+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:39.299+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20748 2019-09-04T06:34:39.299+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578879, 1) 2019-09-04T06:34:39.299+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20752 2019-09-04T06:34:39.299+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20752 2019-09-04T06:34:39.299+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578879, 1), t: 1 }({ ts: Timestamp(1567578879, 1), t: 1 }) 2019-09-04T06:34:39.299+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:39.299+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), appliedOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, appliedWallTime: new Date(1567578878638), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), appliedOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, appliedWallTime: new Date(1567578878638), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:39.299+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1425 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:09.299+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), appliedOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, appliedWallTime: new Date(1567578878638), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), appliedOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, appliedWallTime: new Date(1567578878638), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:39.299+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:09.299+0000 2019-09-04T06:34:39.300+0000 D2 ASIO [RS] Request 1425 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578878, 2), $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:39.300+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578878, 2), $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:39.300+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:39.300+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:09.300+0000 2019-09-04T06:34:39.300+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578879, 1), t: 1 } 2019-09-04T06:34:39.300+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1426 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:49.300+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578878, 2), t: 1 } } 2019-09-04T06:34:39.300+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:09.300+0000 2019-09-04T06:34:39.301+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:39.301+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:39.301+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), appliedOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, appliedWallTime: new Date(1567578878638), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), appliedOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, appliedWallTime: new Date(1567578878638), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:39.301+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1427 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:09.301+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { 
ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), appliedOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, appliedWallTime: new Date(1567578878638), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, durableWallTime: new Date(1567578878638), appliedOpTime: { ts: Timestamp(1567578878, 2), t: 1 }, appliedWallTime: new Date(1567578878638), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:39.301+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:09.300+0000 2019-09-04T06:34:39.301+0000 D2 ASIO [RS] Request 1427 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578878, 2), $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:39.301+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578878, 2), t: 1 }, lastCommittedWall: new Date(1567578878638), lastOpVisible: { ts: Timestamp(1567578878, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578878, 2), $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:39.301+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:39.301+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:09.300+0000 2019-09-04T06:34:39.302+0000 D2 ASIO [RS] Request 1426 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpApplied: { ts: Timestamp(1567578879, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:39.302+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpApplied: { ts: Timestamp(1567578879, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:39.302+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:39.302+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:39.302+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578874, 1) 2019-09-04T06:34:39.302+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:49.874+0000 2019-09-04T06:34:39.302+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:49.532+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn430] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn430] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.662+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn432] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn432] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.343+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn409] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn409] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.328+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn453] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn453] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.897+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn402] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn402] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.289+0000 
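The burst of "Got notified of new snapshot" entries above is the server waking its waitUntilOpTime waiters after advancing the stable optime to { ts: Timestamp(1567578879, 1), t: 1 }; this is what lets readConcern "majority" reads carrying an afterOpTime clause proceed, and the conn413 entries below show exactly such a find on config.settings being served from the 'committed' snapshot with an EOF plan (the collection does not exist). A client-side equivalent of that read, as a minimal sketch under the same pymongo assumption as the earlier snippet:

# Minimal sketch -- assumes pymongo; mirrors conn413's majority read on
# config.settings shown in the entries below.
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern

client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")

# Bind the "config" database to readConcern "majority", as in the logged find.
settings = client.get_database("config", read_concern=ReadConcern("majority"))["settings"]

# The server logs "Collection config.settings does not exist. Using EOF plan",
# so this returns None rather than an autosplit settings document.
print(settings.find_one({"_id": "autosplit"}))

This driver call does not attach afterOpTime itself; in the logged command that clause arrives with the tracking_info and $configServerState fields, evidently from the issuing mongos. The server-side effect is visible either way: conn413 first logs "Waiting for 'committed' snapshot to be available for reading", then serves the find at readTs Timestamp(1567578879, 1) once the snapshot above lands.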
2019-09-04T06:34:39.302+0000 D3 REPL [conn433] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn433] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.683+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn440] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn440] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.538+0000 2019-09-04T06:34:39.302+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:39.302+0000 D3 REPL [conn410] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn410] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn455] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn455] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.962+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn452] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn452] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.763+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn438] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn438] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.660+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn434] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn434] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn447] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn447] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.054+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn449] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn449] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:58.759+0000 2019-09-04T06:34:39.302+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1428 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:49.302+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578879, 1), t: 1 } } 2019-09-04T06:34:39.302+0000 D3 REPL [conn437] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn437] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:46.416+0000 2019-09-04T06:34:39.302+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:09.300+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: 
Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn439] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn439] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.447+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn425] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn425] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.594+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn431] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn431] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.276+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn444] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.302+0000 D3 REPL [conn444] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:59.943+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn442] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn442] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.133+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn426] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn426] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.517+0000 2019-09-04T06:34:39.303+0000 D2 COMMAND [conn413] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578879, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5aff0f8f28dab2b56d6d'), operName: "", parentOperId: "5d6f5aff0f8f28dab2b56d69" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $db: "config" } 2019-09-04T06:34:39.303+0000 D1 TRACKING [conn413] Cmd: find, TrackingId: 5d6f5aff0f8f28dab2b56d69|5d6f5aff0f8f28dab2b56d6d 2019-09-04T06:34:39.303+0000 D1 COMMAND [conn413] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578879, 1), t: 1 } } } 2019-09-04T06:34:39.303+0000 D3 STORAGE [conn413] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:39.303+0000 D1 COMMAND [conn413] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578879, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5aff0f8f28dab2b56d6d'), operName: "", parentOperId: "5d6f5aff0f8f28dab2b56d69" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: 
Timestamp(1567578879, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578879, 1) 2019-09-04T06:34:39.303+0000 D2 QUERY [conn413] Collection config.settings does not exist. Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:34:39.303+0000 I COMMAND [conn413] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578879, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5aff0f8f28dab2b56d6d'), operName: "", parentOperId: "5d6f5aff0f8f28dab2b56d69" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:34:39.303+0000 D3 REPL [conn450] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn450] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.261+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn429] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn429] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:41.583+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn435] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn435] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:43.909+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn451] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn451] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.434+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn420] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn420] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.423+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn424] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn424] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.437+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn443] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn443] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.645+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn427] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn427] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.335+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn454] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 
1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn454] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.925+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn415] Got notified of new snapshot: { ts: Timestamp(1567578879, 1), t: 1 }, 2019-09-04T06:34:39.286+0000 2019-09-04T06:34:39.303+0000 D3 REPL [conn415] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:42.282+0000 2019-09-04T06:34:39.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:39.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:39.356+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:39.398+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578879, 1) 2019-09-04T06:34:39.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:39.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:39.456+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:39.538+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578879, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $db: "config" } 2019-09-04T06:34:39.538+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578879, 1), t: 1 } } } 2019-09-04T06:34:39.538+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:39.538+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578879, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578879, 1) 2019-09-04T06:34:39.538+0000 D2 QUERY [conn61] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:34:39.538+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578879, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:34:39.556+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:39.657+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:39.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:39.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:39.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:39.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:39.738+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:39.738+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:39.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:39.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:39.757+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:39.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:39.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:39.857+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:39.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:39.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:39.957+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:40.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:40.002+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:34:40.002+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:34:40.002+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:34:40.008+0000 D2 COMMAND [conn90] run 
command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:34:40.008+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:34:40.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:34:40.010+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:34:40.010+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:34:40.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:34:40.010+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:40.010+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:40.011+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:40.011+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:34:40.011+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:34:40.011+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:34:40.011+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:40.011+0000 D5 QUERY [conn90] Beginning planning... 
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:40.011+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:34:40.011+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:34:40.011+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:34:40.011+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:34:40.011+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:34:40.011+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:34:40.011+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:40.011+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:40.011+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:34:40.011+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578879, 1)
2019-09-04T06:34:40.011+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20773
2019-09-04T06:34:40.012+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20773
2019-09-04T06:34:40.012+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:34:40.012+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:34:40.012+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:34:40.012+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:34:40.012+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:40.012+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:34:40.012+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:34:40.012+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:34:40.012+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578879, 1)
2019-09-04T06:34:40.012+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20776
2019-09-04T06:34:40.012+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20776
2019-09-04T06:34:40.012+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:34:40.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:34:40.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:40.014+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:34:40.014+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:34:40.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:34:40.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578879, 1) 2019-09-04T06:34:40.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20778 2019-09-04T06:34:40.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20778 2019-09-04T06:34:40.014+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:549 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:40.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:34:40.014+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:34:40.014+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:34:40.014+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:40.014+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:34:40.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:40.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20781 2019-09-04T06:34:40.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:34:40.014+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:40.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:40.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20781 2019-09-04T06:34:40.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:34:40.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20782 2019-09-04T06:34:40.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:34:40.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20782 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20783 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20783 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20784 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20784 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20785 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 
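The begin_transaction / fetched CCE metadata / rollback_transaction triplets running through this stretch are the catalog walk for the listDatabases command that conn90 issued at 06:34:40.014: for each namespace recorded in _mdb_catalog the server opens a short-lived WiredTiger snapshot, reads the collection's metadata document, and rolls the transaction back since nothing is written. A minimal sketch of the originating client call, assuming a mongo shell session against this config server (the $readPreference seen in the log would come from the client's connection settings, not the command body):

    // Hedged sketch: the admin command whose servicing produces the
    // per-namespace metadata lookups logged above and below.
    db.adminCommand({ listDatabases: 1 })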
2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20785 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20786 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20786 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20787 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20787 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20788 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20788 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20789 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20789 
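The index arrays embedded in each fetched CCE metadata document are the same specs a client sees through index listing, so entries such as ts_1, state_1_process_1, and _id_ on config.locks above can be cross-checked from the shell. A minimal sketch, assuming shell access to this node:

    // Hedged sketch: compare the on-disk index metadata above with the
    // output of listIndexes for config.locks.
    db.getSiblingDB("config").locks.getIndexes()
    // Per the metadata above, expect names "ts_1", "state_1_process_1", "_id_".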
2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20790 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20790 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20791 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: 
BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20791 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20792 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20792 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20793 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: 
true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20793 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20794 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20794 
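config.shards, resolved just above, is the cluster's shard registry; the unique host_1 index in its metadata enforces one document per shard host string. A minimal read sketch, assuming direct shell access to the config server (on a live cluster this data is normally consumed through mongos):

    // Hedged sketch: enumerate the registered shards directly.
    db.getSiblingDB("config").shards.find().pretty()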
2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20795 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:34:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20795 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20796 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 
2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20796 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20797 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20797 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20798 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20798 
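local.startup_log, resolved just above, is a capped collection (size: 10485760, i.e. 10 MB, in its options), so its newest document can be read with a reverse natural-order scan, the same $natural-sorted access pattern the monitoring client used against local.oplog.rs earlier in this capture. A minimal sketch:

    // Hedged sketch: fetch the most recent startup record from the capped
    // local.startup_log collection.
    db.getSiblingDB("local").startup_log.find().sort({ $natural: -1 }).limit(1).pretty()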
2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20799 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20799 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20800 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20800 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20801 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] 
looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20801 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20802 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20802 2019-09-04T06:34:40.016+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:34:40.016+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20804 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20804 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20805 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20805 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20806 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20806 2019-09-04T06:34:40.016+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:40.016+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, 
$db: "config" } 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20808 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20808 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20809 2019-09-04T06:34:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20809 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20810 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20810 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20811 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20811 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20812 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20812 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20813 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20813 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20814 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20814 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20815 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20815 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20816 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20816 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20817 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20817 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20818 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20818 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20819 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20819 2019-09-04T06:34:40.017+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:40.017+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20821 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20821 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20822 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20822 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20823 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20823 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20824 
2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20824 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20825 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20825 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 20826 2019-09-04T06:34:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 20826 2019-09-04T06:34:40.017+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:40.038+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:40.038+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:40.055+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:40.056+0000 I COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:40.057+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:40.157+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:40.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:40.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:40.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:40.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:40.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:40.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:40.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:40.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:40.234+0000 D2 REPL_HB [conn28] Generated heartbeat 
response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286) } 2019-09-04T06:34:40.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:40.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:40.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:40.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:40.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:40.257+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:40.298+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:40.298+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:40.298+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578879, 1) 2019-09-04T06:34:40.298+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20835 2019-09-04T06:34:40.298+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:40.298+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:40.298+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20835 2019-09-04T06:34:40.299+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20838 2019-09-04T06:34:40.299+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20838 2019-09-04T06:34:40.299+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578879, 1), t: 1 }({ ts: Timestamp(1567578879, 1), t: 1 }) 2019-09-04T06:34:40.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:40.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:40.357+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:40.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:40.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:40.457+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:40.464+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { 
_id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578873, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578876, 1), signature: { hash: BinData(0, D612E3524CB16FF0584AC64F6A0A44CFD10C554A), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578873, 1), t: 1 } }, $db: "config" } 2019-09-04T06:34:40.464+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578873, 1), t: 1 } } } 2019-09-04T06:34:40.464+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:40.464+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578873, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578876, 1), signature: { hash: BinData(0, D612E3524CB16FF0584AC64F6A0A44CFD10C554A), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578873, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578879, 1) 2019-09-04T06:34:40.465+0000 D2 QUERY [conn71] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:34:40.465+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578873, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578876, 1), signature: { hash: BinData(0, D612E3524CB16FF0584AC64F6A0A44CFD10C554A), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578873, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:34:40.558+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:40.658+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:40.689+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:40.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:40.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:40.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:40.721+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578879, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: 
Timestamp(1567578879, 1), t: 1 } }, $db: "config" } 2019-09-04T06:34:40.721+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578879, 1), t: 1 } } } 2019-09-04T06:34:40.721+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:40.721+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578879, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578879, 1) 2019-09-04T06:34:40.721+0000 D2 QUERY [conn72] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:34:40.722+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578879, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:34:40.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:40.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:40.758+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:40.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:40.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:40.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:40.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1429) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:40.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1429 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:50.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:40.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:40.839+0000 D2 ASIO [Replication] Request 1429 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: 
"configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:40.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:40.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:40.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1429) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:40.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:40.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:34:49.532+0000 2019-09-04T06:34:40.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:34:51.997+0000 2019-09-04T06:34:40.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:40.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for 
member _id:MemberId(0) 2019-09-04T06:34:40.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:40.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:42.839Z 2019-09-04T06:34:40.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:40.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:40.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1430) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:40.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1430 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:50.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:40.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:40.840+0000 D2 ASIO [Replication] Request 1430 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:40.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:40.840+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:40.840+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1430) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, 
primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:40.840+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:40.840+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:42.840Z 2019-09-04T06:34:40.840+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:40.858+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:40.944+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:40.944+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:40.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:40.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:40.958+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:40.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:40.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:41.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:41.058+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:41.063+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:41.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:34:40.839+0000 2019-09-04T06:34:41.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:34:40.840+0000 2019-09-04T06:34:41.063+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:34:40.839+0000 2019-09-04T06:34:41.063+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:34:50.839+0000 2019-09-04T06:34:41.063+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:41.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:41.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:41.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { 
replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:41.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:41.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286) } 2019-09-04T06:34:41.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:41.158+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:41.189+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:41.189+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:41.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:41.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:41.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:41.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:41.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:41.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:34:41.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:41.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:41.258+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:41.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:41.273+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:41.292+0000 D2 COMMAND [conn113] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:41.292+0000 I COMMAND [conn113] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:41.296+0000 D2 COMMAND [conn113] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578873, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 2), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578873, 1), t: 1 } }, $db: "config" } 2019-09-04T06:34:41.296+0000 D1 COMMAND [conn113] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578873, 1), t: 1 } } } 2019-09-04T06:34:41.296+0000 D3 STORAGE [conn113] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:41.296+0000 D1 COMMAND [conn113] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578873, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 2), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578873, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578879, 1) 2019-09-04T06:34:41.296+0000 D5 QUERY [conn113] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shards Tree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:41.296+0000 D5 QUERY [conn113] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:34:41.296+0000 D5 QUERY [conn113] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:34:41.296+0000 D5 QUERY [conn113] Rated tree: $and 2019-09-04T06:34:41.296+0000 D5 QUERY [conn113] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:41.296+0000 D5 QUERY [conn113] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:41.297+0000 D2 QUERY [conn113] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:41.297+0000 D3 STORAGE [conn113] WT begin_transaction for snapshot id 20860 2019-09-04T06:34:41.297+0000 D3 STORAGE [conn113] WT rollback_transaction for snapshot id 20860 2019-09-04T06:34:41.297+0000 I COMMAND [conn113] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578873, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 2), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578873, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:34:41.299+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:41.299+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:41.299+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578879, 1) 2019-09-04T06:34:41.299+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20862 2019-09-04T06:34:41.299+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:41.299+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:41.299+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20862 2019-09-04T06:34:41.300+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20865 2019-09-04T06:34:41.300+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20865 2019-09-04T06:34:41.300+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578879, 1), t: 1 }({ ts: Timestamp(1567578879, 1), t: 1 }) 2019-09-04T06:34:41.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:41.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:41.358+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:41.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:41.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:41.459+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:41.559+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:41.572+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36830 #457 (96 connections now open) 2019-09-04T06:34:41.572+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 
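The conn113 find on config.shards above combines readConcern { level: "majority", afterOpTime: ... } with maxTimeMS: 30000: the server first waits for its 'committed' snapshot to reach the requested opTime, then, with no filter predicate to index, runs the logged COLLSCAN. The same read-concern wait is what expires for conn429 a little further down. A client-side sketch follows; in this deployment the request actually comes from a mongos, and the host below is assumed.

    # Hedged sketch: majority read of config.shards with a 30 s server-side
    # time limit, mirroring the conn113 command above.
    from pymongo import MongoClient, errors
    from pymongo.read_concern import ReadConcern
    from pymongo.read_preferences import ReadPreference

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")  # assumed
    shards = client.get_database(
        "config",
        read_concern=ReadConcern("majority"),
        read_preference=ReadPreference.NEAREST,  # matches the logged $readPreference
    )["shards"]
    try:
        # An unfiltered find gives the planner nothing to index, hence the
        # COLLSCAN plan in the log.
        for doc in shards.find({}, max_time_ms=30000):
            print(doc["_id"], doc["host"])
    except errors.ExecutionTimeout:
        # Raised when the server reports MaxTimeMSExpired, e.g. when the
        # majority snapshot cannot advance to afterOpTime within maxTimeMS,
        # which is exactly the conn429 failure traced below.
        raise
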
2019-09-04T06:34:41.572+0000 D2 COMMAND [conn457] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:41.572+0000 I NETWORK [conn457] received client metadata from 10.108.2.55:36830 conn457: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:41.572+0000 I COMMAND [conn457] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:41.585+0000 I COMMAND [conn429] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578841, 1), signature: { hash: BinData(0, 76AD270FB0503A5D458AD13D48279C4C7DEE0538), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:41.585+0000 D1 - [conn429] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:41.585+0000 W - [conn429] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:41.604+0000 I - [conn429] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:41.604+0000 D1 COMMAND [conn429] assertion while executing command 'find' on database 'config' with arguments '{ find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578841, 1), signature: { hash: BinData(0, 76AD270FB0503A5D458AD13D48279C4C7DEE0538), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:41.604+0000 D1 - [conn429] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:41.604+0000 W - [conn429] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:41.626+0000 I - [conn429] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 
0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, 
"buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : 
"2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:41.627+0000 W COMMAND [conn429] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:41.627+0000 I COMMAND [conn429] command config.$cmd command: find { find: "collections", filter: { _id: 
"config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578841, 1), signature: { hash: BinData(0, 76AD270FB0503A5D458AD13D48279C4C7DEE0538), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:34:41.627+0000 D2 NETWORK [conn429] Session from 10.108.2.55:36804 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:41.627+0000 I NETWORK [conn429] end connection 10.108.2.55:36804 (95 connections now open) 2019-09-04T06:34:41.659+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:41.688+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:41.689+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:41.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:41.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:41.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:41.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:41.759+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:41.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:41.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:41.859+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:41.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:41.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:41.959+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:42.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:42.059+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:42.159+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:42.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:42.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:42.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:42.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:42.234+0000 D2 REPL_HB [conn28] Received heartbeat request 
from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:42.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:42.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286) } 2019-09-04T06:34:42.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:42.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:42.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:42.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:42.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:42.260+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:42.280+0000 I COMMAND [conn431] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578850, 1), signature: { hash: BinData(0, DF7BE7C881CF5FD2AF2EDB3FBEF3BF57179C23BB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:42.280+0000 D1 - [conn431] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:42.280+0000 W - [conn431] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.285+0000 I COMMAND [conn415] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, DE6CD7CA3AD84D3D5B5CC231FF3233432495E5FA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:42.285+0000 D1 - [conn415] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:42.285+0000 W - [conn415] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.292+0000 I COMMAND [conn402] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:42.292+0000 D1 - [conn402] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:42.292+0000 W - [conn402] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.297+0000 I - [conn431] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:42.297+0000 D1 COMMAND [conn431] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578850, 1), signature: { hash: BinData(0, DF7BE7C881CF5FD2AF2EDB3FBEF3BF57179C23BB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.297+0000 D1 - [conn431] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:42.297+0000 W - [conn431] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.299+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:42.299+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:42.299+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578879, 
1) 2019-09-04T06:34:42.299+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20880 2019-09-04T06:34:42.299+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:42.299+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:42.299+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20880 2019-09-04T06:34:42.300+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20883 2019-09-04T06:34:42.300+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20883 2019-09-04T06:34:42.300+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578879, 1), t: 1 }({ ts: Timestamp(1567578879, 1), t: 1 }) 2019-09-04T06:34:42.320+0000 I - [conn415] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F
2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : 
"45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] 
mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:42.320+0000 D1 COMMAND [conn415] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, DE6CD7CA3AD84D3D5B5CC231FF3233432495E5FA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.320+0000 D1 - [conn415] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:42.320+0000 W - [conn415] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:42.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:42.329+0000 I NETWORK [listener] connection accepted from 10.108.2.61:38078 #458 (96 connections now open) 2019-09-04T06:34:42.329+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:42.329+0000 D2 COMMAND [conn458] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:42.329+0000 I NETWORK [conn458] received client metadata from 10.108.2.61:38078 conn458: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:42.329+0000 I COMMAND [conn458] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:42.330+0000 I - [conn402] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:42.330+0000 D1 COMMAND [conn402] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.330+0000 D1 - [conn402] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:42.330+0000 W - [conn402] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.331+0000 I COMMAND [conn409] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578842, 1), signature: { hash: BinData(0, 6E364D23EB11B87900F4FB61D074B82CCAB19665), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:42.331+0000 D1 - [conn409] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:42.331+0000 W - [conn409] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.338+0000 I COMMAND [conn427] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578842, 1), signature: { hash: BinData(0, 6E364D23EB11B87900F4FB61D074B82CCAB19665), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:42.338+0000 D1 - [conn427] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:42.338+0000 W - [conn427] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.345+0000 I COMMAND [conn432] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, DE6CD7CA3AD84D3D5B5CC231FF3233432495E5FA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:42.345+0000 D1 - [conn432] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:42.345+0000 W - [conn432] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.350+0000 I - [conn431] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9Ownersh
ipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : 
"DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) 
[0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:42.350+0000 W COMMAND [conn431] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:42.350+0000 I COMMAND [conn431] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578850, 1), signature: { hash: BinData(0, DF7BE7C881CF5FD2AF2EDB3FBEF3BF57179C23BB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:34:42.350+0000 D2 NETWORK [conn431] Session from 10.108.2.48:42246 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:42.350+0000 I NETWORK [conn431] end connection 10.108.2.48:42246 (95 connections now open) 2019-09-04T06:34:42.360+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:42.367+0000 I - [conn409] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:42.367+0000 D1 COMMAND [conn409] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578842, 1), signature: { hash: BinData(0, 6E364D23EB11B87900F4FB61D074B82CCAB19665), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.367+0000 D1 - [conn409] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:42.367+0000 W - [conn409] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.383+0000 I - [conn427] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:42.383+0000 D1 COMMAND [conn427] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578842, 1), signature: { hash: BinData(0, 6E364D23EB11B87900F4FB61D074B82CCAB19665), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.384+0000 D1 - [conn427] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:42.384+0000 W - [conn427] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.404+0000 I - [conn409] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
2019-09-04T06:34:42.404+0000 W COMMAND [conn409] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:34:42.404+0000 I COMMAND [conn409] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578842, 1), signature: { hash: BinData(0, 6E364D23EB11B87900F4FB61D074B82CCAB19665), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30048ms
2019-09-04T06:34:42.404+0000 D2 NETWORK [conn409] Session from 10.108.2.53:50826 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:34:42.404+0000 I NETWORK [conn409] end connection 10.108.2.53:50826 (94 connections now open)
2019-09-04T06:34:42.413+0000 D2 COMMAND [conn143] run command admin.$cmd { ping: 1, $db: "admin" }
2019-09-04T06:34:42.413+0000 I COMMAND [conn143] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:42.423+0000 D2 COMMAND [conn206] run command config.$cmd { ping: 1.0, lsid: { id: UUID("2fef7d2a-ea06-44d7-a315-b0e911b7f5bf") }, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:34:42.423+0000 I COMMAND [conn206] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("2fef7d2a-ea06-44d7-a315-b0e911b7f5bf") }, $clusterTime: { clusterTime: Timestamp(1567578821, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:42.424+0000 W COMMAND [conn415] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:42.424+0000 I COMMAND [conn415] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, DE6CD7CA3AD84D3D5B5CC231FF3233432495E5FA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30048ms 2019-09-04T06:34:42.424+0000 D2 NETWORK [conn415] Session from 10.108.2.56:35816 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:42.424+0000 I NETWORK [conn415] end connection 10.108.2.56:35816 (93 connections now open) 2019-09-04T06:34:42.441+0000 I - [conn432] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:42.441+0000 D1 COMMAND [conn432] assertion while executing command 'find' on database 'admin' 
with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, DE6CD7CA3AD84D3D5B5CC231FF3233432495E5FA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.441+0000 D1 - [conn432] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:42.441+0000 W - [conn432] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:42.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:42.460+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:42.461+0000 I - [conn402] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000",
"o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : 
"E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:42.461+0000 W COMMAND [conn402] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:34:42.461+0000 I COMMAND [conn402] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30050ms 2019-09-04T06:34:42.461+0000 D2 NETWORK [conn402] Session from 10.108.2.57:34362 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:42.461+0000 I NETWORK [conn402] end connection 10.108.2.57:34362 (92 connections now open) 2019-09-04T06:34:42.462+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52314 #459 (93 connections now open) 2019-09-04T06:34:42.462+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:42.462+0000 D2 COMMAND [conn459] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:42.462+0000 I NETWORK [conn459] received client metadata from 10.108.2.73:52314 conn459: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:42.462+0000 I
COMMAND [conn459] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:42.463+0000 D2 COMMAND [conn459] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:34:42.463+0000 D1 REPL [conn459] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578879, 1), t: 1 } 2019-09-04T06:34:42.463+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000 2019-09-04T06:34:42.467+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:42.467+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:42.468+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51948 #460 (94 connections now open) 2019-09-04T06:34:42.468+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:42.468+0000 D2 COMMAND [conn460] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:42.468+0000 I NETWORK [conn460] received client metadata from 10.108.2.74:51948 conn460: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:42.468+0000 I COMMAND [conn460] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:42.468+0000 D2 COMMAND [conn460] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, 
maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578875, 1), signature: { hash: BinData(0, 8CDC5DAAAA5AA59CBD4286129FA24BB0384E9F7D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:34:42.468+0000 D1 REPL [conn460] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578879, 1), t: 1 } 2019-09-04T06:34:42.468+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000 2019-09-04T06:34:42.471+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45904 #461 (95 connections now open) 2019-09-04T06:34:42.471+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:42.471+0000 D2 COMMAND [conn461] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:42.471+0000 I NETWORK [conn461] received client metadata from 10.108.2.72:45904 conn461: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:42.471+0000 I COMMAND [conn461] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:42.472+0000 D2 COMMAND [conn461] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578872, 1), signature: { hash: BinData(0, 4DBDABAE602CF810BD3D716DCE9BD15DA1394F1C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:42.472+0000 D1 REPL [conn461] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578879, 1), t: 1 } 2019-09-04T06:34:42.472+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000 2019-09-04T06:34:42.481+0000 I - [conn427] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 
0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:42.481+0000 W COMMAND [conn427] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:34:42.481+0000 I COMMAND [conn427] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort:
{ expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578842, 1), signature: { hash: BinData(0, 6E364D23EB11B87900F4FB61D074B82CCAB19665), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30058ms 2019-09-04T06:34:42.481+0000 D2 NETWORK [conn427] Session from 10.108.2.63:36422 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:42.481+0000 I NETWORK [conn427] end connection 10.108.2.63:36422 (94 connections now open) 2019-09-04T06:34:42.483+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34410 #462 (95 connections now open) 2019-09-04T06:34:42.483+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:42.483+0000 D2 COMMAND [conn462] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:42.483+0000 I NETWORK [conn462] received client metadata from 10.108.2.57:34410 conn462: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:42.483+0000 I COMMAND [conn462] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:42.487+0000 D2 COMMAND [conn462] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578878, 1), signature: { hash: BinData(0, 9EEAEBC19ED13CB91FBBE628C791550FD6DDF6FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:42.487+0000 D1 REPL [conn462] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578879, 1), t: 1 } 2019-09-04T06:34:42.487+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000 2019-09-04T06:34:42.501+0000 I - [conn432] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 
0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : 
"/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, 
"buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:42.501+0000 W COMMAND [conn432] Unable to gather storage statistics for a slow operation due to lock aquire 
timeout 2019-09-04T06:34:42.501+0000 I COMMAND [conn432] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, DE6CD7CA3AD84D3D5B5CC231FF3233432495E5FA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30108ms 2019-09-04T06:34:42.501+0000 D2 NETWORK [conn432] Session from 10.108.2.61:38062 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:42.501+0000 I NETWORK [conn432] end connection 10.108.2.61:38062 (94 connections now open) 2019-09-04T06:34:42.502+0000 D2 COMMAND [conn423] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 191AE0EF5D8E856F2D8CCC65C75DFCD6EDF25A90), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:42.502+0000 D1 REPL [conn423] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578879, 1), t: 1 } 2019-09-04T06:34:42.502+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000 2019-09-04T06:34:42.525+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:42.525+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:42.560+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:42.660+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:42.687+0000 I COMMAND [conn433] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578850, 1), signature: { hash: BinData(0, DF7BE7C881CF5FD2AF2EDB3FBEF3BF57179C23BB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:42.687+0000 D1 - [conn433] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:42.687+0000 W - [conn433] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.703+0000 I - [conn433] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:42.703+0000 D1 COMMAND [conn433] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578850, 1), signature: { hash: BinData(0, DF7BE7C881CF5FD2AF2EDB3FBEF3BF57179C23BB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.703+0000 D1 - [conn433] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:42.703+0000 W - [conn433] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:42.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:42.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:42.723+0000 I - [conn433] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8function
IFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : 
"084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:42.723+0000 W COMMAND [conn433] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:42.723+0000 I COMMAND [conn433] command config.$cmd command: find { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578850, 1), signature: { hash: BinData(0, DF7BE7C881CF5FD2AF2EDB3FBEF3BF57179C23BB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:34:42.723+0000 D2 NETWORK [conn433] Session from 10.108.2.48:42248 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:42.723+0000 I NETWORK [conn433] end connection 10.108.2.48:42248 (93 connections now open) 2019-09-04T06:34:42.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:42.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:42.760+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:42.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:42.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:42.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:42.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1431) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:42.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1431 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:52.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:42.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:42.839+0000 D2 ASIO [Replication] Request 1431 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:42.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:42.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:42.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1431) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:42.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:42.839+0000 D4 REPL [replexec-0] Canceling election timeout 
callback at 2019-09-04T06:34:51.997+0000 2019-09-04T06:34:42.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:34:53.071+0000 2019-09-04T06:34:42.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:42.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:42.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:42.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:44.839Z 2019-09-04T06:34:42.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:42.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:42.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1432) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:42.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1432 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:52.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:42.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:42.840+0000 D2 ASIO [Replication] Request 1432 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:42.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } target: 
cmodb804.togewa.com:27019 2019-09-04T06:34:42.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:42.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1432) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:42.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:42.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:44.840Z 2019-09-04T06:34:42.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:42.860+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:42.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:42.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:42.960+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:42.967+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:42.967+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:43.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:43.024+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:43.024+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:43.060+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:43.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:43.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:43.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } 
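Every MaxTimeMSExpired failure in this capture follows the same pattern: the incoming find carries readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } } (the same opTime the client repeats in $configServerState), while every waitUntilOpTime line shows this node's committed snapshot at { ts: Timestamp(1567578879, 1), t: 1 }. The awaited opTime is from term 92, and a set currently in term 1 is nowhere near producing an opTime that compares at least that high, so waitForReadConcern blocks for the full maxTimeMS and the command fails with MaxTimeMSExpired (errCode:50) after ~30030ms. The requested term being far ahead of the set's current term suggests the clients are replaying a $configServerState opTime cached from an earlier incarnation of the configrs set. The snippet below is a diagnostic sketch, not part of this log: it re-issues the captured command from the mongo shell (the command document and values are copied verbatim from the failing request above; afterOpTime is normally attached by internal clients, so acceptance of it from an external shell session is an assumption).

    // Hedged sketch: replay the failing read against this config server.
    // On a set whose current term is 1, the afterOpTime from term 92 never
    // commits, so this should block for maxTimeMS and then fail with
    // MaxTimeMSExpired (code 50), matching the 30030ms failures in this log.
    db.getSiblingDB("config").runCommand({
        find: "collections",
        filter: { _id: "config.system.sessions" },
        limit: 1,
        readConcern: {
            level: "majority",
            afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 }  // stale term from the request above
        },
        maxTimeMS: 30000
    })

The second backtrace attached to each failure (LockerImpl::lock reached via CurOp::completeAndLogOperation) appears to be a side effect rather than a separate fault: after the operation has already exceeded its deadline, the slow-operation logger tries to take the global lock to gather storage statistics, inherits the expired deadline, and logs "Unable to gather storage statistics for a slow operation due to lock aquire timeout".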
2019-09-04T06:34:43.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:43.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286) } 2019-09-04T06:34:43.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:43.160+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:43.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:43.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:43.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:43.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:43.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:43.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:43.261+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:43.299+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:43.299+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:43.299+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578879, 1) 2019-09-04T06:34:43.299+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20912 2019-09-04T06:34:43.299+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:43.299+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:43.299+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20912 2019-09-04T06:34:43.300+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20915 2019-09-04T06:34:43.300+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20915 2019-09-04T06:34:43.300+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578879, 1), t: 1 }({ ts: 
Timestamp(1567578879, 1), t: 1 }) 2019-09-04T06:34:43.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:43.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:43.361+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:43.403+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36690 #463 (94 connections now open) 2019-09-04T06:34:43.403+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:43.403+0000 D2 COMMAND [conn463] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:43.403+0000 I NETWORK [conn463] received client metadata from 10.108.2.45:36690 conn463: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:43.403+0000 I COMMAND [conn463] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:43.407+0000 D2 COMMAND [conn463] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578878, 1), signature: { hash: BinData(0, 9EEAEBC19ED13CB91FBBE628C791550FD6DDF6FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:43.407+0000 D1 REPL [conn463] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578879, 1), t: 1 } 2019-09-04T06:34:43.407+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000 2019-09-04T06:34:43.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:43.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:43.461+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:43.561+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:43.661+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:43.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:43.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:43.739+0000 D2 
COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:43.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:43.761+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:43.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:43.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:43.861+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:43.885+0000 D2 COMMAND [conn446] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578881, 1), signature: { hash: BinData(0, 51070AB17CE4E1EB2E629A93F6E8B1A003D9C311), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:34:43.885+0000 D1 REPL [conn446] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578879, 1), t: 1 } 2019-09-04T06:34:43.885+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000 2019-09-04T06:34:43.913+0000 I COMMAND [conn435] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:43.913+0000 D1 - [conn435] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:43.913+0000 W - [conn435] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:43.929+0000 I - [conn435] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:43.929+0000 D1 COMMAND [conn435] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:43.929+0000 D1 - [conn435] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:43.929+0000 W - [conn435] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:43.949+0000 I - [conn435] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:43.949+0000 W COMMAND [conn435] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:43.949+0000 I COMMAND [conn435] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:34:43.949+0000 D2 NETWORK [conn435] Session from 10.108.2.54:49330 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:43.949+0000 I NETWORK [conn435] end connection 10.108.2.54:49330 (93 connections now open) 2019-09-04T06:34:43.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:43.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:43.961+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:44.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:44.061+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:44.080+0000 D2 COMMAND [conn445] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:44.080+0000 D1 REPL [conn445] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578879, 1), t: 1 } 2019-09-04T06:34:44.080+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000 2019-09-04T06:34:44.161+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:44.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:44.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:44.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:44.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:44.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" 
} 2019-09-04T06:34:44.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:44.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286) } 2019-09-04T06:34:44.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:44.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:44.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:44.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:44.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:44.261+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:44.299+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:44.299+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:44.299+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578879, 1) 2019-09-04T06:34:44.299+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20933 2019-09-04T06:34:44.299+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:44.299+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:44.299+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20933 2019-09-04T06:34:44.300+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20936 2019-09-04T06:34:44.300+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20936 2019-09-04T06:34:44.300+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578879, 1), t: 1 }({ ts: Timestamp(1567578879, 1), t: 1 }) 2019-09-04T06:34:44.301+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:44.301+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: 
Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:44.301+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1433 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:14.301+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:44.301+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:09.300+0000 2019-09-04T06:34:44.301+0000 D2 ASIO [RS] Request 1433 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:44.301+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: 
Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:44.301+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:44.301+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:09.300+0000 2019-09-04T06:34:44.302+0000 D2 ASIO [RS] Request 1428 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpApplied: { ts: Timestamp(1567578879, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:44.302+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpApplied: { ts: Timestamp(1567578879, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:44.302+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:44.302+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:44.302+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:53.071+0000 2019-09-04T06:34:44.302+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:55.527+0000 2019-09-04T06:34:44.302+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:44.302+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1434 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:54.302+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578879, 1), t: 1 } } 2019-09-04T06:34:44.302+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 
2019-09-04T06:34:44.302+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:09.300+0000 2019-09-04T06:34:44.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:44.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:44.362+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:44.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:44.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:44.462+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:44.562+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:44.662+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:44.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:44.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:44.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:44.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:44.762+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:44.780+0000 D2 COMMAND [conn436] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:44.780+0000 D1 REPL [conn436] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578879, 1), t: 1 } 2019-09-04T06:34:44.780+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000 2019-09-04T06:34:44.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:44.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:44.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:44.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1435) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:44.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1435 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:54.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:44.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 
2019-09-04T06:35:04.839+0000 2019-09-04T06:34:44.839+0000 D2 ASIO [Replication] Request 1435 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:44.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:44.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:44.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1435) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:44.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:44.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:34:55.527+0000 2019-09-04T06:34:44.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 
2019-09-04T06:34:56.323+0000 2019-09-04T06:34:44.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:44.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:44.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:44.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:46.839Z 2019-09-04T06:34:44.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:44.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:44.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1436) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:44.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1436 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:54.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:44.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:44.840+0000 D2 ASIO [Replication] Request 1436 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:44.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:44.840+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 
2019-09-04T06:34:44.840+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1436) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:44.840+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:44.840+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:46.840Z 2019-09-04T06:34:44.840+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:44.862+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:44.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:44.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:44.962+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:45.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:45.062+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:45.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:45.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:45.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:45.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:45.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, 
durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286) } 2019-09-04T06:34:45.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:45.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:45.069+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:45.155+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:45.155+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:45.162+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:45.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:45.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:45.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:45.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:45.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:45.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:34:45.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:45.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:45.263+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:45.300+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:45.300+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:45.300+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578879, 1) 2019-09-04T06:34:45.300+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20954 2019-09-04T06:34:45.300+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:45.300+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:45.300+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20954 2019-09-04T06:34:45.300+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20957 2019-09-04T06:34:45.300+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20957 2019-09-04T06:34:45.300+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578879, 1), t: 1 }({ ts: Timestamp(1567578879, 1), t: 1 }) 2019-09-04T06:34:45.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:45.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:45.363+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:45.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:45.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:45.463+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:45.563+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:45.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:45.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:45.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:45.655+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:45.663+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:45.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:45.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:45.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, 
$db: "admin" } 2019-09-04T06:34:45.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:45.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:45.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:45.763+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:45.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:45.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:45.863+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:45.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:45.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:45.963+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:46.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:46.063+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:46.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:46.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:46.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:46.155+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:46.163+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:46.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:46.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:46.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:46.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:46.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:46.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:46.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:46.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", 
configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:46.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286) } 2019-09-04T06:34:46.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:46.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:46.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:46.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:46.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:46.264+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:46.300+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:46.300+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:46.300+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578879, 1) 2019-09-04T06:34:46.300+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20976 2019-09-04T06:34:46.300+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:46.300+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:46.300+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20976 2019-09-04T06:34:46.300+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20979 2019-09-04T06:34:46.300+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 20979 2019-09-04T06:34:46.300+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578879, 1), t: 1 }({ ts: Timestamp(1567578879, 1), t: 1 }) 2019-09-04T06:34:46.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:46.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:46.364+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:46.424+0000 I COMMAND [conn437] Command on database config timed out waiting for read concern 
to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:46.424+0000 D1 - [conn437] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:46.424+0000 W - [conn437] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:46.440+0000 I - [conn437] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", 
"compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", 
"elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:46.441+0000 D1 COMMAND [conn437] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { 
level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:46.441+0000 D1 - [conn437] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:46.441+0000 W - [conn437] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:46.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:46.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:46.461+0000 I - [conn437] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F0
63D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, 
"buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:46.461+0000 W COMMAND [conn437] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:34:46.461+0000 I COMMAND [conn437] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30034ms 2019-09-04T06:34:46.461+0000 D2 NETWORK [conn437] Session from 10.108.2.57:34388 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:46.461+0000 I NETWORK [conn437] end connection 10.108.2.57:34388 (92 connections now open) 2019-09-04T06:34:46.464+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:46.564+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:46.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:46.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:46.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:46.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:46.664+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:46.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:46.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:46.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:46.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:46.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db:
"admin" } 2019-09-04T06:34:46.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:46.764+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:46.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:46.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:46.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:46.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1437) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:46.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1437 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:56.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:46.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:46.839+0000 D2 ASIO [Replication] Request 1437 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:46.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:46.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:46.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1437) from 
cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:46.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:46.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:34:56.323+0000 2019-09-04T06:34:46.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:34:57.919+0000 2019-09-04T06:34:46.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:46.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:46.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:46.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:48.839Z 2019-09-04T06:34:46.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:46.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:46.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1438) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:46.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1438 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:56.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:46.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:46.840+0000 D2 ASIO [Replication] Request 1438 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: 
BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:46.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:46.840+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:46.840+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1438) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 1) } 2019-09-04T06:34:46.840+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:46.840+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:48.840Z 2019-09-04T06:34:46.840+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:46.864+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:46.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:46.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:46.964+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:47.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:47.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: 
"admin" } 2019-09-04T06:34:47.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:47.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:47.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:47.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), opTime: { ts: Timestamp(1567578879, 1), t: 1 }, wallTime: new Date(1567578879286) } 2019-09-04T06:34:47.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, 258B463F0B407805ED60E110CB1ECD17F709D3B4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:47.064+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:47.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:47.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:47.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:47.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:47.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:47.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:47.164+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:47.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:47.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:47.190+0000 D2 COMMAND [conn413] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578879, 1), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5b070f8f28dab2b56d6f'), operName: "", parentOperId: "5d6f5b070f8f28dab2b56d6e" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: 
Timestamp(1567578879, 1), t: 1 } }, $db: "config" } 2019-09-04T06:34:47.190+0000 D1 TRACKING [conn413] Cmd: find, TrackingId: 5d6f5b070f8f28dab2b56d6e|5d6f5b070f8f28dab2b56d6f 2019-09-04T06:34:47.190+0000 D1 COMMAND [conn413] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578879, 1), t: 1 } } } 2019-09-04T06:34:47.190+0000 D3 STORAGE [conn413] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:47.190+0000 D1 COMMAND [conn413] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578879, 1), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5b070f8f28dab2b56d6f'), operName: "", parentOperId: "5d6f5b070f8f28dab2b56d6e" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578879, 1) 2019-09-04T06:34:47.190+0000 D5 QUERY [conn413] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:47.190+0000 D5 QUERY [conn413] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:34:47.190+0000 D5 QUERY [conn413] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:34:47.190+0000 D5 QUERY [conn413] Rated tree: $and 2019-09-04T06:34:47.190+0000 D5 QUERY [conn413] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:47.190+0000 D5 QUERY [conn413] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:47.190+0000 D2 QUERY [conn413] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:47.190+0000 D3 STORAGE [conn413] WT begin_transaction for snapshot id 20996 2019-09-04T06:34:47.190+0000 D3 STORAGE [conn413] WT rollback_transaction for snapshot id 20996 2019-09-04T06:34:47.190+0000 I COMMAND [conn413] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578879, 1), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5b070f8f28dab2b56d6f'), operName: "", parentOperId: "5d6f5b070f8f28dab2b56d6e" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 83C7F4DB49436E1771D96B7211417AA70A8AFEDA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:34:47.203+0000 D2 ASIO [RS] Request 1434 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578887, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567578827:-2504617745590980237" }, wall: new Date(1567578887201), o: { $v: 1, $set: { ping: new Date(1567578887198) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpApplied: { ts: Timestamp(1567578887, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578887, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 1) } 2019-09-04T06:34:47.203+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578887, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567578827:-2504617745590980237" }, wall: new Date(1567578887201), o: { $v: 1, $set: { ping: new Date(1567578887198) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpApplied: { ts: Timestamp(1567578887, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), 
electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578887, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:47.203+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:47.203+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578887, 1) and ending at ts: Timestamp(1567578887, 1) 2019-09-04T06:34:47.203+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:57.919+0000 2019-09-04T06:34:47.203+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:57.684+0000 2019-09-04T06:34:47.203+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:47.203+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578887, 1), t: 1 } 2019-09-04T06:34:47.203+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:47.203+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:47.203+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578879, 1) 2019-09-04T06:34:47.203+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 20999 2019-09-04T06:34:47.203+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:47.203+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:47.203+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 20999 2019-09-04T06:34:47.203+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:47.203+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:34:47.203+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:47.203+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578879, 1) 2019-09-04T06:34:47.203+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21002 2019-09-04T06:34:47.203+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578887, 1) } 2019-09-04T06:34:47.203+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:47.203+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:47.203+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21002 2019-09-04T06:34:47.203+0000 D3 EXECUTOR [replexec-4] Not reaping 
because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:47.203+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 20980 2019-09-04T06:34:47.203+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 20980 2019-09-04T06:34:47.203+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21005 2019-09-04T06:34:47.203+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21005 2019-09-04T06:34:47.203+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:47.203+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 21007 2019-09-04T06:34:47.203+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578887, 1) 2019-09-04T06:34:47.203+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578887, 1) 2019-09-04T06:34:47.203+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 21007 2019-09-04T06:34:47.203+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:47.203+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:47.203+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21006 2019-09-04T06:34:47.203+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21006 2019-09-04T06:34:47.203+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21009 2019-09-04T06:34:47.203+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21009 2019-09-04T06:34:47.203+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578887, 1), t: 1 }({ ts: Timestamp(1567578887, 1), t: 1 }) 2019-09-04T06:34:47.203+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578887, 1) 2019-09-04T06:34:47.203+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21010 2019-09-04T06:34:47.203+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578887, 1) } } ] } sort: {} projection: {} 2019-09-04T06:34:47.203+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:47.203+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:34:47.203+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578887, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:34:47.203+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:47.203+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:47.203+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:47.203+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578887, 1) || First: notFirst: full path: ts 2019-09-04T06:34:47.203+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:34:47.203+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578887, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:47.203+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578887, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578887, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578887, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:47.204+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21010 2019-09-04T06:34:47.204+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:47.204+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:47.204+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578887, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb801.togewa.com:27017:1567578827:-2504617745590980237" }, wall: new Date(1567578887201), o: { $v: 1, $set: { ping: new Date(1567578887198) } } }, oplog application mode: Secondary 2019-09-04T06:34:47.204+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578887, 1) 2019-09-04T06:34:47.204+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 21012 2019-09-04T06:34:47.204+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb801.togewa.com:27017:1567578827:-2504617745590980237" } 2019-09-04T06:34:47.204+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:47.204+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 21012 2019-09-04T06:34:47.204+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:47.204+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578887, 1), t: 1 }({ ts: Timestamp(1567578887, 1), t: 1 }) 2019-09-04T06:34:47.204+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578887, 1) 2019-09-04T06:34:47.204+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21011 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:47.204+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:47.204+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:47.204+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21011 2019-09-04T06:34:47.204+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578887, 1) 2019-09-04T06:34:47.204+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21015 2019-09-04T06:34:47.204+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21015 2019-09-04T06:34:47.204+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578887, 1), t: 1 }({ ts: Timestamp(1567578887, 1), t: 1 }) 2019-09-04T06:34:47.204+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:47.204+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578887, 1), t: 1 }, appliedWallTime: new Date(1567578887201), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:47.204+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1439 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:17.204+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578887, 1), t: 1 }, appliedWallTime: new Date(1567578887201), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:47.204+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.204+0000 2019-09-04T06:34:47.204+0000 D2 ASIO [RS] Request 1439 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578887, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 1) } 2019-09-04T06:34:47.204+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578879, 1), t: 1 }, lastCommittedWall: new Date(1567578879286), lastOpVisible: { ts: Timestamp(1567578879, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 1), $clusterTime: { clusterTime: Timestamp(1567578887, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:47.204+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:47.204+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.204+0000 2019-09-04T06:34:47.205+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578887, 1), t: 1 } 2019-09-04T06:34:47.205+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1440 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:57.205+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578879, 1), t: 1 } } 2019-09-04T06:34:47.205+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.204+0000 2019-09-04T06:34:47.205+0000 D2 ASIO [RS] Request 1440 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 1), t: 1 }, lastCommittedWall: new Date(1567578887201), lastOpVisible: { ts: Timestamp(1567578887, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578887, 1), t: 1 }, lastCommittedWall: new Date(1567578887201), lastOpApplied: { ts: Timestamp(1567578887, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 1), $clusterTime: { clusterTime: Timestamp(1567578887, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 1) } 2019-09-04T06:34:47.205+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 1), t: 1 }, lastCommittedWall: new Date(1567578887201), lastOpVisible: { ts: Timestamp(1567578887, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578887, 1), t: 1 }, lastCommittedWall: new 
Date(1567578887201), lastOpApplied: { ts: Timestamp(1567578887, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 1), $clusterTime: { clusterTime: Timestamp(1567578887, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:47.205+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:47.205+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:47.205+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.205+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.205+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578882, 1) 2019-09-04T06:34:47.205+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:57.684+0000 2019-09-04T06:34:47.205+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:47.205+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:57.686+0000 2019-09-04T06:34:47.205+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.205+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000 2019-09-04T06:34:47.205+0000 D3 REPL [conn452] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.205+0000 D3 REPL [conn452] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.763+0000 2019-09-04T06:34:47.205+0000 D3 REPL [conn438] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.205+0000 D3 REPL [conn438] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.660+0000 2019-09-04T06:34:47.205+0000 D3 REPL [conn444] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.205+0000 D3 REPL [conn444] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:59.943+0000 2019-09-04T06:34:47.205+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.205+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000 2019-09-04T06:34:47.205+0000 D3 REPL [conn424] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.205+0000 D3 REPL [conn424] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.437+0000 2019-09-04T06:34:47.205+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:47.206+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn442] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn442] waitUntilOpTime: waiting for a new 
snapshot until 2019-09-04T06:34:51.133+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn450] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn450] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.261+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn420] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn420] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.423+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn443] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn443] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.645+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn454] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn454] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.925+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn430] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn430] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.662+0000 2019-09-04T06:34:47.206+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:47.206+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 0, cfgver: 2 }, { 
durableOpTime: { ts: Timestamp(1567578887, 1), t: 1 }, durableWallTime: new Date(1567578887201), appliedOpTime: { ts: Timestamp(1567578887, 1), t: 1 }, appliedWallTime: new Date(1567578887201), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 1), t: 1 }, lastCommittedWall: new Date(1567578887201), lastOpVisible: { ts: Timestamp(1567578887, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:47.206+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1441 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:17.206+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578887, 1), t: 1 }, durableWallTime: new Date(1567578887201), appliedOpTime: { ts: Timestamp(1567578887, 1), t: 1 }, appliedWallTime: new Date(1567578887201), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 1), t: 1 }, lastCommittedWall: new Date(1567578887201), lastOpVisible: { ts: Timestamp(1567578887, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:47.206+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.206+0000 2019-09-04T06:34:47.206+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1442 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:57.206+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578887, 1), t: 1 } } 2019-09-04T06:34:47.206+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.206+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn440] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn440] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.538+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn410] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn410] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn455] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn455] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.962+0000 2019-09-04T06:34:47.206+0000 D3 REPL [conn453] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000 
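The replSetUpdatePosition payloads above carry one optimes entry per replica-set member, and the wall-clock fields make apply lag directly readable: members 0 and 2 are still at appliedWallTime 1567578879286 while member 1 (this node) has advanced to 1567578887201, about 7.9 seconds ahead. A minimal sketch of that arithmetic, assuming only the field layout shown in this log; the optimes values are hand-copied from the entry above, and report_lag is an illustrative helper, not anything in mongod:

from datetime import datetime, timezone

# One entry per member, copied from the replSetUpdatePosition payload logged
# above; Date(...) values in the log are milliseconds since the Unix epoch.
optimes = [
    {"memberId": 0, "appliedWallTime": 1567578879286},
    {"memberId": 1, "appliedWallTime": 1567578887201},
    {"memberId": 2, "appliedWallTime": 1567578879286},
]

def report_lag(optimes):
    """Print each member's apply lag relative to the most advanced member."""
    newest = max(m["appliedWallTime"] for m in optimes)
    for m in optimes:
        lag_s = (newest - m["appliedWallTime"]) / 1000.0
        applied = datetime.fromtimestamp(m["appliedWallTime"] / 1000.0, tz=timezone.utc)
        print(f"member {m['memberId']}: applied up to {applied.isoformat()}, lag {lag_s:.3f}s")

# Members 0 and 2 trail member 1 by 7.915s, matching the log entry above.
report_lag(optimes)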
2019-09-04T06:34:47.206+0000 D3 REPL [conn453] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.897+0000
2019-09-04T06:34:47.206+0000 D3 REPL [conn434] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000
2019-09-04T06:34:47.206+0000 D3 REPL [conn434] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.445+0000
2019-09-04T06:34:47.206+0000 D3 REPL [conn449] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000
2019-09-04T06:34:47.206+0000 D3 REPL [conn449] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:58.759+0000
2019-09-04T06:34:47.206+0000 D2 ASIO [RS] Request 1441 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 1), t: 1 }, lastCommittedWall: new Date(1567578887201), lastOpVisible: { ts: Timestamp(1567578887, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 1), $clusterTime: { clusterTime: Timestamp(1567578887, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 1) }
2019-09-04T06:34:47.206+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 1), t: 1 }, lastCommittedWall: new Date(1567578887201), lastOpVisible: { ts: Timestamp(1567578887, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 1), $clusterTime: { clusterTime: Timestamp(1567578887, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:47.206+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:47.206+0000 D3 REPL [conn425] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000
2019-09-04T06:34:47.206+0000 D3 REPL [conn425] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.594+0000
2019-09-04T06:34:47.206+0000 D3 REPL [conn447] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000
2019-09-04T06:34:47.206+0000 D3 REPL [conn447] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.054+0000
2019-09-04T06:34:47.206+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.206+0000
2019-09-04T06:34:47.206+0000 D3 REPL [conn426] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000
2019-09-04T06:34:47.206+0000 D3 REPL [conn426] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.517+0000
2019-09-04T06:34:47.206+0000 D3 REPL [conn451] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000
2019-09-04T06:34:47.206+0000 D3 REPL [conn451] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.434+0000
2019-09-04T06:34:47.206+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000
2019-09-04T06:34:47.206+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000
2019-09-04T06:34:47.206+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000
2019-09-04T06:34:47.206+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000
2019-09-04T06:34:47.206+0000 D3 REPL [conn439] Got notified of new snapshot: { ts: Timestamp(1567578887, 1), t: 1 }, 2019-09-04T06:34:47.201+0000
2019-09-04T06:34:47.206+0000 D3 REPL [conn439] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:47.447+0000
2019-09-04T06:34:47.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:47.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:47.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:47.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:47.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:47.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:47.264+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:47.303+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578887, 1)
2019-09-04T06:34:47.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:47.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:47.365+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:47.423+0000 I COMMAND [conn420] Command on database admin timed out waiting for read concern to be satisfied.
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, E539C2FA8BCC8C9A0D94E22CA2ADA62100E7CF8D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:47.423+0000 D1 - [conn420] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:47.423+0000 W - [conn420] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:47.433+0000 I NETWORK [listener] connection accepted from 10.108.2.49:53526 #464 (93 connections now open) 2019-09-04T06:34:47.433+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:47.433+0000 D2 COMMAND [conn464] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:47.433+0000 I NETWORK [conn464] received client metadata from 10.108.2.49:53526 conn464: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:47.433+0000 I COMMAND [conn464] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:47.437+0000 I COMMAND [conn424] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578855, 1), signature: { hash: BinData(0, 1F4340BC981E2B9D0D58976FE30DE1B22C151008), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:47.437+0000 D1 - [conn424] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:47.437+0000 W - [conn424] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:47.440+0000 I - [conn420] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:47.440+0000 D1 COMMAND [conn420] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, E539C2FA8BCC8C9A0D94E22CA2ADA62100E7CF8D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:47.440+0000 D1 - [conn420] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:34:47.440+0000 W - [conn420] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:47.440+0000 I NETWORK [listener] connection accepted from 10.108.2.47:56700 #465 (94 connections now open)
2019-09-04T06:34:47.440+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:34:47.440+0000 D2 COMMAND [conn465] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:34:47.440+0000 I NETWORK [conn465] received client metadata from 10.108.2.47:56700 conn465: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:34:47.440+0000 I COMMAND [conn465] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:34:47.445+0000 I COMMAND [conn410] Command on database admin timed out waiting for read concern to be satisfied.
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578855, 1), signature: { hash: BinData(0, 1F4340BC981E2B9D0D58976FE30DE1B22C151008), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:34:47.445+0000 D1 - [conn410] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:34:47.445+0000 W - [conn410] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:47.445+0000 I COMMAND [conn434] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:34:47.445+0000 D1 - [conn434] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:34:47.445+0000 W - [conn434] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:47.447+0000 I COMMAND [conn439] Command on database admin timed out waiting for read concern to be satisfied.
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578850, 1), signature: { hash: BinData(0, DF7BE7C881CF5FD2AF2EDB3FBEF3BF57179C23BB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:47.447+0000 D1 - [conn439] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:47.447+0000 W - [conn439] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:47.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:47.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:47.457+0000 I - [conn424] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19
ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : 
"45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] 
mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:47.457+0000 D1 COMMAND [conn424] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578855, 1), signature: { hash: BinData(0, 1F4340BC981E2B9D0D58976FE30DE1B22C151008), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:47.457+0000 D1 - [conn424] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:47.457+0000 W - [conn424] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:47.465+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:47.486+0000 I - [conn420] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15
_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : 
"/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:34:47.486+0000 W COMMAND [conn420] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:34:47.486+0000 I COMMAND [conn420] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578849, 1), signature: { hash: BinData(0, E539C2FA8BCC8C9A0D94E22CA2ADA62100E7CF8D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30026ms
2019-09-04T06:34:47.487+0000 D2 NETWORK [conn420] Session from 10.108.2.50:50248 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:34:47.487+0000 I NETWORK [conn420] end connection 10.108.2.50:50248 (93 connections now open)
2019-09-04T06:34:47.506+0000 I - [conn424] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:47.506+0000 W COMMAND [conn424] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:47.506+0000 I COMMAND [conn424] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578855, 1), signature: { hash: BinData(0, 1F4340BC981E2B9D0D58976FE30DE1B22C151008), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:34:47.506+0000 D2 NETWORK [conn424] Session from 10.108.2.64:46748 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:47.507+0000 I NETWORK [conn424] end connection 10.108.2.64:46748 (92 connections now open) 2019-09-04T06:34:47.517+0000 I COMMAND [conn426] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:47.517+0000 D1 - [conn426] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:47.518+0000 W - [conn426] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:47.522+0000 I NETWORK [listener] connection accepted from 10.108.2.53:50858 #466 (93 connections now open) 2019-09-04T06:34:47.522+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:47.522+0000 D2 COMMAND [conn466] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:47.522+0000 I NETWORK [conn466] received client metadata from 10.108.2.53:50858 conn466: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:47.522+0000 I COMMAND [conn466] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:47.523+0000 I - [conn410] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 
0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : 
"357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : 
"740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:47.523+0000 D1 COMMAND [conn410] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578855, 1), signature: { hash: BinData(0, 1F4340BC981E2B9D0D58976FE30DE1B22C151008), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:47.523+0000 D1 - [conn410] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:47.523+0000 W - [conn410] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:47.530+0000 I - [conn434] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 
0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- [stack trace identical to the conn410 backtrace above: same frames, processInfo, and somap] ----- END BACKTRACE -----
2019-09-04T06:34:47.530+0000 D1 COMMAND [conn434] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:47.530+0000 D1 - [conn434] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:34:47.530+0000 W - [conn434] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:47.538+0000 I COMMAND [conn440] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 695EC7A4F3A174023E82B47C0B1F2FFF237676A8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:34:47.538+0000 D1 - [conn440] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:34:47.538+0000 W - [conn440] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:47.547+0000 I - [conn426] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- [stack trace identical to the conn410 backtrace above: same frames, processInfo, and somap] ----- END BACKTRACE -----
1748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
2019-09-04T06:34:47.547+0000 D1 COMMAND [conn426] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:47.547+0000 D1 - [conn426] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:34:47.547+0000 W - [conn426] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:34:47.565+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:47.567+0000 I - [conn434] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- [stack trace identical to the earlier lock-acquisition backtrace above: same frames, processInfo, and somap] ----- END BACKTRACE -----
2019-09-04T06:34:47.567+0000 W COMMAND [conn434] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:34:47.567+0000 I COMMAND [conn434] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30094ms
2019-09-04T06:34:47.567+0000 D2 NETWORK [conn434] Session from 10.108.2.45:36672 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:34:47.567+0000 I NETWORK [conn434] end connection 10.108.2.45:36672 (92 connections now open)
2019-09-04T06:34:47.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:47.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:47.583+0000 I - [conn439] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- [stack trace identical to the conn410 backtrace above: same frames, processInfo, and somap] ----- END BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:47.583+0000 D1 COMMAND [conn439] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578850, 1), signature: { hash: BinData(0, DF7BE7C881CF5FD2AF2EDB3FBEF3BF57179C23BB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:47.583+0000 D1 - [conn439] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:47.583+0000 W - [conn439] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:47.600+0000 I - [conn440] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 
[remaining frames and backtrace identical to the preceding conn439 trace, omitted] 2019-09-04T06:34:47.600+0000 D1 COMMAND [conn440] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 695EC7A4F3A174023E82B47C0B1F2FFF237676A8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:47.600+0000 D1 - [conn440] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:47.600+0000 W - [conn440] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:47.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:47.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907
locks:{} protocol:op_msg 0ms 2019-09-04T06:34:47.613+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52308 #467 (93 connections now open) 2019-09-04T06:34:47.613+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:47.613+0000 D2 COMMAND [conn467] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:47.613+0000 I NETWORK [conn467] received client metadata from 10.108.2.58:52308 conn467: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:47.613+0000 I COMMAND [conn467] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:47.613+0000 D2 COMMAND [conn467] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578883, 1), signature: { hash: BinData(0, 01F6D33FDEDCF76CD13D7205FDE1395C03188BC4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:47.613+0000 D1 REPL [conn467] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578887, 1), t: 1 } 2019-09-04T06:34:47.613+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000 2019-09-04T06:34:47.614+0000 D2 COMMAND [conn441] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 95740D6E9D55C70552F443700C129E7BD46EEBF9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:47.614+0000 D1 REPL [conn441] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578887, 1), t: 1 } 2019-09-04T06:34:47.614+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000 2019-09-04T06:34:47.615+0000 D2 COMMAND [conn448] run command admin.$cmd { find: "system.keys", filter: { 
purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578885, 1), signature: { hash: BinData(0, 3250ABA29FA1BAB080F5BF7C803FB52479895B2B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:34:47.615+0000 D1 REPL [conn448] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578887, 1), t: 1 } 2019-09-04T06:34:47.615+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000 2019-09-04T06:34:47.620+0000 I - [conn439] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9O
wnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : 
"DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) 
[0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:47.620+0000 W COMMAND [conn439] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:34:47.620+0000 I COMMAND [conn439] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578850, 1), signature: { hash: BinData(0, DF7BE7C881CF5FD2AF2EDB3FBEF3BF57179C23BB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30145ms 2019-09-04T06:34:47.620+0000 D2 NETWORK [conn439] Session from 10.108.2.49:53512 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:47.620+0000 I NETWORK [conn439] end connection 10.108.2.49:53512 (92 connections now open) 2019-09-04T06:34:47.640+0000 I - [conn440] [backtrace identical to the conn439 trace immediately above, omitted] 2019-09-04T06:34:47.640+0000 W COMMAND [conn440] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:34:47.640+0000 I COMMAND [conn440] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 695EC7A4F3A174023E82B47C0B1F2FFF237676A8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30071ms 2019-09-04T06:34:47.640+0000 D2 NETWORK [conn440] Session from 10.108.2.53:50844 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:47.640+0000 I NETWORK [conn440] end connection 10.108.2.53:50844 (91 connections now open) 2019-09-04T06:34:47.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:47.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:47.665+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:47.671+0000 I - [conn410] [backtrace identical to the conn439 trace above, omitted] 2019-09-04T06:34:47.671+0000 W COMMAND [conn410] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:34:47.671+0000 I COMMAND [conn410] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578855, 1), signature: { hash: BinData(0, 1F4340BC981E2B9D0D58976FE30DE1B22C151008), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30088ms 2019-09-04T06:34:47.671+0000 D2 NETWORK [conn410] Session from 10.108.2.47:56660 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:47.671+0000 I NETWORK [conn410] end connection 10.108.2.47:56660 (90 connections now open) 2019-09-04T06:34:47.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:47.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:47.680+0000 I - [conn426] [backtrace identical to the conn439 trace above, omitted] 2019-09-04T06:34:47.680+0000 W COMMAND [conn426] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:34:47.680+0000 I COMMAND [conn426] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578848, 1), signature: { hash: BinData(0, 075D9F54C2C306A6D74FB1440B554A0345006370), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30039ms 2019-09-04T06:34:47.680+0000 D2 NETWORK [conn426] Session from 10.108.2.44:38820 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:47.680+0000 I NETWORK [conn426] end connection 10.108.2.44:38820 (89 connections now open) 2019-09-04T06:34:47.682+0000 D2 ASIO [RS] Request 1442 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578887, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578887681), o: { $v: 1, $set: { ping: new Date(1567578887680) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 1), t: 1 }, lastCommittedWall: new Date(1567578887201), lastOpVisible: { ts: Timestamp(1567578887, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578887, 1), t: 1 }, lastCommittedWall: new Date(1567578887201), lastOpApplied: { ts: Timestamp(1567578887, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 1), $clusterTime: { clusterTime: Timestamp(1567578887, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 2) } 2019-09-04T06:34:47.682+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578887, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578887681), o: { $v: 1, $set: { ping: new Date(1567578887680) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 1), t: 1 }, lastCommittedWall: new Date(1567578887201), lastOpVisible: { ts: Timestamp(1567578887, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578887, 1), t: 1 }, lastCommittedWall: new Date(1567578887201), lastOpApplied: { ts: Timestamp(1567578887, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 1), $clusterTime: { clusterTime: Timestamp(1567578887, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:47.682+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:47.682+0000 D2 REPL 
[replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578887, 2) and ending at ts: Timestamp(1567578887, 2) 2019-09-04T06:34:47.682+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:57.686+0000 2019-09-04T06:34:47.682+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:59.170+0000 2019-09-04T06:34:47.682+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:47.682+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578887, 2), t: 1 } 2019-09-04T06:34:47.682+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:47.682+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:47.682+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578887, 1) 2019-09-04T06:34:47.682+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21036 2019-09-04T06:34:47.682+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:47.682+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:47.682+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21036 2019-09-04T06:34:47.682+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:34:47.682+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:47.682+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:47.682+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578887, 2) } 2019-09-04T06:34:47.682+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:47.682+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578887, 1) 2019-09-04T06:34:47.683+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21039 2019-09-04T06:34:47.683+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:47.683+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:47.683+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21039 2019-09-04T06:34:47.683+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21017 2019-09-04T06:34:47.683+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21017 2019-09-04T06:34:47.683+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21042 2019-09-04T06:34:47.683+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21042 
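The backtrace and the two conn426 entries above record a single incident: while writing the slow-query log line for a find on admin.system.keys, CurOp::completeAndLogOperation tried to take the global lock to gather storage statistics, LockerImpl::lock gave up at its deadline, and the resulting MaxTimeMSExpired uassert (code 50, visible both in the ExceptionForImpl template parameter and in errCode:50) was traced before the command returned "operation exceeded time limit" after 30039ms. A minimal shell sketch for re-checking the same key lookup and for spotting lock starvation; it assumes a mongo shell already connected to this config server and copies the filter, sort, and 30000ms budget from the logged command:

    var admin = db.getSiblingDB("admin");
    admin.system.keys
        .find({ purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } })
        .sort({ expiresAt: 1 })
        .maxTimeMS(30000)   // same time budget as the logged find
        .toArray();
    // If this stalls as well, list operations still waiting on locks:
    db.currentOp({ waitingForLock: true });
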
2019-09-04T06:34:47.683+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:47.683+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 21044 2019-09-04T06:34:47.683+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578887, 2) 2019-09-04T06:34:47.683+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578887, 2) 2019-09-04T06:34:47.683+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 21044 2019-09-04T06:34:47.683+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:47.683+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:47.683+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21043 2019-09-04T06:34:47.683+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21043 2019-09-04T06:34:47.683+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21046 2019-09-04T06:34:47.683+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21046 2019-09-04T06:34:47.683+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578887, 2), t: 1 }({ ts: Timestamp(1567578887, 2), t: 1 }) 2019-09-04T06:34:47.683+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578887, 2) 2019-09-04T06:34:47.683+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21047 2019-09-04T06:34:47.683+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578887, 2) } } ] } sort: {} projection: {} 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578887, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578887, 2) || First: notFirst: full path: ts 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578887, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578887, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578887, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
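Both subplanned branches above settle on COLLSCAN for the same reason: local.replset.minvalid carries only the default _id index, so neither { t: 1, ts: { $lt: ... } } nor { t: { $lt: 1 } } can be answered from an index, and the combined $or (rated just above with 0 indexed solutions) falls back to the collection scan emitted in the next entry. Since minvalid holds a single document, the scan is harmless. A sketch for confirming this interactively, reusing the subplanned predicate (assumes a shell on this node):

    db.getSiblingDB("local").replset.minvalid
        .find({ $or: [ { t: { $lt: 1 } },
                       { t: 1, ts: { $lt: Timestamp(1567578887, 2) } } ] })
        .explain("queryPlanner");   // winningPlan should report COLLSCAN
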
2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578887, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:47.683+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21047 2019-09-04T06:34:47.683+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:47.683+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:47.683+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578887, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578887681), o: { $v: 1, $set: { ping: new Date(1567578887680) } } }, oplog application mode: Secondary 2019-09-04T06:34:47.683+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578887, 2) 2019-09-04T06:34:47.683+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 21049 2019-09-04T06:34:47.683+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" } 2019-09-04T06:34:47.683+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:47.683+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 21049 2019-09-04T06:34:47.683+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:47.683+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578887, 2), t: 1 }({ ts: Timestamp(1567578887, 2), t: 1 }) 2019-09-04T06:34:47.683+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578887, 2) 2019-09-04T06:34:47.683+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21048 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:47.683+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:47.683+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:47.683+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21048 2019-09-04T06:34:47.683+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578887, 2) 2019-09-04T06:34:47.683+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21052 2019-09-04T06:34:47.683+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21052 2019-09-04T06:34:47.683+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578887, 2), t: 1 }({ ts: Timestamp(1567578887, 2), t: 1 }) 2019-09-04T06:34:47.684+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:47.684+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578887, 1), t: 1 }, durableWallTime: new Date(1567578887201), appliedOpTime: { ts: Timestamp(1567578887, 2), t: 1 }, appliedWallTime: new Date(1567578887681), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 1), t: 1 }, lastCommittedWall: new Date(1567578887201), lastOpVisible: { ts: Timestamp(1567578887, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:47.684+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1443 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:17.684+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578887, 1), t: 1 }, durableWallTime: new Date(1567578887201), appliedOpTime: { ts: Timestamp(1567578887, 2), t: 1 }, appliedWallTime: new Date(1567578887681), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 1), t: 1 }, lastCommittedWall: new Date(1567578887201), lastOpVisible: { ts: Timestamp(1567578887, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:47.684+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.683+0000 2019-09-04T06:34:47.684+0000 D2 ASIO [RS] Request 1443 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 1), t: 1 }, lastCommittedWall: new Date(1567578887201), lastOpVisible: { ts: Timestamp(1567578887, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 1), $clusterTime: { clusterTime: Timestamp(1567578887, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 2) } 2019-09-04T06:34:47.684+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 1), t: 1 }, lastCommittedWall: new Date(1567578887201), lastOpVisible: { ts: Timestamp(1567578887, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 1), $clusterTime: { clusterTime: Timestamp(1567578887, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:47.684+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:47.684+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.684+0000 2019-09-04T06:34:47.684+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578887, 2), t: 1 } 2019-09-04T06:34:47.684+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1444 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:57.684+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578887, 1), t: 1 } } 2019-09-04T06:34:47.684+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.684+0000 2019-09-04T06:34:47.685+0000 D2 ASIO [RS] Request 1444 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 2), t: 1 }, lastCommittedWall: new Date(1567578887681), lastOpVisible: { ts: Timestamp(1567578887, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578887, 2), t: 1 }, lastCommittedWall: new Date(1567578887681), lastOpApplied: { ts: Timestamp(1567578887, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 2), $clusterTime: { clusterTime: Timestamp(1567578887, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 2) } 2019-09-04T06:34:47.685+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 2), t: 1 }, lastCommittedWall: new Date(1567578887681), lastOpVisible: { ts: Timestamp(1567578887, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578887, 2), t: 1 }, lastCommittedWall: new 
Date(1567578887681), lastOpApplied: { ts: Timestamp(1567578887, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 2), $clusterTime: { clusterTime: Timestamp(1567578887, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:47.685+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:47.685+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:47.685+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.685+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.685+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578882, 2) 2019-09-04T06:34:47.685+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:59.170+0000 2019-09-04T06:34:47.685+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:58.635+0000 2019-09-04T06:34:47.685+0000 D3 REPL [conn452] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.685+0000 D3 REPL [conn452] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.763+0000 2019-09-04T06:34:47.685+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1445 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:57.685+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578887, 2), t: 1 } } 2019-09-04T06:34:47.685+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.684+0000 2019-09-04T06:34:47.685+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:47.685+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:47.685+0000 D3 REPL [conn442] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.685+0000 D3 REPL [conn442] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.133+0000 2019-09-04T06:34:47.685+0000 D3 REPL [conn450] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.685+0000 D3 REPL [conn450] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.261+0000 2019-09-04T06:34:47.685+0000 D3 REPL [conn443] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.685+0000 D3 REPL [conn443] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.645+0000 2019-09-04T06:34:47.685+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.685+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000 2019-09-04T06:34:47.685+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: 
Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.685+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000 2019-09-04T06:34:47.685+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.685+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000 2019-09-04T06:34:47.685+0000 D3 REPL [conn455] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.685+0000 D3 REPL [conn455] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.962+0000 2019-09-04T06:34:47.685+0000 D3 REPL [conn425] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn425] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.594+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn438] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn438] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.660+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn444] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn444] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:59.943+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn454] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn454] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.925+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: 
Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn430] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn430] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.662+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn453] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn453] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.897+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn449] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn449] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:58.759+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn447] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn447] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.054+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn451] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn451] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.434+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578887, 2), t: 1 }, 2019-09-04T06:34:47.681+0000 2019-09-04T06:34:47.686+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000 2019-09-04T06:34:47.687+0000 D2 COMMAND [conn411] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:47.687+0000 I COMMAND [conn411] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:47.691+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:47.691+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:47.691+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578887, 2), t: 1 }, durableWallTime: new Date(1567578887681), appliedOpTime: { ts: Timestamp(1567578887, 2), t: 1 }, appliedWallTime: new Date(1567578887681), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), 
appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 2), t: 1 }, lastCommittedWall: new Date(1567578887681), lastOpVisible: { ts: Timestamp(1567578887, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:47.691+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1446 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:17.691+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578887, 2), t: 1 }, durableWallTime: new Date(1567578887681), appliedOpTime: { ts: Timestamp(1567578887, 2), t: 1 }, appliedWallTime: new Date(1567578887681), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 2), t: 1 }, lastCommittedWall: new Date(1567578887681), lastOpVisible: { ts: Timestamp(1567578887, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:47.691+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.684+0000 2019-09-04T06:34:47.691+0000 D2 ASIO [RS] Request 1446 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 2), t: 1 }, lastCommittedWall: new Date(1567578887681), lastOpVisible: { ts: Timestamp(1567578887, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 2), $clusterTime: { clusterTime: Timestamp(1567578887, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 2) } 2019-09-04T06:34:47.691+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 2), t: 1 }, lastCommittedWall: new Date(1567578887681), lastOpVisible: { ts: Timestamp(1567578887, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 2), $clusterTime: { clusterTime: Timestamp(1567578887, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:47.691+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:47.691+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.684+0000 2019-09-04T06:34:47.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 
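In the replSetUpdatePosition payloads above, memberId 1 advances its durable optime from Timestamp(1567578887, 1) to Timestamp(1567578887, 2) right after the ApplyBatchFinalizerForJournal "flushed journal" entry, while members 0 and 2 still report Timestamp(1567578879, 1). The same per-member optimes can be read without parsing the log; a sketch, assuming a shell connected to any configrs member:

    rs.status().members.map(function (m) {
        // optimeDurable is reported for remote members and may be absent
        // on the entry describing the member you are connected to.
        return { id: m._id, state: m.stateStr,
                 applied: m.optime, durable: m.optimeDurable };
    });
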
2019-09-04T06:34:47.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:47.718+0000 D2 COMMAND [conn466] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:34:47.718+0000 D1 REPL [conn466] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578887, 2), t: 1 } 2019-09-04T06:34:47.718+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000 2019-09-04T06:34:47.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:47.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:47.765+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:47.782+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578887, 2) 2019-09-04T06:34:47.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:47.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:47.865+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:47.943+0000 D2 ASIO [RS] Request 1445 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578887, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578887940), o: { $v: 1, $set: { ping: new Date(1567578887933) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 2), t: 1 }, lastCommittedWall: new Date(1567578887681), lastOpVisible: { ts: Timestamp(1567578887, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578887, 2), t: 1 }, lastCommittedWall: new Date(1567578887681), lastOpApplied: { ts: Timestamp(1567578887, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 2), $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 3) } 2019-09-04T06:34:47.943+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578887, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new 
Date(1567578887940), o: { $v: 1, $set: { ping: new Date(1567578887933) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 2), t: 1 }, lastCommittedWall: new Date(1567578887681), lastOpVisible: { ts: Timestamp(1567578887, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578887, 2), t: 1 }, lastCommittedWall: new Date(1567578887681), lastOpApplied: { ts: Timestamp(1567578887, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 2), $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:47.943+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:47.943+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578887, 3) and ending at ts: Timestamp(1567578887, 3) 2019-09-04T06:34:47.943+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:58.635+0000 2019-09-04T06:34:47.943+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:59.039+0000 2019-09-04T06:34:47.943+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:47.943+0000 D2 REPL [replication-1] oplog buffer has 0 bytes 2019-09-04T06:34:47.943+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:47.943+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578887, 3), t: 1 } 2019-09-04T06:34:47.943+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:47.943+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:47.943+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578887, 2) 2019-09-04T06:34:47.943+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21061 2019-09-04T06:34:47.943+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:47.943+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:47.943+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21061 2019-09-04T06:34:47.943+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:47.943+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:34:47.943+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:47.943+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578887, 3) } 2019-09-04T06:34:47.943+0000 D3 STORAGE 
[ReplBatcher] begin_transaction on local snapshot Timestamp(1567578887, 2) 2019-09-04T06:34:47.943+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21064 2019-09-04T06:34:47.943+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:47.943+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:47.943+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21054 2019-09-04T06:34:47.943+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21064 2019-09-04T06:34:47.943+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21054 2019-09-04T06:34:47.943+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21067 2019-09-04T06:34:47.943+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21067 2019-09-04T06:34:47.943+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:47.943+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 21069 2019-09-04T06:34:47.943+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578887, 3) 2019-09-04T06:34:47.943+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578887, 3) 2019-09-04T06:34:47.943+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 21069 2019-09-04T06:34:47.943+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:47.943+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:47.943+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21068 2019-09-04T06:34:47.943+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21068 2019-09-04T06:34:47.943+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21071 2019-09-04T06:34:47.943+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21071 2019-09-04T06:34:47.943+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578887, 3), t: 1 }({ ts: Timestamp(1567578887, 3), t: 1 }) 2019-09-04T06:34:47.944+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578887, 3) 2019-09-04T06:34:47.944+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21072 2019-09-04T06:34:47.944+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578887, 3) } } ] } sort: {} projection: {} 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578887, 3) Sort: {} Proj: {} ============================= 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578887, 3) || First: notFirst: full path: ts 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578887, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578887, 3) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578887, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
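This planner walk is a near-verbatim repeat of the previous one (only the ts bound moved to Timestamp(1567578887, 3)): minvalid is re-planned for every applied batch, and at debug level 5 (the D5 prefix) each one-document batch costs dozens of QUERY and STORAGE lines, ending in the same collscan fallback below. Once the trace has served its purpose, the query chatter can be reduced at runtime without a restart; a sketch, assuming sufficient privileges on this node:

    db.adminCommand({
        setParameter: 1,
        // quiet only the query planner output; other components keep
        // their configured verbosity
        logComponentVerbosity: { query: { verbosity: 0 } }
    });
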
2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578887, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:47.944+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21072 2019-09-04T06:34:47.944+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:47.944+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:47.944+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578887, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578887940), o: { $v: 1, $set: { ping: new Date(1567578887933) } } }, oplog application mode: Secondary 2019-09-04T06:34:47.944+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578887, 3) 2019-09-04T06:34:47.944+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 21074 2019-09-04T06:34:47.944+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" } 2019-09-04T06:34:47.944+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:34:47.944+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 21074 2019-09-04T06:34:47.944+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:47.944+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578887, 3), t: 1 }({ ts: Timestamp(1567578887, 3), t: 1 }) 2019-09-04T06:34:47.944+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578887, 3) 2019-09-04T06:34:47.944+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21073 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:47.944+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:47.944+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:34:47.944+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21073
2019-09-04T06:34:47.944+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578887, 3)
2019-09-04T06:34:47.944+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21077
2019-09-04T06:34:47.944+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21077
2019-09-04T06:34:47.944+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578887, 3), t: 1 }({ ts: Timestamp(1567578887, 3), t: 1 })
2019-09-04T06:34:47.944+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:47.944+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578887, 2), t: 1 }, durableWallTime: new Date(1567578887681), appliedOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, appliedWallTime: new Date(1567578887940), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 2), t: 1 }, lastCommittedWall: new Date(1567578887681), lastOpVisible: { ts: Timestamp(1567578887, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:47.944+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1447 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:17.944+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578887, 2), t: 1 }, durableWallTime: new Date(1567578887681), appliedOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, appliedWallTime: new Date(1567578887940), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 2), t: 1 }, lastCommittedWall: new Date(1567578887681), lastOpVisible: { ts: Timestamp(1567578887, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:47.944+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.944+0000
2019-09-04T06:34:47.945+0000 D2 ASIO [RS] Request 1447 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 3) }
2019-09-04T06:34:47.945+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:47.945+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:47.945+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.945+0000
2019-09-04T06:34:47.945+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578887, 3), t: 1 }
2019-09-04T06:34:47.945+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1448 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:57.945+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578887, 2), t: 1 } }
2019-09-04T06:34:47.945+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.945+0000
2019-09-04T06:34:47.945+0000 D2 ASIO [RS] Request 1448 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpApplied: { ts: Timestamp(1567578887, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 3) }
2019-09-04T06:34:47.945+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpApplied: { ts: Timestamp(1567578887, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:47.945+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:47.945+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:34:47.945+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.945+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.945+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578882, 3)
2019-09-04T06:34:47.945+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:59.039+0000
2019-09-04T06:34:47.946+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:59.060+0000
2019-09-04T06:34:47.946+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:47.946+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1449 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:57.946+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578887, 3), t: 1 } }
2019-09-04T06:34:47.946+0000 D3 REPL [conn450] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn450] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.261+0000
2019-09-04T06:34:47.946+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.945+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn442] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn442] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.133+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000
2019-09-04T06:34:47.946+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn451] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn451] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.434+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn443] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn443] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.645+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn455] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn455] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.962+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn447] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn447] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.054+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn452] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn452] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.763+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn454] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn454] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.925+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn425] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn425] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.594+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn430] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn430] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.662+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn449] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn449] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:58.759+0000
2019-09-04T06:34:47.946+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:34:47.946+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000
2019-09-04T06:34:47.946+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:47.946+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn438] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn438] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.660+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn453] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn453] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.897+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn444] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn444] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:59.943+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578887, 3), t: 1 }, 2019-09-04T06:34:47.940+0000
2019-09-04T06:34:47.946+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000
2019-09-04T06:34:47.946+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), appliedOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, appliedWallTime: new Date(1567578887940), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:47.946+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1450 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:17.946+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), appliedOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, appliedWallTime: new Date(1567578887940), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, durableWallTime: new Date(1567578879286), appliedOpTime: { ts: Timestamp(1567578879, 1), t: 1 }, appliedWallTime: new Date(1567578879286), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:47.946+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.945+0000
2019-09-04T06:34:47.946+0000 D2 ASIO [RS] Request 1450 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 3) }
2019-09-04T06:34:47.946+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:47.946+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:47.946+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:17.945+0000
2019-09-04T06:34:47.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
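Entries like the ones above are easiest to work with once split back onto one line each. A minimal parsing sketch for this v4.2 plain-text layout (timestamp, severity, component, context in brackets, message); the helper name parse_line is ours, not from any tool:

import re

# <ISO-8601 timestamp> <severity> <component> [<context>] <message>
# Severity is I/W/E/F or D1-D5; the component field may be a bare "-".
LOG_RE = re.compile(
    r'^(?P<ts>\S+) '
    r'(?P<severity>[IWEF]|D[1-5]) '
    r'(?P<component>\S+) +'
    r'\[(?P<context>[^\]]+)\] '
    r'(?P<message>.*)$'
)

def parse_line(line):
    """Return a dict for one log entry, or None for a wrapped fragment."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

# parse_line('2019-09-04T06:34:47.944+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21073')
# -> {'ts': '2019-09-04T06:34:47.944+0000', 'severity': 'D3', 'component': 'STORAGE',
#     'context': 'rsSync-0', 'message': 'WT commit_transaction for snapshot id 21073'}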
2019-09-04T06:34:47.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:47.965+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:48.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:48.043+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578887, 3)
2019-09-04T06:34:48.065+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:48.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:48.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:48.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:48.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.165+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:48.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:48.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:48.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 57FC75A52086D895956790C3442619CF4220369B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:48.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:34:48.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 57FC75A52086D895956790C3442619CF4220369B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:48.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 57FC75A52086D895956790C3442619CF4220369B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:48.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), opTime: { ts: Timestamp(1567578887, 3), t: 1 }, wallTime: new Date(1567578887940) }
2019-09-04T06:34:48.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 57FC75A52086D895956790C3442619CF4220369B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:48.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:48.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:48.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.265+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:48.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:48.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.366+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:48.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:48.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.466+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:48.521+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:34:48.521+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:34:48.521+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:34:48.521+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:34:48.566+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:48.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:48.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:48.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:48.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.666+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:48.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:48.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:48.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:48.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:48.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.766+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:48.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:48.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:34:48.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1451) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:48.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1451 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:34:58.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:48.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:34:48.839+0000 D2 ASIO [Replication] Request 1451 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), opTime: { ts: Timestamp(1567578887, 3), t: 1 }, wallTime: new Date(1567578887940), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 3) }
2019-09-04T06:34:48.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), opTime: { ts: Timestamp(1567578887, 3), t: 1 }, wallTime: new Date(1567578887940), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 3) } target: cmodb802.togewa.com:27019
2019-09-04T06:34:48.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:48.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1451) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), opTime: { ts: Timestamp(1567578887, 3), t: 1 }, wallTime: new Date(1567578887940), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 3) }
2019-09-04T06:34:48.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:34:48.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:34:59.060+0000
2019-09-04T06:34:48.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:34:59.270+0000
2019-09-04T06:34:48.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:48.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:34:48.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:34:48.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:50.839Z
2019-09-04T06:34:48.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:34:48.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:34:48.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1452) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:48.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1452 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:34:58.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:48.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:34:48.840+0000 D2 ASIO [Replication] Request 1452 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), opTime: { ts: Timestamp(1567578887, 3), t: 1 }, wallTime: new Date(1567578887940), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 3) }
2019-09-04T06:34:48.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), opTime: { ts: Timestamp(1567578887, 3), t: 1 }, wallTime: new Date(1567578887940), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:48.840+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:48.840+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1452) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), opTime: { ts: Timestamp(1567578887, 3), t: 1 }, wallTime: new Date(1567578887940), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 3) }
2019-09-04T06:34:48.840+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:34:48.840+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:50.840Z
2019-09-04T06:34:48.840+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:34:48.866+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:48.943+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
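The replSetHeartbeat round-trips above carry each member's durable/applied optimes and drive the election-timeout rescheduling. The same per-member state can be read back with replSetGetStatus; a minimal sketch, assuming the node is reachable without authentication as in this deployment:

from pymongo import MongoClient

# Host/port taken from the log; direct connection to one configrs member.
client = MongoClient('cmodb803.togewa.com', 27019)
status = client.admin.command('replSetGetStatus')

for member in status['members']:
    # 'optime' corresponds to the opTime/durableOpTime pairs exchanged in the
    # heartbeat responses logged above.
    print(member['name'], member['stateStr'], member.get('optime'))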
2019-09-04T06:34:48.943+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:48.943+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578887, 3)
2019-09-04T06:34:48.943+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21102
2019-09-04T06:34:48.943+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:48.943+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:48.943+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21102
2019-09-04T06:34:48.944+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21105
2019-09-04T06:34:48.944+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21105
2019-09-04T06:34:48.944+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578887, 3), t: 1 }({ ts: Timestamp(1567578887, 3), t: 1 })
2019-09-04T06:34:48.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:48.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:48.966+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:49.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:49.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 57FC75A52086D895956790C3442619CF4220369B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:49.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:34:49.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 57FC75A52086D895956790C3442619CF4220369B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:49.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 57FC75A52086D895956790C3442619CF4220369B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:49.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), opTime: { ts: Timestamp(1567578887, 3), t: 1 }, wallTime: new Date(1567578887940) }
2019-09-04T06:34:49.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 57FC75A52086D895956790C3442619CF4220369B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:49.066+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:49.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:49.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:49.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:49.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:49.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:49.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:49.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:49.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:49.166+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:49.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:49.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:49.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:49.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:49.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:49.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:49.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:49.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:49.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:49.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:49.266+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:49.310+0000 D2 ASIO [RS] Request 1449 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578889, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578889304), o: { $v: 1, $set: { ping: new Date(1567578889301), up: 60 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpApplied: { ts: Timestamp(1567578889, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578889, 1) }
2019-09-04T06:34:49.310+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578889, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578889304), o: { $v: 1, $set: { ping: new Date(1567578889301), up: 60 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpApplied: { ts: Timestamp(1567578889, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578889, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:49.310+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:49.310+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578889, 1) and ending at ts: Timestamp(1567578889, 1)
2019-09-04T06:34:49.310+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:34:59.270+0000
2019-09-04T06:34:49.310+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:34:59.899+0000
2019-09-04T06:34:49.310+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:49.310+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:34:49.310+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578889, 1), t: 1 }
2019-09-04T06:34:49.310+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:49.310+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:49.310+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578887, 3)
2019-09-04T06:34:49.310+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21120
2019-09-04T06:34:49.310+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:49.310+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:49.310+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21120
2019-09-04T06:34:49.310+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:49.310+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:34:49.310+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:49.310+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578887, 3)
2019-09-04T06:34:49.310+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578889, 1) }
2019-09-04T06:34:49.310+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21123
2019-09-04T06:34:49.310+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:49.310+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:49.310+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21123
2019-09-04T06:34:49.310+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21106
2019-09-04T06:34:49.310+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21106
2019-09-04T06:34:49.310+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21126
2019-09-04T06:34:49.310+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21126
2019-09-04T06:34:49.310+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:49.310+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 21128
2019-09-04T06:34:49.310+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578889, 1)
2019-09-04T06:34:49.310+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578889, 1)
2019-09-04T06:34:49.310+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 21128
2019-09-04T06:34:49.310+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:49.310+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:34:49.310+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21127
2019-09-04T06:34:49.310+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21127
2019-09-04T06:34:49.310+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21130
2019-09-04T06:34:49.310+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21130
2019-09-04T06:34:49.310+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578889, 1), t: 1 }({ ts: Timestamp(1567578889, 1), t: 1 })
2019-09-04T06:34:49.310+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578889, 1)
2019-09-04T06:34:49.310+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21131
2019-09-04T06:34:49.311+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578889, 1) } } ] } sort: {} projection: {}
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578889, 1) Sort: {} Proj: {} =============================
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578889, 1) || First: notFirst: full path: ts
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578889, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578889, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578889, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
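The D5 QUERY dumps above show the subplanner handling each $or branch of the minvalid predicate: only the _id index exists on local.replset.minvalid, neither branch's t/ts predicates can use it, so every branch falls through to a COLLSCAN. A sketch of the same plan selection via explain(), assuming a reachable, auth-free node as above; the timestamp value is copied from the log:

from bson.timestamp import Timestamp
from pymongo import MongoClient

client = MongoClient('cmodb803.togewa.com', 27019)

# The $or predicate being planned above; with only { _id: 1 } indexed,
# the planner can only output a collection scan.
query = {'$or': [
    {'t': {'$lt': 1}},
    {'t': 1, 'ts': {'$lt': Timestamp(1567578889, 1)}},
]}
plan = client.local['replset.minvalid'].find(query).explain()
print(plan['queryPlanner']['winningPlan'])  # expect a COLLSCAN stage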
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578889, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:49.311+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21131
2019-09-04T06:34:49.311+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:49.311+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:49.311+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578889, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.mongos", ui: UUID("1734bd4e-af6d-441a-8751-93e269784617"), o2: { _id: "cmodb801.togewa.com:27017" }, wall: new Date(1567578889304), o: { $v: 1, $set: { ping: new Date(1567578889301), up: 60 } } }, oplog application mode: Secondary
2019-09-04T06:34:49.311+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578889, 1)
2019-09-04T06:34:49.311+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 21133
2019-09-04T06:34:49.311+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb801.togewa.com:27017" }
2019-09-04T06:34:49.311+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:34:49.311+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 21133
2019-09-04T06:34:49.311+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:49.311+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578889, 1), t: 1 }({ ts: Timestamp(1567578889, 1), t: 1 })
2019-09-04T06:34:49.311+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578889, 1)
2019-09-04T06:34:49.311+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21132
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:49.311+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:34:49.311+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:34:49.311+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21132
2019-09-04T06:34:49.311+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578889, 1)
2019-09-04T06:34:49.311+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21136
2019-09-04T06:34:49.311+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21136
2019-09-04T06:34:49.311+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578889, 1), t: 1 }({ ts: Timestamp(1567578889, 1), t: 1 })
2019-09-04T06:34:49.311+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:49.311+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), appliedOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, appliedWallTime: new Date(1567578887940), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), appliedOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, appliedWallTime: new Date(1567578889304), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), appliedOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, appliedWallTime: new Date(1567578887940), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:49.311+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1453 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:19.311+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), appliedOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, appliedWallTime: new Date(1567578887940), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), appliedOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, appliedWallTime: new Date(1567578889304), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), appliedOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, appliedWallTime: new Date(1567578887940), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:49.311+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:19.311+0000
2019-09-04T06:34:49.312+0000 D2 ASIO [RS] Request 1453 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578889, 1) }
2019-09-04T06:34:49.312+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578889, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:49.312+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:49.312+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:19.312+0000
2019-09-04T06:34:49.312+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578889, 1), t: 1 }
2019-09-04T06:34:49.312+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1454 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:59.312+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578887, 3), t: 1 } }
2019-09-04T06:34:49.312+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:19.312+0000
2019-09-04T06:34:49.319+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:34:49.319+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:34:49.319+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), appliedOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, appliedWallTime: new Date(1567578887940), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), appliedOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, appliedWallTime: new Date(1567578889304), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), appliedOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, appliedWallTime: new Date(1567578887940), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:34:49.319+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1455 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:19.319+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: {
ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), appliedOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, appliedWallTime: new Date(1567578887940), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), appliedOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, appliedWallTime: new Date(1567578889304), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, durableWallTime: new Date(1567578887940), appliedOpTime: { ts: Timestamp(1567578887, 3), t: 1 }, appliedWallTime: new Date(1567578887940), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:49.319+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:19.312+0000 2019-09-04T06:34:49.319+0000 D2 ASIO [RS] Request 1455 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578889, 1) } 2019-09-04T06:34:49.319+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578887, 3), t: 1 }, lastCommittedWall: new Date(1567578887940), lastOpVisible: { ts: Timestamp(1567578887, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 3), $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578889, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:49.319+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:49.319+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:19.312+0000 2019-09-04T06:34:49.320+0000 D2 ASIO [RS] Request 1454 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpVisible: { ts: Timestamp(1567578889, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpApplied: { ts: Timestamp(1567578889, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578889, 1), $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578889, 1) } 2019-09-04T06:34:49.320+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpVisible: { ts: Timestamp(1567578889, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpApplied: { ts: Timestamp(1567578889, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578889, 1), $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578889, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:49.320+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:49.320+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:49.320+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578884, 1) 2019-09-04T06:34:49.320+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:34:59.899+0000 2019-09-04T06:34:49.320+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:34:59.804+0000 2019-09-04T06:34:49.320+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:49.320+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1456 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:34:59.320+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578889, 1), t: 1 } } 2019-09-04T06:34:49.320+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000 2019-09-04T06:34:49.320+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:19.312+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn442] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn442] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.133+0000 2019-09-04T06:34:49.320+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: 
Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn443] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn443] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.645+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn455] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn455] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.962+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn430] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn430] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.662+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn438] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn438] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.660+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn444] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn444] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:59.943+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn425] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn425] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.594+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn452] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn452] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.763+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn450] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn450] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.261+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: 
Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn451] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.320+0000 D3 REPL [conn451] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.434+0000 2019-09-04T06:34:49.320+0000 D2 COMMAND [conn413] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578889, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5b090f8f28dab2b56d78'), operName: "", parentOperId: "5d6f5b090f8f28dab2b56d74" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 458A14AAE49C4C58FBACA27BE4070B5CE50C8A12), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578889, 1), t: 1 } }, $db: "config" } 2019-09-04T06:34:49.320+0000 D1 TRACKING [conn413] Cmd: find, TrackingId: 5d6f5b090f8f28dab2b56d74|5d6f5b090f8f28dab2b56d78 2019-09-04T06:34:49.321+0000 D1 COMMAND [conn413] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578889, 1), t: 1 } } } 2019-09-04T06:34:49.321+0000 D3 STORAGE [conn413] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:49.321+0000 D1 COMMAND [conn413] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578889, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5b090f8f28dab2b56d78'), operName: "", parentOperId: "5d6f5b090f8f28dab2b56d74" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 458A14AAE49C4C58FBACA27BE4070B5CE50C8A12), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578889, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578889, 1) 2019-09-04T06:34:49.321+0000 D2 QUERY [conn413] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:34:49.321+0000 I COMMAND [conn413] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578889, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5b090f8f28dab2b56d78'), operName: "", parentOperId: "5d6f5b090f8f28dab2b56d74" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 458A14AAE49C4C58FBACA27BE4070B5CE50C8A12), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578889, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:34:49.321+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.321+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000 2019-09-04T06:34:49.321+0000 D3 REPL [conn447] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.321+0000 D3 REPL [conn447] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.054+0000 2019-09-04T06:34:49.321+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.321+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000 2019-09-04T06:34:49.321+0000 D3 REPL [conn454] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.321+0000 D3 REPL [conn454] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.925+0000 2019-09-04T06:34:49.321+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.321+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000 2019-09-04T06:34:49.321+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.321+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000 2019-09-04T06:34:49.321+0000 D3 REPL [conn449] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.321+0000 D3 REPL [conn449] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:58.759+0000 2019-09-04T06:34:49.321+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.321+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000 2019-09-04T06:34:49.321+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000 2019-09-04T06:34:49.321+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000 
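The find on config.settings above carries readConcern { level: "majority", afterOpTime: ... }, which is why the server logs "Waiting for 'committed' snapshot" before answering; the EOF plan simply means config.settings holds no documents yet. A minimal client-side sketch of the same read follows, for illustration only: drivers cannot send the internal afterOpTime field that mongos attaches here, so a causally consistent session (which sends afterClusterTime) is the closest equivalent. The host and replica set name are taken from this log; everything else is an assumption.

from pymongo import MongoClient
from pymongo.read_concern import ReadConcern

# Config server and set name as they appear in this log; authorization is disabled here.
client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")
config = client.get_database("config", read_concern=ReadConcern("majority"))

# A causal session's afterClusterTime approximates the afterOpTime gate
# that mongos puts on this query.
with client.start_session(causal_consistency=True) as session:
    doc = config["settings"].find_one({"_id": "autosplit"}, session=session)
    print(doc)  # None here: the log shows an EOF plan, the collection is empty
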
2019-09-04T06:34:49.321+0000 D3 REPL [conn453] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000
2019-09-04T06:34:49.321+0000 D3 REPL [conn453] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.897+0000
2019-09-04T06:34:49.321+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578889, 1), t: 1 }, 2019-09-04T06:34:49.304+0000
2019-09-04T06:34:49.321+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000
2019-09-04T06:34:49.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:49.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:49.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions.
2019-09-04T06:34:49.366+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:49.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry
2019-09-04T06:34:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:34:49.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" }
2019-09-04T06:34:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:49.370+0000 D5 QUERY [shard-registry-reload] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:49.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:34:49.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:34:49.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and
2019-09-04T06:34:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:49.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:49.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578889, 1) 2019-09-04T06:34:49.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 21143 2019-09-04T06:34:49.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 21143 2019-09-04T06:34:49.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:756 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:34:49.370+0000 D1 SHARDING [shard-registry-reload] found 4 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578889, 1), t: 1 } 2019-09-04T06:34:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:34:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:34:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:34:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:34:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:34:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:34:49.370+0000 I NETWORK [shard-registry-reload] Starting new replica set monitor for shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018 2019-09-04T06:34:49.370+0000 D2 NETWORK [shard-registry-reload] Signaling found set shard0003 2019-09-04T06:34:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018 2019-09-04T06:34:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0003, with CS shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018 2019-09-04T06:34:49.370+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0003 2019-09-04T06:34:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS 2019-09-04T06:34:49.370+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1457 -- target:[cmodb812.togewa.com:27018] db:admin expDate:2019-09-04T06:34:54.370+0000 cmd:{ isMaster: 1 } 2019-09-04T06:34:49.370+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1458 -- target:[cmodb813.togewa.com:27018] db:admin expDate:2019-09-04T06:34:54.370+0000 cmd:{ isMaster: 1 } 2019-09-04T06:34:49.370+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to cmodb812.togewa.com:27018 2019-09-04T06:34:49.370+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Finished connection setup. 
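The shard-registry reload above reduces to one read of config.shards (a COLLSCAN over four documents, nreturned:4) followed by building one targeter per shard connection string. A rough client-side reproduction, under the assumption that reading config.shards directly is acceptable for inspection; the host names are the ones in this log.

from pymongo import MongoClient, ReadPreference

client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")
# $readPreference: { mode: "nearest" } in the logged find maps to NEAREST.
config = client.get_database("config", read_preference=ReadPreference.NEAREST)

# Per the log this yields shard0000..shard0003, each as
# "<setName>/<host>:27018,<host>:27018".
shard_map = {doc["_id"]: doc["host"] for doc in config["shards"].find()}
for name, connection_string in sorted(shard_map.items()):
    print(name, "->", connection_string)
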
2019-09-04T06:34:49.370+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to cmodb813.togewa.com:27018 2019-09-04T06:34:49.370+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Finished connection setup. 2019-09-04T06:34:49.371+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1457 finished with response: { hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb812.togewa.com:27018", me: "cmodb812.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578886, 1), t: 1 }, lastWriteDate: new Date(1567578886000), majorityOpTime: { ts: Timestamp(1567578886, 1), t: 1 }, majorityWriteDate: new Date(1567578886000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578889371), logicalSessionTimeoutMinutes: 30, connectionId: 21713, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578886, 1), $configServerState: { opTime: { ts: Timestamp(1567578887, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578886, 1) } 2019-09-04T06:34:49.371+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb812.togewa.com:27018", me: "cmodb812.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578886, 1), t: 1 }, lastWriteDate: new Date(1567578886000), majorityOpTime: { ts: Timestamp(1567578886, 1), t: 1 }, majorityWriteDate: new Date(1567578886000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578889371), logicalSessionTimeoutMinutes: 30, connectionId: 21713, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578886, 1), $configServerState: { opTime: { ts: Timestamp(1567578887, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578886, 1) } target: cmodb812.togewa.com:27018 2019-09-04T06:34:49.371+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for shard0003 is shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018 2019-09-04T06:34:49.371+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Signaling confirmed set shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018 with primary cmodb812.togewa.com:27018 2019-09-04T06:34:49.382+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1458 finished with response: { hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: false, secondary: true, primary: 
"cmodb812.togewa.com:27018", me: "cmodb813.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578886, 1), t: 1 }, lastWriteDate: new Date(1567578886000), majorityOpTime: { ts: Timestamp(1567578886, 1), t: 1 }, majorityWriteDate: new Date(1567578886000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578889377), logicalSessionTimeoutMinutes: 30, connectionId: 13308, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578886, 1), $configServerState: { opTime: { ts: Timestamp(1567578887, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578886, 1) } 2019-09-04T06:34:49.382+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb812.togewa.com:27018", me: "cmodb813.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578886, 1), t: 1 }, lastWriteDate: new Date(1567578886000), majorityOpTime: { ts: Timestamp(1567578886, 1), t: 1 }, majorityWriteDate: new Date(1567578886000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578889377), logicalSessionTimeoutMinutes: 30, connectionId: 13308, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578886, 1), $configServerState: { opTime: { ts: Timestamp(1567578887, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578886, 1) } target: cmodb813.togewa.com:27018 2019-09-04T06:34:49.382+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0003 took 12ms 2019-09-04T06:34:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002 2019-09-04T06:34:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1459 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:34:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:34:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1460 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:34:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:34:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001 2019-09-04T06:34:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1461 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:34:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:34:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1462 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:34:54.385+0000 cmd:{ isMaster: 1 } 
2019-09-04T06:34:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000 2019-09-04T06:34:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1463 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:34:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:34:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1464 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:34:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:34:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1459 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578879, 2), t: 1 }, lastWriteDate: new Date(1567578879000), majorityOpTime: { ts: Timestamp(1567578879, 2), t: 1 }, majorityWriteDate: new Date(1567578879000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578889386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578879, 2), $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578886, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 2) } 2019-09-04T06:34:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578879, 2), t: 1 }, lastWriteDate: new Date(1567578879000), majorityOpTime: { ts: Timestamp(1567578879, 2), t: 1 }, majorityWriteDate: new Date(1567578879000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578889386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578879, 2), $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578886, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 2) } target: cmodb810.togewa.com:27018 2019-09-04T06:34:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1461 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: 
"cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578887, 1), t: 1 }, lastWriteDate: new Date(1567578887000), majorityOpTime: { ts: Timestamp(1567578887, 1), t: 1 }, majorityWriteDate: new Date(1567578887000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578889386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578887, 1), $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578887, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 1) } 2019-09-04T06:34:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578887, 1), t: 1 }, lastWriteDate: new Date(1567578887000), majorityOpTime: { ts: Timestamp(1567578887, 1), t: 1 }, majorityWriteDate: new Date(1567578887000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578889386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578887, 1), $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578887, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 1) } target: cmodb808.togewa.com:27018 2019-09-04T06:34:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1462 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578887, 1), t: 1 }, lastWriteDate: new Date(1567578887000), majorityOpTime: { ts: Timestamp(1567578887, 1), t: 1 }, majorityWriteDate: new Date(1567578887000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578889386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 1), $configServerState: { opTime: { ts: Timestamp(1567578870, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578887, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, 
operationTime: Timestamp(1567578887, 1) } 2019-09-04T06:34:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578887, 1), t: 1 }, lastWriteDate: new Date(1567578887000), majorityOpTime: { ts: Timestamp(1567578887, 1), t: 1 }, majorityWriteDate: new Date(1567578887000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578889386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578887, 1), $configServerState: { opTime: { ts: Timestamp(1567578870, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578887, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578887, 1) } target: cmodb809.togewa.com:27018 2019-09-04T06:34:49.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms 2019-09-04T06:34:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1463 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578883, 1), t: 1 }, lastWriteDate: new Date(1567578883000), majorityOpTime: { ts: Timestamp(1567578883, 1), t: 1 }, majorityWriteDate: new Date(1567578883000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578889385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578883, 1), $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578886, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578883, 1) } 2019-09-04T06:34:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578883, 1), t: 1 }, lastWriteDate: new Date(1567578883000), majorityOpTime: { ts: Timestamp(1567578883, 1), t: 1 }, majorityWriteDate: new Date(1567578883000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578889385), logicalSessionTimeoutMinutes: 30, 
connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578883, 1), $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578886, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578883, 1) } target: cmodb806.togewa.com:27018 2019-09-04T06:34:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1464 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578883, 1), t: 1 }, lastWriteDate: new Date(1567578883000), majorityOpTime: { ts: Timestamp(1567578883, 1), t: 1 }, majorityWriteDate: new Date(1567578883000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578889386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578883, 1), $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578886, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578883, 1) } 2019-09-04T06:34:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578883, 1), t: 1 }, lastWriteDate: new Date(1567578883000), majorityOpTime: { ts: Timestamp(1567578883, 1), t: 1 }, majorityWriteDate: new Date(1567578883000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578889386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578883, 1), $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578886, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578883, 1) } target: cmodb807.togewa.com:27018 2019-09-04T06:34:49.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms 2019-09-04T06:34:49.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1460 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", 
me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578879, 2), t: 1 }, lastWriteDate: new Date(1567578879000), majorityOpTime: { ts: Timestamp(1567578879, 2), t: 1 }, majorityWriteDate: new Date(1567578879000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578889386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 2), $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578886, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 2) } 2019-09-04T06:34:49.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578879, 2), t: 1 }, lastWriteDate: new Date(1567578879000), majorityOpTime: { ts: Timestamp(1567578879, 2), t: 1 }, majorityWriteDate: new Date(1567578879000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578889386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578879, 2), $configServerState: { opTime: { ts: Timestamp(1567578879, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578886, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578879, 2) } target: cmodb811.togewa.com:27018 2019-09-04T06:34:49.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms 2019-09-04T06:34:49.410+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578889, 1) 2019-09-04T06:34:49.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:49.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:49.467+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:49.499+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:49.499+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:49.567+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:49.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:49.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:49.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:49.611+0000 I 
COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:49.643+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578889643) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } 2019-09-04T06:34:49.643+0000 D4 - [replSetDistLockPinger] Taking ticket. Available: 1000000000 2019-09-04T06:34:49.643+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178 2019-09-04T06:34:49.643+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:34:49.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:49.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:49.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:49.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:49.662+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : 
"EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : 
"9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xADB043) [0x561749a63043] mongod(+0x13B2606) [0x56174a33a606] mongod(+0x13B3A55) [0x56174a33ba55] mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894] mongod(+0x10FA899) [0x56174a082899] mongod(+0x10FBF53) [0x56174a083f53] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(+0x1FBD2EE) [0x56174af452ee] mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa] mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2] mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b] mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e] mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc] mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1] mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:49.662+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. 
OpTime: { ts: Timestamp(1567578889, 1), t: 1 }, write concern: { w: "majority", wtimeout: 15000 } 2019-09-04T06:34:49.662+0000 D4 STORAGE [replSetDistLockPinger] flushed journal 2019-09-04T06:34:49.662+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578889643) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:34:49.662+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578889643) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 18ms 2019-09-04T06:34:49.667+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:49.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:49.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:49.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:49.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:49.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:49.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:49.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:49.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:49.767+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:49.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:49.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:49.867+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:49.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:49.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:49.967+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:49.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:49.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:34:49.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:49.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:50.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:50.000+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:34:50.000+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:34:50.000+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:34:50.017+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:34:50.017+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:34:50.030+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:34:50.030+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:34:50.030+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:34:50.030+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:34:50.034+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:50.035+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:50.035+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:50.035+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:34:50.035+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:34:50.036+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:34:50.036+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:50.036+0000 D5 QUERY [conn90] Beginning planning... 
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:50.036+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:34:50.036+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:34:50.036+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:34:50.036+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:34:50.036+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:34:50.036+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:34:50.036+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:50.036+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:50.036+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:34:50.036+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578889, 1)
2019-09-04T06:34:50.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21166
2019-09-04T06:34:50.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21166
2019-09-04T06:34:50.036+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:34:50.036+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:34:50.036+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:34:50.036+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:34:50.036+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:50.036+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:34:50.036+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:34:50.036+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:34:50.036+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578889, 1)
2019-09-04T06:34:50.036+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21169
2019-09-04T06:34:50.036+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21169
2019-09-04T06:34:50.036+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:34:50.036+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:34:50.036+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:50.037+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:34:50.037+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:34:50.037+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578889, 1) 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21171 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21171 2019-09-04T06:34:50.037+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:549 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:34:50.037+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:34:50.037+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:34:50.037+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:34:50.037+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:50.037+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21174 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:34:50.037+0000 D3 STORAGE 
[conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21174 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21175 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21175 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21176 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21176 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21177 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21177 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21178 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 
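Note: the BACKTRACE block above is an exception trace, not a crash. The frames run from ReplSetDistLockManager::doTask through DistLockCatalogImpl::ping and DBDirectClient::call into uassertedWithLocation and DBException::traceIfNeeded, i.e. the dist-lock pinger thread threw a DBException and the server printed the stack. Each frame in the "backtrace" JSON carries the module base ("b"), the offset ("o"), and, when present, a mangled symbol ("s"). A minimal sketch for making the frames readable offline, assuming the JSON blob has been saved to a local file named backtrace.json (hypothetical name) and that GNU c++filt is on PATH:

```python
import json
import subprocess

# Load the {"backtrace": [...], "processInfo": {...}} blob copied from the log.
with open("backtrace.json") as fh:
    bt = json.load(fh)

# Frames without a symbol are left as raw module offsets; c++filt passes
# anything it cannot demangle through unchanged.
names = [frame.get("s", "+0x" + frame["o"]) for frame in bt["backtrace"]]
demangled = subprocess.run(
    ["c++filt"], input="\n".join(names), capture_output=True, text=True
).stdout.splitlines()

for frame, name in zip(bt["backtrace"], demangled):
    print(f"{frame['b']}+{frame['o']}  {name}")
```

The anonymous frames (e.g. mongod(+0xADB043)) can then be fed to addr2line against a mongod binary with symbols for the same git version (a4b751dcf51dd249c5865812b390cfd1c0129c30, per processInfo).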
2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21178 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21179 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:34:50.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21179 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21180 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21180 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21181 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21181 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21182 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21182 
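Note: the assertion logged above ('NotMaster: Not primary while running findAndModify command on collection config.lockpings', errCode 10107) is the same failure the backtrace records: the dist-lock pinger upserts its ping document with { w: "majority", wtimeout: 15000 }, and this node had stepped down. A sketch of that write and of the error handling a client needs, using PyMongo for illustration (the server-side pinger itself is C++; host/port are taken from this log):

```python
from datetime import datetime, timezone

from pymongo import MongoClient
from pymongo.errors import OperationFailure

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

def ping_lockpings(process_id: str = "ConfigServer"):
    """Upsert the dist-lock ping document, as the pinger thread does."""
    try:
        return client.config.command(
            "findAndModify",
            "lockpings",
            query={"_id": process_id},
            update={"$set": {"ping": datetime.now(timezone.utc)}},
            upsert=True,
            writeConcern={"w": "majority", "wtimeout": 15000},
        )
    except OperationFailure as exc:
        if exc.code == 10107:  # NotMaster: node is not primary right now;
            return None        # a stepped-down node returns this until a
        raise                  # new primary is elected, so retry later.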
2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21183 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21183 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21184 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: 
BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21184 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21185 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21185 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21186 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: 
true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21186 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21187 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21187 
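Note: the saslStart/saslContinue exchange on conn90 above (mechanism SCRAM-SHA-1, ending in 'Successfully authenticated as principal dba_root on admin') is the standard three-message SCRAM handshake; the server redacts the payloads as "xxx". Drivers run the handshake transparently. A minimal sketch of the equivalent client-side setup in PyMongo (the password is a placeholder, not from the log):

```python
from pymongo import MongoClient

client = MongoClient(
    "cmodb803.togewa.com",
    27019,
    username="dba_root",      # principal seen in the ACCESS line above
    password="REDACTED",      # placeholder
    authSource="admin",
    authMechanism="SCRAM-SHA-1",
)

# The first operation on the connection triggers the saslStart/saslContinue
# round-trips logged above.
client.admin.command("ping")
```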
2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21188 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21188 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21189 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 
2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21189 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21190 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21190 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21191 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21191 
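Note: the paired finds on local.oplog.rs above, sorted { $natural: 1 } and { $natural: -1 } with limit 1, are how a monitoring client reads the first and last oplog entries to compute the replication window ('Forcing a table scan due to hinted $natural' is expected: the oplog has no indexes, as its catalog entry below with indexes: [] confirms). The follow-up find on local.oplog.$main gets an EOF plan because that legacy master-slave namespace does not exist on a replica-set member. A sketch of the same probe:

```python
from pymongo import MongoClient

client = MongoClient(
    "cmodb803.togewa.com", 27019, readPreference="secondaryPreferred"
)
oplog = client.local["oplog.rs"]

# First and last entries in natural (insertion) order.
first = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
last = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])

if first and last:
    # 'ts' is a bson.Timestamp; .time is seconds since the epoch.
    print("oplog window (s):", last["ts"].time - first["ts"].time)
```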
2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21192 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21192 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21193 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21193 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:34:50.038+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21194 2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] 
looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21194
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21195
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21195
2019-09-04T06:34:50.039+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:34:50.039+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21197
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21197
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21198
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21198
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21199
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21199
2019-09-04T06:34:50.039+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:34:50.039+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21201
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21201
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21202
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21202
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21203
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21203
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21204
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21204
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21205
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21205
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21206
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21206
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21207
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21207
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21208
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21208
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21209
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21209
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21210
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21210
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21211
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21211
2019-09-04T06:34:50.039+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21212
2019-09-04T06:34:50.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21212
2019-09-04T06:34:50.040+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:34:50.040+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:34:50.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21214
2019-09-04T06:34:50.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21214
2019-09-04T06:34:50.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21215
2019-09-04T06:34:50.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21215
2019-09-04T06:34:50.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21216
2019-09-04T06:34:50.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21216
2019-09-04T06:34:50.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21217
2019-09-04T06:34:50.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21217
2019-09-04T06:34:50.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21218
2019-09-04T06:34:50.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21218
2019-09-04T06:34:50.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21219
2019-09-04T06:34:50.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21219
2019-09-04T06:34:50.040+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:34:50.067+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:50.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.155+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.167+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:50.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 458A14AAE49C4C58FBACA27BE4070B5CE50C8A12), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:50.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:34:50.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 458A14AAE49C4C58FBACA27BE4070B5CE50C8A12), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:50.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 458A14AAE49C4C58FBACA27BE4070B5CE50C8A12), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:50.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), opTime: { ts: Timestamp(1567578889, 1), t: 1 }, wallTime: new Date(1567578889304) }
2019-09-04T06:34:50.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 458A14AAE49C4C58FBACA27BE4070B5CE50C8A12), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:50.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999999 Now: 1000000000
2019-09-04T06:34:50.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.268+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:50.310+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:50.310+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:50.310+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578889, 1)
2019-09-04T06:34:50.310+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21231
2019-09-04T06:34:50.310+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:50.310+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:50.310+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21231
2019-09-04T06:34:50.311+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21234
2019-09-04T06:34:50.311+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21234
2019-09-04T06:34:50.311+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578889, 1), t: 1 }({ ts: Timestamp(1567578889, 1), t: 1 })
2019-09-04T06:34:50.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.368+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:50.456+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.456+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.468+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:50.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.568+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:50.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.668+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:50.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.768+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:50.818+0000 D2 NETWORK [conn411] Session from 10.108.2.15:39194 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:34:50.819+0000 I NETWORK [conn411] end connection 10.108.2.15:39194 (88 connections now open)
2019-09-04T06:34:50.819+0000 D2 NETWORK [conn413] Session from 10.108.2.15:39218 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:34:50.819+0000 I NETWORK [conn413] end connection 10.108.2.15:39218 (87 connections now open)
2019-09-04T06:34:50.819+0000 D2 NETWORK [conn428] Session from 10.108.2.15:39234 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:34:50.819+0000 I NETWORK [conn428] end connection 10.108.2.15:39234 (86 connections now open)
2019-09-04T06:34:50.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:50.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:50.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1465) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:50.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1465 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:00.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:50.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:34:50.839+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:34:49.063+0000
2019-09-04T06:34:50.839+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:34:50.234+0000
2019-09-04T06:34:50.839+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:34:49.063+0000
2019-09-04T06:34:50.839+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:34:59.063+0000
2019-09-04T06:34:50.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:34:50.839+0000 D2 ASIO [Replication] Request 1465 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), opTime: { ts: Timestamp(1567578889, 1), t: 1 }, wallTime: new Date(1567578889304), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpVisible: { ts: Timestamp(1567578889, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578889, 1), $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578889, 1) }
2019-09-04T06:34:50.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), opTime: { ts: Timestamp(1567578889, 1), t: 1 }, wallTime: new Date(1567578889304), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpVisible: { ts: Timestamp(1567578889, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578889, 1), $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578889, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:34:50.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:34:50.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1465) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), opTime: { ts: Timestamp(1567578889, 1), t: 1 }, wallTime: new Date(1567578889304), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpVisible: { ts: Timestamp(1567578889, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578889, 1), $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578889, 1) }
2019-09-04T06:34:50.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary
2019-09-04T06:34:50.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:34:59.804+0000
2019-09-04T06:34:50.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:35:01.396+0000
2019-09-04T06:34:50.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:50.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:34:50.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:34:50.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:52.839Z
2019-09-04T06:34:50.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:34:50.840+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:34:50.840+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1466) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:50.840+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1466 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:00.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:34:50.840+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:34:50.840+0000 D2 ASIO [Replication] Request 1466 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), opTime: { ts: Timestamp(1567578889, 1), t: 1 }, wallTime: new Date(1567578889304), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpVisible: { ts: Timestamp(1567578889, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578889, 1), $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578889, 1) }
2019-09-04T06:34:50.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), opTime: { ts: Timestamp(1567578889, 1), t: 1 }, wallTime: new Date(1567578889304), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpVisible: { ts: Timestamp(1567578889, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578889, 1), $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578889, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:50.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:34:50.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1466) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), opTime: { ts: Timestamp(1567578889, 1), t: 1 }, wallTime: new Date(1567578889304), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpVisible: { ts: Timestamp(1567578889, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578889, 1), $clusterTime: { clusterTime: Timestamp(1567578889, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578889, 1) }
2019-09-04T06:34:50.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:34:50.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:52.840Z
2019-09-04T06:34:50.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:34:50.851+0000 I NETWORK [listener] connection accepted from 10.108.2.15:39266 #470 (87 connections now open)
2019-09-04T06:34:50.851+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:34:50.851+0000 D2 COMMAND [conn470] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:34:50.851+0000 I NETWORK [conn470] received client metadata from 10.108.2.15:39266 conn470: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:34:50.851+0000 I COMMAND [conn470] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:34:50.851+0000 D2 COMMAND [conn470] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:50.851+0000 I COMMAND [conn470] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:50.855+0000 D2 ASIO [RS] Request 1456 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578890, 1), t: 1, h: 0, v: 2, op: "i", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), wall: new Date(1567578890853), o: { _id: "cmodb801.togewa.com:27017:1567578890:-5629725153971038030", ping: new Date(1567578890848) } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpVisible: { ts: Timestamp(1567578889, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpApplied: { ts: Timestamp(1567578890, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578889, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) }
2019-09-04T06:34:50.855+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578890, 1), t: 1, h: 0, v: 2, op: "i", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), wall: new Date(1567578890853), o: { _id: "cmodb801.togewa.com:27017:1567578890:-5629725153971038030", ping: new Date(1567578890848) } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpVisible: { ts: Timestamp(1567578889, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpApplied: { ts: Timestamp(1567578890, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578889, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:34:50.855+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:34:50.855+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578890, 1) and ending at ts: Timestamp(1567578890, 1)
2019-09-04T06:34:50.855+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:01.396+0000
2019-09-04T06:34:50.855+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:02.243+0000
2019-09-04T06:34:50.855+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:34:50.855+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578890, 1), t: 1 }
2019-09-04T06:34:50.855+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:34:50.855+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:50.855+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:50.855+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578889, 1)
2019-09-04T06:34:50.855+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21252
2019-09-04T06:34:50.855+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:50.855+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:50.855+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21252
2019-09-04T06:34:50.855+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:50.855+0000 D2 REPL [rsSync-0] replication batch size is 1
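Request 1456 above is the oplog fetcher's awaiting getMore on the sync source returning a one-document batch, the op: "i" insert into config.lockpings that the batcher then hands to the applier ("replication batch size is 1"). Below is a hedged pymongo sketch of reading the same stream with a tailable cursor; the host and the resume point Timestamp(1567578890, 1) come from the entries above, everything else is illustrative rather than this deployment's actual tooling:

    # Hypothetical reader tailing local.oplog.rs the way the fetcher's getMore loop does.
    from pymongo import MongoClient
    from pymongo.cursor import CursorType
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/", directConnection=True)
    oplog = client.local["oplog.rs"]

    # resume after the last optime in the batch above
    cursor = oplog.find(
        {"ts": {"$gt": Timestamp(1567578890, 1)}},
        cursor_type=CursorType.TAILABLE_AWAIT,  # analogous to the fetcher's awaiting getMore
    )
    for entry in cursor:
        # ts/t/op/ns/o are the oplog fields visible in the nextBatch document above
        print(entry["ts"], entry["op"], entry["ns"])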
2019-09-04T06:34:50.855+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:50.855+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578889, 1)
2019-09-04T06:34:50.855+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21255
2019-09-04T06:34:50.855+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578890, 1) }
2019-09-04T06:34:50.855+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:50.855+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:50.855+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21255
2019-09-04T06:34:50.855+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21235
2019-09-04T06:34:50.855+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21235
2019-09-04T06:34:50.855+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21258
2019-09-04T06:34:50.855+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21258
2019-09-04T06:34:50.855+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:50.855+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 21260
2019-09-04T06:34:50.855+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578890, 1)
2019-09-04T06:34:50.855+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578890, 1)
2019-09-04T06:34:50.855+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 21260
2019-09-04T06:34:50.855+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:50.855+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:34:50.855+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21259
2019-09-04T06:34:50.855+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21259
2019-09-04T06:34:50.855+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21262
2019-09-04T06:34:50.855+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21262
2019-09-04T06:34:50.855+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578890, 1), t: 1 }({ ts: Timestamp(1567578890, 1), t: 1 })
2019-09-04T06:34:50.855+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578890, 1)
2019-09-04T06:34:50.855+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21263
2019-09-04T06:34:50.855+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578890, 1) } } ] } sort: {} projection: {}
2019-09-04T06:34:50.855+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:50.855+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
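The sync thread has just issued its minvalid bookkeeping query, { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578890, 1) } } ] }; the D5 QUERY trace that follows shows the subplanner handling each $or child and, since local.replset.minvalid carries only the _id index, settling on collection scans. A speculative sketch of reproducing that planner outcome from a client via explain(); the query document is copied from the entry above, while the connection details and the expectation about the winning plan are assumptions:

    # Hypothetical explain() of the same predicate against the internal minvalid collection.
    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/", directConnection=True)
    minvalid = client.local["replset.minvalid"]

    query = {"$or": [{"t": {"$lt": 1}},
                     {"t": 1, "ts": {"$lt": Timestamp(1567578890, 1)}}]}
    plan = minvalid.find(query).explain()
    # expect collection-scan stages, matching the "outputting a collscan" entries below
    print(plan["queryPlanner"]["winningPlan"])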
2019-09-04T06:34:50.855+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578890, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:34:50.855+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:50.855+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:50.855+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578890, 1) || First: notFirst: full path: ts 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578890, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578890, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578890, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578890, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:50.856+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21263 2019-09-04T06:34:50.856+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:50.856+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:50.856+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578890, 1), t: 1, h: 0, v: 2, op: "i", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), wall: new Date(1567578890853), o: { _id: "cmodb801.togewa.com:27017:1567578890:-5629725153971038030", ping: new Date(1567578890848) } }, oplog application mode: Secondary 2019-09-04T06:34:50.856+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 21265 2019-09-04T06:34:50.856+0000 D4 STORAGE [repl-writer-worker-4] inserting record with timestamp Timestamp(1567578890, 1) 2019-09-04T06:34:50.856+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578890, 1) 2019-09-04T06:34:50.856+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578890, 1) 2019-09-04T06:34:50.856+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578890, 1) 2019-09-04T06:34:50.856+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 21265 2019-09-04T06:34:50.856+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:50.856+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578890, 1), t: 1 }({ ts: Timestamp(1567578890, 1), t: 1 }) 2019-09-04T06:34:50.856+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578890, 1) 2019-09-04T06:34:50.856+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21264 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:50.856+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:50.856+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:50.856+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21264 2019-09-04T06:34:50.856+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578890, 1) 2019-09-04T06:34:50.856+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:50.856+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21268 2019-09-04T06:34:50.856+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), appliedOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, appliedWallTime: new Date(1567578889304), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), appliedOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, appliedWallTime: new Date(1567578890853), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), appliedOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, appliedWallTime: new Date(1567578889304), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpVisible: { ts: Timestamp(1567578889, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:50.856+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21268 2019-09-04T06:34:50.856+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1467 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:20.856+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), appliedOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, appliedWallTime: new Date(1567578889304), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), appliedOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, appliedWallTime: new Date(1567578890853), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), appliedOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, appliedWallTime: new Date(1567578889304), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpVisible: { ts: Timestamp(1567578889, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:50.856+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:20.856+0000 2019-09-04T06:34:50.856+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578890, 1), t: 1 }({ ts: Timestamp(1567578890, 1), t: 1 }) 2019-09-04T06:34:50.856+0000 D2 ASIO [RS] Request 1467 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpVisible: { ts: Timestamp(1567578889, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578889, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } 2019-09-04T06:34:50.856+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578889, 1), t: 1 }, lastCommittedWall: new Date(1567578889304), lastOpVisible: { ts: Timestamp(1567578889, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578889, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:50.856+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:50.856+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:20.856+0000 2019-09-04T06:34:50.857+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578890, 1), t: 1 } 2019-09-04T06:34:50.857+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1468 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:00.857+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578889, 1), t: 1 } } 2019-09-04T06:34:50.857+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:20.856+0000 2019-09-04T06:34:50.858+0000 D2 ASIO [RS] Request 1468 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpApplied: { ts: Timestamp(1567578890, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } 2019-09-04T06:34:50.858+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new 
Date(1567578890853), lastOpApplied: { ts: Timestamp(1567578890, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:50.858+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:50.858+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:50.858+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.858+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.858+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578885, 1) 2019-09-04T06:34:50.858+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:02.243+0000 2019-09-04T06:34:50.858+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:01.546+0000 2019-09-04T06:34:50.858+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1469 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:00.858+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578890, 1), t: 1 } } 2019-09-04T06:34:50.858+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:50.858+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:20.856+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn453] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn453] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.897+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.858+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn450] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn450] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.261+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: 
Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn444] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn444] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:59.943+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn452] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn452] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.763+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn451] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn451] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.434+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn447] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.858+0000 D3 REPL [conn447] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.054+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn454] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn454] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.925+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn442] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn442] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.133+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn443] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn443] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.645+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: 
Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn425] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn425] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:52.594+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn455] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn455] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.962+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn430] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn430] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.662+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn438] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn438] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:51.660+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn449] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn449] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:58.759+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578890, 1), t: 1 }, 2019-09-04T06:34:50.853+0000 2019-09-04T06:34:50.859+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000 2019-09-04T06:34:50.860+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:50.860+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:50.860+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), appliedOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, appliedWallTime: new Date(1567578889304), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), appliedOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, appliedWallTime: new Date(1567578890853), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), 
appliedOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, appliedWallTime: new Date(1567578889304), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:50.860+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1470 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:20.860+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), appliedOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, appliedWallTime: new Date(1567578889304), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), appliedOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, appliedWallTime: new Date(1567578890853), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, durableWallTime: new Date(1567578889304), appliedOpTime: { ts: Timestamp(1567578889, 1), t: 1 }, appliedWallTime: new Date(1567578889304), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:50.860+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:20.856+0000 2019-09-04T06:34:50.860+0000 D2 ASIO [RS] Request 1470 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } 2019-09-04T06:34:50.860+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:50.860+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:50.860+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:20.856+0000 2019-09-04T06:34:50.868+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:50.955+0000 D2 
STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578890, 1) 2019-09-04T06:34:50.956+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:50.956+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:50.968+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:50.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:50.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:50.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:50.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:51.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:51.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:51.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:51.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:51.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), opTime: { ts: Timestamp(1567578890, 1), t: 1 }, wallTime: new Date(1567578890853) } 2019-09-04T06:34:51.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.068+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
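
Editor's note: the burst of paired "Got notified of new snapshot" / "waitUntilOpTime: waiting for a new snapshot until ..." entries above records dozens of client connections blocked in waitUntilOpTime, each carrying its own deadline. A minimal sketch for pulling those deadlines out of a log like this one follows; it is illustrative only — the WAIT_RE pattern, snapshot_waiters helper, and use of the systemLog.path from the startup options are assumptions, and it assumes mongod's native one-entry-per-line output (finditer is used so it also tolerates re-wrapped text like this capture):

import re
from datetime import datetime

# Matches entries like:
#   2019-09-04T06:34:50.858+0000 D3 REPL [conn463] waitUntilOpTime:
#   waiting for a new snapshot until 2019-09-04T06:35:13.417+0000
WAIT_RE = re.compile(
    r"(?P<now>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}\+\d{4}) D3 REPL "
    r"\[(?P<conn>conn\d+)\] waitUntilOpTime: waiting for a new snapshot "
    r"until (?P<deadline>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}\+\d{4})"
)

def parse_ts(stamp: str) -> datetime:
    # mongod 4.2 timestamps: ISO-8601 with milliseconds and a +0000 offset
    return datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%S.%f%z")

def snapshot_waiters(log_text: str):
    """Yield (connection, remaining wait budget in seconds) per snapshot wait."""
    for m in WAIT_RE.finditer(log_text):
        budget = parse_ts(m["deadline"]) - parse_ts(m["now"])
        yield m["conn"], budget.total_seconds()

if __name__ == "__main__":
    with open("/var/log/mongodb/mongod.log") as fh:  # systemLog.path from the startup options
        for conn, seconds in snapshot_waiters(fh.read()):
            print(f"{conn} has {seconds:6.1f}s left before its read-concern wait expires")

Run against the entries above, conn442's notification at 06:34:50.859 leaves only about 0.27s of budget (deadline 06:34:51.133+0000), which matches the MaxTimeMSExpired failure logged for conn442 immediately below.
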
2019-09-04T06:34:51.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.118+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35852 #471 (88 connections now open) 2019-09-04T06:34:51.118+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:51.118+0000 D2 COMMAND [conn471] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:51.118+0000 I NETWORK [conn471] received client metadata from 10.108.2.56:35852 conn471: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:51.118+0000 I COMMAND [conn471] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:51.133+0000 I COMMAND [conn442] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, DE6CD7CA3AD84D3D5B5CC231FF3233432495E5FA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:51.133+0000 D1 - [conn442] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:51.133+0000 W - [conn442] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:51.150+0000 I - [conn442] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:51.150+0000 D1 COMMAND [conn442] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, DE6CD7CA3AD84D3D5B5CC231FF3233432495E5FA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:51.150+0000 D1 - [conn442] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:51.150+0000 W - [conn442] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:51.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.169+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:51.170+0000 I - [conn442] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:51.170+0000 W COMMAND [conn442] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:51.170+0000 I COMMAND [conn442] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
2019-09-04T06:34:51.170+0000 I COMMAND [conn442] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578851, 1), signature: { hash: BinData(0, DE6CD7CA3AD84D3D5B5CC231FF3233432495E5FA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30026ms 2019-09-04T06:34:51.170+0000 D2 NETWORK [conn442] Session from 10.108.2.56:35836 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:51.170+0000 I NETWORK [conn442] end connection 10.108.2.56:35836 (87 connections now open) 2019-09-04T06:34:51.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:51.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:51.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.269+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:51.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.350+0000 D2 COMMAND [conn470] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.350+0000 I COMMAND [conn470] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.369+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:51.469+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:51.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.569+0000 D4 STORAGE
[WTJournalFlusher] flushed journal 2019-09-04T06:34:51.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.646+0000 I COMMAND [conn443] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578858, 1), signature: { hash: BinData(0, B0947E8BACC12B932E38FDD8F3A31C0CEDCAD63A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:51.646+0000 D1 - [conn443] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:51.646+0000 W - [conn443] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:51.650+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.650+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49358 #472 (88 connections now open) 2019-09-04T06:34:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45912 #473 (89 connections now open) 2019-09-04T06:34:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:51.650+0000 D2 COMMAND [conn472] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:51.650+0000 I NETWORK [conn472] received client metadata from 10.108.2.54:49358 conn472: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:51.650+0000 I COMMAND [conn472] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42278 #474 (90 connections now open) 2019-09-04T06:34:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:51.650+0000 D2 COMMAND [conn474] run command admin.$cmd { isMaster: 1, 
client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:51.651+0000 I NETWORK [conn474] received client metadata from 10.108.2.48:42278 conn474: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:51.651+0000 I COMMAND [conn474] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:51.651+0000 D2 COMMAND [conn472] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578888, 1), signature: { hash: BinData(0, 909ECCA09915F7DAC68D518238A63EF9362BC3C6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:51.651+0000 D1 REPL [conn472] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578890, 1), t: 1 } 2019-09-04T06:34:51.651+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:34:51.651+0000 D2 COMMAND [conn473] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:51.651+0000 I NETWORK [conn473] received client metadata from 10.108.2.72:45912 conn473: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:51.651+0000 I COMMAND [conn473] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:51.651+0000 D2 COMMAND [conn474] run command config.$cmd { find: "settings", filter: { _id: 
"balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, EB35739D9E23DC6FDDF7730B34A04CDF748BD46F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:51.651+0000 D1 REPL [conn474] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578890, 1), t: 1 } 2019-09-04T06:34:51.651+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:34:51.651+0000 D2 COMMAND [conn473] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:51.651+0000 D1 REPL [conn473] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578890, 1), t: 1 } 2019-09-04T06:34:51.651+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:34:51.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.652+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.652+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.661+0000 I COMMAND [conn438] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578853, 1), signature: { hash: BinData(0, E25495A303323F3A37C8BA9965010F6640AA1AE5), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:51.661+0000 D1 - [conn438] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:51.661+0000 W - [conn438] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:51.662+0000 I COMMAND [conn430] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 695EC7A4F3A174023E82B47C0B1F2FFF237676A8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:34:51.662+0000 D1 - [conn430] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:51.662+0000 W - [conn430] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:51.663+0000 I - [conn443] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:51.663+0000 D1 COMMAND [conn443] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578858, 1), signature: { hash: BinData(0, B0947E8BACC12B932E38FDD8F3A31C0CEDCAD63A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:51.663+0000 D1 - [conn443] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:51.663+0000 W - [conn443] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:51.669+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:51.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.679+0000 I - [conn430] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_
11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : 
"/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:51.679+0000 D1 COMMAND [conn430] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 695EC7A4F3A174023E82B47C0B1F2FFF237676A8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:51.679+0000 D1 - [conn430] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:51.679+0000 W - [conn430] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:51.699+0000 I - [conn443] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"
b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" 
: "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:51.699+0000 W COMMAND [conn443] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:51.699+0000 I COMMAND [conn443] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578858, 1), signature: { hash: BinData(0, B0947E8BACC12B932E38FDD8F3A31C0CEDCAD63A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:34:51.699+0000 D2 NETWORK [conn443] Session from 10.108.2.44:38834 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:51.699+0000 I NETWORK [conn443] end connection 10.108.2.44:38834 (89 connections now open) 2019-09-04T06:34:51.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:51.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:51.716+0000 I - [conn438] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:51.716+0000 D1 COMMAND [conn438] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578853, 1), signature: { hash: BinData(0, E25495A303323F3A37C8BA9965010F6640AA1AE5), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:51.716+0000 D1 - [conn438] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:51.716+0000 W - [conn438] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:51.736+0000 I - [conn430] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:51.736+0000 W COMMAND [conn430] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:51.736+0000 I COMMAND [conn430] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 695EC7A4F3A174023E82B47C0B1F2FFF237676A8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms
2019-09-04T06:34:51.736+0000 D2 NETWORK [conn430] Session from 10.108.2.73:52292 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:34:51.736+0000 I NETWORK [conn430] end connection 10.108.2.73:52292 (88 connections now open)
2019-09-04T06:34:51.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:51.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:51.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47352 #475 (89 connections now open)
2019-09-04T06:34:51.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:34:51.743+0000 D2 COMMAND [conn475] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:34:51.743+0000 I NETWORK [conn475] received client metadata from 10.108.2.52:47352 conn475: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:34:51.743+0000 I COMMAND [conn475] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:34:51.743+0000 D2 COMMAND [conn475] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:34:51.743+0000 D1 REPL [conn475] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578890, 1), t: 1 }
2019-09-04T06:34:51.743+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000
2019-09-04T06:34:51.756+0000 I - [conn438] 0x56174b707c81 0x56174b707b74 
0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { 
"b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : 
"B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 
2019-09-04T06:34:51.756+0000 W COMMAND [conn438] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:51.756+0000 I COMMAND [conn438] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578853, 1), signature: { hash: BinData(0, E25495A303323F3A37C8BA9965010F6640AA1AE5), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30065ms 2019-09-04T06:34:51.756+0000 D2 NETWORK [conn438] Session from 10.108.2.58:52288 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:51.756+0000 I NETWORK [conn438] end connection 10.108.2.58:52288 (88 connections now open) 2019-09-04T06:34:51.756+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48524 #476 (89 connections now open) 2019-09-04T06:34:51.756+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:51.756+0000 D2 COMMAND [conn476] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:51.756+0000 I NETWORK [conn476] received client metadata from 10.108.2.59:48524 conn476: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:51.756+0000 I COMMAND [conn476] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:51.757+0000 D2 COMMAND [conn476] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578881, 1), signature: { hash: BinData(0, 51070AB17CE4E1EB2E629A93F6E8B1A003D9C311), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:34:51.757+0000 D1 REPL [conn476] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578890, 1), t: 1 } 2019-09-04T06:34:51.757+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000 2019-09-04T06:34:51.764+0000 
2019-09-04T06:34:51.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:51.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:51.769+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:51.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:51.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:51.855+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:34:51.855+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:34:51.855+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578890, 1)
2019-09-04T06:34:51.855+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21312
2019-09-04T06:34:51.855+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:34:51.855+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:34:51.855+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21312
2019-09-04T06:34:51.856+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21315
2019-09-04T06:34:51.856+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21315
2019-09-04T06:34:51.856+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578890, 1), t: 1 }({ ts: Timestamp(1567578890, 1), t: 1 })
2019-09-04T06:34:51.869+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:51.969+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:51.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:51.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:51.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:51.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:52.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:52.055+0000 I COMMAND [conn447] Command on database config timed out waiting for read concern to be satisfied.
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, 890D13BC1F7341B693C197CA647BC05E3FDD2B2E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:52.055+0000 D1 - [conn447] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:52.055+0000 W - [conn447] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:52.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:52.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:52.069+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:52.071+0000 I - [conn447] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s"
:"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : 
"45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] 
mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:52.071+0000 D1 COMMAND [conn447] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, 890D13BC1F7341B693C197CA647BC05E3FDD2B2E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:52.071+0000 D1 - [conn447] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:52.071+0000 W - [conn447] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:52.091+0000 I - [conn447] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26Serv
iceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : 
"/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:34:52.091+0000 W COMMAND [conn447] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:34:52.091+0000 I COMMAND [conn447] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578859, 1), signature: { hash: BinData(0, 890D13BC1F7341B693C197CA647BC05E3FDD2B2E), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms
2019-09-04T06:34:52.091+0000 D2 NETWORK [conn447] Session from 10.108.2.50:50268 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:34:52.091+0000 I NETWORK [conn447] end connection 10.108.2.50:50268 (88 connections now open)
2019-09-04T06:34:52.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:52.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:52.149+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:52.149+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:52.151+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:52.151+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:52.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:52.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:52.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
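The two conn447 backtraces above embed a JSON payload in which each frame records the module load base "b" and the offset "o" as hex strings, plus the mangled symbol "s" when resolution succeeded; the trailing somap table lists the loaded libraries. A small offline decoding sketch follows (illustrative only: the payload is a one-frame stand-in for the real ~26-frame payload, and c++filt from binutils is assumed to be installed).

```python
import json
import subprocess

# Truncated stand-in for the JSON between BEGIN/END BACKTRACE above.
payload = '{"backtrace":[{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"}]}'

for frame in json.loads(payload)["backtrace"]:
    # base + offset reproduces the bracketed absolute address in the frame
    # list (for this frame, 0x561749c42521, matching the log above).
    addr = int(frame["b"], 16) + int(frame["o"], 16)
    symbol = frame.get("s", "<unresolved>")
    if symbol.startswith("_Z"):
        # c++filt demangles Itanium-ABI C++ names like the ones logged here
        symbol = subprocess.run(["c++filt", symbol],
                                capture_output=True, text=True).stdout.strip()
    print(f"0x{addr:x}  {symbol}")
```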
2019-09-04T06:34:52.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:52.170+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:52.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:52.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:52.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:52.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:52.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:52.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:34:52.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:52.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:34:52.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), opTime: { ts: Timestamp(1567578890, 1), t: 1 }, wallTime: new Date(1567578890853) }
2019-09-04T06:34:52.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:52.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:34:52.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:34:52.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:52.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:52.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:52.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:52.270+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:52.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:52.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:52.370+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:52.470+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:52.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:52.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:52.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:52.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:52.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:52.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:52.570+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:52.584+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:52.584+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:52.585+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51954 #477 (89 connections now open)
2019-09-04T06:34:52.585+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:34:52.585+0000 D2 COMMAND [conn477] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:34:52.585+0000 I NETWORK [conn477] received client metadata from 10.108.2.74:51954 conn477: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:34:52.585+0000 I COMMAND [conn477] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
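Note the constant in every waitUntilOpTime entry: the requested optime carries t: 92 while this node's snapshots carry t: 1. Replication optimes order by term before timestamp, so a term-1 snapshot can never satisfy a wait pinned to term 92 regardless of how much newer its timestamp is, and each such read can only exhaust its 30000ms budget. A toy encoding of that ordering rule (illustrative only, not MongoDB's implementation):

```python
from functools import total_ordering

@total_ordering
class OpTime:
    """Optime as (term, timestamp, increment); term dominates the ordering."""
    def __init__(self, ts: int, inc: int, term: int):
        self.ts, self.inc, self.term = ts, inc, term
    def _key(self):
        return (self.term, self.ts, self.inc)
    def __eq__(self, other): return self._key() == other._key()
    def __lt__(self, other): return self._key() < other._key()

requested = OpTime(1566459161, 3, 92)  # afterOpTime in the requests above
snapshot  = OpTime(1567578890, 1, 1)   # this node's current majority snapshot
print(requested <= snapshot)           # False -> waitUntilOpTime keeps blocking
```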
"snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:52.585+0000 D2 COMMAND [conn477] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578885, 1), signature: { hash: BinData(0, 3250ABA29FA1BAB080F5BF7C803FB52479895B2B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:34:52.585+0000 D1 REPL [conn477] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578890, 1), t: 1 } 2019-09-04T06:34:52.585+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000 2019-09-04T06:34:52.596+0000 I COMMAND [conn425] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578855, 1), signature: { hash: BinData(0, 1F4340BC981E2B9D0D58976FE30DE1B22C151008), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:34:52.596+0000 D1 - [conn425] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:52.596+0000 W - [conn425] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:52.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:52.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:52.612+0000 I - [conn425] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:52.612+0000 D1 COMMAND [conn425] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578855, 1), signature: { hash: BinData(0, 1F4340BC981E2B9D0D58976FE30DE1B22C151008), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:52.612+0000 D1 - [conn425] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:52.612+0000 W - [conn425] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:52.632+0000 I - [conn425] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:52.632+0000 W COMMAND [conn425] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:52.632+0000 I COMMAND [conn425] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578855, 1), signature: { hash: BinData(0, 1F4340BC981E2B9D0D58976FE30DE1B22C151008), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:34:52.633+0000 D2 NETWORK [conn425] Session from 10.108.2.74:51922 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:52.633+0000 I NETWORK [conn425] end connection 10.108.2.74:51922 (88 connections now open) 2019-09-04T06:34:52.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:52.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:52.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:52.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:52.670+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:52.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:52.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:52.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:52.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:52.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:52.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:52.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:52.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:52.770+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:52.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:52.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:52.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:52.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1471) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:52.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1471 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:02.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:52.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement 
date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:52.839+0000 D2 ASIO [Replication] Request 1471 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), opTime: { ts: Timestamp(1567578890, 1), t: 1 }, wallTime: new Date(1567578890853), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } 2019-09-04T06:34:52.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), opTime: { ts: Timestamp(1567578890, 1), t: 1 }, wallTime: new Date(1567578890853), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:52.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:52.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1471) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), opTime: { ts: Timestamp(1567578890, 1), t: 1 }, wallTime: new Date(1567578890853), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } 2019-09-04T06:34:52.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:52.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:35:01.546+0000 2019-09-04T06:34:52.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 
2019-09-04T06:35:03.775+0000 2019-09-04T06:34:52.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:52.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:52.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:52.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:54.839Z 2019-09-04T06:34:52.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:52.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:52.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1472) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:52.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1472 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:02.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:52.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:52.840+0000 D2 ASIO [Replication] Request 1472 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), opTime: { ts: Timestamp(1567578890, 1), t: 1 }, wallTime: new Date(1567578890853), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } 2019-09-04T06:34:52.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), opTime: { ts: Timestamp(1567578890, 1), t: 1 }, wallTime: new Date(1567578890853), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:52.840+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 
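
The exchange above is the replica set's liveness loop: this node, cmodb803 (fromId: 1), heartbeats the primary cmodb802 and the other secondary cmodb804, and each healthy reply from the primary postpones the election timeout (cancelled at 06:35:01.546, rescheduled to 06:35:03.775). The same member state can be read from outside with replSetGetStatus; a minimal mongo-shell sketch, with the host name taken from this log:

    // Inspect the heartbeat/election state that the D2 REPL_HB and
    // D4 ELECTION lines trace, via replSetGetStatus (what rs.status() wraps).
    var conn = new Mongo("cmodb803.togewa.com:27019");
    var status = conn.getDB("admin").runCommand({ replSetGetStatus: 1 });
    status.members.forEach(function (m) {
        print(m.name + "  " + m.stateStr +
              "  lastHeartbeatRecv=" + (m.lastHeartbeatRecv || "-"));
    });
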
2019-09-04T06:34:52.840+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1472) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), opTime: { ts: Timestamp(1567578890, 1), t: 1 }, wallTime: new Date(1567578890853), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } 2019-09-04T06:34:52.840+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:52.840+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:54.840Z 2019-09-04T06:34:52.840+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:52.855+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:52.855+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:52.855+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578890, 1) 2019-09-04T06:34:52.855+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21347 2019-09-04T06:34:52.855+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:52.855+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:52.855+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21347 2019-09-04T06:34:52.857+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21350 2019-09-04T06:34:52.857+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21350 2019-09-04T06:34:52.857+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578890, 1), t: 1 }({ ts: Timestamp(1567578890, 1), t: 1 }) 2019-09-04T06:34:52.870+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:52.970+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:52.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:52.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:52.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:52.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
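
Between heartbeats, the ReplBatcher/rsSync-0 lines are the secondary's apply loop idling: each pass opens a WiredTiger snapshot on local.oplog.rs, finds no new batch, and re-reads the minValid document, which here equals the last applied opTime ({ ts: Timestamp(1567578890, 1), t: 1 }), i.e. the node is caught up. Both artifacts live in ordinary collections in the local database; a sketch, assuming a direct connection to this member:

    // minValid is the opTime the node must replay to before its data is
    // considered consistent; the newest oplog entry is the last applied op.
    var local = new Mongo("cmodb803.togewa.com:27019").getDB("local");
    printjson(local.getCollection("replset.minvalid").findOne());
    printjson(local.getCollection("oplog.rs")
                   .find().sort({ $natural: -1 }).limit(1).next());
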
2019-09-04T06:34:53.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:53.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:53.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:53.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:53.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:53.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), opTime: { ts: Timestamp(1567578890, 1), t: 1 }, wallTime: new Date(1567578890853) } 2019-09-04T06:34:53.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.071+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:53.084+0000 D2 COMMAND [conn101] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.084+0000 I COMMAND [conn101] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.084+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.084+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.152+0000 I COMMAND [conn23] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.171+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:53.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:53.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:53.239+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.239+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.271+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:53.317+0000 D2 COMMAND [conn157] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:34:53.317+0000 I COMMAND [conn157] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.324+0000 D2 COMMAND [conn101] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578887, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 57FC75A52086D895956790C3442619CF4220369B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578887, 3), t: 1 } }, $db: "config" } 2019-09-04T06:34:53.324+0000 D1 COMMAND [conn101] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578887, 3), t: 1 } } } 2019-09-04T06:34:53.324+0000 D3 STORAGE [conn101] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:53.324+0000 D1 COMMAND [conn101] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578887, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 57FC75A52086D895956790C3442619CF4220369B), keyId: 6727891476899954718 } }, 
$configServerState: { opTime: { ts: Timestamp(1567578887, 3), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578890, 1)
2019-09-04T06:34:53.324+0000 D5 QUERY [conn101] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shardsTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:53.324+0000 D5 QUERY [conn101] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:34:53.324+0000 D5 QUERY [conn101] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:34:53.324+0000 D5 QUERY [conn101] Rated tree: $and
2019-09-04T06:34:53.324+0000 D5 QUERY [conn101] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:53.324+0000 D5 QUERY [conn101] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:53.324+0000 D2 QUERY [conn101] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:34:53.324+0000 D3 STORAGE [conn101] WT begin_transaction for snapshot id 21369
2019-09-04T06:34:53.324+0000 D3 STORAGE [conn101] WT rollback_transaction for snapshot id 21369
2019-09-04T06:34:53.324+0000 I COMMAND [conn101] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578887, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578887, 3), signature: { hash: BinData(0, 57FC75A52086D895956790C3442619CF4220369B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578887, 3), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:34:53.329+0000 D2 COMMAND [conn218] run command admin.$cmd { ping: 1.0, lsid: { id: UUID("ac8e303f-4e60-4a79-b9a4-f7cba7354076") }, $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" }
2019-09-04T06:34:53.329+0000 I COMMAND [conn218] command admin.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("ac8e303f-4e60-4a79-b9a4-f7cba7354076") }, $clusterTime: { clusterTime: Timestamp(1567578830, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:53.371+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:53.471+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:53.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:53.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:53.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:53.498+0000 I COMMAND [conn15] command admin.$cmd
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.571+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:53.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.671+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:53.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.739+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.739+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.771+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:53.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.856+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:53.856+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:53.856+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578890, 1) 2019-09-04T06:34:53.856+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21383 2019-09-04T06:34:53.856+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:53.856+0000 D3 STORAGE 
[ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:53.856+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21383 2019-09-04T06:34:53.857+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21386 2019-09-04T06:34:53.857+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21386 2019-09-04T06:34:53.857+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578890, 1), t: 1 }({ ts: Timestamp(1567578890, 1), t: 1 }) 2019-09-04T06:34:53.871+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:53.971+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:53.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:53.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:53.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:54.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.072+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:54.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.142+0000 I NETWORK [listener] connection accepted from 10.108.2.46:41168 #478 (89 connections now open) 2019-09-04T06:34:54.142+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:54.142+0000 D2 COMMAND [conn478] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:54.142+0000 I NETWORK [conn478] received client metadata from 10.108.2.46:41168 conn478: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:54.142+0000 I COMMAND [conn478] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 
numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:54.142+0000 D2 COMMAND [conn478] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578885, 1), signature: { hash: BinData(0, 3250ABA29FA1BAB080F5BF7C803FB52479895B2B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:34:54.142+0000 D1 REPL [conn478] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578890, 1), t: 1 } 2019-09-04T06:34:54.142+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000 2019-09-04T06:34:54.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.172+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:54.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:54.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:54.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:54.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:54.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, 
primaryId: 0, durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), opTime: { ts: Timestamp(1567578890, 1), t: 1 }, wallTime: new Date(1567578890853) } 2019-09-04T06:34:54.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:54.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:34:54.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.272+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:54.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.372+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:54.472+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:54.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.572+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:54.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:34:54.672+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:54.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.772+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:54.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:54.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1473) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:54.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1473 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:04.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:54.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:54.839+0000 D2 ASIO [Replication] Request 1473 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), opTime: { ts: Timestamp(1567578890, 1), t: 1 }, wallTime: new Date(1567578890853), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } 2019-09-04T06:34:54.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), opTime: { ts: Timestamp(1567578890, 1), t: 1 }, wallTime: new Date(1567578890853), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:54.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:54.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1473) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), opTime: { ts: Timestamp(1567578890, 1), t: 1 }, wallTime: new Date(1567578890853), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } 2019-09-04T06:34:54.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:54.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:35:03.775+0000 2019-09-04T06:34:54.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:35:06.221+0000 2019-09-04T06:34:54.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:54.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:54.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:54.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:56.839Z 2019-09-04T06:34:54.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:54.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:54.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1474) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:54.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1474 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:04.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:54.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:54.840+0000 D2 ASIO [Replication] Request 1474 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), opTime: { ts: Timestamp(1567578890, 1), t: 1 }, wallTime: new Date(1567578890853), $replData: { term: 1, lastOpCommitted: { ts: 
Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } 2019-09-04T06:34:54.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), opTime: { ts: Timestamp(1567578890, 1), t: 1 }, wallTime: new Date(1567578890853), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:54.840+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:54.840+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1474) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), opTime: { ts: Timestamp(1567578890, 1), t: 1 }, wallTime: new Date(1567578890853), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578890, 1) } 2019-09-04T06:34:54.840+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:54.840+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:56.840Z 2019-09-04T06:34:54.840+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:54.856+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:54.856+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:54.856+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578890, 1) 2019-09-04T06:34:54.856+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21414 
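
The cadence visible above (heartbeats at 06:34:52.839 and 06:34:54.839, the next scheduled for 06:34:56.839) matches the default heartbeatIntervalMillis of 2000, and each reply reschedules the election timeout to roughly now + electionTimeoutMillis (10000 by default) plus a small randomized offset, which is why the callback moves from 06:35:03.775 to 06:35:06.221. Both values come from the replica set configuration; a sketch of reading them:

    // heartbeatIntervalMillis and electionTimeoutMillis live in the
    // settings subdocument of the replica set config.
    var admin = new Mongo("cmodb803.togewa.com:27019").getDB("admin");
    var cfg = admin.runCommand({ replSetGetConfig: 1 }).config;
    printjson(cfg.settings); // expect heartbeatIntervalMillis: 2000, electionTimeoutMillis: 10000
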
2019-09-04T06:34:54.856+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:54.856+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:54.856+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21414 2019-09-04T06:34:54.857+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21417 2019-09-04T06:34:54.857+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21417 2019-09-04T06:34:54.857+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578890, 1), t: 1 }({ ts: Timestamp(1567578890, 1), t: 1 }) 2019-09-04T06:34:54.872+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:54.972+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:54.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:54.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:54.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:55.049+0000 D2 COMMAND [conn457] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578891, 1), signature: { hash: BinData(0, AAD839FC04C28C69C88AE327F60F4597EE716274), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:55.049+0000 D1 REPL [conn457] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578890, 1), t: 1 } 2019-09-04T06:34:55.049+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000 2019-09-04T06:34:55.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:55.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:55.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" } 
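
The wait conn457 (and conn478 just before it) has entered is the same one that ended in MaxTimeMSExpired for conn425 after the backtrace: the client, a mongos, asks for readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, but this config replica set's committed snapshot is { ts: Timestamp(1567578890, 1), t: 1 }. The requested opTime carries term 92 and a wall clock roughly 13 days older than the current one, which is consistent with the config replica set having been rebuilt or re-initiated (its term reset to 1) while the mongos kept its cached $configServerState; a wait for an opTime from a higher, defunct term can never be satisfied, so each such read blocks for its full 30000 ms maxTimeMS and fails. A sketch of replaying the read by hand, values copied from the log; afterOpTime is normally an internal field set by mongos, so treat this purely as a diagnostic probe:

    // Re-issue the blocked read with the afterOpTime copied from the log.
    // Against a set whose current term is 1 this should wait out maxTimeMS
    // and return code 50 (MaxTimeMSExpired), as conn425's command did.
    var res = new Mongo("cmodb803.togewa.com:27019").getDB("config").runCommand({
        find: "settings",
        filter: { _id: "balancer" },
        limit: 1,
        readConcern: { level: "majority",
                       afterOpTime: { ts: Timestamp(1566459168, 1), t: NumberLong(92) } },
        maxTimeMS: 30000
    });
    printjson(res);
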
2019-09-04T06:34:55.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:55.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), opTime: { ts: Timestamp(1567578890, 1), t: 1 }, wallTime: new Date(1567578890853) } 2019-09-04T06:34:55.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.073+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:55.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.173+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:55.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:55.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:34:55.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.273+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:55.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.373+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:55.473+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:55.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.573+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:55.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.655+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.673+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:55.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.771+0000 D2 ASIO [RS] Request 1469 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578895, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", 
ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578895769), o: { $v: 1, $set: { ping: new Date(1567578895768) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpApplied: { ts: Timestamp(1567578895, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } 2019-09-04T06:34:55.771+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578895, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578895769), o: { $v: 1, $set: { ping: new Date(1567578895768) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpApplied: { ts: Timestamp(1567578895, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:55.771+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:34:55.771+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578895, 1) and ending at ts: Timestamp(1567578895, 1) 2019-09-04T06:34:55.771+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:06.221+0000 2019-09-04T06:34:55.771+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:05.887+0000 2019-09-04T06:34:55.771+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:55.771+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578895, 1), t: 1 } 2019-09-04T06:34:55.771+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:55.771+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:55.771+0000 D3 STORAGE [ReplBatcher] looking up metadata for: 
local.oplog.rs @ RecordId(10) 2019-09-04T06:34:55.771+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578890, 1) 2019-09-04T06:34:55.771+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21444 2019-09-04T06:34:55.771+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:55.771+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:55.771+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21444 2019-09-04T06:34:55.772+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:55.772+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:34:55.772+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:55.772+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578890, 1) 2019-09-04T06:34:55.772+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21447 2019-09-04T06:34:55.772+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578895, 1) } 2019-09-04T06:34:55.772+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:55.772+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:55.772+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21447 2019-09-04T06:34:55.772+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21418 2019-09-04T06:34:55.772+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21418 2019-09-04T06:34:55.772+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21450 2019-09-04T06:34:55.772+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21450 2019-09-04T06:34:55.772+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:34:55.772+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 21452 2019-09-04T06:34:55.772+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578895, 1) 2019-09-04T06:34:55.772+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578895, 1) 2019-09-04T06:34:55.772+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 21452 2019-09-04T06:34:55.772+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:34:55.772+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:34:55.772+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21451 
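The entries above show the oplog fetcher handing a single config.lockpings update to the ReplBatcher, which re-reads the catalog metadata for local.oplog.rs (a capped, 1073741824-byte collection) before batching it for application. A minimal pymongo sketch of the same wire pattern follows: a tailable, awaitable read of the sync source's oplog, resuming after the last applied optime. The host, port, and absence of credentials are illustrative assumptions, not values guaranteed by this log.

from pymongo import CursorType, MongoClient
from bson.timestamp import Timestamp

# Hypothetical connection to the sync source seen above; auth omitted.
client = MongoClient("cmodb804.togewa.com", 27019)
oplog = client["local"]["oplog.rs"]

# Matches the CCE metadata above: {'capped': True, 'size': 1073741824, ...}
print(oplog.options())

# Resume after the last applied optime, then tail the capped collection
# the way the fetcher's find/getMore loop does.
last_applied = Timestamp(1567578890, 1)
cursor = oplog.find(
    {"ts": {"$gt": last_applied}},
    cursor_type=CursorType.TAILABLE_AWAIT,
)
for entry in cursor:
    # e.g. { op: "u", ns: "config.lockpings", o2: {...}, o: { $set: {...} } }
    print(entry["ts"], entry["op"], entry["ns"])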
2019-09-04T06:34:55.772+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21451
2019-09-04T06:34:55.772+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21454
2019-09-04T06:34:55.772+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21454
2019-09-04T06:34:55.772+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578895, 1), t: 1 }({ ts: Timestamp(1567578895, 1), t: 1 })
2019-09-04T06:34:55.772+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578895, 1)
2019-09-04T06:34:55.772+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21455
2019-09-04T06:34:55.772+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578895, 1) } } ] } sort: {} projection: {}
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578895, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578895, 1) || First: notFirst: full path: ts
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and t $eq 1 ts $lt Timestamp(1567578895, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578895, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578895, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or $and t $eq 1 ts $lt Timestamp(1567578895, 1) t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:55.772+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21455
2019-09-04T06:34:55.772+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:34:55.772+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:55.772+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578895, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578895769), o: { $v: 1, $set: { ping: new Date(1567578895768) } } }, oplog application mode: Secondary
2019-09-04T06:34:55.772+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578895, 1)
2019-09-04T06:34:55.772+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 21457
2019-09-04T06:34:55.772+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }
2019-09-04T06:34:55.772+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:34:55.772+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 21457
2019-09-04T06:34:55.772+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:34:55.772+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578895, 1), t: 1 }({ ts: Timestamp(1567578895, 1), t: 1 })
2019-09-04T06:34:55.772+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578895, 1)
2019-09-04T06:34:55.772+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21456
2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Beginning planning...
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:34:55.772+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:34:55.773+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:34:55.773+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:34:55.773+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:55.773+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21456 2019-09-04T06:34:55.773+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578895, 1) 2019-09-04T06:34:55.773+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:55.773+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21460 2019-09-04T06:34:55.773+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), appliedOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, appliedWallTime: new Date(1567578890853), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), appliedOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, appliedWallTime: new Date(1567578895769), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), appliedOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, appliedWallTime: new Date(1567578890853), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:55.773+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21460 2019-09-04T06:34:55.773+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1475 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:25.773+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), appliedOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, appliedWallTime: new Date(1567578890853), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), appliedOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, appliedWallTime: new Date(1567578895769), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), appliedOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, appliedWallTime: new Date(1567578890853), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: 
Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:55.773+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:25.773+0000 2019-09-04T06:34:55.773+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578895, 1), t: 1 }({ ts: Timestamp(1567578895, 1), t: 1 }) 2019-09-04T06:34:55.773+0000 D2 ASIO [RS] Request 1475 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } 2019-09-04T06:34:55.773+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:55.773+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:55.773+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:25.773+0000 2019-09-04T06:34:55.773+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578895, 1), t: 1 } 2019-09-04T06:34:55.773+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1476 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:05.773+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578890, 1), t: 1 } } 2019-09-04T06:34:55.773+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:25.773+0000 2019-09-04T06:34:55.774+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:34:55.774+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:55.774+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), appliedOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, appliedWallTime: new Date(1567578890853), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), appliedOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, appliedWallTime: new 
Date(1567578895769), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), appliedOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, appliedWallTime: new Date(1567578890853), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:55.774+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1477 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:25.774+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), appliedOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, appliedWallTime: new Date(1567578890853), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), appliedOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, appliedWallTime: new Date(1567578895769), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, durableWallTime: new Date(1567578890853), appliedOpTime: { ts: Timestamp(1567578890, 1), t: 1 }, appliedWallTime: new Date(1567578890853), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:34:55.774+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:25.773+0000 2019-09-04T06:34:55.774+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:55.774+0000 D2 ASIO [RS] Request 1477 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } 2019-09-04T06:34:55.774+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578890, 1), t: 1 }, lastCommittedWall: new Date(1567578890853), lastOpVisible: { ts: Timestamp(1567578890, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578890, 1), $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:55.774+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 
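The two replSetUpdatePosition requests above (1475 and 1477) are this secondary reporting a durable and an applied optime per member back up its sync chain; the second is sent as soon as the journal flush makes Timestamp(1567578895, 1) durable for memberId 1. A hedged sketch of how the same per-member optimes look from a client, via replSetGetStatus (connection details are assumptions):

from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019)  # hypothetical
status = client.admin.command("replSetGetStatus")

for member in status["members"]:
    # optimeDate / optimeDurableDate correspond to the appliedOpTime /
    # durableOpTime pairs carried in the replSetUpdatePosition payloads.
    print(member["name"], member["stateStr"],
          member.get("optimeDate"), member.get("optimeDurableDate"))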
2019-09-04T06:34:55.774+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:25.773+0000 2019-09-04T06:34:55.775+0000 D2 ASIO [RS] Request 1476 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpApplied: { ts: Timestamp(1567578895, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578895, 1), $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } 2019-09-04T06:34:55.775+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpApplied: { ts: Timestamp(1567578895, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578895, 1), $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:55.775+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:34:55.775+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:34:55.775+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578890, 1) 2019-09-04T06:34:55.775+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:05.887+0000 2019-09-04T06:34:55.775+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:06.502+0000 2019-09-04T06:34:55.775+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1478 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:05.775+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578895, 1), t: 1 } } 2019-09-04T06:34:55.775+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), 
t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:25.773+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn449] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn449] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:58.759+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn455] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:55.775+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn455] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.962+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:34:55.775+0000 
D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn453] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn453] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.897+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn450] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn450] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.261+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn444] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn444] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:34:59.943+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn452] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn452] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.763+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn451] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn451] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.434+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn454] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn454] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.925+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn460] 
Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.775+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000 2019-09-04T06:34:55.776+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.776+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000 2019-09-04T06:34:55.776+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.776+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000 2019-09-04T06:34:55.776+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.776+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000 2019-09-04T06:34:55.776+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578895, 1), t: 1 }, 2019-09-04T06:34:55.769+0000 2019-09-04T06:34:55.776+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000 2019-09-04T06:34:55.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.871+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578895, 1) 2019-09-04T06:34:55.874+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:55.974+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:55.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:55.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:55.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:56.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.074+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:56.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.152+0000 I COMMAND [conn23] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.174+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:56.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 61B9E9CB03C23EEA135D9CA95563D75BA85A6224), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:56.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:34:56.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 61B9E9CB03C23EEA135D9CA95563D75BA85A6224), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:56.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 61B9E9CB03C23EEA135D9CA95563D75BA85A6224), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:56.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), opTime: { ts: Timestamp(1567578895, 1), t: 1 }, wallTime: new Date(1567578895769) } 2019-09-04T06:34:56.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 61B9E9CB03C23EEA135D9CA95563D75BA85A6224), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:56.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:34:56.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.274+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:56.296+0000 I NETWORK [listener] connection accepted from 10.108.2.60:45020 #479 (90 connections now open) 2019-09-04T06:34:56.296+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:56.296+0000 D2 COMMAND [conn479] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:56.296+0000 I NETWORK [conn479] received client metadata from 10.108.2.60:45020 conn479: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:56.296+0000 I COMMAND [conn479] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:56.299+0000 D2 COMMAND [conn479] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578893, 1), signature: { hash: BinData(0, 68688A86A60615DADD724D5C37EF1F723A2E3681), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:56.299+0000 D1 REPL [conn479] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578895, 1), t: 1 } 2019-09-04T06:34:56.299+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000 2019-09-04T06:34:56.322+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.322+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.360+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.360+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:34:56.374+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:56.474+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:56.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.574+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:56.601+0000 D2 COMMAND [conn49] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578895, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 61B9E9CB03C23EEA135D9CA95563D75BA85A6224), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578895, 1), t: 1 } }, $db: "config" } 2019-09-04T06:34:56.602+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578895, 1), t: 1 } } } 2019-09-04T06:34:56.602+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:34:56.602+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578895, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 61B9E9CB03C23EEA135D9CA95563D75BA85A6224), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578895, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578895, 1) 2019-09-04T06:34:56.602+0000 D2 QUERY [conn49] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:34:56.602+0000 I COMMAND [conn49] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578895, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 61B9E9CB03C23EEA135D9CA95563D75BA85A6224), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578895, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:34:56.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.675+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:56.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.772+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:56.772+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:56.772+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578895, 1) 2019-09-04T06:34:56.772+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21493 2019-09-04T06:34:56.772+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: 
UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:56.772+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:56.772+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21493 2019-09-04T06:34:56.773+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21496 2019-09-04T06:34:56.773+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21496 2019-09-04T06:34:56.773+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578895, 1), t: 1 }({ ts: Timestamp(1567578895, 1), t: 1 }) 2019-09-04T06:34:56.775+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:56.822+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.822+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:56.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1479) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:56.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1479 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:06.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:56.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:56.839+0000 D2 ASIO [Replication] Request 1479 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), opTime: { ts: Timestamp(1567578895, 1), t: 1 }, wallTime: new Date(1567578895769), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578895, 1), $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } 2019-09-04T06:34:56.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), opTime: { ts: Timestamp(1567578895, 1), 
t: 1 }, wallTime: new Date(1567578895769), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578895, 1), $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:56.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:56.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1479) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), opTime: { ts: Timestamp(1567578895, 1), t: 1 }, wallTime: new Date(1567578895769), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578895, 1), $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } 2019-09-04T06:34:56.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:56.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:35:06.502+0000 2019-09-04T06:34:56.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:35:06.923+0000 2019-09-04T06:34:56.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:56.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:56.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:56.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:34:58.839Z 2019-09-04T06:34:56.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:56.840+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:56.840+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1480) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:56.840+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1480 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:06.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:56.840+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 
2019-09-04T06:35:04.839+0000 2019-09-04T06:34:56.840+0000 D2 ASIO [Replication] Request 1480 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), opTime: { ts: Timestamp(1567578895, 1), t: 1 }, wallTime: new Date(1567578895769), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578895, 1), $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } 2019-09-04T06:34:56.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), opTime: { ts: Timestamp(1567578895, 1), t: 1 }, wallTime: new Date(1567578895769), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578895, 1), $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:56.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:56.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1480) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), opTime: { ts: Timestamp(1567578895, 1), t: 1 }, wallTime: new Date(1567578895769), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578895, 1), $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } 2019-09-04T06:34:56.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:56.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:34:58.840Z 2019-09-04T06:34:56.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 
2019-09-04T06:35:04.839+0000 2019-09-04T06:34:56.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.875+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:56.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.975+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:56.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:56.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:56.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:57.061+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:34:57.061+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:34:57.061+0000 D2 COMMAND [conn90] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:34:57.062+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:34:57.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 61B9E9CB03C23EEA135D9CA95563D75BA85A6224), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:57.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:57.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 61B9E9CB03C23EEA135D9CA95563D75BA85A6224), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:57.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 61B9E9CB03C23EEA135D9CA95563D75BA85A6224), keyId: 6727891476899954718 } }, $db: "admin" } 
2019-09-04T06:34:57.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), opTime: { ts: Timestamp(1567578895, 1), t: 1 }, wallTime: new Date(1567578895769) } 2019-09-04T06:34:57.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 61B9E9CB03C23EEA135D9CA95563D75BA85A6224), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.075+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:57.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.175+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:57.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:57.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:34:57.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.275+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:57.280+0000 D2 COMMAND [conn160] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:34:57.280+0000 I COMMAND [conn160] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.289+0000 D2 COMMAND [conn220] run command admin.$cmd { ping: 1.0, lsid: { id: UUID("23af97f8-66f0-4a27-b5f1-59167651ca5f") }, $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } 2019-09-04T06:34:57.289+0000 I COMMAND [conn220] command admin.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("23af97f8-66f0-4a27-b5f1-59167651ca5f") }, $clusterTime: { clusterTime: Timestamp(1567578835, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.375+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:57.475+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:57.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.576+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:57.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: 
"admin" } 2019-09-04T06:34:57.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.676+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:57.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.772+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:57.772+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:57.772+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578895, 1) 2019-09-04T06:34:57.772+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21532 2019-09-04T06:34:57.772+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:57.772+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:57.772+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21532 2019-09-04T06:34:57.773+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21535 2019-09-04T06:34:57.773+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21535 2019-09-04T06:34:57.773+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578895, 1), t: 1 }({ ts: Timestamp(1567578895, 1), t: 1 }) 2019-09-04T06:34:57.776+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:57.784+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.784+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:57.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:57.859+0000 I COMMAND [conn46] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:57.876+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:57.976+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:34:57.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:57.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:57.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:34:57.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:34:58.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:34:58.023+0000 D2 COMMAND [conn50] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578870, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578897, 1), signature: { hash: BinData(0, A73456E9D593F28AB376F1212E2D9172C1BAE844), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578870, 3), t: 1 } }, $db: "config" }
2019-09-04T06:34:58.023+0000 D1 COMMAND [conn50] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578870, 3), t: 1 } } }
2019-09-04T06:34:58.023+0000 D3 STORAGE [conn50] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:34:58.023+0000 D1 COMMAND [conn50] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578870, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578897, 1), signature: { hash: BinData(0, A73456E9D593F28AB376F1212E2D9172C1BAE844), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578870, 3), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578895, 1)
2019-09-04T06:34:58.023+0000 D5 QUERY [conn50] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:34:58.023+0000 D5 QUERY [conn50] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:34:58.023+0000 D5 QUERY [conn50] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:34:58.023+0000 D5 QUERY [conn50] Rated tree: $and
2019-09-04T06:34:58.023+0000 D5 QUERY [conn50] Planner: outputted 0 indexed solutions.
2019-09-04T06:34:58.023+0000 D5 QUERY [conn50] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:34:58.023+0000 D2 QUERY [conn50] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:34:58.023+0000 D3 STORAGE [conn50] WT begin_transaction for snapshot id 21543 2019-09-04T06:34:58.023+0000 D3 STORAGE [conn50] WT rollback_transaction for snapshot id 21543 2019-09-04T06:34:58.023+0000 I COMMAND [conn50] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578870, 3), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578897, 1), signature: { hash: BinData(0, A73456E9D593F28AB376F1212E2D9172C1BAE844), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578870, 3), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:34:58.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.076+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:58.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.176+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:58.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578896, 1), signature: { hash: BinData(0, 34FFB7B05FC923D5FC2254A215498D23AA282F79), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:58.234+0000 D2 COMMAND 
[conn28] command: replSetHeartbeat 2019-09-04T06:34:58.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578896, 1), signature: { hash: BinData(0, 34FFB7B05FC923D5FC2254A215498D23AA282F79), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:58.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578896, 1), signature: { hash: BinData(0, 34FFB7B05FC923D5FC2254A215498D23AA282F79), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:58.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), opTime: { ts: Timestamp(1567578895, 1), t: 1 }, wallTime: new Date(1567578895769) } 2019-09-04T06:34:58.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578896, 1), signature: { hash: BinData(0, 34FFB7B05FC923D5FC2254A215498D23AA282F79), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:58.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:34:58.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.276+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:58.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.376+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:58.476+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:58.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.577+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:58.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:34:58.677+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:58.745+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46788 #480 (91 connections now open) 2019-09-04T06:34:58.745+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:34:58.746+0000 D2 COMMAND [conn480] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:34:58.746+0000 I NETWORK [conn480] received client metadata from 10.108.2.64:46788 conn480: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:34:58.746+0000 I COMMAND [conn480] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:34:58.759+0000 I COMMAND [conn449] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 664A6946CA5A257A3A9BCE39541DDEC5F5F9603B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:58.759+0000 D1 - [conn449] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:58.759+0000 W - [conn449] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:58.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.772+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:58.772+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:58.772+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578895, 1) 2019-09-04T06:34:58.772+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21568 2019-09-04T06:34:58.772+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:58.772+0000 D3 
STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:58.772+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21568 2019-09-04T06:34:58.773+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21571 2019-09-04T06:34:58.773+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21571 2019-09-04T06:34:58.773+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578895, 1), t: 1 }({ ts: Timestamp(1567578895, 1), t: 1 }) 2019-09-04T06:34:58.776+0000 I - [conn449] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : 
"3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, 
{ "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:58.776+0000 D1 COMMAND [conn449] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, 
maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 664A6946CA5A257A3A9BCE39541DDEC5F5F9603B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:58.776+0000 D1 - [conn449] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:58.776+0000 W - [conn449] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:58.777+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:58.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.796+0000 I - [conn449] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F0
63D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, 
"buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:58.796+0000 W COMMAND [conn449] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:34:58.796+0000 I COMMAND [conn449] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578865, 1), signature: { hash: BinData(0, 664A6946CA5A257A3A9BCE39541DDEC5F5F9603B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:34:58.796+0000 D2 NETWORK [conn449] Session from 10.108.2.64:46774 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:58.796+0000 I NETWORK [conn449] end connection 10.108.2.64:46774 (90 connections now open) 2019-09-04T06:34:58.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:58.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1481) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:58.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1481 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:08.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:58.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:58.839+0000 D2 ASIO [Replication] Request 1481 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), opTime: { ts: Timestamp(1567578895, 1), t: 1 }, 
wallTime: new Date(1567578895769), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578895, 1), $clusterTime: { clusterTime: Timestamp(1567578897, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } 2019-09-04T06:34:58.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), opTime: { ts: Timestamp(1567578895, 1), t: 1 }, wallTime: new Date(1567578895769), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578895, 1), $clusterTime: { clusterTime: Timestamp(1567578897, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:34:58.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:58.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1481) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), opTime: { ts: Timestamp(1567578895, 1), t: 1 }, wallTime: new Date(1567578895769), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578895, 1), $clusterTime: { clusterTime: Timestamp(1567578897, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } 2019-09-04T06:34:58.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:34:58.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:35:06.923+0000 2019-09-04T06:34:58.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:35:09.095+0000 2019-09-04T06:34:58.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:58.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:34:58.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 
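The conn449 failure above is worth unpacking. The find on config.shards asks for readConcern majority with afterOpTime { ts: Timestamp(1566459168, 1), t: 92 }, yet every heartbeat in this excerpt reports term 1. An opTime carrying term 92 suggests the client (likely a mongos holding a cached $configServerState from an earlier incarnation of this config replica set) is waiting for a position the re-seeded set may never reach, so the majority read blocks until maxTimeMS (30000 ms) expires with MaxTimeMSExpired, while the heartbeats show the set itself is healthy. Drivers cannot set the internal afterOpTime field, but the observable behaviour is easy to reproduce with any majority read that cannot be satisfied in time. A minimal pymongo sketch, assuming a reachable mongod at cmodb803.togewa.com:27019 (hostname taken from the log; everything else is illustrative):

from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout
from pymongo.read_concern import ReadConcern

# Hostname and port taken from the log; adjust for your deployment.
client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

# Majority-read handle on config.shards, mirroring the logged command.
shards = client.config.get_collection(
    "shards", read_concern=ReadConcern("majority")
)

try:
    # max_time_ms mirrors the maxTimeMS: 30000 in the logged find.
    docs = list(shards.find({}).max_time_ms(30000))
    print("read", len(docs), "shard documents")
except ExecutionTimeout as exc:
    # Server error code 50, logged as errName:MaxTimeMSExpired above.
    print("majority read timed out:", exc)

pymongo surfaces server error code 50 as ExecutionTimeout, which matches the errCode:50 in the command line above.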
2019-09-04T06:34:58.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:00.839Z 2019-09-04T06:34:58.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:58.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:34:58.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1482) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:58.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1482 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:08.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:34:58.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:58.840+0000 D2 ASIO [Replication] Request 1482 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), opTime: { ts: Timestamp(1567578895, 1), t: 1 }, wallTime: new Date(1567578895769), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578895, 1), $clusterTime: { clusterTime: Timestamp(1567578897, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } 2019-09-04T06:34:58.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), opTime: { ts: Timestamp(1567578895, 1), t: 1 }, wallTime: new Date(1567578895769), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578895, 1), $clusterTime: { clusterTime: Timestamp(1567578897, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:34:58.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:34:58.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1482) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), opTime: { ts: Timestamp(1567578895, 1), t: 1 }, 
wallTime: new Date(1567578895769), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578895, 1), $clusterTime: { clusterTime: Timestamp(1567578897, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578895, 1) } 2019-09-04T06:34:58.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:34:58.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:00.840Z 2019-09-04T06:34:58.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:58.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.877+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:58.977+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:58.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:58.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:58.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:34:59.063+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:34:59.063+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:34:58.839+0000 2019-09-04T06:34:59.063+0000 D3 REPL [replexec-0] memberData lastupdate is: 2019-09-04T06:34:58.840+0000 2019-09-04T06:34:59.063+0000 D3 REPL [replexec-0] stalest member MemberId(0) date: 2019-09-04T06:34:58.839+0000 2019-09-04T06:34:59.063+0000 D3 REPL [replexec-0] scheduling next check at 2019-09-04T06:35:08.839+0000 2019-09-04T06:34:59.063+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:34:59.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578897, 1), signature: { hash: BinData(0, A73456E9D593F28AB376F1212E2D9172C1BAE844), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:59.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:34:59.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578897, 1), signature: { hash: BinData(0, 
A73456E9D593F28AB376F1212E2D9172C1BAE844), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:59.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578897, 1), signature: { hash: BinData(0, A73456E9D593F28AB376F1212E2D9172C1BAE844), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:34:59.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), opTime: { ts: Timestamp(1567578895, 1), t: 1 }, wallTime: new Date(1567578895769) } 2019-09-04T06:34:59.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578897, 1), signature: { hash: BinData(0, A73456E9D593F28AB376F1212E2D9172C1BAE844), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.077+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:59.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.177+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:59.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:34:59.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:34:59.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.277+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:59.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.377+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:59.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.477+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:59.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.578+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:59.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:34:59.678+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:59.737+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.772+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:34:59.772+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:34:59.772+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578895, 1) 2019-09-04T06:34:59.772+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21603 2019-09-04T06:34:59.772+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:34:59.772+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:34:59.772+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21603 2019-09-04T06:34:59.773+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21606 2019-09-04T06:34:59.774+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21606 2019-09-04T06:34:59.774+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578895, 1), t: 1 }({ ts: Timestamp(1567578895, 1), t: 1 }) 2019-09-04T06:34:59.778+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:59.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.878+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:59.947+0000 I COMMAND [conn444] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578862, 1), signature: { hash: BinData(0, B957C0F67BE372F4946F6CB8E2B141AAD7F04C25), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:34:59.948+0000 D1 - [conn444] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:34:59.948+0000 W - [conn444] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:59.964+0000 I - [conn444] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:59.964+0000 D1 COMMAND [conn444] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578862, 1), signature: { hash: BinData(0, B957C0F67BE372F4946F6CB8E2B141AAD7F04C25), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:59.964+0000 D1 - [conn444] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:34:59.964+0000 W - [conn444] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:34:59.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:34:59.978+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:34:59.984+0000 I - [conn444] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23
ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : 
"7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) 
[0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:34:59.984+0000 W COMMAND [conn444] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:34:59.984+0000 I COMMAND [conn444] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578862, 1), signature: { hash: BinData(0, B957C0F67BE372F4946F6CB8E2B141AAD7F04C25), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:34:59.984+0000 D2 NETWORK [conn444] Session from 10.108.2.72:45890 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:34:59.984+0000 I NETWORK [conn444] end connection 10.108.2.72:45890 (89 connections now open) 2019-09-04T06:34:59.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:34:59.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:00.003+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:35:00.003+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:35:00.003+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:35:00.017+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:35:00.017+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:35:00.028+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload:
"xxx", $db: "admin" } 2019-09-04T06:35:00.028+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:35:00.028+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:35:00.028+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:35:00.033+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:00.034+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:00.046+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:00.046+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:35:00.046+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:35:00.046+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:35:00.046+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:00.046+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:35:00.046+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:35:00.046+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:35:00.046+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:35:00.046+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:35:00.046+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:35:00.046+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:35:00.046+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:00.046+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:00.046+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:00.046+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578895, 1) 2019-09-04T06:35:00.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21619 2019-09-04T06:35:00.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21619 2019-09-04T06:35:00.047+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:00.047+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:00.047+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:35:00.047+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:35:00.047+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:00.047+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:35:00.047+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:35:00.047+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:35:00.047+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578895, 1) 2019-09-04T06:35:00.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21622 2019-09-04T06:35:00.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21622 2019-09-04T06:35:00.047+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:00.047+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:35:00.047+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:00.047+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:35:00.047+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:35:00.047+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:35:00.047+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578895, 1) 2019-09-04T06:35:00.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21624 2019-09-04T06:35:00.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21624 2019-09-04T06:35:00.047+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:00.047+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:35:00.047+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. 
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:35:00.047+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:35:00.047+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:00.047+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21627 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21627 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21628 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21628 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21629 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21629 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21630 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21630 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21631 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21631 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21632 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
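
The paired `find` commands on local.oplog.rs at 06:35:00.047 (sort `{ $natural: 1 }` then `{ $natural: -1 }`, limit 1) sample the oldest and newest oplog entries, the usual way a client computes the replication window; the follow-up probe of local.oplog.$main, which planned as EOF, checks for a legacy pre-replica-set oplog that does not exist here. The pattern is consistent with a monitoring agent, though the log does not identify conn90's client. A minimal PyMongo sketch of the same probe, assuming a direct connection (host and port below are placeholders, not from the log):

```python
# Sketch only: reproduce conn90's oplog-window probe.
# Assumptions: connecting directly to one member; host/port are placeholders.
from pymongo import MongoClient

client = MongoClient("localhost", 27019, directConnection=True,
                     readPreference="secondaryPreferred")
oplog = client.local["oplog.rs"]

# Oldest entry: forward natural order, limit 1 (the first COLLSCAN above).
first = next(oplog.find({"ts": {"$exists": True}}).sort("$natural", 1).limit(1), None)
# Newest entry: reverse natural order, limit 1 (the second COLLSCAN).
last = next(oplog.find({"ts": {"$exists": True}}).sort("$natural", -1).limit(1), None)
if first and last:
    # ts is a BSON Timestamp; as_datetime() gives its wall-clock component.
    print("oplog window:", first["ts"].as_datetime(), "->", last["ts"].as_datetime())
```

The hinted `$natural` sort is why the planner logs "Forcing a table scan": the oplog is a capped collection with no indexes, so natural order is insertion order and a bounded COLLSCAN touching one document is the cheapest possible read.
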
2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21632 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21633 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21633 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21634 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21634 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21635 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21635 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21636 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21636 
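
The `listDatabases` command that conn90 issued at 06:35:00.047 is what drives this long run of "looking up metadata" / "fetched CCE metadata" entries: to size each database the server walks every collection recorded in the durable catalog (_mdb_catalog), opening a short read-only WiredTiger snapshot per collection. Read-only snapshots are released with rollback rather than commit, which is why every "WT begin_transaction" above is immediately paired with a "WT rollback_transaction". A client-side sketch of the same sweep, including the per-database dbStats calls that appear later in this connection's trace (PyMongo; connection details are placeholders):

```python
# Sketch only: listDatabases followed by dbStats per database, sent with
# secondaryPreferred as in the log. Host/port are placeholders.
from pymongo import MongoClient

client = MongoClient("localhost", 27019, directConnection=True,
                     readPreference="secondaryPreferred")

# listDatabases returns { databases: [ { name, sizeOnDisk, ... }, ... ], ... }
listing = client.admin.command("listDatabases")
for entry in listing["databases"]:
    stats = client[entry["name"]].command("dbStats")
    print(entry["name"], "dataSize:", stats["dataSize"],
          "storageSize:", stats["storageSize"])
```
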
2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21637 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
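
Each catalog dump above embeds the full index definitions for its collection; for config.chunks these are the three unique indexes ns_1_min_1, ns_1_shard_1_min_1 and ns_1_lastmod_1 plus the implicit _id_ index. A client sees the same `spec` subdocuments through listIndexes, while catalog-only fields such as multikeyPaths and backgroundSecondary stay server-side. A small sketch (PyMongo; connection details are placeholders):

```python
# Sketch only: listIndexes returns the same index specs that the
# "fetched CCE metadata" dumps above embed. Host/port are placeholders.
from pymongo import MongoClient

client = MongoClient("localhost", 27019, directConnection=True,
                     readPreference="secondaryPreferred")

for spec in client.config.chunks.list_indexes():
    # Each spec has at least "name" and "key"; "unique" is present when set.
    print(spec["name"], dict(spec["key"]),
          "unique" if spec.get("unique") else "non-unique")
```
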
2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21637 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21638 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21638 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21639 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21639 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21640 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21640 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21641 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
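
Later in this trace (06:35:00.261, conn450) a `find` on config.shards carrying readConcern `{ level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }` and maxTimeMS 30000 fails with MaxTimeMSExpired while waiting for the read concern to be satisfied, and the server then prints two BEGIN/END BACKTRACE blocks for the resulting DBExceptions (one raised in waitForReadConcern, one while logging the operation), as mongod does when exception tracing is enabled. Note the requested afterOpTime is in term 92 while this node's heartbeat responses report term 1, which suggests the caller is waiting on an opTime from a previous incarnation of the replica set, a wait that can never complete and so runs out the 30-second limit. A sketch of how such a time-limited majority read is issued from a driver (PyMongo surfaces the server's MaxTimeMSExpired as ExecutionTimeout; afterOpTime is an internal cluster-to-cluster field and is omitted here; connection details are placeholders):

```python
# Sketch only: a majority read with a server-side time limit, like
# conn450's find on config.shards (maxTimeMS 30000, readPreference nearest).
from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout
from pymongo.read_concern import ReadConcern

client = MongoClient("localhost", 27019, directConnection=True,
                     readPreference="nearest")
shards = client.config.get_collection("shards",
                                      read_concern=ReadConcern("majority"))
try:
    for doc in shards.find({}).max_time_ms(30000):
        print(doc["_id"], doc.get("host"))
except ExecutionTimeout:
    # Raised when the server reports MaxTimeMSExpired -- here, because the
    # majority-committed snapshot never reached the requested point in time.
    print("MaxTimeMSExpired: majority read concern not satisfied in time")
```
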
2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21641 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21642 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21642 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21643 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21643 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21644 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21644 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21645 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 21645 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21646 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21646 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21647 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21647 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21648 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21648 2019-09-04T06:35:00.049+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:35:00.049+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21650 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21650 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21651 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21651 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21652 2019-09-04T06:35:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21652 2019-09-04T06:35:00.049+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:00.050+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21654 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21654 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21655 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21655 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21656 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21656 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21657 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21657 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21658 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21658 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21659 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21659 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21660 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21660 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 21661 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21661 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21662 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21662 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21663 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21663 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21664 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21664 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21665 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21665 2019-09-04T06:35:00.050+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:00.050+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21667 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21667 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21668 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21668 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21669 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21669 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21670 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21670 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21671 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21671 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 21672 2019-09-04T06:35:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 21672 2019-09-04T06:35:00.050+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:00.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.070+0000 I 
COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.078+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:00.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.178+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:00.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.207+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578897, 1), signature: { hash: BinData(0, A73456E9D593F28AB376F1212E2D9172C1BAE844), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:00.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:00.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578897, 1), signature: { hash: BinData(0, A73456E9D593F28AB376F1212E2D9172C1BAE844), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:00.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578897, 1), signature: { hash: BinData(0, A73456E9D593F28AB376F1212E2D9172C1BAE844), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:00.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), opTime: { ts: Timestamp(1567578895, 1), t: 1 }, wallTime: new 
Date(1567578895769) } 2019-09-04T06:35:00.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578897, 1), signature: { hash: BinData(0, A73456E9D593F28AB376F1212E2D9172C1BAE844), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:00.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:00.236+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.261+0000 I COMMAND [conn450] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578868, 1), signature: { hash: BinData(0, 8B61E7B5973B908A587EE744417ACD62ADE50C6C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:00.261+0000 D1 - [conn450] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:00.261+0000 W - [conn450] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.278+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:00.278+0000 I - [conn450] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:00.278+0000 D1 COMMAND [conn450] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578868, 1), signature: { hash: BinData(0, 8B61E7B5973B908A587EE744417ACD62ADE50C6C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.278+0000 D1 - [conn450] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:00.278+0000 W - [conn450] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.298+0000 I - [conn450] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 
0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : 
"88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" 
: "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:00.298+0000 W COMMAND [conn450] Unable to 
gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:35:00.298+0000 I COMMAND [conn450] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578868, 1), signature: { hash: BinData(0, 8B61E7B5973B908A587EE744417ACD62ADE50C6C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:35:00.298+0000 D2 NETWORK [conn450] Session from 10.108.2.54:49342 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:00.298+0000 I NETWORK [conn450] end connection 10.108.2.54:49342 (88 connections now open) 2019-09-04T06:35:00.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.378+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:00.435+0000 I COMMAND [conn451] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578870, 1), signature: { hash: BinData(0, 6920513E637F8F9FA85BDF131C416F82F176C928), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:00.435+0000 D1 - [conn451] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:00.435+0000 W - [conn451] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.437+0000 D2 ASIO [RS] Request 1478 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578900, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578900424), o: { $v: 1, $set: { ping: new Date(1567578900422) } } }, { ts: Timestamp(1567578900, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578900424), o: { $v: 1, $set: { ping: new Date(1567578900424) } } }, { ts: Timestamp(1567578900, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578900424), o: { $v: 1, $set: { ping: new Date(1567578900423) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts:
Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpApplied: { ts: Timestamp(1567578900, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578895, 1), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } 2019-09-04T06:35:00.437+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578900, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578900424), o: { $v: 1, $set: { ping: new Date(1567578900422) } } }, { ts: Timestamp(1567578900, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578900424), o: { $v: 1, $set: { ping: new Date(1567578900424) } } }, { ts: Timestamp(1567578900, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578900424), o: { $v: 1, $set: { ping: new Date(1567578900423) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpApplied: { ts: Timestamp(1567578900, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578895, 1), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:00.437+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:00.438+0000 D2 REPL [replication-1] oplog fetcher read 3 operations from remote oplog starting at ts: Timestamp(1567578900, 1) and ending at ts: Timestamp(1567578900, 3) 2019-09-04T06:35:00.438+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:09.095+0000 2019-09-04T06:35:00.438+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:10.728+0000 2019-09-04T06:35:00.438+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578900, 3), t: 1 } 2019-09-04T06:35:00.438+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:00.438+0000 D3 EXECUTOR [replexec-3] Not reaping 
because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:00.438+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:00.438+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:00.438+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578895, 1) 2019-09-04T06:35:00.438+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21690 2019-09-04T06:35:00.438+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:00.438+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:00.438+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21690 2019-09-04T06:35:00.438+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:00.438+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:00.438+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578895, 1) 2019-09-04T06:35:00.438+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21693 2019-09-04T06:35:00.438+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:00.438+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:00.438+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21693 2019-09-04T06:35:00.438+0000 D2 REPL [rsSync-0] replication batch size is 3 2019-09-04T06:35:00.438+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578900, 1) } 2019-09-04T06:35:00.438+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21607 2019-09-04T06:35:00.438+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21607 2019-09-04T06:35:00.438+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21696 2019-09-04T06:35:00.438+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21696 2019-09-04T06:35:00.438+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:00.438+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 21698 2019-09-04T06:35:00.438+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578900, 1) 2019-09-04T06:35:00.438+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578900, 1) 2019-09-04T06:35:00.438+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578900, 2) 2019-09-04T06:35:00.438+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future 
write operations to Timestamp(1567578900, 2) 2019-09-04T06:35:00.438+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578900, 3) 2019-09-04T06:35:00.438+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578900, 3) 2019-09-04T06:35:00.438+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 21698 2019-09-04T06:35:00.438+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:00.438+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:35:00.438+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21697 2019-09-04T06:35:00.438+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21697 2019-09-04T06:35:00.438+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21700 2019-09-04T06:35:00.438+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21700 2019-09-04T06:35:00.438+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578900, 3), t: 1 }({ ts: Timestamp(1567578900, 3), t: 1 }) 2019-09-04T06:35:00.438+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578900, 3) 2019-09-04T06:35:00.438+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21701 2019-09-04T06:35:00.438+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578900, 3) } } ] } sort: {} projection: {} 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578900, 3) Sort: {} Proj: {} ============================= 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578900, 3) || First: notFirst: full path: ts 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578900, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578900, 3) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578900, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
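
The planner entries above show how the subplanner handles this minvalid read: the $or filter is split into per-branch sub-queries, each branch is rated against the only available index ({ _id: 1 }), zero indexed solutions are produced, and every branch falls back to a collection scan. A minimal sketch of the same query shape in Python, assuming the pymongo driver and a locally reachable mongod (neither the driver call nor the host string appears in this log; illustrative only):

# Hypothetical reproduction of the minvalid query shape rated in the
# planner output above; assumes pymongo and a reachable mongod.
from bson.timestamp import Timestamp
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27019")  # port taken from this log
minvalid = client["local"]["replset.minvalid"]

# The $or the subplanner decomposed:
# (t == 1 AND ts < Timestamp(1567578900, 3)) OR (t < 1).
# With only the _id index present, each branch can only plan as a COLLSCAN.
query = {
    "$or": [
        {"t": 1, "ts": {"$lt": Timestamp(1567578900, 3)}},
        {"t": {"$lt": 1}},
    ]
}
print(minvalid.find_one(query))
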
2019-09-04T06:35:00.438+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578900, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:00.438+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21701 2019-09-04T06:35:00.438+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:00.438+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:00.438+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578900, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578900424), o: { $v: 1, $set: { ping: new Date(1567578900424) } } }, oplog application mode: Secondary 2019-09-04T06:35:00.438+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578900, 2) 2019-09-04T06:35:00.438+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 21703 2019-09-04T06:35:00.439+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" } 2019-09-04T06:35:00.439+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:35:00.439+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 21703 2019-09-04T06:35:00.439+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:00.439+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:00.439+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578900, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578900424), o: { $v: 1, $set: { ping: new Date(1567578900422) } } }, oplog application mode: Secondary 2019-09-04T06:35:00.439+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578900, 1) 2019-09-04T06:35:00.439+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 21705 2019-09-04T06:35:00.439+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" } 2019-09-04T06:35:00.439+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:35:00.439+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 21705 2019-09-04T06:35:00.439+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:00.439+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:00.439+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578900, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578900424), o: { $v: 1, $set: { ping: new Date(1567578900423) } } }, oplog application mode: Secondary 
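
Each op applied above is an _id-keyed $set of the ping field on config.lockpings, which is why the repl writer resolves the target documents with idhack point lookups. A minimal sketch of the driver-side write that would produce such an oplog entry, assuming pymongo against a hypothetical writable primary (the client call is not part of this log; the _id value is copied from the entries above):

# Hypothetical equivalent of the lockpings write whose oplog entry is
# replayed above; assumes pymongo and a writable primary. Illustrative only.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27019")
lockpings = client["config"]["lockpings"]

# Mirrors the oplog shape: o2 carries the _id, o carries { $set: { ping: ... } }.
result = lockpings.update_one(
    {"_id": "cmodb806.togewa.com:27018:1566460180:5935759852999151728"},
    {"$set": {"ping": datetime.now(timezone.utc)}},
)
print(result.matched_count, result.modified_count)

In the log itself these writes arrive through the oplog fetcher and are applied by the repl-writer-worker threads rather than through a client connection.
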
2019-09-04T06:35:00.439+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578900, 3) 2019-09-04T06:35:00.439+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 21707 2019-09-04T06:35:00.439+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" } 2019-09-04T06:35:00.439+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:35:00.439+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 21707 2019-09-04T06:35:00.439+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:00.439+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:00.439+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578900, 3), t: 1 }({ ts: Timestamp(1567578900, 3), t: 1 }) 2019-09-04T06:35:00.439+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578900, 3) 2019-09-04T06:35:00.439+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21702 2019-09-04T06:35:00.439+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:35:00.439+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:00.439+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:35:00.439+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:00.439+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:00.439+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:00.439+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21702 2019-09-04T06:35:00.439+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578900, 3) 2019-09-04T06:35:00.439+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21710 2019-09-04T06:35:00.439+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21710 2019-09-04T06:35:00.439+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578900, 3), t: 1 }({ ts: Timestamp(1567578900, 3), t: 1 }) 2019-09-04T06:35:00.439+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:00.439+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), appliedOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, appliedWallTime: new Date(1567578895769), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), appliedOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, appliedWallTime: new Date(1567578895769), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:00.439+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1483 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:30.439+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), appliedOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, appliedWallTime: new Date(1567578895769), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), appliedOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, appliedWallTime: new Date(1567578895769), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578895, 1), t: 1 }, lastCommittedWall: new Date(1567578895769), lastOpVisible: { ts: Timestamp(1567578895, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:00.439+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:30.439+0000 2019-09-04T06:35:00.439+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:00.439+0000 D2 ASIO [RS] Request 1483 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: 
Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } 2019-09-04T06:35:00.439+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:00.439+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:00.439+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:30.439+0000 2019-09-04T06:35:00.440+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578900, 3), t: 1 } 2019-09-04T06:35:00.440+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1484 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:10.440+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578895, 1), t: 1 } } 2019-09-04T06:35:00.440+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:30.439+0000 2019-09-04T06:35:00.440+0000 D2 ASIO [RS] Request 1484 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpApplied: { ts: Timestamp(1567578900, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } 2019-09-04T06:35:00.440+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpApplied: { ts: Timestamp(1567578900, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:00.440+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:00.440+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:00.440+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578895, 3) 2019-09-04T06:35:00.440+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:10.728+0000 2019-09-04T06:35:00.440+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:11.166+0000 2019-09-04T06:35:00.440+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1485 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:10.440+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578900, 3), t: 1 } } 2019-09-04T06:35:00.440+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:30.439+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 
2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn453] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn453] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.897+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn452] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn452] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.763+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn454] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn454] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.925+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000 2019-09-04T06:35:00.440+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:00.440+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.440+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.441+0000 
D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn455] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn455] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:00.962+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000 2019-09-04T06:35:00.441+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578900, 3), t: 1 }, 2019-09-04T06:35:00.424+0000 2019-09-04T06:35:00.442+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000 2019-09-04T06:35:00.444+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:35:00.444+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:00.444+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), appliedOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, appliedWallTime: new Date(1567578895769), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), appliedOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, appliedWallTime: new Date(1567578895769), 
memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:00.444+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1486 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:30.444+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), appliedOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, appliedWallTime: new Date(1567578895769), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, durableWallTime: new Date(1567578895769), appliedOpTime: { ts: Timestamp(1567578895, 1), t: 1 }, appliedWallTime: new Date(1567578895769), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:00.444+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:30.439+0000 2019-09-04T06:35:00.444+0000 D2 ASIO [RS] Request 1486 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } 2019-09-04T06:35:00.444+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:00.444+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:00.444+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:30.439+0000 2019-09-04T06:35:00.451+0000 I - [conn451] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 
0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : 
"/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, 
"buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:00.452+0000 D1 COMMAND [conn451] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578870, 1), signature: { hash: BinData(0, 6920513E637F8F9FA85BDF131C416F82F176C928), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.452+0000 D1 - [conn451] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:00.452+0000 W - [conn451] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.471+0000 I - [conn451] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 
0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : 
"/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, 
"buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:00.471+0000 W COMMAND [conn451] Unable to gather storage statistics for a slow operation due to lock aquire 
timeout
2019-09-04T06:35:00.471+0000 I COMMAND [conn451] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578870, 1), signature: { hash: BinData(0, 6920513E637F8F9FA85BDF131C416F82F176C928), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms
2019-09-04T06:35:00.472+0000 D2 NETWORK [conn451] Session from 10.108.2.48:42266 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:35:00.472+0000 I NETWORK [conn451] end connection 10.108.2.48:42266 (87 connections now open)
2019-09-04T06:35:00.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:00.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:00.478+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:00.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:00.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:00.538+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578900, 3)
2019-09-04T06:35:00.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:00.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:00.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:00.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:00.578+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:00.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:00.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:00.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:00.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:00.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:00.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:00.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:00.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:00.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:00.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:00.679+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:00.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:00.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:00.736+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:00.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:00.753+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50292 #481 (88 connections now open)
2019-09-04T06:35:00.753+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:35:00.753+0000 D2 COMMAND [conn481] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:35:00.753+0000 I NETWORK [conn481] received client metadata from 10.108.2.50:50292 conn481: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:35:00.753+0000 I COMMAND [conn481] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:35:00.764+0000 I COMMAND [conn452] Command on database config timed out waiting for read concern to be satisfied.
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 22521EC28CCB56AA7FEB5EF8031A493EC328EE30), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:00.764+0000 D1 - [conn452] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:00.764+0000 W - [conn452] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.779+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:00.781+0000 I - [conn452] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNex
tInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : 
"7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) 
[0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:00.781+0000 D1 COMMAND [conn452] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 22521EC28CCB56AA7FEB5EF8031A493EC328EE30), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.781+0000 D1 - [conn452] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:00.781+0000 W - [conn452] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.801+0000 I - [conn452] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:00.801+0000 W COMMAND [conn452] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:00.801+0000 I COMMAND [conn452] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578869, 1), signature: { hash: BinData(0, 22521EC28CCB56AA7FEB5EF8031A493EC328EE30), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:35:00.801+0000 D2 NETWORK [conn452] Session from 10.108.2.50:50272 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:00.801+0000 I NETWORK [conn452] end connection 10.108.2.50:50272 (87 connections now open) 2019-09-04T06:35:00.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:35:00.839+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1487) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:00.839+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1487 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:10.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:00.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:00.839+0000 D2 ASIO [Replication] Request 1487 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), opTime: { ts: Timestamp(1567578900, 3), t: 1 }, wallTime: new Date(1567578900424), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } 2019-09-04T06:35:00.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), opTime: { ts: Timestamp(1567578900, 3), t: 1 }, wallTime: new Date(1567578900424), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { 
clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:00.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:00.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1487) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), opTime: { ts: Timestamp(1567578900, 3), t: 1 }, wallTime: new Date(1567578900424), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } 2019-09-04T06:35:00.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:00.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:35:11.166+0000 2019-09-04T06:35:00.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:35:12.167+0000 2019-09-04T06:35:00.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:00.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:00.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:00.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:02.839Z 2019-09-04T06:35:00.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:00.840+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:35:00.840+0000 D2 REPL_HB [replexec-0] Sending heartbeat (requestId: 1488) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:00.840+0000 D3 EXECUTOR [replexec-0] Scheduling remote command request: RemoteCommand 1488 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:10.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:00.840+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:00.840+0000 D2 ASIO [Replication] Request 1488 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), opTime: { ts: Timestamp(1567578900, 3), t: 1 }, wallTime: new Date(1567578900424), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: 
new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:00.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:00.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1488) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), opTime: { ts: Timestamp(1567578900, 3), t: 1 }, wallTime: new Date(1567578900424), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) }
2019-09-04T06:35:00.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:35:00.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:02.840Z
2019-09-04T06:35:00.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:00.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:00.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:00.879+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:00.886+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38858 #482 (88 connections now open)
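The same failure repeats across conn451, conn452, and conn453: each mongos-issued find on config.shards waits on readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } } and gives up with MaxTimeMSExpired after the full 30000ms. Notably, the requested afterOpTime carries term 92 while every heartbeat in this capture reports the configrs set at term 1, which is consistent with a wait that can never be satisfied rather than one that is merely slow. A first triage pass over a capture like this can be scripted; the sketch below is a minimal, hypothetical helper (the mongod.log path and the regexes are assumptions tied to the 4.2 plain-text log format shown here), not an official tool.

```python
#!/usr/bin/env python3
"""Count MaxTimeMSExpired user assertions per connection and pull out the
slow-command summary lines. Path and regexes are assumptions based on the
mongod 4.2 plain-text log format in this capture."""
import re
from collections import Counter

ASSERT_RE = re.compile(r"\[(conn\d+)\] User Assertion: MaxTimeMSExpired")
# Matches lines like:
#   ... I COMMAND [conn451] command config.$cmd command: find { ... }
#   ... errName:MaxTimeMSExpired errCode:50 ... protocol:op_msg 30027ms
SLOW_RE = re.compile(
    r"\[(conn\d+)\] command (\S+) command: (\w+) .*"
    r"errName:MaxTimeMSExpired .* (\d+)ms$"
)

assertions = Counter()
slow_ops = []

with open("mongod.log") as fh:  # file name is an assumption
    for raw in fh:
        line = raw.rstrip()
        if m := ASSERT_RE.search(line):
            assertions[m.group(1)] += 1
        if m := SLOW_RE.search(line):
            conn, ns, cmd, ms = m.groups()
            slow_ops.append((conn, ns, cmd, int(ms)))

for conn, n in assertions.most_common():
    print(f"{conn}: {n} MaxTimeMSExpired assertions")
for conn, ns, cmd, ms in slow_ops:
    print(f"{conn}: {cmd} on {ns} failed after {ms}ms")
```

Run against this capture, it should report conn451, conn452, and conn453 each failing the config.shards find at roughly 30027ms, with two assertions per connection (one from the read-concern wait, one from the slow-op lock timeout).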
2019-09-04T06:35:00.886+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:00.886+0000 D2 COMMAND [conn482] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:00.886+0000 I NETWORK [conn482] received client metadata from 10.108.2.44:38858 conn482: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:00.887+0000 I COMMAND [conn482] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:00.898+0000 I COMMAND [conn453] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578868, 1), signature: { hash: BinData(0, 8B61E7B5973B908A587EE744417ACD62ADE50C6C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:00.898+0000 D1 - [conn453] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:00.898+0000 W - [conn453] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.916+0000 I - [conn453] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:00.916+0000 D1 COMMAND [conn453] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578868, 1), signature: { hash: BinData(0, 8B61E7B5973B908A587EE744417ACD62ADE50C6C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.916+0000 D1 - [conn453] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:00.916+0000 W - [conn453] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.926+0000 I COMMAND [conn454] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578862, 1), signature: { hash: BinData(0, B957C0F67BE372F4946F6CB8E2B141AAD7F04C25), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:35:00.926+0000 D1 - [conn454] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:00.926+0000 W - [conn454] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.938+0000 I - [conn453] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6Status
E"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" 
: "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:35:00.938+0000 W COMMAND [conn453] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:35:00.938+0000 I COMMAND [conn453] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578868, 1), signature: { hash: BinData(0, 8B61E7B5973B908A587EE744417ACD62ADE50C6C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms
2019-09-04T06:35:00.938+0000 D2 NETWORK [conn453] Session from 10.108.2.44:38840 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:35:00.938+0000 I NETWORK [conn453] end connection 10.108.2.44:38840 (87 connections now open)
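The 30029 ms slow-operation line above is the core failure in this stretch of the log: a cluster-internal find on config.shards asked for readConcern { level: "majority", afterOpTime: ... }, waited out its entire maxTimeMS of 30000 ms for the majority commit point to catch up, and failed with MaxTimeMSExpired (errCode:50). A minimal replay sketch in Python, assuming pymongo is available, the config server from this log is reachable at cmodb803.togewa.com:27019, and no authentication is required; afterOpTime is an internal read-concern field and is reproduced here only to mirror the logged command, not as a supported driver API:

    # Sketch: replay the failing read roughly as the internal client issued it.
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from bson import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")  # assumed host/port
    cmd = {
        "find": "shards",  # the command name must be the first key
        "readConcern": {
            "level": "majority",
            # Internal field copied from the log to reproduce the wait.
            "afterOpTime": {"ts": Timestamp(1566459168, 1), "t": 92},
        },
        "maxTimeMS": 30000,
    }
    try:
        print(client["config"].command(cmd))
    except ExecutionTimeout as exc:
        # pymongo maps server error code 50 (MaxTimeMSExpired) to this class.
        print("timed out waiting for majority read concern:", exc)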
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:00.956+0000 D1 COMMAND [conn454] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578862, 1), signature: { hash: BinData(0, B957C0F67BE372F4946F6CB8E2B141AAD7F04C25), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.956+0000 D1 - [conn454] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:00.956+0000 W - [conn454] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.962+0000 I COMMAND [conn455] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578863, 1), signature: { hash: BinData(0, AAB852E4A9300DFDA061E8F68671FD068E007876), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:00.962+0000 D1 - [conn455] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:00.962+0000 W - [conn455] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:00.979+0000 I - [conn454] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D",
"s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, 
"buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:00.979+0000 W COMMAND [conn454] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:00.979+0000 I COMMAND [conn454] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578862, 1), signature: { hash: BinData(0, B957C0F67BE372F4946F6CB8E2B141AAD7F04C25), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30040ms 2019-09-04T06:35:00.979+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:00.979+0000 D2 NETWORK [conn454] Session from 10.108.2.73:52308 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:00.979+0000 I NETWORK [conn454] end connection 10.108.2.73:52308 (86 connections now open) 2019-09-04T06:35:00.997+0000 I - [conn455] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
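conn454's find above repeats the pattern: ok:0 with errName:MaxTimeMSExpired after 30040 ms, and the internal client then gives up and closes the socket (HostUnreachable: Connection closed by peer, connection count dropping to 86). Meanwhile isMaster on other connections answers in 0 ms, so the process itself is responsive; only the majority reads are stuck. When every majority read on a config server times out like this, the first thing worth checking is whether the replica set's majority commit point is advancing at all. A diagnostic sketch, assuming direct access to this member and the optimes fields that replSetGetStatus reports on 4.2:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")  # assumed host/port
    status = client.admin.command("replSetGetStatus")

    optimes = status["optimes"]
    applied = optimes["appliedOpTime"]["ts"]               # bson.Timestamp
    majority = optimes["readConcernMajorityOpTime"]["ts"]  # bson.Timestamp

    # Timestamp.time is seconds since the epoch; a large, growing gap means
    # majority reads (like the config.shards finds above) block until maxTimeMS.
    print("applied:", applied, "majority:", majority)
    print("majority commit point lag:", applied.time - majority.time, "seconds")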
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:00.997+0000 D1 COMMAND [conn455] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578863, 1), signature: { hash: BinData(0, AAB852E4A9300DFDA061E8F68671FD068E007876), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.997+0000 D1 - [conn455] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:00.997+0000 W - [conn455] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:00.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:00.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:01.018+0000 I - [conn455] 0x56174b707c81 
0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : 
"E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : 
"/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) 
[0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:01.018+0000 W COMMAND [conn455] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:35:01.018+0000 I COMMAND [conn455] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578863, 1), signature: { hash: BinData(0, AAB852E4A9300DFDA061E8F68671FD068E007876), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30044ms 2019-09-04T06:35:01.018+0000 D2 NETWORK [conn455] Session from 10.108.2.58:52298 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:01.018+0000 I NETWORK [conn455] end connection 10.108.2.58:52298 (85 connections now open) 2019-09-04T06:35:01.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 6F2A2893584F713EF130396C39150C069081411C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:01.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:01.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 6F2A2893584F713EF130396C39150C069081411C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:01.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 6F2A2893584F713EF130396C39150C069081411C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:01.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), opTime: { ts: Timestamp(1567578900, 3), t: 1 }, wallTime: new Date(1567578900424) } 2019-09-04T06:35:01.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 6F2A2893584F713EF130396C39150C069081411C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
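
The 30044ms find above is the config.shards read that exceeded its 30-second maxTimeMS while waiting on the majority read concern. A minimal sketch of issuing the same read from a driver, assuming pymongo and direct access to this node on cmodb803.togewa.com:27019 (host and port taken from the startup lines of this log):

from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout
from pymongo.read_concern import ReadConcern

# Host and port are assumptions lifted from the log; the logged command also
# carried afterOpTime and internal $-prefixed fields, which are set by server
# and driver internals and omitted here.
client = MongoClient("cmodb803.togewa.com", 27019)
shards = client.get_database(
    "config", read_concern=ReadConcern("majority")
).get_collection("shards")

try:
    # max_time_ms corresponds to the maxTimeMS: 30000 in the logged command.
    docs = list(shards.find({}).max_time_ms(30000))
except ExecutionTimeout:
    # pymongo raises ExecutionTimeout for errName:MaxTimeMSExpired (errCode 50).
    print("operation exceeded time limit")

When the deadline passes before the read can complete, the server replies exactly as logged: ok:0, errName:MaxTimeMSExpired, errCode:50.
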
2019-09-04T06:35:01.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.079+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:01.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.179+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:01.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:01.235+0000 D4 - [FlowControlRefresher] Refreshing tickets.
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:01.237+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.279+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:01.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.379+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:01.438+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:01.438+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:01.438+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578900, 3) 2019-09-04T06:35:01.438+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21749 2019-09-04T06:35:01.438+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:01.438+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:01.438+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21749 2019-09-04T06:35:01.439+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21752 2019-09-04T06:35:01.439+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21752 2019-09-04T06:35:01.439+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578900, 3), t: 1 }({ ts: Timestamp(1567578900, 3), t: 1 }) 2019-09-04T06:35:01.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.479+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:35:01.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.579+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:01.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.680+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:01.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.736+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.780+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:01.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.827+0000 D2 COMMAND [conn42] run command 
admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.880+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:01.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:01.980+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:01.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:01.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:02.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.080+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:02.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.180+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:35:02.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 6F2A2893584F713EF130396C39150C069081411C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:02.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:02.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 6F2A2893584F713EF130396C39150C069081411C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:02.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 6F2A2893584F713EF130396C39150C069081411C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:02.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), opTime: { ts: Timestamp(1567578900, 3), t: 1 }, wallTime: new Date(1567578900424) } 2019-09-04T06:35:02.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 6F2A2893584F713EF130396C39150C069081411C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
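
The replSetHeartbeat exchange above is how each configrs member tracks its peers' states and optimes; the same view can be pulled on demand. A minimal sketch, assuming pymongo against this node (host and port taken from the log):

from pymongo import MongoClient

# Host and port are assumptions taken from the surrounding log.
client = MongoClient("cmodb803.togewa.com", 27019)
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    # stateStr and the optimes mirror the fields carried by the heartbeat
    # responses above (state, opTime, durableOpTime).
    print(member["name"], member["stateStr"], member.get("optimeDate"))
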
2019-09-04T06:35:02.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:02.236+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.280+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:02.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.380+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:02.438+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:02.438+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:02.438+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578900, 3) 2019-09-04T06:35:02.438+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21789 2019-09-04T06:35:02.438+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:02.438+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:02.438+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21789 2019-09-04T06:35:02.439+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21792 2019-09-04T06:35:02.439+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21792 2019-09-04T06:35:02.439+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578900, 3), t: 1 }({ ts: Timestamp(1567578900, 3), t: 1 }) 2019-09-04T06:35:02.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.480+0000 D4 STORAGE
[WTJournalFlusher] flushed journal 2019-09-04T06:35:02.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.580+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:02.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.680+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:02.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.736+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.781+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:02.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.827+0000 D2 COMMAND [conn42] run command 
admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:02.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1489) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:02.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1489 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:12.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:02.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:02.839+0000 D2 ASIO [Replication] Request 1489 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), opTime: { ts: Timestamp(1567578900, 3), t: 1 }, wallTime: new Date(1567578900424), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } 2019-09-04T06:35:02.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), opTime: { ts: Timestamp(1567578900, 3), t: 1 }, wallTime: new Date(1567578900424), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:02.839+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:35:02.839+0000 D2 REPL_HB [replexec-0] Received response to heartbeat (requestId: 1489) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), opTime: { ts: Timestamp(1567578900, 3), t: 1 }, wallTime: new Date(1567578900424), 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } 2019-09-04T06:35:02.839+0000 D4 ELECTION [replexec-0] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:02.839+0000 D4 REPL [replexec-0] Canceling election timeout callback at 2019-09-04T06:35:12.167+0000 2019-09-04T06:35:02.839+0000 D4 ELECTION [replexec-0] Scheduling election timeout callback at 2019-09-04T06:35:13.138+0000 2019-09-04T06:35:02.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:02.839+0000 D3 REPL [replexec-0] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:02.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:02.839+0000 D2 REPL_HB [replexec-0] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:04.839Z 2019-09-04T06:35:02.839+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:02.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:02.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1490) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:02.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1490 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:12.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:02.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:02.840+0000 D2 ASIO [Replication] Request 1490 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), opTime: { ts: Timestamp(1567578900, 3), t: 1 }, wallTime: new Date(1567578900424), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } 2019-09-04T06:35:02.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 
0, durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), opTime: { ts: Timestamp(1567578900, 3), t: 1 }, wallTime: new Date(1567578900424), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:02.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:02.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1490) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), opTime: { ts: Timestamp(1567578900, 3), t: 1 }, wallTime: new Date(1567578900424), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578900, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578900, 3) } 2019-09-04T06:35:02.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:02.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:04.840Z 2019-09-04T06:35:02.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:02.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:02.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:02.881+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:02.930+0000 D2 ASIO [RS] Request 1485 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578902, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902928), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca65e5'), when: new Date(1567578902928) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: 
Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpApplied: { ts: Timestamp(1567578902, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578902, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 1) } 2019-09-04T06:35:02.930+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578902, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902928), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca65e5'), when: new Date(1567578902928) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpApplied: { ts: Timestamp(1567578902, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578900, 3), $clusterTime: { clusterTime: Timestamp(1567578902, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:02.930+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:02.930+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578902, 1) and ending at ts: Timestamp(1567578902, 1) 2019-09-04T06:35:02.930+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:13.138+0000 2019-09-04T06:35:02.930+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:14.138+0000 2019-09-04T06:35:02.930+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:35:02.930+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:02.930+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578902, 1), t: 1 } 2019-09-04T06:35:02.930+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:02.930+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:02.930+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578900, 3) 2019-09-04T06:35:02.930+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21811 2019-09-04T06:35:02.930+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", 
options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:02.930+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:02.930+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21811 2019-09-04T06:35:02.930+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:02.930+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:35:02.930+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:02.930+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578900, 3) 2019-09-04T06:35:02.930+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578902, 1) } 2019-09-04T06:35:02.930+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21814 2019-09-04T06:35:02.930+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:02.930+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:02.930+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21814 2019-09-04T06:35:02.930+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21793 2019-09-04T06:35:02.930+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21793 2019-09-04T06:35:02.930+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21817 2019-09-04T06:35:02.930+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21817 2019-09-04T06:35:02.930+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:02.930+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 21819 2019-09-04T06:35:02.930+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578902, 1) 2019-09-04T06:35:02.930+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578902, 1) 2019-09-04T06:35:02.930+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 21819 2019-09-04T06:35:02.930+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:02.930+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:35:02.930+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21818 2019-09-04T06:35:02.930+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21818 2019-09-04T06:35:02.930+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21821 2019-09-04T06:35:02.930+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21821 2019-09-04T06:35:02.930+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578902, 1), t: 1 }({ ts: Timestamp(1567578902, 1), t: 1 
}) 2019-09-04T06:35:02.930+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 1) 2019-09-04T06:35:02.930+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21822 2019-09-04T06:35:02.931+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578902, 1) } } ] } sort: {} projection: {} 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578902, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578902, 1) || First: notFirst: full path: ts 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578902, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578902, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578902, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578902, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:02.931+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21822 2019-09-04T06:35:02.931+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:02.931+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:02.931+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578902, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902928), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca65e5'), when: new Date(1567578902928) } } }, oplog application mode: Secondary 2019-09-04T06:35:02.931+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578902, 1) 2019-09-04T06:35:02.931+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 21824 2019-09-04T06:35:02.931+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "config" } 2019-09-04T06:35:02.931+0000 D2 STORAGE [repl-writer-worker-9] WiredTigerSizeStorer::store Marking table:config/collection/42--6194257481163143499 dirty, numRecords: 2, dataSize: 308, use_count: 3 2019-09-04T06:35:02.931+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:35:02.931+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 21824 2019-09-04T06:35:02.931+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:02.931+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578902, 1), t: 1 }({ ts: Timestamp(1567578902, 1), t: 1 }) 2019-09-04T06:35:02.931+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 1) 2019-09-04T06:35:02.931+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21823
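
The op just applied above (op: "u" on config.locks, with the target _id in o2 and the $set document in o) shows the oplog entry shape this secondary consumes. A minimal sketch of watching such entries by tailing local.oplog.rs, assuming pymongo against a member of this set (host and port taken from the log):

from pymongo import CursorType, MongoClient

# Host and port are assumptions taken from the surrounding log.
client = MongoClient("cmodb803.togewa.com", 27019)
oplog = client.local["oplog.rs"]
# TAILABLE_AWAIT keeps the cursor open on the capped oplog so the loop
# blocks until new operations arrive, much as the oplog fetcher's getMore
# with maxTimeMS: 5000 does in the entries below.
cursor = oplog.find({"ns": "config.locks"}, cursor_type=CursorType.TAILABLE_AWAIT)
for entry in cursor:
    print(entry["ts"], entry["op"], entry.get("o2"), entry["o"])
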
2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.931+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:02.931+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:02.931+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21823
2019-09-04T06:35:02.931+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578902, 1)
2019-09-04T06:35:02.931+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21827
2019-09-04T06:35:02.931+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21827
2019-09-04T06:35:02.931+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578902, 1), t: 1 }({ ts: Timestamp(1567578902, 1), t: 1 })
2019-09-04T06:35:02.931+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.931+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578902, 1), t: 1 }, appliedWallTime: new Date(1567578902928), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.931+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1491 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.931+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578902, 1), t: 1 }, appliedWallTime: new Date(1567578902928), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.931+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.931+0000
2019-09-04T06:35:02.932+0000 D2 ASIO [RS] Request 1491 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 1), t: 1 }, lastCommittedWall: new Date(1567578902928), lastOpVisible: { ts: Timestamp(1567578902, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 1), $clusterTime: { clusterTime: Timestamp(1567578902, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 1) }
2019-09-04T06:35:02.932+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 1), t: 1 }, lastCommittedWall: new Date(1567578902928), lastOpVisible: { ts: Timestamp(1567578902, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 1), $clusterTime: { clusterTime: Timestamp(1567578902, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.932+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.932+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.932+0000
2019-09-04T06:35:02.932+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578902, 1), t: 1 }
2019-09-04T06:35:02.932+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1492 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.932+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578900, 3), t: 1 } }
2019-09-04T06:35:02.932+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.932+0000
2019-09-04T06:35:02.932+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:35:02.932+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.932+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 1), t: 1 }, durableWallTime: new Date(1567578902928), appliedOpTime: { ts: Timestamp(1567578902, 1), t: 1 }, appliedWallTime: new Date(1567578902928), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
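The replSetUpdatePosition command in these entries is internal secondary-to-sync-source traffic, but the per-member durable and applied optimes it carries are the same ones replSetGetStatus reports. A sketch of reading them with pymongo (host is a placeholder, not from this log):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019/?directConnection=true")
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # optime = last applied entry, optimeDurable = last journaled entry,
        # matching the appliedOpTime/durableOpTime pairs sent upstream above.
        print(member["name"], member["stateStr"],
              member.get("optime"), member.get("optimeDurable"))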
2019-09-04T06:35:02.932+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1493 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.932+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 1), t: 1 }, durableWallTime: new Date(1567578902928), appliedOpTime: { ts: Timestamp(1567578902, 1), t: 1 }, appliedWallTime: new Date(1567578902928), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578900, 3), t: 1 }, lastCommittedWall: new Date(1567578900424), lastOpVisible: { ts: Timestamp(1567578900, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.932+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.932+0000
2019-09-04T06:35:02.932+0000 D2 ASIO [RS] Request 1492 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 1), t: 1 }, lastCommittedWall: new Date(1567578902928), lastOpVisible: { ts: Timestamp(1567578902, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 1), t: 1 }, lastCommittedWall: new Date(1567578902928), lastOpApplied: { ts: Timestamp(1567578902, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 1), $clusterTime: { clusterTime: Timestamp(1567578902, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 1) }
2019-09-04T06:35:02.932+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 1), t: 1 }, lastCommittedWall: new Date(1567578902928), lastOpVisible: { ts: Timestamp(1567578902, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 1), t: 1 }, lastCommittedWall: new Date(1567578902928), lastOpApplied: { ts: Timestamp(1567578902, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 1), $clusterTime: { clusterTime: Timestamp(1567578902, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.932+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.932+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:35:02.932+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.932+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.932+0000 D2 ASIO [RS] Request 1493 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 1), t: 1 }, lastCommittedWall: new Date(1567578902928), lastOpVisible: { ts: Timestamp(1567578902, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 1), $clusterTime: { clusterTime: Timestamp(1567578902, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 1) }
2019-09-04T06:35:02.932+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 1), t: 1 }, lastCommittedWall: new Date(1567578902928), lastOpVisible: { ts: Timestamp(1567578902, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 1), $clusterTime: { clusterTime: Timestamp(1567578902, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.932+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.932+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.932+0000
2019-09-04T06:35:02.932+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578897, 1)
2019-09-04T06:35:02.932+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.932+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000
2019-09-04T06:35:02.932+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.932+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000
2019-09-04T06:35:02.932+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000
2019-09-04T06:35:02.933+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:14.138+0000
2019-09-04T06:35:02.933+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:13.663+0000
2019-09-04T06:35:02.933+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:02.933+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:02.933+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1494 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.933+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 1), t: 1 } }
2019-09-04T06:35:02.933+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000
2019-09-04T06:35:02.933+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.932+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578902, 1), t: 1 }, 2019-09-04T06:35:02.928+0000
2019-09-04T06:35:02.933+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000
2019-09-04T06:35:02.933+0000 D2 ASIO [RS] Request 1494 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578902, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902931), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca65eb'), when: new Date(1567578902931) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 1), t: 1 }, lastCommittedWall: new Date(1567578902928), lastOpVisible: { ts: Timestamp(1567578902, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 1), t: 1 }, lastCommittedWall: new Date(1567578902928), lastOpApplied: { ts: Timestamp(1567578902, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 1), $clusterTime: { clusterTime: Timestamp(1567578902, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 2) }
2019-09-04T06:35:02.933+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578902, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902931), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca65eb'), when: new Date(1567578902931) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 1), t: 1 }, lastCommittedWall: new Date(1567578902928), lastOpVisible: { ts: Timestamp(1567578902, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 1), t: 1 }, lastCommittedWall: new Date(1567578902928), lastOpApplied: { ts: Timestamp(1567578902, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 1), $clusterTime: { clusterTime: Timestamp(1567578902, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.933+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.933+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578902, 2) and ending at ts: Timestamp(1567578902, 2)
2019-09-04T06:35:02.933+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:13.663+0000
2019-09-04T06:35:02.933+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:14.028+0000
2019-09-04T06:35:02.933+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578902, 2), t: 1 }
2019-09-04T06:35:02.933+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:02.933+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:02.933+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:02.933+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:02.933+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 1)
2019-09-04T06:35:02.933+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21831
2019-09-04T06:35:02.933+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:02.933+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:02.933+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21831
2019-09-04T06:35:02.933+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:02.933+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:02.933+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:35:02.933+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 1)
2019-09-04T06:35:02.933+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578902, 2) }
2019-09-04T06:35:02.933+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21834
2019-09-04T06:35:02.933+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:02.933+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:02.933+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21828
2019-09-04T06:35:02.933+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21834
2019-09-04T06:35:02.933+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21828
2019-09-04T06:35:02.933+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21837
2019-09-04T06:35:02.933+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21837
2019-09-04T06:35:02.933+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:02.933+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 21839
2019-09-04T06:35:02.933+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578902, 2)
2019-09-04T06:35:02.933+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578902, 2)
2019-09-04T06:35:02.933+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 21839
2019-09-04T06:35:02.933+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:02.934+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:35:02.934+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21838
2019-09-04T06:35:02.934+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21838
2019-09-04T06:35:02.934+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21841
2019-09-04T06:35:02.934+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21841
2019-09-04T06:35:02.934+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578902, 2), t: 1 }({ ts: Timestamp(1567578902, 2), t: 1 })
2019-09-04T06:35:02.934+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 2)
2019-09-04T06:35:02.934+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21842
2019-09-04T06:35:02.934+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578902, 2) } } ] } sort: {} projection: {}
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578902, 2)
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1  || First: notFirst: full path: t
    ts $lt Timestamp(1567578902, 2)  || First: notFirst: full path: ts
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578902, 2)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1  || First: notFirst: full path: t
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578902, 2)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1  || First: notFirst: full path: t
        ts $lt Timestamp(1567578902, 2)  || First: notFirst: full path: ts
    t $lt 1  || First: notFirst: full path: t
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578902, 2)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:02.934+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21842
2019-09-04T06:35:02.934+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:02.934+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:02.934+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578902, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902931), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca65eb'), when: new Date(1567578902931) } } }, oplog application mode: Secondary
2019-09-04T06:35:02.934+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578902, 2)
2019-09-04T06:35:02.934+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 21844
2019-09-04T06:35:02.934+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "config.system.sessions" }
2019-09-04T06:35:02.934+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:35:02.934+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 21844
2019-09-04T06:35:02.934+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:02.934+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578902, 2), t: 1 }({ ts: Timestamp(1567578902, 2), t: 1 })
2019-09-04T06:35:02.934+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 2)
2019-09-04T06:35:02.934+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21843
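For reference, the "u" op applied above corresponds to the ordinary update below; "Using idhack" means the exact-_id lookup bypasses the planner entirely. This is purely illustrative of the op's shape; never hand-edit config.locks on a live cluster. Host is a placeholder.

    from pymongo import MongoClient
    from bson.objectid import ObjectId
    from datetime import datetime, timezone

    client = MongoClient("mongodb://localhost:27019/?directConnection=true")
    # Same document shape as the oplog entry's o2 (selector) and o (mutation).
    client.config.locks.update_one(
        {"_id": "config.system.sessions"},               # o2: exact _id lookup
        {"$set": {"state": 2,                            # lock taken
                  "ts": ObjectId("5d6f5b16ac9313827bca65eb"),
                  "when": datetime.fromtimestamp(1567578902.931, tz=timezone.utc)}},
    )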
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.934+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:02.934+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:02.934+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21843
2019-09-04T06:35:02.934+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578902, 2)
2019-09-04T06:35:02.934+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.934+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21847
2019-09-04T06:35:02.934+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 1), t: 1 }, durableWallTime: new Date(1567578902928), appliedOpTime: { ts: Timestamp(1567578902, 2), t: 1 }, appliedWallTime: new Date(1567578902931), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 1), t: 1 }, lastCommittedWall: new Date(1567578902928), lastOpVisible: { ts: Timestamp(1567578902, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.934+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21847
2019-09-04T06:35:02.934+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578902, 2), t: 1 }({ ts: Timestamp(1567578902, 2), t: 1 })
2019-09-04T06:35:02.934+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1495 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.934+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 1), t: 1 }, durableWallTime: new Date(1567578902928), appliedOpTime: { ts: Timestamp(1567578902, 2), t: 1 }, appliedWallTime: new Date(1567578902931), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 1), t: 1 }, lastCommittedWall: new Date(1567578902928), lastOpVisible: { ts: Timestamp(1567578902, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.934+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.934+0000
2019-09-04T06:35:02.935+0000 D2 ASIO [RS] Request 1495 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 1), t: 1 }, lastCommittedWall: new Date(1567578902928), lastOpVisible: { ts: Timestamp(1567578902, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 1), $clusterTime: { clusterTime: Timestamp(1567578902, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 2) }
2019-09-04T06:35:02.935+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 1), t: 1 }, lastCommittedWall: new Date(1567578902928), lastOpVisible: { ts: Timestamp(1567578902, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 1), $clusterTime: { clusterTime: Timestamp(1567578902, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.935+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.935+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.935+0000
2019-09-04T06:35:02.935+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578902, 2), t: 1 }
2019-09-04T06:35:02.935+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1496 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.935+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 1), t: 1 } }
2019-09-04T06:35:02.935+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.935+0000
2019-09-04T06:35:02.935+0000 D2 ASIO [RS] Request 1496 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 2), t: 1 }, lastCommittedWall: new Date(1567578902931), lastOpVisible: { ts: Timestamp(1567578902, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 2), t: 1 }, lastCommittedWall: new Date(1567578902931), lastOpApplied: { ts: Timestamp(1567578902, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 2), $clusterTime: { clusterTime: Timestamp(1567578902, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 2) }
2019-09-04T06:35:02.935+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 2), t: 1 }, lastCommittedWall: new Date(1567578902931), lastOpVisible: { ts: Timestamp(1567578902, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 2), t: 1 }, lastCommittedWall: new Date(1567578902931), lastOpApplied: { ts: Timestamp(1567578902, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 2), $clusterTime: { clusterTime: Timestamp(1567578902, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.935+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.936+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:35:02.936+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578897, 2)
2019-09-04T06:35:02.936+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
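The conn* sessions above are parked in waitUntilOpTime until a committed snapshot at or past their requested optime appears; this is the server side of afterClusterTime reads. A causally consistent driver session produces exactly this kind of wait; a sketch with a placeholder host:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019/?directConnection=true")
    with client.start_session(causal_consistency=True) as sess:
        coll = client.config["locks"]
        # The second read carries afterClusterTime from the session, so a
        # lagging node blocks in waitUntilOpTime, as the conn* threads do here.
        coll.find_one({"_id": "config"}, session=sess)
        coll.find_one({"_id": "config.system.sessions"}, session=sess)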
2019-09-04T06:35:02.936+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.936+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:14.028+0000
2019-09-04T06:35:02.936+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:13.738+0000
2019-09-04T06:35:02.936+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:35:02.936+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1497 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.936+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 2), t: 1 } }
2019-09-04T06:35:02.936+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:02.936+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.935+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578902, 2), t: 1 }, 2019-09-04T06:35:02.931+0000
2019-09-04T06:35:02.936+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000
2019-09-04T06:35:02.939+0000 D2 ASIO [RS] Request 1497 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578902, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902935), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 2), t: 1 }, lastCommittedWall: new Date(1567578902931), lastOpVisible: { ts: Timestamp(1567578902, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 2), t: 1 }, lastCommittedWall: new Date(1567578902931), lastOpApplied: { ts: Timestamp(1567578902, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 2), $clusterTime: { clusterTime: Timestamp(1567578902, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 3) }
2019-09-04T06:35:02.939+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578902, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902935), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 2), t: 1 }, lastCommittedWall: new Date(1567578902931), lastOpVisible: { ts: Timestamp(1567578902, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 2), t: 1 }, lastCommittedWall: new Date(1567578902931), lastOpApplied: { ts: Timestamp(1567578902, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 2), $clusterTime: { clusterTime: Timestamp(1567578902, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.939+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.939+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578902, 3) and ending at ts: Timestamp(1567578902, 3)
2019-09-04T06:35:02.939+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:13.738+0000
2019-09-04T06:35:02.939+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:13.355+0000
2019-09-04T06:35:02.939+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:02.939+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578902, 3), t: 1 }
2019-09-04T06:35:02.939+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:02.939+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:02.939+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:02.939+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 2)
2019-09-04T06:35:02.939+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21851
2019-09-04T06:35:02.939+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:02.939+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:02.939+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21851
2019-09-04T06:35:02.939+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:02.939+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:02.939+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:35:02.939+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 2)
setting oplog truncate after point to: { : Timestamp(1567578902, 3) } 2019-09-04T06:35:02.940+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21854 2019-09-04T06:35:02.940+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:02.940+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21848 2019-09-04T06:35:02.940+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:02.940+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21854 2019-09-04T06:35:02.940+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21848 2019-09-04T06:35:02.940+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21857 2019-09-04T06:35:02.940+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21857 2019-09-04T06:35:02.940+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:02.940+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 21859 2019-09-04T06:35:02.940+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578902, 3) 2019-09-04T06:35:02.940+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578902, 3) 2019-09-04T06:35:02.940+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 21859 2019-09-04T06:35:02.940+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:02.940+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:35:02.940+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21858 2019-09-04T06:35:02.940+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21858 2019-09-04T06:35:02.940+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21861 2019-09-04T06:35:02.940+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21861 2019-09-04T06:35:02.940+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578902, 3), t: 1 }({ ts: Timestamp(1567578902, 3), t: 1 }) 2019-09-04T06:35:02.940+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 3) 2019-09-04T06:35:02.940+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21862 2019-09-04T06:35:02.940+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578902, 3) } } ] } sort: {} projection: {} 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578902, 3) Sort: {} Proj: {} ============================= 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578902, 3) || First: notFirst: full path: ts 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578902, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578902, 3) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578902, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
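Annotation: the D5 QUERY trace above is the subplanner handling the minvalid bookkeeping query { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: ... } } ] } against local.replset.minvalid. The only index on that collection is { _id: 1 }, so every $or branch rates to zero indexed solutions and the planner falls back to the COLLSCAN plans shown around this point. A minimal sketch of reproducing that plan choice from a driver (pymongo assumed; host name taken from this log, and the exact explain output shape will vary):

    # Re-run the minvalid bookkeeping query with explain() to observe the same
    # collection-scan fallback the subplanner logs above (illustration only).
    from bson.timestamp import Timestamp
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")  # this node
    minvalid = client.local["replset.minvalid"]

    flt = {"$or": [{"t": {"$lt": 1}},
                   {"t": 1, "ts": {"$lt": Timestamp(1567578902, 3)}}]}
    plan = minvalid.find(flt).explain()
    # Expect a collection scan, since only the _id index exists.
    print(plan["queryPlanner"]["winningPlan"])

The scan is harmless here: local.replset.minvalid holds a single document, so the repeated COLLSCAN plans in this trace cost one document fetch each.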
2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578902, 3)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:02.940+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21862
2019-09-04T06:35:02.940+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:02.940+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:02.940+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578902, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902935), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary
2019-09-04T06:35:02.940+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578902, 3)
2019-09-04T06:35:02.940+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 21864
2019-09-04T06:35:02.940+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "config.system.sessions" }
2019-09-04T06:35:02.940+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:35:02.940+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 21864
2019-09-04T06:35:02.940+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:02.940+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578902, 3), t: 1 }({ ts: Timestamp(1567578902, 3), t: 1 })
2019-09-04T06:35:02.940+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 3)
2019-09-04T06:35:02.940+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21863
2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.940+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:02.940+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:02.940+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21863
2019-09-04T06:35:02.940+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578902, 3)
2019-09-04T06:35:02.940+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.940+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21867
2019-09-04T06:35:02.940+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 1), t: 1 }, durableWallTime: new Date(1567578902928), appliedOpTime: { ts: Timestamp(1567578902, 3), t: 1 }, appliedWallTime: new Date(1567578902935), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 2), t: 1 }, lastCommittedWall: new Date(1567578902931), lastOpVisible: { ts: Timestamp(1567578902, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.940+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21867
2019-09-04T06:35:02.940+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1498 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.940+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 1), t: 1 }, durableWallTime: new Date(1567578902928), appliedOpTime: { ts: Timestamp(1567578902, 3), t: 1 }, appliedWallTime: new Date(1567578902935), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 2), t: 1 }, lastCommittedWall: new Date(1567578902931), lastOpVisible: { ts: Timestamp(1567578902, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.940+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578902, 3), t: 1 }({ ts: Timestamp(1567578902, 3), t: 1 })
2019-09-04T06:35:02.940+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.940+0000
2019-09-04T06:35:02.941+0000 D2 ASIO [RS] Request 1498 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 2), t: 1 }, lastCommittedWall: new Date(1567578902931), lastOpVisible: { ts: Timestamp(1567578902, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 2), $clusterTime: { clusterTime: Timestamp(1567578902, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 3) }
2019-09-04T06:35:02.941+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 2), t: 1 }, lastCommittedWall: new Date(1567578902931), lastOpVisible: { ts: Timestamp(1567578902, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 2), $clusterTime: { clusterTime: Timestamp(1567578902, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.941+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.941+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.941+0000
2019-09-04T06:35:02.941+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578902, 3), t: 1 }
2019-09-04T06:35:02.941+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1499 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.941+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 2), t: 1 } }
2019-09-04T06:35:02.941+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.941+0000
2019-09-04T06:35:02.942+0000 D2 ASIO [RS] Request 1499 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 3), t: 1 }, lastCommittedWall: new Date(1567578902935), lastOpVisible: { ts: Timestamp(1567578902, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 3), t: 1 }, lastCommittedWall: new Date(1567578902935), lastOpApplied: { ts: Timestamp(1567578902, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 3), $clusterTime: { clusterTime: Timestamp(1567578902, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 3) }
2019-09-04T06:35:02.942+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 3), t: 1 }, lastCommittedWall: new Date(1567578902935), lastOpVisible: { ts: Timestamp(1567578902, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 3), t: 1 }, lastCommittedWall: new Date(1567578902935), lastOpApplied: { ts: Timestamp(1567578902, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 3), $clusterTime: { clusterTime: Timestamp(1567578902, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.942+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.942+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:35:02.942+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578897, 3)
2019-09-04T06:35:02.942+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:13.355+0000
2019-09-04T06:35:02.942+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:14.146+0000
2019-09-04T06:35:02.942+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:02.942+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1500 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.942+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 3), t: 1 } }
2019-09-04T06:35:02.942+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.941+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578902, 3), t: 1 }, 2019-09-04T06:35:02.935+0000
2019-09-04T06:35:02.942+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000
2019-09-04T06:35:02.943+0000 D2 ASIO [RS] Request 1500 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578902, 4), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902942), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 3), t: 1 }, lastCommittedWall: new Date(1567578902935), lastOpVisible: { ts: Timestamp(1567578902, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 3), t: 1 }, lastCommittedWall: new Date(1567578902935), lastOpApplied: { ts: Timestamp(1567578902, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 3), $clusterTime: { clusterTime: Timestamp(1567578902, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 4) }
2019-09-04T06:35:02.943+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578902, 4), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902942), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 3), t: 1 }, lastCommittedWall: new Date(1567578902935), lastOpVisible: { ts: Timestamp(1567578902, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 3), t: 1 }, lastCommittedWall: new Date(1567578902935), lastOpApplied: { ts: Timestamp(1567578902, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 3), $clusterTime: { clusterTime: Timestamp(1567578902, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 4) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.944+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.944+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578902, 4) and ending at ts: Timestamp(1567578902, 4)
2019-09-04T06:35:02.944+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:14.146+0000
2019-09-04T06:35:02.944+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:13.162+0000
2019-09-04T06:35:02.944+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:35:02.944+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578902, 4), t: 1 }
2019-09-04T06:35:02.944+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:02.944+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:02.944+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:02.944+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 3)
2019-09-04T06:35:02.944+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21870
2019-09-04T06:35:02.944+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:02.944+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:02.944+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21870
2019-09-04T06:35:02.944+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:02.944+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:02.944+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:35:02.944+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 3)
2019-09-04T06:35:02.944+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578902, 4) }
2019-09-04T06:35:02.944+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21873
2019-09-04T06:35:02.944+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:02.944+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21868
2019-09-04T06:35:02.944+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:02.944+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21873
2019-09-04T06:35:02.944+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21868
2019-09-04T06:35:02.944+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21876
2019-09-04T06:35:02.944+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21876
2019-09-04T06:35:02.944+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:02.944+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 21878
2019-09-04T06:35:02.944+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578902, 4)
2019-09-04T06:35:02.944+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578902, 4)
2019-09-04T06:35:02.944+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 21878
2019-09-04T06:35:02.944+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:02.944+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:35:02.944+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21877
2019-09-04T06:35:02.944+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21877
2019-09-04T06:35:02.944+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21880
2019-09-04T06:35:02.944+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21880
2019-09-04T06:35:02.944+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578902, 4), t: 1 }({ ts: Timestamp(1567578902, 4), t: 1 })
2019-09-04T06:35:02.944+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 4)
2019-09-04T06:35:02.944+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21881
2019-09-04T06:35:02.944+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578902, 4) } } ] } sort: {} projection: {}
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578902, 4)
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578902, 4) || First: notFirst: full path: ts
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
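Annotation: each batch in this section is bracketed by the same durability bookkeeping visible above: rsSync-0 raises the oplog truncate after point to the incoming batch's last timestamp before the writer inserts the oplog entry, resets it to Timestamp(0, 0) once the insert commits, and advances minvalid so a crash mid-batch is detectable at startup. Both markers live in ordinary collections in the local database and can be read directly. A read-only sketch (pymongo assumed; local.replset.minvalid appears in this log, while the truncate-point collection name local.replset.oplogTruncateAfterPoint is an assumption based on 4.2 behavior and is not shown here):

    # Peek at the replication consistency markers this node maintains.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")  # this node
    local = client.local

    # minvalid: the optime this member must reach before its data is consistent.
    print(local["replset.minvalid"].find_one())
    # Truncate-after point: non-zero only while a batch is being written
    # (collection name assumed from 4.2 behavior, not from this log).
    print(local["replset.oplogTruncateAfterPoint"].find_one())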
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578902, 4)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578902, 4)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578902, 4) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578902, 4) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:02.944+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21881 2019-09-04T06:35:02.944+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:02.944+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:02.944+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578902, 4), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902942), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary 2019-09-04T06:35:02.944+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578902, 4) 2019-09-04T06:35:02.944+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 21883 2019-09-04T06:35:02.944+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "config" } 2019-09-04T06:35:02.944+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:35:02.944+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 21883 2019-09-04T06:35:02.944+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:02.944+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578902, 4), t: 1 }({ ts: Timestamp(1567578902, 4), t: 1 }) 2019-09-04T06:35:02.944+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 4) 2019-09-04T06:35:02.944+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21882 2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:02.944+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:02.944+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:02.944+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21882 2019-09-04T06:35:02.944+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578902, 4) 2019-09-04T06:35:02.945+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21886 2019-09-04T06:35:02.945+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21886 2019-09-04T06:35:02.945+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578902, 4), t: 1 }({ ts: Timestamp(1567578902, 4), t: 1 }) 2019-09-04T06:35:02.945+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:02.945+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 1), t: 1 }, durableWallTime: new Date(1567578902928), appliedOpTime: { ts: Timestamp(1567578902, 4), t: 1 }, appliedWallTime: new Date(1567578902942), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 3), t: 1 }, lastCommittedWall: new Date(1567578902935), lastOpVisible: { ts: Timestamp(1567578902, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:02.945+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1501 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.945+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 1), t: 1 }, durableWallTime: new Date(1567578902928), appliedOpTime: { ts: Timestamp(1567578902, 4), t: 1 }, appliedWallTime: new Date(1567578902942), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 3), t: 1 }, lastCommittedWall: new Date(1567578902935), lastOpVisible: { ts: Timestamp(1567578902, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:02.945+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.944+0000 2019-09-04T06:35:02.945+0000 D2 ASIO [RS] Request 1501 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 3), t: 1 }, lastCommittedWall: new Date(1567578902935), lastOpVisible: { ts: Timestamp(1567578902, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 3), $clusterTime: { clusterTime: Timestamp(1567578902, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 4) } 2019-09-04T06:35:02.945+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 3), t: 1 }, lastCommittedWall: new Date(1567578902935), lastOpVisible: { ts: Timestamp(1567578902, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 3), $clusterTime: { clusterTime: Timestamp(1567578902, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 4) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:02.945+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:02.945+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.945+0000 2019-09-04T06:35:02.946+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578902, 4), t: 1 } 2019-09-04T06:35:02.946+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1502 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.946+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 3), t: 1 } } 2019-09-04T06:35:02.946+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.945+0000 2019-09-04T06:35:02.946+0000 D2 ASIO [RS] Request 1502 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpVisible: { ts: Timestamp(1567578902, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpApplied: { ts: Timestamp(1567578902, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 4), $clusterTime: { clusterTime: Timestamp(1567578902, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 4) } 2019-09-04T06:35:02.946+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpVisible: { ts: Timestamp(1567578902, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new 
Date(1567578902942), lastOpApplied: { ts: Timestamp(1567578902, 4), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 4), $clusterTime: { clusterTime: Timestamp(1567578902, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 4) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:02.946+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:02.946+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:02.946+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578897, 4) 2019-09-04T06:35:02.946+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:13.162+0000 2019-09-04T06:35:02.946+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:14.014+0000 2019-09-04T06:35:02.946+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:02.946+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000 2019-09-04T06:35:02.946+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1503 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.946+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 4), t: 1 } } 2019-09-04T06:35:02.946+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.945+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: 
Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: 
Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000 2019-09-04T06:35:02.946+0000 D2 ASIO [RS] Request 1503 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578902, 5), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902945), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca6601'), when: new Date(1567578902945), who: "ConfigServer:conn4988" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpVisible: { ts: Timestamp(1567578902, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpApplied: { ts: Timestamp(1567578902, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 4), $clusterTime: { clusterTime: Timestamp(1567578902, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 5) } 2019-09-04T06:35:02.946+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578902, 5), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902945), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca6601'), when: new Date(1567578902945), who: "ConfigServer:conn4988" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpVisible: { ts: Timestamp(1567578902, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpApplied: { ts: Timestamp(1567578902, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 4), $clusterTime: { clusterTime: Timestamp(1567578902, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 5) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:02.946+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:02.946+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.946+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578902, 5) and ending at ts: Timestamp(1567578902, 5) 2019-09-04T06:35:02.946+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000 2019-09-04T06:35:02.946+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578902, 
4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.947+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000 2019-09-04T06:35:02.947+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.947+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000 2019-09-04T06:35:02.947+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578902, 4), t: 1 }, 2019-09-04T06:35:02.942+0000 2019-09-04T06:35:02.947+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000 2019-09-04T06:35:02.947+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:14.014+0000 2019-09-04T06:35:02.947+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:12.975+0000 2019-09-04T06:35:02.947+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:02.947+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:02.947+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:02.947+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:02.947+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 4) 2019-09-04T06:35:02.947+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21889 2019-09-04T06:35:02.947+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:02.947+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:02.947+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21889 2019-09-04T06:35:02.947+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:02.947+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:02.947+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:35:02.947+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578902, 5) } 2019-09-04T06:35:02.947+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21887 2019-09-04T06:35:02.947+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578902, 5), t: 1 } 2019-09-04T06:35:02.947+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21887 2019-09-04T06:35:02.947+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 4) 2019-09-04T06:35:02.947+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21893 2019-09-04T06:35:02.947+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21892 2019-09-04T06:35:02.947+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21893 2019-09-04T06:35:02.947+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: 
"local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:02.947+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:02.947+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21892 2019-09-04T06:35:02.947+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:02.947+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 21897 2019-09-04T06:35:02.947+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578902, 5) 2019-09-04T06:35:02.947+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578902, 5) 2019-09-04T06:35:02.947+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 21897 2019-09-04T06:35:02.947+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:02.947+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:35:02.947+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21894 2019-09-04T06:35:02.947+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21894 2019-09-04T06:35:02.947+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21899 2019-09-04T06:35:02.947+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21899 2019-09-04T06:35:02.947+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578902, 5), t: 1 }({ ts: Timestamp(1567578902, 5), t: 1 }) 2019-09-04T06:35:02.947+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 5) 2019-09-04T06:35:02.947+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21900 2019-09-04T06:35:02.947+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578902, 5) } } ] } sort: {} projection: {} 2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578902, 5) Sort: {} Proj: {} ============================= 2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578902, 5) || First: notFirst: full path: ts 2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578902, 5) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $or $and t $eq 1 ts $lt Timestamp(1567578902, 5) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578902, 5) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
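The D5 QUERY lines show why every plan here is a collection scan: the $or is planned one branch at a time (the "Subplanner"), and the only index on local.replset.minvalid is _id, which can satisfy neither the t nor the ts predicate, so each branch and then the whole $or fall back to COLLSCAN. On a single-document bookkeeping collection that is the expected, cheap outcome rather than a problem. The decision can be confirmed with explain; a sketch using the same filter as the log (localhost is a placeholder):

    # Reproduce the planner's decision for the minvalid $or query.
    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://localhost:27019")
    plan = client.local.command("explain", {
        "find": "replset.minvalid",
        "filter": {"$or": [
            {"t": {"$lt": 1}},
            {"t": 1, "ts": {"$lt": Timestamp(1567578902, 5)}},
        ]},
    })
    # With only the _id index available, the winning plan bottoms out in COLLSCAN.
    print(plan["queryPlanner"]["winningPlan"])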
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578902, 5) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.947+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21900
2019-09-04T06:35:02.947+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:02.947+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:02.947+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578902, 5), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902945), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca6601'), when: new Date(1567578902945), who: "ConfigServer:conn4988" } } }, oplog application mode: Secondary
2019-09-04T06:35:02.947+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578902, 5)
2019-09-04T06:35:02.947+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 21902
2019-09-04T06:35:02.947+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "config" }
2019-09-04T06:35:02.947+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:35:02.947+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 21902
2019-09-04T06:35:02.947+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:02.947+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578902, 5), t: 1 }({ ts: Timestamp(1567578902, 5), t: 1 })
2019-09-04T06:35:02.947+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:35:02.947+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 5)
2019-09-04T06:35:02.947+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21901
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and Sort: {} Proj: {} =============================
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.947+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.947+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
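The entry being applied is an op: "u" on config.locks: the config server primary took the distributed lock document _id: "config" (state: 2 marks it held), and this secondary replays the $set through the exact-_id fast path, which is what "Using idhack" means. Functionally the replay is equivalent to the update below; this sketch only illustrates the oplog entry's semantics and is not something to run against a live config server:

    # Client-side equivalent of the oplog entry applied by repl-writer-worker-4.
    from datetime import datetime, timezone
    from bson import ObjectId
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")  # placeholder address
    client.config.locks.update_one(
        {"_id": "config"},  # o2: the exact-_id target, hence the idhack plan
        {"$set": {
            "state": 2,  # 2 = lock held
            "ts": ObjectId("5d6f5b16ac9313827bca6601"),
            "when": datetime.fromtimestamp(1567578902.945, tz=timezone.utc),
            "who": "ConfigServer:conn4988",
        }},
    )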
2019-09-04T06:35:02.948+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21901
2019-09-04T06:35:02.948+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578902, 5)
2019-09-04T06:35:02.948+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21906
2019-09-04T06:35:02.948+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21906
2019-09-04T06:35:02.948+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578902, 5), t: 1 }({ ts: Timestamp(1567578902, 5), t: 1 })
2019-09-04T06:35:02.948+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.948+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 2), t: 1 }, durableWallTime: new Date(1567578902931), appliedOpTime: { ts: Timestamp(1567578902, 5), t: 1 }, appliedWallTime: new Date(1567578902945), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpVisible: { ts: Timestamp(1567578902, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.948+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1504 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.948+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 2), t: 1 }, durableWallTime: new Date(1567578902931), appliedOpTime: { ts: Timestamp(1567578902, 5), t: 1 }, appliedWallTime: new Date(1567578902945), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpVisible: { ts: Timestamp(1567578902, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.948+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.947+0000
2019-09-04T06:35:02.948+0000 D2 ASIO [RS] Request 1504 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpVisible: { ts: Timestamp(1567578902, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 4), $clusterTime: { clusterTime: Timestamp(1567578902, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 5) }
2019-09-04T06:35:02.948+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpVisible: { ts: Timestamp(1567578902, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 4), $clusterTime: { clusterTime: Timestamp(1567578902, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 5) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.948+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.948+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 2), t: 1 }, durableWallTime: new Date(1567578902931), appliedOpTime: { ts: Timestamp(1567578902, 5), t: 1 }, appliedWallTime: new Date(1567578902945), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpVisible: { ts: Timestamp(1567578902, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.948+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1505 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.948+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 2), t: 1 }, durableWallTime: new Date(1567578902931), appliedOpTime: { ts: Timestamp(1567578902, 5), t: 1 }, appliedWallTime: new Date(1567578902945), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpVisible: { ts: Timestamp(1567578902, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.948+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.948+0000
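replSetUpdatePosition is the secondary pushing durable and applied optimes for every member it knows about up to its sync source; note member 1's durableOpTime trailing its appliedOpTime until the journal flush a few entries later. The aggregate view the primary assembles from these reports is what replSetGetStatus exposes; a read-only sketch (placeholder address):

    # The admin command that surfaces the optimes exchanged above.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        print(member["name"], member["stateStr"],
              member.get("optime"), member.get("optimeDurable"))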
2019-09-04T06:35:02.948+0000 D2 ASIO [RS] Request 1505 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpVisible: { ts: Timestamp(1567578902, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 4), $clusterTime: { clusterTime: Timestamp(1567578902, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 5) }
2019-09-04T06:35:02.948+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpVisible: { ts: Timestamp(1567578902, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 4), $clusterTime: { clusterTime: Timestamp(1567578902, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 5) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.948+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.948+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.948+0000
2019-09-04T06:35:02.948+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:35:02.948+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.948+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:35:02.948+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 4), t: 1 }, durableWallTime: new Date(1567578902942), appliedOpTime: { ts: Timestamp(1567578902, 5), t: 1 }, appliedWallTime: new Date(1567578902945), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpVisible: { ts: Timestamp(1567578902, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.948+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1506 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.948+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 4), t: 1 }, durableWallTime: new Date(1567578902942), appliedOpTime: { ts: Timestamp(1567578902, 5), t: 1 }, appliedWallTime: new Date(1567578902945), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpVisible: { ts: Timestamp(1567578902, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.948+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.948+0000
2019-09-04T06:35:02.948+0000 D2 ASIO [RS] Request 1506 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpVisible: { ts: Timestamp(1567578902, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 5), $clusterTime: { clusterTime: Timestamp(1567578902, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 5) }
2019-09-04T06:35:02.948+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpVisible: { ts: Timestamp(1567578902, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 5), $clusterTime: { clusterTime: Timestamp(1567578902, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 5) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.948+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.948+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 5), t: 1 }, durableWallTime: new Date(1567578902945), appliedOpTime: { ts: Timestamp(1567578902, 5), t: 1 }, appliedWallTime: new Date(1567578902945), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpVisible: { ts: Timestamp(1567578902, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.948+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1507 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.948+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 5), t: 1 }, durableWallTime: new Date(1567578902945), appliedOpTime: { ts: Timestamp(1567578902, 5), t: 1 }, appliedWallTime: new Date(1567578902945), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 4), t: 1 }, lastCommittedWall: new Date(1567578902942), lastOpVisible: { ts: Timestamp(1567578902, 4), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.948+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.948+0000
2019-09-04T06:35:02.949+0000 D2 ASIO [RS] Request 1507 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpVisible: { ts: Timestamp(1567578902, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 5), $clusterTime: { clusterTime: Timestamp(1567578902, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 5) }
2019-09-04T06:35:02.949+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpVisible: { ts: Timestamp(1567578902, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 5), $clusterTime: { clusterTime: Timestamp(1567578902, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 5) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.949+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.949+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.949+0000
2019-09-04T06:35:02.949+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578902, 5), t: 1 }
2019-09-04T06:35:02.949+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1508 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.949+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 4), t: 1 } }
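Across requests 1504 through 1507 the commit point advances from { ts: Timestamp(1567578902, 4) } to { ts: Timestamp(1567578902, 5) }: once a majority of members report the op durable, the primary bumps lastOpCommitted, and the new value rides back to this node in $replData. That majority point is the same one that w: "majority" writes and readConcern: "majority" reads wait on; a sketch of opting into it from a driver (placeholder address):

    # Reads and writes pinned to the majority commit point seen in $replData.
    from pymongo import MongoClient, WriteConcern
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://localhost:27019")
    locks = client.get_database(
        "config",
        write_concern=WriteConcern(w="majority"),  # ack only after the commit point advances
        read_concern=ReadConcern("majority"),      # never read past lastOpCommitted
    ).get_collection("locks")
    print(locks.find_one({"_id": "config"}))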
2019-09-04T06:35:02.949+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.949+0000
2019-09-04T06:35:02.949+0000 D2 ASIO [RS] Request 1508 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpVisible: { ts: Timestamp(1567578902, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpApplied: { ts: Timestamp(1567578902, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 5), $clusterTime: { clusterTime: Timestamp(1567578902, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 5) }
2019-09-04T06:35:02.949+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpVisible: { ts: Timestamp(1567578902, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpApplied: { ts: Timestamp(1567578902, 5), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 5), $clusterTime: { clusterTime: Timestamp(1567578902, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 5) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.949+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.949+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:35:02.949+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.949+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.949+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578897, 5)
2019-09-04T06:35:02.949+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:12.975+0000
2019-09-04T06:35:02.949+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:14.012+0000
2019-09-04T06:35:02.949+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:35:02.949+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000
2019-09-04T06:35:02.949+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1509 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.949+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 5), t: 1 } }
2019-09-04T06:35:02.949+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000
2019-09-04T06:35:02.949+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.949+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
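Each "Got notified of new snapshot" / "waitUntilOpTime" pair is a parked client operation: the connection asked to read at or after a cluster time this node had not yet reached, so it sleeps until the committed snapshot catches up, with the listed deadline coming from the operation's own time limit. Drivers trigger this server-side wait transparently through causally consistent sessions; a sketch (placeholder address):

    # A causally consistent session; a lagging member parks the second read in
    # waitUntilOpTime until its snapshot reaches the session's afterClusterTime.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")
    with client.start_session(causal_consistency=True) as session:
        client.config.locks.find_one({"_id": "config"}, session=session)
        client.config.locks.find_one({"_id": "config.system.sessions"}, session=session)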
2019-09-04T06:35:02.949+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.949+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000
2019-09-04T06:35:02.950+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.950+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000
2019-09-04T06:35:02.950+0000 D2 ASIO [RS] Request 1509 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578902, 6), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902948), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca6608'), when: new Date(1567578902948), who: "ConfigServer:conn4988" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpVisible: { ts: Timestamp(1567578902, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpApplied: { ts: Timestamp(1567578902, 6), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 5), $clusterTime: { clusterTime: Timestamp(1567578902, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 6) }
2019-09-04T06:35:02.950+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578902, 6), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902948), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca6608'), when: new Date(1567578902948), who: "ConfigServer:conn4988" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpVisible: { ts: Timestamp(1567578902, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpApplied: { ts: Timestamp(1567578902, 6), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 5), $clusterTime: { clusterTime: Timestamp(1567578902, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 6) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.950+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.950+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.950+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578902, 6) and ending at ts: Timestamp(1567578902, 6)
2019-09-04T06:35:02.950+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.950+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.950+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.950+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.950+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000
2019-09-04T06:35:02.950+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.950+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000
2019-09-04T06:35:02.950+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.950+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000
2019-09-04T06:35:02.950+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.950+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000
2019-09-04T06:35:02.950+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.950+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000
2019-09-04T06:35:02.950+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578902, 5), t: 1 }, 2019-09-04T06:35:02.945+0000
2019-09-04T06:35:02.950+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000
2019-09-04T06:35:02.950+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:14.012+0000
2019-09-04T06:35:02.950+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:13.690+0000
2019-09-04T06:35:02.950+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:02.950+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578902, 6), t: 1 }
2019-09-04T06:35:02.950+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:02.950+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:02.950+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:02.950+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 5)
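The next fetched entry is the same pattern one namespace over: a few milliseconds after locking "config", the config server takes the distributed lock for config.system.sessions (the periodic sessions-collection refresh). Current lock holders can be listed straight from the config database; a read-only sketch (placeholder address):

    # List distributed locks currently held (state 2), as updated in this batch.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")
    for lock in client.config.locks.find({"state": 2}):
        print(lock["_id"], "held by", lock.get("who"), "since", lock.get("when"))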
2019-09-04T06:35:02.950+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21910
2019-09-04T06:35:02.950+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:02.950+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:02.950+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21910
2019-09-04T06:35:02.950+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:02.950+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:35:02.950+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:02.950+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578902, 6) }
2019-09-04T06:35:02.950+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 5)
2019-09-04T06:35:02.950+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21913
2019-09-04T06:35:02.950+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21907
2019-09-04T06:35:02.950+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:02.950+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:02.950+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21913
2019-09-04T06:35:02.950+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21907
2019-09-04T06:35:02.950+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21916
2019-09-04T06:35:02.950+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21916
2019-09-04T06:35:02.950+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:02.950+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 21918
2019-09-04T06:35:02.950+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578902, 6)
2019-09-04T06:35:02.950+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578902, 6)
2019-09-04T06:35:02.950+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 21918
2019-09-04T06:35:02.950+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:02.950+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:35:02.950+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21917
2019-09-04T06:35:02.950+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21917
2019-09-04T06:35:02.950+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21920
2019-09-04T06:35:02.950+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21920
2019-09-04T06:35:02.950+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578902, 6), t: 1 }({ ts: Timestamp(1567578902, 6), t: 1 })
2019-09-04T06:35:02.950+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 6)
2019-09-04T06:35:02.950+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21921
2019-09-04T06:35:02.950+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578902, 6) } } ] } sort: {} projection: {}
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and t $eq 1 ts $lt Timestamp(1567578902, 6) Sort: {} Proj: {} =============================
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578902, 6) || First: notFirst: full path: ts
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578902, 6) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $or $and t $eq 1 ts $lt Timestamp(1567578902, 6) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578902, 6) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.950+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578902, 6) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.950+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21921
2019-09-04T06:35:02.950+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:02.950+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:02.950+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578902, 6), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902948), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca6608'), when: new Date(1567578902948), who: "ConfigServer:conn4988" } } }, oplog application mode: Secondary
2019-09-04T06:35:02.950+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578902, 6)
2019-09-04T06:35:02.950+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 21923
2019-09-04T06:35:02.950+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "config.system.sessions" }
2019-09-04T06:35:02.951+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:35:02.951+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 21923
2019-09-04T06:35:02.951+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:02.951+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578902, 6), t: 1 }({ ts: Timestamp(1567578902, 6), t: 1 })
2019-09-04T06:35:02.951+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 6)
2019-09-04T06:35:02.951+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21922
2019-09-04T06:35:02.951+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and Sort: {} Proj: {} =============================
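Worker-8 applies the entry in "Secondary" oplog application mode: what replicates is not the client's original command but an idempotent per-field $set diff keyed by _id, so replaying the same entry twice leaves the document unchanged. A toy sketch of that property on plain dicts, heavily simplified (real application also handles $unset, array operators, the $v version field, and write timestamps):

    # The idempotent $set merge a repl writer performs for an op: "u" entry.
    def apply_set_op(doc: dict, oplog_entry: dict) -> dict:
        """Apply the {$set: {...}} portion of an update-type oplog entry."""
        updated = dict(doc)
        for field, value in oplog_entry["o"].get("$set", {}).items():
            updated[field] = value  # per-field last-writer-wins; replay-safe
        return updated

    lock = {"_id": "config.system.sessions", "state": 0}
    entry = {"op": "u", "o2": {"_id": "config.system.sessions"},
             "o": {"$v": 1, "$set": {"state": 2, "who": "ConfigServer:conn4988"}}}
    once = apply_set_op(lock, entry)
    assert once == apply_set_op(once, entry)  # applying twice changes nothing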
2019-09-04T06:35:02.951+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.951+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:35:02.951+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.951+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.951+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:02.951+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21922
2019-09-04T06:35:02.951+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578902, 6)
2019-09-04T06:35:02.951+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21926
2019-09-04T06:35:02.951+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21926
2019-09-04T06:35:02.951+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578902, 6), t: 1 }({ ts: Timestamp(1567578902, 6), t: 1 })
2019-09-04T06:35:02.951+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.951+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 5), t: 1 }, durableWallTime: new Date(1567578902945), appliedOpTime: { ts: Timestamp(1567578902, 6), t: 1 }, appliedWallTime: new Date(1567578902948), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpVisible: { ts: Timestamp(1567578902, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.951+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1510 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.951+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 5), t: 1 }, durableWallTime: new Date(1567578902945), appliedOpTime: { ts: Timestamp(1567578902, 6), t: 1 }, appliedWallTime: new Date(1567578902948), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpVisible: { ts: Timestamp(1567578902, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.951+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.951+0000
2019-09-04T06:35:02.951+0000 D2 ASIO [RS] Request 1510 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpVisible: { ts: Timestamp(1567578902, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 5), $clusterTime: { clusterTime: Timestamp(1567578902, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 6) }
2019-09-04T06:35:02.951+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpVisible: { ts: Timestamp(1567578902, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 5), $clusterTime: { clusterTime: Timestamp(1567578902, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 6) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.951+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.951+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.951+0000
2019-09-04T06:35:02.952+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578902, 6), t: 1 }
2019-09-04T06:35:02.952+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1511 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.952+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 5), t: 1 } }
2019-09-04T06:35:02.952+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.951+0000
2019-09-04T06:35:02.952+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:35:02.952+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.952+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 6), t: 1 }, durableWallTime: new Date(1567578902948), appliedOpTime: { ts: Timestamp(1567578902, 6), t: 1 }, appliedWallTime: new Date(1567578902948), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpVisible: { ts: Timestamp(1567578902, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.952+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1512 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.952+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 6), t: 1 }, durableWallTime: new Date(1567578902948), appliedOpTime: { ts: Timestamp(1567578902, 6), t: 1 }, appliedWallTime: new Date(1567578902948), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpVisible: { ts: Timestamp(1567578902, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.952+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.951+0000
2019-09-04T06:35:02.952+0000 D2 ASIO [RS] Request 1512 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpVisible: { ts: Timestamp(1567578902, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 5), $clusterTime: { clusterTime: Timestamp(1567578902, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 6) }
2019-09-04T06:35:02.952+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 5), t: 1 }, lastCommittedWall: new Date(1567578902945), lastOpVisible: { ts: Timestamp(1567578902, 5), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 5), $clusterTime: { clusterTime: Timestamp(1567578902, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 6) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.952+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.952+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement
date is 2019-09-04T06:35:32.951+0000 2019-09-04T06:35:02.953+0000 D2 ASIO [RS] Request 1511 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 6), t: 1 }, lastCommittedWall: new Date(1567578902948), lastOpVisible: { ts: Timestamp(1567578902, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 6), t: 1 }, lastCommittedWall: new Date(1567578902948), lastOpApplied: { ts: Timestamp(1567578902, 6), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 6), $clusterTime: { clusterTime: Timestamp(1567578902, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 6) } 2019-09-04T06:35:02.953+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 6), t: 1 }, lastCommittedWall: new Date(1567578902948), lastOpVisible: { ts: Timestamp(1567578902, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 6), t: 1 }, lastCommittedWall: new Date(1567578902948), lastOpApplied: { ts: Timestamp(1567578902, 6), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 6), $clusterTime: { clusterTime: Timestamp(1567578902, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 6) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:02.953+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:02.953+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:02.953+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578897, 6) 2019-09-04T06:35:02.953+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:13.690+0000 2019-09-04T06:35:02.953+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:13.628+0000 2019-09-04T06:35:02.953+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:02.953+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000 2019-09-04T06:35:02.953+0000 D3 EXECUTOR [replication-1] Scheduling remote 
command request: RemoteCommand 1513 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.953+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 6), t: 1 } } 2019-09-04T06:35:02.953+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000 2019-09-04T06:35:02.953+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.951+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 
2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000 2019-09-04T06:35:02.953+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.954+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000 2019-09-04T06:35:02.954+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.954+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000 2019-09-04T06:35:02.954+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.954+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000 2019-09-04T06:35:02.954+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578902, 6), t: 1 }, 2019-09-04T06:35:02.948+0000 2019-09-04T06:35:02.954+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000 2019-09-04T06:35:02.956+0000 D2 ASIO [RS] Request 1513 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578902, 7), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902953), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 6), t: 1 }, lastCommittedWall: new Date(1567578902948), lastOpVisible: { ts: Timestamp(1567578902, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, 
syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 6), t: 1 }, lastCommittedWall: new Date(1567578902948), lastOpApplied: { ts: Timestamp(1567578902, 7), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 6), $clusterTime: { clusterTime: Timestamp(1567578902, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 7) } 2019-09-04T06:35:02.956+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578902, 7), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902953), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 6), t: 1 }, lastCommittedWall: new Date(1567578902948), lastOpVisible: { ts: Timestamp(1567578902, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 6), t: 1 }, lastCommittedWall: new Date(1567578902948), lastOpApplied: { ts: Timestamp(1567578902, 7), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 6), $clusterTime: { clusterTime: Timestamp(1567578902, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 7) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:02.957+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:02.957+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578902, 7) and ending at ts: Timestamp(1567578902, 7) 2019-09-04T06:35:02.957+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:13.628+0000 2019-09-04T06:35:02.957+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:13.353+0000 2019-09-04T06:35:02.957+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:35:02.957+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578902, 7), t: 1 } 2019-09-04T06:35:02.957+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:02.957+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:02.957+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:02.957+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 6) 2019-09-04T06:35:02.957+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21930 2019-09-04T06:35:02.957+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:02.957+0000 D3 
STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:02.957+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21930 2019-09-04T06:35:02.957+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:02.957+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:02.957+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 6) 2019-09-04T06:35:02.957+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:35:02.957+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21933 2019-09-04T06:35:02.957+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578902, 7) } 2019-09-04T06:35:02.957+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:02.957+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:02.957+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21933 2019-09-04T06:35:02.957+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21928 2019-09-04T06:35:02.957+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21928 2019-09-04T06:35:02.957+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21936 2019-09-04T06:35:02.957+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21936 2019-09-04T06:35:02.957+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:02.957+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 21938 2019-09-04T06:35:02.957+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578902, 7) 2019-09-04T06:35:02.957+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578902, 7) 2019-09-04T06:35:02.957+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 21938 2019-09-04T06:35:02.957+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:02.957+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:35:02.957+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21937 2019-09-04T06:35:02.957+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21937 2019-09-04T06:35:02.957+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21940 2019-09-04T06:35:02.957+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21940 2019-09-04T06:35:02.957+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578902, 7), t: 1 }({ ts: Timestamp(1567578902, 7), t: 1 }) 2019-09-04T06:35:02.957+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 7) 2019-09-04T06:35:02.957+0000 D3 STORAGE [rsSync-0] WT 
begin_transaction for snapshot id 21941
2019-09-04T06:35:02.957+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578902, 7) } } ] } sort: {} projection: {}
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578902, 7)
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578902, 7) || First: notFirst: full path: ts
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578902, 7)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578902, 7)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578902, 7) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578902, 7)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:02.957+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21941
2019-09-04T06:35:02.957+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:02.957+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:02.957+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578902, 7), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902953), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary
2019-09-04T06:35:02.957+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578902, 7)
2019-09-04T06:35:02.957+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 21943
2019-09-04T06:35:02.957+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "config.system.sessions" }
2019-09-04T06:35:02.957+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:35:02.957+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 21943
2019-09-04T06:35:02.957+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:02.957+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578902, 7), t: 1 }({ ts: Timestamp(1567578902, 7), t: 1 })
2019-09-04T06:35:02.957+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 7)
2019-09-04T06:35:02.957+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21942
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
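
The entries above trace one complete fetch-and-apply cycle on this secondary: a getMore on local.oplog.rs returns a batch from the sync source, the batch is written into the local oplog, the oplog truncate-after point and minvalid are advanced, and a repl writer worker applies the op. For illustration only (this is not part of the log), the same tailable-cursor pattern might look as follows in PyMongo, reusing the sync-source host, port, and resume optime seen above; no credentials are passed because this deployment runs with authorization disabled. (The planner's collection-scan fallback for the minvalid bookkeeping query resumes directly below.)

# Hypothetical sketch, not taken from the log: tail the oplog the way the
# oplog fetcher's getMore loop above does.
import pymongo
from pymongo import MongoClient
from bson.timestamp import Timestamp

# Sync source named in the log; directConnection avoids replica-set discovery.
client = MongoClient("cmodb804.togewa.com", 27019, directConnection=True)
oplog = client.local["oplog.rs"]

last_ts = Timestamp(1567578902, 6)  # last fetched optime, from the log above

cursor = oplog.find(
    {"ts": {"$gt": last_ts}},
    cursor_type=pymongo.CursorType.TAILABLE_AWAIT,
    max_await_time_ms=5000,  # mirrors maxTimeMS: 5000 on each logged getMore
)
while cursor.alive:
    for entry in cursor:
        # Each entry carries ts/t/op/ns/o (plus o2 for updates), matching the
        # nextBatch documents shown in the responses above.
        print(entry["ts"], entry["op"], entry["ns"])
        last_ts = entry["ts"]
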
2019-09-04T06:35:02.957+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:02.957+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:02.957+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21942 2019-09-04T06:35:02.957+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578902, 7) 2019-09-04T06:35:02.958+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21947 2019-09-04T06:35:02.958+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21947 2019-09-04T06:35:02.958+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578902, 7), t: 1 }({ ts: Timestamp(1567578902, 7), t: 1 }) 2019-09-04T06:35:02.958+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:02.958+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 6), t: 1 }, durableWallTime: new Date(1567578902948), appliedOpTime: { ts: Timestamp(1567578902, 7), t: 1 }, appliedWallTime: new Date(1567578902953), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 6), t: 1 }, lastCommittedWall: new Date(1567578902948), lastOpVisible: { ts: Timestamp(1567578902, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:02.958+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1514 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.958+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 6), t: 1 }, durableWallTime: new Date(1567578902948), appliedOpTime: { ts: Timestamp(1567578902, 7), t: 1 }, appliedWallTime: new Date(1567578902953), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 6), t: 1 }, lastCommittedWall: new Date(1567578902948), lastOpVisible: { ts: Timestamp(1567578902, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:02.958+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.957+0000 2019-09-04T06:35:02.958+0000 D2 ASIO [RS] Request 1514 finished with response: 
{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 6), t: 1 }, lastCommittedWall: new Date(1567578902948), lastOpVisible: { ts: Timestamp(1567578902, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 6), $clusterTime: { clusterTime: Timestamp(1567578902, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 7) } 2019-09-04T06:35:02.958+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 6), t: 1 }, lastCommittedWall: new Date(1567578902948), lastOpVisible: { ts: Timestamp(1567578902, 6), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 6), $clusterTime: { clusterTime: Timestamp(1567578902, 7), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 7) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:02.958+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:02.958+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.958+0000 2019-09-04T06:35:02.959+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578902, 7), t: 1 } 2019-09-04T06:35:02.959+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1515 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.959+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 6), t: 1 } } 2019-09-04T06:35:02.959+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.958+0000 2019-09-04T06:35:02.959+0000 D2 ASIO [RS] Request 1515 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 7), t: 1 }, lastCommittedWall: new Date(1567578902953), lastOpVisible: { ts: Timestamp(1567578902, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 7), t: 1 }, lastCommittedWall: new Date(1567578902953), lastOpApplied: { ts: Timestamp(1567578902, 7), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 7), $clusterTime: { clusterTime: Timestamp(1567578902, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 7) } 2019-09-04T06:35:02.959+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 7), t: 1 }, 
lastCommittedWall: new Date(1567578902953), lastOpVisible: { ts: Timestamp(1567578902, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 7), t: 1 }, lastCommittedWall: new Date(1567578902953), lastOpApplied: { ts: Timestamp(1567578902, 7), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 7), $clusterTime: { clusterTime: Timestamp(1567578902, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 7) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:02.959+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:02.959+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:02.959+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578897, 7) 2019-09-04T06:35:02.959+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:13.353+0000 2019-09-04T06:35:02.959+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:13.134+0000 2019-09-04T06:35:02.959+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:02.959+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1516 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.959+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 7), t: 1 } } 2019-09-04T06:35:02.959+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:02.959+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.958+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL 
[conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:02.959+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn456] Got notified of new 
snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578902, 7), t: 1 }, 2019-09-04T06:35:02.953+0000 2019-09-04T06:35:02.959+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000 2019-09-04T06:35:02.960+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:35:02.960+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:02.960+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 7), t: 1 }, durableWallTime: new Date(1567578902953), appliedOpTime: { ts: Timestamp(1567578902, 7), t: 1 }, appliedWallTime: new Date(1567578902953), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 7), t: 1 }, lastCommittedWall: new Date(1567578902953), lastOpVisible: { ts: Timestamp(1567578902, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:02.960+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1517 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.960+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 7), t: 1 }, durableWallTime: new Date(1567578902953), appliedOpTime: { ts: Timestamp(1567578902, 7), t: 1 }, appliedWallTime: new Date(1567578902953), 
memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 7), t: 1 }, lastCommittedWall: new Date(1567578902953), lastOpVisible: { ts: Timestamp(1567578902, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:02.960+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.958+0000 2019-09-04T06:35:02.960+0000 D2 ASIO [RS] Request 1517 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 7), t: 1 }, lastCommittedWall: new Date(1567578902953), lastOpVisible: { ts: Timestamp(1567578902, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 7), $clusterTime: { clusterTime: Timestamp(1567578902, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 7) } 2019-09-04T06:35:02.960+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 7), t: 1 }, lastCommittedWall: new Date(1567578902953), lastOpVisible: { ts: Timestamp(1567578902, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 7), $clusterTime: { clusterTime: Timestamp(1567578902, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 7) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:02.960+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:02.960+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.958+0000 2019-09-04T06:35:02.960+0000 D2 ASIO [RS] Request 1516 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578902, 8), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902958), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 7), t: 1 }, lastCommittedWall: new Date(1567578902953), lastOpVisible: { ts: Timestamp(1567578902, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 7), t: 1 }, lastCommittedWall: new Date(1567578902953), lastOpApplied: { ts: Timestamp(1567578902, 8), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 7), $clusterTime: { clusterTime: Timestamp(1567578902, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, 
operationTime: Timestamp(1567578902, 8) } 2019-09-04T06:35:02.960+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578902, 8), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902958), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 7), t: 1 }, lastCommittedWall: new Date(1567578902953), lastOpVisible: { ts: Timestamp(1567578902, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 7), t: 1 }, lastCommittedWall: new Date(1567578902953), lastOpApplied: { ts: Timestamp(1567578902, 8), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 7), $clusterTime: { clusterTime: Timestamp(1567578902, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 8) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:02.960+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:02.960+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578902, 8) and ending at ts: Timestamp(1567578902, 8) 2019-09-04T06:35:02.960+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:13.134+0000 2019-09-04T06:35:02.960+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:14.012+0000 2019-09-04T06:35:02.960+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:02.960+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578902, 8), t: 1 } 2019-09-04T06:35:02.961+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:02.961+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:02.961+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:02.961+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 7) 2019-09-04T06:35:02.961+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21950 2019-09-04T06:35:02.961+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:02.961+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:02.961+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21950 2019-09-04T06:35:02.961+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:02.961+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 
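
The replSetUpdatePosition commands above are how this member reports every node's durableOpTime and appliedOpTime to its sync source; the same per-member optimes can be read back on demand with replSetGetStatus. A hypothetical PyMongo sketch, again assuming the no-auth deployment and a member named in the log:

from pymongo import MongoClient

client = MongoClient("cmodb804.togewa.com", 27019, directConnection=True)
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    # member["optime"] corresponds to the per-memberId applied optimes that
    # the replSetUpdatePosition payloads above keep in sync.
    print(member["_id"], member["name"], member["stateStr"], member["optime"])
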
2019-09-04T06:35:02.961+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:35:02.961+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 7) 2019-09-04T06:35:02.961+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578902, 8) } 2019-09-04T06:35:02.961+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21953 2019-09-04T06:35:02.961+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:02.961+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21948 2019-09-04T06:35:02.961+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:02.961+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21953 2019-09-04T06:35:02.961+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21948 2019-09-04T06:35:02.961+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21956 2019-09-04T06:35:02.961+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21956 2019-09-04T06:35:02.961+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:02.961+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 21958 2019-09-04T06:35:02.961+0000 D4 STORAGE [repl-writer-worker-1] inserting record with timestamp Timestamp(1567578902, 8) 2019-09-04T06:35:02.961+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578902, 8) 2019-09-04T06:35:02.961+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 21958 2019-09-04T06:35:02.961+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:02.961+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:35:02.961+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21957 2019-09-04T06:35:02.961+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21957 2019-09-04T06:35:02.961+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21960 2019-09-04T06:35:02.961+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21960 2019-09-04T06:35:02.961+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578902, 8), t: 1 }({ ts: Timestamp(1567578902, 8), t: 1 }) 2019-09-04T06:35:02.961+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 8) 2019-09-04T06:35:02.961+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21961 2019-09-04T06:35:02.961+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578902, 8) } } ] } sort: {} projection: {} 2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:35:02.961+0000 D5 QUERY 
[rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578902, 8)
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578902, 8) || First: notFirst: full path: ts
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578902, 8)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578902, 8)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578902, 8) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
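
The planner output around this point belongs to the minvalid bookkeeping: local.replset.minvalid carries only the _id index, so neither branch of the logged $or predicate can use an index and the subplanner falls back to a collection scan (its COLLSCAN output continues below). For illustration only, an explain() of the same predicate would surface that plan; the timestamp is the one being planned here:

from pymongo import MongoClient
from bson.timestamp import Timestamp

client = MongoClient("cmodb804.togewa.com", 27019, directConnection=True)
minvalid = client.local["replset.minvalid"]

# Same shape as the logged sub-query; with only the _id index available,
# the winning plan is a collection scan for each $or branch.
plan = minvalid.find(
    {"$or": [
        {"t": {"$lt": 1}},
        {"t": 1, "ts": {"$lt": Timestamp(1567578902, 8)}},
    ]}
).explain()
print(plan["queryPlanner"]["winningPlan"])
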
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578902, 8) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.961+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21961
2019-09-04T06:35:02.961+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:02.961+0000 D3 STORAGE [repl-writer-worker-15] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:02.961+0000 D3 REPL [repl-writer-worker-15] applying op: { ts: Timestamp(1567578902, 8), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902958), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary
2019-09-04T06:35:02.961+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578902, 8)
2019-09-04T06:35:02.961+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 21963
2019-09-04T06:35:02.961+0000 D2 QUERY [repl-writer-worker-15] Using idhack: { _id: "config" }
2019-09-04T06:35:02.961+0000 D4 WRITE [repl-writer-worker-15] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:35:02.961+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 21963
2019-09-04T06:35:02.961+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:02.961+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578902, 8), t: 1 }({ ts: Timestamp(1567578902, 8), t: 1 })
2019-09-04T06:35:02.961+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 8)
2019-09-04T06:35:02.961+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21962
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.961+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.961+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:02.961+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21962
2019-09-04T06:35:02.961+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578902, 8)
2019-09-04T06:35:02.961+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.961+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 7), t: 1 }, durableWallTime: new Date(1567578902953), appliedOpTime: { ts: Timestamp(1567578902, 8), t: 1 }, appliedWallTime: new Date(1567578902958), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 7), t: 1 }, lastCommittedWall: new Date(1567578902953), lastOpVisible: { ts: Timestamp(1567578902, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.961+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21966
2019-09-04T06:35:02.961+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21966
2019-09-04T06:35:02.961+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578902, 8), t: 1 }({ ts: Timestamp(1567578902, 8), t: 1 })
2019-09-04T06:35:02.961+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1518 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.961+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 7), t: 1 }, durableWallTime: new Date(1567578902953), appliedOpTime: { ts: Timestamp(1567578902, 8), t: 1 }, appliedWallTime: new Date(1567578902958), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 7), t: 1 }, lastCommittedWall: new Date(1567578902953), lastOpVisible: { ts: Timestamp(1567578902, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.962+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.961+0000
2019-09-04T06:35:02.962+0000 D2 ASIO [RS] Request 1518 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 7), t: 1 }, lastCommittedWall: new Date(1567578902953), lastOpVisible: { ts: Timestamp(1567578902, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 7), $clusterTime: { clusterTime: Timestamp(1567578902, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 8) }
2019-09-04T06:35:02.962+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 7), t: 1 }, lastCommittedWall: new Date(1567578902953), lastOpVisible: { ts: Timestamp(1567578902, 7), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 7), $clusterTime: { clusterTime: Timestamp(1567578902, 8), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 8) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.962+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.962+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.962+0000
2019-09-04T06:35:02.963+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578902, 8), t: 1 }
2019-09-04T06:35:02.963+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1519 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.963+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 7), t: 1 } }
2019-09-04T06:35:02.963+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.962+0000
2019-09-04T06:35:02.963+0000 D2 ASIO [RS] Request 1519 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 8), t: 1 }, lastCommittedWall: new Date(1567578902958), lastOpVisible: { ts: Timestamp(1567578902, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 8), t: 1 }, lastCommittedWall: new Date(1567578902958), lastOpApplied: { ts: Timestamp(1567578902, 8), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 8), $clusterTime: { clusterTime: Timestamp(1567578902, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 8) }
2019-09-04T06:35:02.963+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 8), t: 1 }, lastCommittedWall: new Date(1567578902958), lastOpVisible: { ts: Timestamp(1567578902, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 8), t: 1 }, lastCommittedWall: new Date(1567578902958), lastOpApplied: { ts: Timestamp(1567578902, 8), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 8), $clusterTime: { clusterTime: Timestamp(1567578902, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 8) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.963+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.963+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:35:02.963+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578897, 8)
2019-09-04T06:35:02.963+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000
2019-09-04T06:35:02.963+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:14.012+0000
2019-09-04T06:35:02.963+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:12.968+0000
2019-09-04T06:35:02.963+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:35:02.963+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1520 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.963+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 8), t: 1 } }
2019-09-04T06:35:02.963+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:02.963+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.962+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578902, 8), t: 1 }, 2019-09-04T06:35:02.958+0000
2019-09-04T06:35:02.963+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000
2019-09-04T06:35:02.964+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:35:02.964+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.964+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 8), t: 1 }, durableWallTime: new Date(1567578902958), appliedOpTime: { ts: Timestamp(1567578902, 8), t: 1 }, appliedWallTime: new Date(1567578902958), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 8), t: 1 }, lastCommittedWall: new Date(1567578902958), lastOpVisible: { ts: Timestamp(1567578902, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.964+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1521 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.964+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 8), t: 1 }, durableWallTime: new Date(1567578902958), appliedOpTime: { ts: Timestamp(1567578902, 8), t: 1 }, appliedWallTime: new Date(1567578902958), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 8), t: 1 }, lastCommittedWall: new Date(1567578902958), lastOpVisible: { ts: Timestamp(1567578902, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.964+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.962+0000
2019-09-04T06:35:02.965+0000 D2 ASIO [RS] Request 1521 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 8), t: 1 }, lastCommittedWall: new Date(1567578902958), lastOpVisible: { ts: Timestamp(1567578902, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 8), $clusterTime: { clusterTime: Timestamp(1567578902, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 8) }
2019-09-04T06:35:02.965+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 8), t: 1 }, lastCommittedWall: new Date(1567578902958), lastOpVisible: { ts: Timestamp(1567578902, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 8), $clusterTime: { clusterTime: Timestamp(1567578902, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 8) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.965+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.965+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.962+0000
2019-09-04T06:35:02.965+0000 D2 ASIO [RS] Request 1520 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578902, 9), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902962), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca6622'), when: new Date(1567578902962), who: "ConfigServer:conn9511" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 8), t: 1 }, lastCommittedWall: new Date(1567578902958), lastOpVisible: { ts: Timestamp(1567578902, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 8), t: 1 }, lastCommittedWall: new Date(1567578902958), lastOpApplied: { ts: Timestamp(1567578902, 9), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 8), $clusterTime: { clusterTime: Timestamp(1567578902, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 9) }
2019-09-04T06:35:02.965+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578902, 9), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902962), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca6622'), when: new Date(1567578902962), who: "ConfigServer:conn9511" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 8), t: 1 }, lastCommittedWall: new Date(1567578902958), lastOpVisible: { ts: Timestamp(1567578902, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 8), t: 1 }, lastCommittedWall: new Date(1567578902958), lastOpApplied: { ts: Timestamp(1567578902, 9), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 8), $clusterTime: { clusterTime: Timestamp(1567578902, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 9) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.965+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.965+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578902, 9) and ending at ts: Timestamp(1567578902, 9)
2019-09-04T06:35:02.965+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:12.968+0000
2019-09-04T06:35:02.965+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:14.207+0000
2019-09-04T06:35:02.965+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:02.965+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578902, 9), t: 1 }
2019-09-04T06:35:02.965+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:02.965+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:02.965+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:02.965+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 8)
2019-09-04T06:35:02.965+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21970
2019-09-04T06:35:02.965+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:02.965+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:02.965+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21970
2019-09-04T06:35:02.965+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:02.965+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:35:02.965+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
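
Requests 1519, 1520, 1523 and 1524 are the oplog fetcher's steady-state loop: an awaitable getMore against local.oplog.rs (the maxTimeMS: 5000 in the command is how long the upstream node may block) that usually returns an empty nextBatch and occasionally carries a single config.locks update. The same tailing behaviour can be sketched from a driver with a tailable, awaitable cursor; this mirrors the mechanism, not the server's internal fetcher, and the connection string is an assumption:

    # Sketch: tail local.oplog.rs the way the fetcher's getMore loop does.
    # Connection details are illustrative; requires access to the 'local' db.
    from bson import Timestamp
    from pymongo import MongoClient, ReadPreference
    from pymongo.cursor import CursorType

    client = MongoClient("mongodb://localhost:27019", directConnection=True)
    oplog = client.local.get_collection(
        "oplog.rs", read_preference=ReadPreference.SECONDARY_PREFERRED)

    last_ts = Timestamp(1567578902, 9)  # resume point, e.g. the last fetched optime
    cursor = oplog.find(
        {"ts": {"$gt": last_ts}},
        cursor_type=CursorType.TAILABLE_AWAIT,  # server blocks on getMore, like above
    )
    for op in cursor:
        # Each document mirrors the nextBatch entries: ts, t, op, ns, o, ...
        print(op["ts"], op.get("op"), op.get("ns"))
        last_ts = op["ts"]
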
2019-09-04T06:35:02.965+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578902, 9) }
2019-09-04T06:35:02.965+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 8)
2019-09-04T06:35:02.965+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21973
2019-09-04T06:35:02.965+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:02.965+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:02.965+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21968
2019-09-04T06:35:02.965+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21973
2019-09-04T06:35:02.965+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21968
2019-09-04T06:35:02.965+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21976
2019-09-04T06:35:02.965+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21976
2019-09-04T06:35:02.965+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:02.965+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 21978
2019-09-04T06:35:02.965+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578902, 9)
2019-09-04T06:35:02.965+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578902, 9)
2019-09-04T06:35:02.965+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 21978
2019-09-04T06:35:02.965+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:02.965+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:35:02.965+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21977
2019-09-04T06:35:02.965+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21977
2019-09-04T06:35:02.965+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21980
2019-09-04T06:35:02.965+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21980
2019-09-04T06:35:02.965+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578902, 9), t: 1 }({ ts: Timestamp(1567578902, 9), t: 1 })
2019-09-04T06:35:02.965+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 9)
2019-09-04T06:35:02.965+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21981
2019-09-04T06:35:02.965+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578902, 9) } } ] } sort: {} projection: {}
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578902, 9) Sort: {} Proj: {} =============================
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578902, 9) || First: notFirst: full path: ts
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578902, 9) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578902, 9) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578902, 9) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.965+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578902, 9) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.965+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21981
2019-09-04T06:35:02.965+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:02.965+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:02.966+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578902, 9), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902962), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca6622'), when: new Date(1567578902962), who: "ConfigServer:conn9511" } } }, oplog application mode: Secondary
2019-09-04T06:35:02.966+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578902, 9)
2019-09-04T06:35:02.966+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 21983
2019-09-04T06:35:02.966+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "config" }
2019-09-04T06:35:02.966+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:35:02.966+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 21983
2019-09-04T06:35:02.966+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:02.966+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578902, 9), t: 1 }({ ts: Timestamp(1567578902, 9), t: 1 })
2019-09-04T06:35:02.966+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 9)
2019-09-04T06:35:02.966+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21982
2019-09-04T06:35:02.966+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:35:02.966+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.966+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:35:02.966+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.966+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.966+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:02.966+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21982
2019-09-04T06:35:02.966+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578902, 9)
2019-09-04T06:35:02.966+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21987
2019-09-04T06:35:02.966+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21987
2019-09-04T06:35:02.966+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578902, 9), t: 1 }({ ts: Timestamp(1567578902, 9), t: 1 })
2019-09-04T06:35:02.966+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.966+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 8), t: 1 }, durableWallTime: new Date(1567578902958), appliedOpTime: { ts: Timestamp(1567578902, 9), t: 1 }, appliedWallTime: new Date(1567578902962), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 8), t: 1 }, lastCommittedWall: new Date(1567578902958), lastOpVisible: { ts: Timestamp(1567578902, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.966+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1522 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.966+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 8), t: 1 }, durableWallTime: new Date(1567578902958), appliedOpTime: { ts: Timestamp(1567578902, 9), t: 1 }, appliedWallTime: new Date(1567578902962), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 8), t: 1 }, lastCommittedWall: new Date(1567578902958), lastOpVisible: { ts: Timestamp(1567578902, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.966+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.966+0000
2019-09-04T06:35:02.966+0000 D2 ASIO [RS] Request 1522 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 8), t: 1 }, lastCommittedWall: new Date(1567578902958), lastOpVisible: { ts: Timestamp(1567578902, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 8), $clusterTime: { clusterTime: Timestamp(1567578902, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 9) }
2019-09-04T06:35:02.966+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 8), t: 1 }, lastCommittedWall: new Date(1567578902958), lastOpVisible: { ts: Timestamp(1567578902, 8), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 8), $clusterTime: { clusterTime: Timestamp(1567578902, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 9) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.966+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.966+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.966+0000
2019-09-04T06:35:02.967+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578902, 9), t: 1 }
2019-09-04T06:35:02.967+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1523 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.967+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 8), t: 1 } }
2019-09-04T06:35:02.967+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.966+0000
2019-09-04T06:35:02.967+0000 D2 ASIO [RS] Request 1523 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 9), t: 1 }, lastCommittedWall: new Date(1567578902962), lastOpVisible: { ts: Timestamp(1567578902, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 9), t: 1 }, lastCommittedWall: new Date(1567578902962), lastOpApplied: { ts: Timestamp(1567578902, 9), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 9), $clusterTime: { clusterTime: Timestamp(1567578902, 10), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 9) }
2019-09-04T06:35:02.967+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 9), t: 1 }, lastCommittedWall: new Date(1567578902962), lastOpVisible: { ts: Timestamp(1567578902, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 9), t: 1 }, lastCommittedWall: new Date(1567578902962), lastOpApplied: { ts: Timestamp(1567578902, 9), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 9), $clusterTime: { clusterTime: Timestamp(1567578902, 10), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 9) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.967+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.967+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:35:02.967+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.967+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.967+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578897, 9)
2019-09-04T06:35:02.967+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:14.207+0000
2019-09-04T06:35:02.967+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:13.095+0000
2019-09-04T06:35:02.967+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:02.967+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1524 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.967+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 9), t: 1 } }
2019-09-04T06:35:02.967+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.967+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.966+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000
2019-09-04T06:35:02.967+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.967+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000
2019-09-04T06:35:02.968+0000 D2 ASIO [RS] Request 1524 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578902, 10), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902966), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca662a'), when: new Date(1567578902966), who: "ConfigServer:conn9511" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 9), t: 1 }, lastCommittedWall: new Date(1567578902962), lastOpVisible: { ts: Timestamp(1567578902, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 9), t: 1 }, lastCommittedWall: new Date(1567578902962), lastOpApplied: { ts: Timestamp(1567578902, 10), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 9), $clusterTime: { clusterTime: Timestamp(1567578902, 10), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 10) }
2019-09-04T06:35:02.968+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.968+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578902, 10), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902966), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca662a'), when: new Date(1567578902966), who: "ConfigServer:conn9511" } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 9), t: 1 }, lastCommittedWall: new Date(1567578902962), lastOpVisible: { ts: Timestamp(1567578902, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 9), t: 1 }, lastCommittedWall: new Date(1567578902962), lastOpApplied: { ts: Timestamp(1567578902, 10), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 9), $clusterTime: { clusterTime: Timestamp(1567578902, 10), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 10) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.968+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000
2019-09-04T06:35:02.968+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.968+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000
2019-09-04T06:35:02.968+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578902, 10) and ending at ts: Timestamp(1567578902, 10)
2019-09-04T06:35:02.968+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:13.095+0000
2019-09-04T06:35:02.968+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:13.487+0000
2019-09-04T06:35:02.968+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:35:02.968+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578902, 10), t: 1 }
2019-09-04T06:35:02.968+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578902, 9), t: 1 }, 2019-09-04T06:35:02.962+0000
2019-09-04T06:35:02.968+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000
2019-09-04T06:35:02.968+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:02.968+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:02.968+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 9)
2019-09-04T06:35:02.968+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21990
2019-09-04T06:35:02.968+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:02.968+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:02.968+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21990
2019-09-04T06:35:02.968+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:02.968+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:35:02.968+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:02.968+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578902, 10) }
2019-09-04T06:35:02.968+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 9)
2019-09-04T06:35:02.968+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 21993
2019-09-04T06:35:02.968+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21988
2019-09-04T06:35:02.968+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
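
Each apply cycle above ends with the reporter pushing replSetUpdatePosition to the sync source (requests 1518, 1521, 1522), carrying a durableOpTime/appliedOpTime pair per member. replSetUpdatePosition itself is internal to replication, but the same per-member optime bookkeeping is visible through the public replSetGetStatus command; a minimal sketch, with the host again an assumption:

    # Sketch: inspect the per-member optimes that replSetUpdatePosition carries.
    # replSetGetStatus is the public view of the same bookkeeping; host is illustrative.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019", directConnection=True)
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        # 'optime' corresponds to the appliedOpTime entries in the log payloads.
        print(m["name"], m["stateStr"], m.get("optime"), m.get("optimeDurable"))
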
2019-09-04T06:35:02.968+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:02.968+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 21993
2019-09-04T06:35:02.968+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21988
2019-09-04T06:35:02.968+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21996
2019-09-04T06:35:02.968+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 21996
2019-09-04T06:35:02.968+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:02.968+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 21998
2019-09-04T06:35:02.968+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578902, 10)
2019-09-04T06:35:02.968+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578902, 10)
2019-09-04T06:35:02.968+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 21998
2019-09-04T06:35:02.968+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:02.968+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:35:02.968+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 21997
2019-09-04T06:35:02.968+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 21997
2019-09-04T06:35:02.968+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22000
2019-09-04T06:35:02.968+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22000
2019-09-04T06:35:02.968+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578902, 10), t: 1 }({ ts: Timestamp(1567578902, 10), t: 1 })
2019-09-04T06:35:02.968+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 10)
2019-09-04T06:35:02.968+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22001
2019-09-04T06:35:02.968+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578902, 10) } } ] } sort: {} projection: {}
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578902, 10) Sort: {} Proj: {} =============================
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578902, 10) || First: notFirst: full path: ts
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578902, 10) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578902, 10) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578902, 10) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.968+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578902, 10) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.968+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22001
2019-09-04T06:35:02.968+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:02.968+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:02.968+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578902, 10), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902966), o: { $v: 1, $set: { state: 2, ts: ObjectId('5d6f5b16ac9313827bca662a'), when: new Date(1567578902966), who: "ConfigServer:conn9511" } } }, oplog application mode: Secondary
2019-09-04T06:35:02.968+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578902, 10)
2019-09-04T06:35:02.968+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 22003
2019-09-04T06:35:02.968+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "config.system.sessions" }
2019-09-04T06:35:02.969+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:35:02.969+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 22003
2019-09-04T06:35:02.969+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:02.969+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578902, 10), t: 1 }({ ts: Timestamp(1567578902, 10), t: 1 })
2019-09-04T06:35:02.969+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 10)
2019-09-04T06:35:02.969+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22002
2019-09-04T06:35:02.969+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:35:02.969+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.969+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:35:02.969+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.969+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.969+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:02.969+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22002
2019-09-04T06:35:02.969+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578902, 10)
2019-09-04T06:35:02.969+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.969+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22006
2019-09-04T06:35:02.969+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 8), t: 1 }, durableWallTime: new Date(1567578902958), appliedOpTime: { ts: Timestamp(1567578902, 10), t: 1 }, appliedWallTime: new Date(1567578902966), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 9), t: 1 }, lastCommittedWall: new Date(1567578902962), lastOpVisible: { ts: Timestamp(1567578902, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.969+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22006
2019-09-04T06:35:02.969+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1525 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.969+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 8), t: 1 }, durableWallTime: new Date(1567578902958), appliedOpTime: { ts: Timestamp(1567578902, 10), t: 1 }, appliedWallTime: new Date(1567578902966), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 9), t: 1 }, lastCommittedWall: new Date(1567578902962), lastOpVisible: { ts: Timestamp(1567578902, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.969+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578902, 10), t: 1 }({ ts: Timestamp(1567578902, 10), t: 1 })
2019-09-04T06:35:02.969+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.969+0000
2019-09-04T06:35:02.969+0000 D2 ASIO [RS] Request 1525 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 9), t: 1 }, lastCommittedWall: new Date(1567578902962), lastOpVisible: { ts: Timestamp(1567578902, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 9), $clusterTime: { clusterTime: Timestamp(1567578902, 10), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 10) }
2019-09-04T06:35:02.969+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 9), t: 1 }, lastCommittedWall: new Date(1567578902962), lastOpVisible: { ts: Timestamp(1567578902, 9), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 9), $clusterTime: { clusterTime: Timestamp(1567578902, 10), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 10) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.969+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.969+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.969+0000
2019-09-04T06:35:02.970+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578902, 10), t: 1 }
2019-09-04T06:35:02.970+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1526 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.970+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 9), t: 1 } }
2019-09-04T06:35:02.970+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.969+0000
2019-09-04T06:35:02.970+0000 D2 ASIO [RS] Request 1526 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 10), t: 1 }, lastCommittedWall: new Date(1567578902966), lastOpVisible: { ts: Timestamp(1567578902, 10), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 10), t: 1 }, lastCommittedWall: new Date(1567578902966), lastOpApplied: { ts: Timestamp(1567578902, 10), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 10), $clusterTime: { clusterTime: Timestamp(1567578902, 11), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 10) }
2019-09-04T06:35:02.970+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 10), t: 1 }, lastCommittedWall: new Date(1567578902966), lastOpVisible: { ts: Timestamp(1567578902, 10), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 10), t: 1 }, lastCommittedWall: new Date(1567578902966), lastOpApplied: { ts: Timestamp(1567578902, 10), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 10), $clusterTime: { clusterTime: Timestamp(1567578902, 11), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 10) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.970+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.970+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:35:02.970+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.970+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.970+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578897, 10)
2019-09-04T06:35:02.970+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:13.487+0000
2019-09-04T06:35:02.970+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:14.202+0000
2019-09-04T06:35:02.970+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:02.970+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1527 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.970+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 10), t: 1 } }
2019-09-04T06:35:02.970+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.970+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.969+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000
2019-09-04T06:35:02.970+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.970+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.970+0000 D2 ASIO [RS] Request 1527 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578902, 11), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902969), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 10), t: 1 }, lastCommittedWall: new Date(1567578902966), lastOpVisible: { ts: Timestamp(1567578902, 10), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 10), t: 1 }, lastCommittedWall: new Date(1567578902966), lastOpApplied: { ts: Timestamp(1567578902, 11), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 10), $clusterTime: { clusterTime: Timestamp(1567578902, 11), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 11) }
2019-09-04T06:35:02.970+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578902, 11), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902969), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 10), t: 1 }, lastCommittedWall: new Date(1567578902966), lastOpVisible: { ts: Timestamp(1567578902, 10), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 10), t: 1 }, lastCommittedWall: new Date(1567578902966), lastOpApplied: { ts: Timestamp(1567578902, 11), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 10), $clusterTime: { clusterTime: Timestamp(1567578902, 11), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 11) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.971+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.971+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.971+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578902, 11) and ending at ts: Timestamp(1567578902, 11)
2019-09-04T06:35:02.971+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578902, 10), t: 1 }, 2019-09-04T06:35:02.966+0000
2019-09-04T06:35:02.971+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000
2019-09-04T06:35:02.971+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:14.202+0000
2019-09-04T06:35:02.971+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:14.432+0000
2019-09-04T06:35:02.971+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:02.971+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578902, 11), t: 1 }
2019-09-04T06:35:02.971+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:02.971+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:02.971+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:02.971+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 10)
2019-09-04T06:35:02.971+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22009
2019-09-04T06:35:02.971+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:02.971+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:02.971+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22009
2019-09-04T06:35:02.971+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:02.971+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:35:02.971+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:02.971+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578902, 11) }
2019-09-04T06:35:02.971+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 10)
2019-09-04T06:35:02.971+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22012
2019-09-04T06:35:02.971+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22007
2019-09-04T06:35:02.971+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:02.971+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:02.971+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22012
2019-09-04T06:35:02.971+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22007
2019-09-04T06:35:02.971+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22015
2019-09-04T06:35:02.971+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22015
2019-09-04T06:35:02.971+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:02.971+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 22017
2019-09-04T06:35:02.971+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578902, 11)
2019-09-04T06:35:02.971+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578902, 11)
2019-09-04T06:35:02.971+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 22017
2019-09-04T06:35:02.971+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:02.971+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:35:02.971+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22016
2019-09-04T06:35:02.971+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22016
2019-09-04T06:35:02.971+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22019
2019-09-04T06:35:02.971+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22019
2019-09-04T06:35:02.971+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578902, 11), t: 1 }({ ts: Timestamp(1567578902, 11), t: 1 })
2019-09-04T06:35:02.971+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 11)
2019-09-04T06:35:02.971+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22020
2019-09-04T06:35:02.971+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578902, 11) } } ] } sort: {} projection: {}
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578902, 11) Sort: {} Proj: {} =============================
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578902, 11) || First: notFirst: full path: ts
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578902, 11) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578902, 11) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578902, 11) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.971+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578902, 11) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.971+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22020
2019-09-04T06:35:02.971+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:02.971+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:02.971+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578902, 11), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config.system.sessions" }, wall: new Date(1567578902969), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary
2019-09-04T06:35:02.971+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578902, 11)
2019-09-04T06:35:02.971+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 22022
2019-09-04T06:35:02.972+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "config.system.sessions" }
2019-09-04T06:35:02.972+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:35:02.972+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 22022
2019-09-04T06:35:02.972+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:02.972+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578902, 11), t: 1 }({ ts: Timestamp(1567578902, 11), t: 1 })
2019-09-04T06:35:02.972+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 11)
2019-09-04T06:35:02.972+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22021
2019-09-04T06:35:02.972+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:35:02.972+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.972+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:35:02.972+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.972+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.972+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:02.972+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22021
2019-09-04T06:35:02.972+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578902, 11)
2019-09-04T06:35:02.972+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.972+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22025
2019-09-04T06:35:02.972+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 8), t: 1 }, durableWallTime: new Date(1567578902958), appliedOpTime: { ts: Timestamp(1567578902, 11), t: 1 }, appliedWallTime: new Date(1567578902969), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 10), t: 1 }, lastCommittedWall: new Date(1567578902966), lastOpVisible: { ts: Timestamp(1567578902, 10), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.972+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22025
2019-09-04T06:35:02.972+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1528 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.972+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 8), t: 1 }, durableWallTime: new Date(1567578902958), appliedOpTime: { ts: Timestamp(1567578902, 11), t: 1 }, appliedWallTime: new Date(1567578902969), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 10), t: 1 }, lastCommittedWall: new Date(1567578902966), lastOpVisible: { ts: Timestamp(1567578902, 10), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.972+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578902, 11), t: 1 }({ ts: Timestamp(1567578902, 11), t: 1 })
2019-09-04T06:35:02.972+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.972+0000
2019-09-04T06:35:02.972+0000 D2 ASIO [RS] Request 1528 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 10), t: 1 }, lastCommittedWall: new Date(1567578902966), lastOpVisible: { ts: Timestamp(1567578902, 10), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 10), $clusterTime: { clusterTime: Timestamp(1567578902, 11), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 11) }
2019-09-04T06:35:02.972+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 10), t: 1 }, lastCommittedWall: new Date(1567578902966), lastOpVisible: { ts: Timestamp(1567578902, 10), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 10), $clusterTime: { clusterTime: Timestamp(1567578902, 11), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 11) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.972+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.972+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.972+0000
2019-09-04T06:35:02.973+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:35:02.973+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.973+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 9), t: 1 }, durableWallTime: new Date(1567578902962), appliedOpTime: { ts: Timestamp(1567578902, 11), t: 1 }, appliedWallTime: new Date(1567578902969), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 10), t: 1 }, lastCommittedWall: new Date(1567578902966), lastOpVisible: { ts: Timestamp(1567578902, 10), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.973+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1529 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.973+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 9), t: 1 }, durableWallTime: new Date(1567578902962), appliedOpTime: { ts: Timestamp(1567578902, 11), t: 1 }, appliedWallTime: new Date(1567578902969), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 10), t: 1 }, lastCommittedWall: new Date(1567578902966), lastOpVisible: { ts: Timestamp(1567578902, 10), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.973+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.973+0000
2019-09-04T06:35:02.973+0000 D2 ASIO [RS] Request 1529 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpVisible: { ts: Timestamp(1567578902, 11), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 11), $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 11) }
2019-09-04T06:35:02.973+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpVisible: { ts: Timestamp(1567578902, 11), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 11), $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 11) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.973+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.973+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578902, 11), t: 1 }
2019-09-04T06:35:02.973+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.973+0000
2019-09-04T06:35:02.973+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1530 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.973+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 10), t: 1 } }
2019-09-04T06:35:02.973+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.973+0000
2019-09-04T06:35:02.973+0000 D2 ASIO [RS] Request 1530 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpVisible: { ts: Timestamp(1567578902, 11), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpApplied: { ts: Timestamp(1567578902, 11), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 11), $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 11) }
2019-09-04T06:35:02.973+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpVisible: { ts: Timestamp(1567578902, 11), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpApplied: { ts: Timestamp(1567578902, 11), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 11), $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 11) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.973+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.973+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:35:02.973+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.973+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.973+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578897, 11)
2019-09-04T06:35:02.973+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:14.432+0000
2019-09-04T06:35:02.973+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:13.701+0000
2019-09-04T06:35:02.973+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1531 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.973+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 11), t: 1 } }
2019-09-04T06:35:02.973+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000
2019-09-04T06:35:02.973+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.973+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000
2019-09-04T06:35:02.973+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec
2019-09-04T06:35:02.973+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.973+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000
2019-09-04T06:35:02.973+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578902, 11), t: 1 }, 2019-09-04T06:35:02.969+0000
2019-09-04T06:35:02.974+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000
2019-09-04T06:35:02.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:02.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:02.978+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:35:02.978+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.978+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 11), t: 1 }, durableWallTime: new Date(1567578902969), appliedOpTime: { ts: Timestamp(1567578902, 11), t: 1 }, appliedWallTime: new Date(1567578902969), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpVisible: { ts: Timestamp(1567578902, 11), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.978+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1532 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.978+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 11), t: 1 }, durableWallTime: new Date(1567578902969), appliedOpTime: { ts: Timestamp(1567578902, 11), t: 1 }, appliedWallTime: new Date(1567578902969), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpVisible: { ts: Timestamp(1567578902, 11), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.979+0000 D2 ASIO [RS] Request 1531 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578902, 12), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902972), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpVisible: { ts: Timestamp(1567578902, 11), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpApplied: { ts: Timestamp(1567578902, 12), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 11), $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 12) }
2019-09-04T06:35:02.979+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.973+0000
2019-09-04T06:35:02.979+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578902, 12), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902972), o: { $v: 1, $set: { state: 0 } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpVisible: { ts: Timestamp(1567578902, 11), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpApplied: { ts: Timestamp(1567578902, 12), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 11), $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 12) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.979+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.979+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578902, 12) and ending at ts: Timestamp(1567578902, 12)
2019-09-04T06:35:02.979+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:13.701+0000
2019-09-04T06:35:02.979+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:13.661+0000
2019-09-04T06:35:02.979+0000 D2 REPL [replication-1] oplog buffer has 0 bytes
2019-09-04T06:35:02.979+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578902, 12), t: 1 }
2019-09-04T06:35:02.979+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:02.979+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:02.979+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:02.979+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:02.979+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 11)
2019-09-04T06:35:02.979+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22030
2019-09-04T06:35:02.979+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:02.979+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:02.979+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22030
2019-09-04T06:35:02.979+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:02.979+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:02.979+0000 D2 ASIO [RS] Request 1532 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpVisible: { ts: Timestamp(1567578902, 11), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 11), $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 12) }
2019-09-04T06:35:02.979+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 11)
2019-09-04T06:35:02.979+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:35:02.979+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22033
2019-09-04T06:35:02.979+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578902, 12) }
2019-09-04T06:35:02.979+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpVisible: { ts: Timestamp(1567578902, 11), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 11), $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 12) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.979+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22026
2019-09-04T06:35:02.979+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:02.979+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:02.979+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22033
2019-09-04T06:35:02.979+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22026
2019-09-04T06:35:02.979+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22036
2019-09-04T06:35:02.979+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22036
2019-09-04T06:35:02.979+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:02.979+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.979+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 22038
2019-09-04T06:35:02.979+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.979+0000
2019-09-04T06:35:02.979+0000 D4 STORAGE [repl-writer-worker-3] inserting record with timestamp Timestamp(1567578902, 12)
2019-09-04T06:35:02.979+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578902, 12)
2019-09-04T06:35:02.979+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 22038
2019-09-04T06:35:02.979+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:02.979+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:35:02.979+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22037
2019-09-04T06:35:02.979+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22037
2019-09-04T06:35:02.979+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22040
2019-09-04T06:35:02.979+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22040
2019-09-04T06:35:02.979+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578902, 12), t: 1 }({ ts: Timestamp(1567578902, 12), t: 1 })
2019-09-04T06:35:02.979+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 12)
2019-09-04T06:35:02.979+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22041
2019-09-04T06:35:02.979+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578902, 12) } } ] } sort: {} projection: {}
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578902, 12) Sort: {} Proj: {} =============================
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578902, 12) || First: notFirst: full path: ts
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578902, 12) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578902, 12) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578902, 12) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578902, 12) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.979+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22041
2019-09-04T06:35:02.979+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:02.979+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:02.979+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578902, 12), t: 1, h: 0, v: 2, op: "u", ns: "config.locks", ui: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287"), o2: { _id: "config" }, wall: new Date(1567578902972), o: { $v: 1, $set: { state: 0 } } }, oplog application mode: Secondary
2019-09-04T06:35:02.979+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578902, 12)
2019-09-04T06:35:02.979+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 22043
2019-09-04T06:35:02.979+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "config" }
2019-09-04T06:35:02.979+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:35:02.979+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 22043
2019-09-04T06:35:02.979+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:02.979+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578902, 12), t: 1 }({ ts: Timestamp(1567578902, 12), t: 1 })
2019-09-04T06:35:02.979+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578902, 12)
2019-09-04T06:35:02.979+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22042
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:02.979+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:02.979+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:02.980+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22042
2019-09-04T06:35:02.980+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578902, 12)
2019-09-04T06:35:02.980+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22046
2019-09-04T06:35:02.980+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22046
2019-09-04T06:35:02.980+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578902, 12), t: 1 }({ ts: Timestamp(1567578902, 12), t: 1 })
2019-09-04T06:35:02.980+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.980+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 11), t: 1 }, durableWallTime: new Date(1567578902969), appliedOpTime: { ts: Timestamp(1567578902, 12), t: 1 }, appliedWallTime: new Date(1567578902972), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpVisible: { ts: Timestamp(1567578902, 11), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.980+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1533 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.980+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 11), t: 1 }, durableWallTime: new Date(1567578902969), appliedOpTime: { ts: Timestamp(1567578902, 12), t: 1 }, appliedWallTime: new Date(1567578902972), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpVisible: { ts: Timestamp(1567578902, 11), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.980+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.980+0000
2019-09-04T06:35:02.980+0000 D2 ASIO [RS] Request 1533 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpVisible: { ts: Timestamp(1567578902, 11), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 11), $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 12) }
2019-09-04T06:35:02.980+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 11), t: 1 }, lastCommittedWall: new Date(1567578902969), lastOpVisible: { ts: Timestamp(1567578902, 11), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 11), $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 12) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.980+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.980+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.980+0000
2019-09-04T06:35:02.981+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578902, 12), t: 1 }
2019-09-04T06:35:02.981+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1534 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.981+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 11), t: 1 } }
2019-09-04T06:35:02.981+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.980+0000
2019-09-04T06:35:02.981+0000 D2 ASIO [RS] Request 1534 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpVisible: { ts: Timestamp(1567578902, 12), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpApplied: { ts: Timestamp(1567578902, 12), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 12), $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 12) }
2019-09-04T06:35:02.981+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpVisible: { ts: Timestamp(1567578902, 12), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpApplied: { ts: Timestamp(1567578902, 12), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 12), $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 12) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.981+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.981+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:35:02.981+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578897, 12)
2019-09-04T06:35:02.981+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:13.661+0000
2019-09-04T06:35:02.981+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:13.776+0000
2019-09-04T06:35:02.981+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1535 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:12.981+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 12), t: 1 } }
2019-09-04T06:35:02.981+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000
2019-09-04T06:35:02.981+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.980+0000
2019-09-04T06:35:02.981+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:02.981+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn456] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn456] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:03.487+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.981+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.982+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.982+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000
2019-09-04T06:35:02.982+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.982+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:02.982+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578902, 12), t: 1 }, 2019-09-04T06:35:02.972+0000
2019-09-04T06:35:02.982+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000
2019-09-04T06:35:02.982+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:35:02.982+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:02.982+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:02.982+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 12), t: 1 }, durableWallTime: new Date(1567578902972), appliedOpTime: { ts: Timestamp(1567578902, 12), t: 1 }, appliedWallTime: new Date(1567578902972), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpVisible: { ts: Timestamp(1567578902, 12), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.982+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1536 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.982+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 12), t: 1 }, durableWallTime: new Date(1567578902972), appliedOpTime: { ts: Timestamp(1567578902, 12), t: 1 }, appliedWallTime: new Date(1567578902972), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpVisible: { ts: Timestamp(1567578902, 12), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:02.982+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.980+0000
2019-09-04T06:35:02.982+0000 D2 ASIO [RS] Request 1536 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpVisible: { ts: Timestamp(1567578902, 12), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 12), $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 12) }
2019-09-04T06:35:02.982+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpVisible: { ts: Timestamp(1567578902, 12), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 12), $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578902, 12) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:02.982+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:02.982+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:32.980+0000
2019-09-04T06:35:02.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:02.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:03.030+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578902, 12)
2019-09-04T06:35:03.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 868520DD1B711FB22C1C71C50A0295CD022B7B59), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:03.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:35:03.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 868520DD1B711FB22C1C71C50A0295CD022B7B59), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:03.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 868520DD1B711FB22C1C71C50A0295CD022B7B59), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:03.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578902, 12), t: 1 }, durableWallTime: new Date(1567578902972), opTime: { ts: Timestamp(1567578902, 12), t: 1 }, wallTime: new Date(1567578902972) }
2019-09-04T06:35:03.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578902, 12), signature: { hash: BinData(0, 868520DD1B711FB22C1C71C50A0295CD022B7B59), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:03.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:03.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.082+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:03.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:03.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:03.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:03.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:03.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:03.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.182+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:03.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:03.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:35:03.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:35:03.237+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:03.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:03.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.282+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:03.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:03.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:03.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:03.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:03.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.382+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:03.471+0000 I NETWORK [listener] connection accepted from 10.108.2.63:36458 #483 (86 connections now open)
2019-09-04T06:35:03.471+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:35:03.472+0000 D2 COMMAND [conn483] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:35:03.472+0000 I NETWORK [conn483] received client metadata from 10.108.2.63:36458 conn483: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:35:03.472+0000 I COMMAND [conn483] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:35:03.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:03.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.482+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:03.488+0000 I COMMAND [conn456] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578872, 1), signature: { hash: BinData(0, 4DBDABAE602CF810BD3D716DCE9BD15DA1394F1C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:35:03.488+0000 D1 - [conn456] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:35:03.488+0000 W - [conn456] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:03.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:03.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:03.505+0000 I - [conn456] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
 mongod(+0x10FBF24) [0x56174a083f24]
 mongod(+0x10FCE0E) [0x56174a084e0e]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
-----  END BACKTRACE  -----
2019-09-04T06:35:03.505+0000 D1 COMMAND [conn456] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578872, 1), signature: { hash: BinData(0, 4DBDABAE602CF810BD3D716DCE9BD15DA1394F1C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:03.505+0000 D1 - [conn456] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:35:03.505+0000 W - [conn456] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:03.521+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:35:03.521+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:35:03.521+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:35:03.521+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:35:03.527+0000 I - [conn456] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", 
"compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", 
"elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:03.527+0000 W COMMAND [conn456] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:35:03.527+0000 I COMMAND [conn456] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578872, 1), signature: { hash: BinData(0, 4DBDABAE602CF810BD3D716DCE9BD15DA1394F1C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:35:03.527+0000 D2 NETWORK [conn456] Session from 10.108.2.63:36440 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:03.527+0000 I NETWORK [conn456] end connection 10.108.2.63:36440 (85 connections now open) 2019-09-04T06:35:03.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:03.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:03.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:03.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:03.582+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:03.593+0000 D2 ASIO [RS] Request 1535 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578903, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578903584), o: { $v: 1, $set: { ping: new Date(1567578903579) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpVisible: { ts: Timestamp(1567578902, 12), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpApplied: { ts: Timestamp(1567578903, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 12), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) } 2019-09-04T06:35:03.593+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578903, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id:
"cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578903584), o: { $v: 1, $set: { ping: new Date(1567578903579) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpVisible: { ts: Timestamp(1567578902, 12), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpApplied: { ts: Timestamp(1567578903, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 12), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:03.593+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:03.593+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578903, 1) and ending at ts: Timestamp(1567578903, 1) 2019-09-04T06:35:03.593+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:13.776+0000 2019-09-04T06:35:03.593+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:14.850+0000 2019-09-04T06:35:03.593+0000 D3 EXECUTOR [replexec-0] Executing a task on behalf of pool replexec 2019-09-04T06:35:03.593+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578903, 1), t: 1 } 2019-09-04T06:35:03.593+0000 D3 EXECUTOR [replexec-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 2019-09-04T06:35:03.593+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:03.593+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:03.593+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 12) 2019-09-04T06:35:03.593+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22076 2019-09-04T06:35:03.593+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:03.593+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:03.593+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22076 2019-09-04T06:35:03.593+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:03.593+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:35:03.593+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:03.593+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578902, 12) 2019-09-04T06:35:03.593+0000 D3 REPL [rsSync-0] 
setting oplog truncate after point to: { : Timestamp(1567578903, 1) } 2019-09-04T06:35:03.593+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22079 2019-09-04T06:35:03.593+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:03.593+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:03.593+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22079 2019-09-04T06:35:03.593+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22047 2019-09-04T06:35:03.593+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22047 2019-09-04T06:35:03.593+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22082 2019-09-04T06:35:03.593+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22082 2019-09-04T06:35:03.593+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:03.593+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 22084 2019-09-04T06:35:03.593+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578903, 1) 2019-09-04T06:35:03.593+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578903, 1) 2019-09-04T06:35:03.593+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 22084 2019-09-04T06:35:03.593+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:03.593+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:35:03.593+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22083 2019-09-04T06:35:03.593+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22083 2019-09-04T06:35:03.593+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22086 2019-09-04T06:35:03.593+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22086 2019-09-04T06:35:03.593+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578903, 1), t: 1 }({ ts: Timestamp(1567578903, 1), t: 1 }) 2019-09-04T06:35:03.593+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578903, 1) 2019-09-04T06:35:03.593+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22087 2019-09-04T06:35:03.593+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578903, 1) } } ] } sort: {} projection: {} 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578903, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578903, 1) || First: notFirst: full path: ts 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578903, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578903, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578903, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
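The D5 QUERY entries above trace the subplanner splitting the $or predicate over local.replset.minvalid into its two branches; with only the default _id index available, neither branch rates an indexed solution, and the entries that immediately follow record the fallback to a full collection scan. The same plan selection can be reproduced from a driver with explain; a minimal pymongo sketch, with a placeholder connection string and assuming direct access to this node:

    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    # Placeholder connection string; adjust host/port for the actual deployment.
    client = MongoClient("mongodb://localhost:27019/?directConnection=true")

    # The same predicate shape the subplanner splits in the trace above.
    predicate = {"$or": [{"t": {"$lt": 1}},
                         {"t": 1, "ts": {"$lt": Timestamp(1567578903, 1)}}]}

    plan = client.local.command(
        "explain",
        {"find": "replset.minvalid", "filter": predicate},
        verbosity="queryPlanner",
    )
    # With only the _id index, the winning plan is a collection scan.
    print(plan["queryPlanner"]["winningPlan"])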
2019-09-04T06:35:03.593+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578903, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:03.594+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22087 2019-09-04T06:35:03.594+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:03.594+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:03.594+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578903, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578903584), o: { $v: 1, $set: { ping: new Date(1567578903579) } } }, oplog application mode: Secondary 2019-09-04T06:35:03.594+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578903, 1) 2019-09-04T06:35:03.594+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 22089 2019-09-04T06:35:03.594+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" } 2019-09-04T06:35:03.594+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:35:03.594+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 22089 2019-09-04T06:35:03.594+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:03.594+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578903, 1), t: 1 }({ ts: Timestamp(1567578903, 1), t: 1 }) 2019-09-04T06:35:03.594+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578903, 1) 2019-09-04T06:35:03.594+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22088 2019-09-04T06:35:03.594+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:35:03.594+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:03.594+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:35:03.594+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:03.594+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:03.594+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:03.594+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22088 2019-09-04T06:35:03.594+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578903, 1) 2019-09-04T06:35:03.594+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:03.594+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22092 2019-09-04T06:35:03.594+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22092 2019-09-04T06:35:03.594+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578903, 1), t: 1 }({ ts: Timestamp(1567578903, 1), t: 1 }) 2019-09-04T06:35:03.594+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 12), t: 1 }, durableWallTime: new Date(1567578902972), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpVisible: { ts: Timestamp(1567578902, 12), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:03.594+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1537 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:33.594+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578902, 12), t: 1 }, durableWallTime: new Date(1567578902972), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpVisible: { ts: Timestamp(1567578902, 12), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:03.594+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:33.594+0000 2019-09-04T06:35:03.594+0000 D2 ASIO [RS] Request 1537 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpVisible: { ts: Timestamp(1567578902, 12), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, 
$gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 12), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) } 2019-09-04T06:35:03.594+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpVisible: { ts: Timestamp(1567578902, 12), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 12), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:03.594+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:03.594+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:33.594+0000 2019-09-04T06:35:03.595+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578903, 1), t: 1 } 2019-09-04T06:35:03.595+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1538 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:13.595+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578902, 12), t: 1 } } 2019-09-04T06:35:03.595+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:33.594+0000 2019-09-04T06:35:03.597+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:35:03.597+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:03.597+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpVisible: { ts: Timestamp(1567578902, 12), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:03.597+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1539 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:33.597+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { 
durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, durableWallTime: new Date(1567578900424), appliedOpTime: { ts: Timestamp(1567578900, 3), t: 1 }, appliedWallTime: new Date(1567578900424), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpVisible: { ts: Timestamp(1567578902, 12), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:03.597+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:33.594+0000 2019-09-04T06:35:03.597+0000 D2 ASIO [RS] Request 1539 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpVisible: { ts: Timestamp(1567578902, 12), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 12), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) } 2019-09-04T06:35:03.597+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578902, 12), t: 1 }, lastCommittedWall: new Date(1567578902972), lastOpVisible: { ts: Timestamp(1567578902, 12), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578902, 12), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:03.597+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:03.597+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:33.594+0000 2019-09-04T06:35:03.597+0000 D2 ASIO [RS] Request 1538 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpApplied: { ts: Timestamp(1567578903, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) } 2019-09-04T06:35:03.597+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpApplied: { ts: Timestamp(1567578903, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:03.597+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:03.597+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:03.597+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.597+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.597+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578898, 1) 2019-09-04T06:35:03.597+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:14.850+0000 2019-09-04T06:35:03.597+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:14.206+0000 2019-09-04T06:35:03.597+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000 2019-09-04T06:35:03.598+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:03.598+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000 2019-09-04T06:35:03.598+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:04.839+0000 
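In the replSetUpdatePosition payloads above, the first component of each BSON Timestamp is Unix seconds, so per-member replication lag can be estimated directly from the reported appliedOpTime values: member 1 has applied Timestamp(1567578903, 1) while members 0 and 2 are still at Timestamp(1567578900, 3), roughly 3 seconds behind. A small sketch of that arithmetic, using the values copied from the log:

    from bson.timestamp import Timestamp

    # appliedOpTime.ts per memberId, copied from the replSetUpdatePosition
    # payload logged above.
    applied = {
        0: Timestamp(1567578900, 3),
        1: Timestamp(1567578903, 1),
        2: Timestamp(1567578900, 3),
    }

    # Timestamp.time is the Unix-seconds component, so estimated lag is the
    # difference from the most recent applied optime in the set.
    newest = max(ts.time for ts in applied.values())
    for member, ts in sorted(applied.items()):
        print(f"member {member}: ~{newest - ts.time}s behind")  # 0 and 2: ~3s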
2019-09-04T06:35:03.598+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000 2019-09-04T06:35:03.598+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1540 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:13.598+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578903, 1), t: 1 } } 2019-09-04T06:35:03.598+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn479] 
waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000 2019-09-04T06:35:03.598+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:33.594+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578903, 1), t: 1 }, 2019-09-04T06:35:03.584+0000 2019-09-04T06:35:03.598+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000 2019-09-04T06:35:03.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:03.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:03.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:03.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:03.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:03.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:03.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:03.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:03.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:03.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:03.682+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:03.693+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578903, 1) 2019-09-04T06:35:03.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:03.706+0000 I COMMAND [conn31] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:03.737+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:03.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:03.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:03.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:03.782+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:03.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:03.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:03.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:03.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:03.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:03.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:03.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:03.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:03.883+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:03.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:03.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:03.983+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:03.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:03.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:04.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.083+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:04.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.111+0000 D2 COMMAND [conn14] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.183+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:04.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, B9E4D3378240803214365C654C29505F6ABE14E7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:04.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:04.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, B9E4D3378240803214365C654C29505F6ABE14E7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:04.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, B9E4D3378240803214365C654C29505F6ABE14E7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:04.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), opTime: { ts: Timestamp(1567578903, 1), t: 1 }, wallTime: new Date(1567578903584) } 2019-09-04T06:35:04.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, B9E4D3378240803214365C654C29505F6ABE14E7), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. 
Num: 0 2019-09-04T06:35:04.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:04.237+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.283+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:04.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.383+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:04.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.483+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:04.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:04.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:04.583+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:04.593+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:04.593+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:04.593+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578903, 1) 2019-09-04T06:35:04.593+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22131 
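The ReplBatcher entries around this point repeatedly fetch the catalog (CCE) metadata for local.oplog.rs, a capped collection of 1073741824 bytes (1 GiB) with autoIndexId disabled. The same options are visible from a driver via listCollections; a pymongo sketch under the same placeholder connection details as above:

    from pymongo import MongoClient

    # Placeholder connection string; adjust for the actual deployment.
    client = MongoClient("mongodb://localhost:27019/?directConnection=true")

    # listCollections reports the options the ReplBatcher trace shows being
    # fetched from the catalog (capped: true, size: 1073741824, ...).
    info = next(client.local.list_collections(filter={"name": "oplog.rs"}))
    print(info["name"], info["options"])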
2019-09-04T06:35:04.593+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:04.593+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:04.593+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22131
2019-09-04T06:35:04.594+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22134
2019-09-04T06:35:04.594+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22134
2019-09-04T06:35:04.594+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578903, 1), t: 1 }({ ts: Timestamp(1567578903, 1), t: 1 })
2019-09-04T06:35:04.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:04.609+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:04.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:04.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:04.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:04.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:04.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:04.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:04.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:04.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:04.683+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:04.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:04.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:04.736+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:04.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:04.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:04.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:04.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:04.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:04.784+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:04.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:04.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:04.839+0000 D1 EXECUTOR [replexec-0] Reaping this thread; next thread reaped no earlier than 2019-09-04T06:35:34.839+0000
2019-09-04T06:35:04.839+0000 D1 EXECUTOR [replexec-0] shutting down thread in pool replexec
2019-09-04T06:35:04.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:04.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1541) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:04.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1541 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:14.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:04.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:34.839+0000
2019-09-04T06:35:04.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:34.839+0000
2019-09-04T06:35:04.839+0000 D2 ASIO [Replication] Request 1541 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), opTime: { ts: Timestamp(1567578903, 1), t: 1 }, wallTime: new Date(1567578903584), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) }
2019-09-04T06:35:04.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), opTime: { ts: Timestamp(1567578903, 1), t: 1 }, wallTime: new Date(1567578903584), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:35:04.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:04.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1541) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), opTime: { ts: Timestamp(1567578903, 1), t: 1 }, wallTime: new Date(1567578903584), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) }
2019-09-04T06:35:04.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary
2019-09-04T06:35:04.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:35:14.206+0000
2019-09-04T06:35:04.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:35:15.992+0000
2019-09-04T06:35:04.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:35:04.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:06.839Z
2019-09-04T06:35:04.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:04.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:34.839+0000
2019-09-04T06:35:04.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:34.839+0000
2019-09-04T06:35:04.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:04.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1542) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:04.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1542 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:14.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:04.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:34.839+0000
2019-09-04T06:35:04.840+0000 D2 ASIO [Replication] Request 1542 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), opTime: { ts: Timestamp(1567578903, 1), t: 1 }, wallTime: new Date(1567578903584), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) }
2019-09-04T06:35:04.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), opTime: { ts: Timestamp(1567578903, 1), t: 1 }, wallTime: new Date(1567578903584), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:04.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:04.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1542) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), opTime: { ts: Timestamp(1567578903, 1), t: 1 }, wallTime: new Date(1567578903584), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) }
2019-09-04T06:35:04.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:35:04.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:06.840Z
2019-09-04T06:35:04.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:34.839+0000
2019-09-04T06:35:04.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:04.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:04.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:04.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:04.884+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:04.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
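Note: requests 1541 and 1542 above are one complete heartbeat cycle. This member (fromId: 1, cmodb803) polls the primary cmodb802 (state: 1) and the other secondary cmodb804 (state: 2); each good reply from the primary cancels the pending election timeout and reschedules it roughly 10-11 seconds out, and the next heartbeat is scheduled two seconds later. The same member state that this cycle maintains can be read from any client via replSetGetStatus; a sketch, with connection details assumed from the log:

    # Illustrative sketch, not part of the log. Surfaces the state fed by the
    # heartbeat exchange above (the same data rs.status() shows in the shell).
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # state 1 = PRIMARY (cmodb802 above), 2 = SECONDARY, matching the
        # 'state' field carried in each heartbeat response.
        print(member["name"], member["stateStr"], member["optime"]["ts"])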
2019-09-04T06:35:04.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:04.984+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:04.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:04.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:05.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, B9E4D3378240803214365C654C29505F6ABE14E7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:05.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:35:05.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, B9E4D3378240803214365C654C29505F6ABE14E7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:05.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, B9E4D3378240803214365C654C29505F6ABE14E7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:05.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), opTime: { ts: Timestamp(1567578903, 1), t: 1 }, wallTime: new Date(1567578903584) }
2019-09-04T06:35:05.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, B9E4D3378240803214365C654C29505F6ABE14E7), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.068+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.068+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.084+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:05.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.154+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.154+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.172+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.172+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.184+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:05.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:35:05.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:35:05.236+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.284+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:05.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.384+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:05.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.484+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:05.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.568+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.568+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.584+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:05.593+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:05.593+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:05.593+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578903, 1)
2019-09-04T06:35:05.593+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22171
2019-09-04T06:35:05.593+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:05.593+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:05.593+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22171
2019-09-04T06:35:05.594+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22174
2019-09-04T06:35:05.594+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22174
2019-09-04T06:35:05.594+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578903, 1), t: 1 }({ ts: Timestamp(1567578903, 1), t: 1 })
2019-09-04T06:35:05.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.654+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.654+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.672+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.672+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.684+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:05.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.737+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.784+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:05.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.885+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:05.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:05.985+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:05.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:05.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:06.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.085+0000 D4 STORAGE [WTJournalFlusher] flushed journal
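Note: the ReplBatcher/rsSync entries repeat once per second. The batcher opens a WiredTiger snapshot, re-reads the oplog's catalog metadata (capped: true, size: 1073741824, i.e. a 1 GiB oplog), finds nothing applied past minvalid Timestamp(1567578903, 1), and rolls the transaction back; the interleaved WTJournalFlusher lines show the journal being synced on its ~100 ms cadence. A sketch that inspects the same collection options and last oplog entry from a client; connection details are assumptions:

    # Illustrative sketch, not part of the log. Shows the catalog metadata and
    # last applied op that the ReplBatcher entries above read internally.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    oplog = client.local["oplog.rs"]
    print(oplog.options())  # expected: {'capped': True, 'size': 1073741824, ...}
    last = next(oplog.find().sort("$natural", -1).limit(1))
    print(last["ts"], last["op"])  # e.g. Timestamp(1567578903, 1)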
2019-09-04T06:35:06.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.185+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:06.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, B9E4D3378240803214365C654C29505F6ABE14E7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:06.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:35:06.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, B9E4D3378240803214365C654C29505F6ABE14E7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:06.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, B9E4D3378240803214365C654C29505F6ABE14E7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:06.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), opTime: { ts: Timestamp(1567578903, 1), t: 1 }, wallTime: new Date(1567578903584) }
2019-09-04T06:35:06.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, B9E4D3378240803214365C654C29505F6ABE14E7), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:35:06.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:35:06.237+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.285+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:06.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.385+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:06.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.485+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:06.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.553+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.553+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.585+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:06.594+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:06.594+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:06.594+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578903, 1)
2019-09-04T06:35:06.594+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22208
2019-09-04T06:35:06.594+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:06.594+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:06.594+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22208
2019-09-04T06:35:06.594+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22211
2019-09-04T06:35:06.594+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22211
2019-09-04T06:35:06.594+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578903, 1), t: 1 }({ ts: Timestamp(1567578903, 1), t: 1 })
2019-09-04T06:35:06.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.685+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:06.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.736+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.785+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:06.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:06.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1543) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:06.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1543 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:16.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:06.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:34.839+0000
2019-09-04T06:35:06.839+0000 D2 ASIO [Replication] Request 1543 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), opTime: { ts: Timestamp(1567578903, 1), t: 1 }, wallTime: new Date(1567578903584), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) }
2019-09-04T06:35:06.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), opTime: { ts: Timestamp(1567578903, 1), t: 1 }, wallTime: new Date(1567578903584), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:35:06.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:06.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1543) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), opTime: { ts: Timestamp(1567578903, 1), t: 1 }, wallTime: new Date(1567578903584), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) }
2019-09-04T06:35:06.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:35:06.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:35:15.992+0000
2019-09-04T06:35:06.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:35:17.173+0000
2019-09-04T06:35:06.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:35:06.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:08.839Z
2019-09-04T06:35:06.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:06.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:36.839+0000
2019-09-04T06:35:06.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:36.839+0000
2019-09-04T06:35:06.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:06.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1544) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:06.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1544 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:16.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:06.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:36.839+0000
2019-09-04T06:35:06.840+0000 D2 ASIO [Replication] Request 1544 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), opTime: { ts: Timestamp(1567578903, 1), t: 1 }, wallTime: new Date(1567578903584), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) }
2019-09-04T06:35:06.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), opTime: { ts: Timestamp(1567578903, 1), t: 1 }, wallTime: new Date(1567578903584), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:06.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:06.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1544) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), opTime: { ts: Timestamp(1567578903, 1), t: 1 }, wallTime: new Date(1567578903584), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578903, 1) }
2019-09-04T06:35:06.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:35:06.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:08.840Z
2019-09-04T06:35:06.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:36.839+0000
2019-09-04T06:35:06.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.886+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:06.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:06.986+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:06.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:06.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:07.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, B9E4D3378240803214365C654C29505F6ABE14E7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:07.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:35:07.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, B9E4D3378240803214365C654C29505F6ABE14E7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:07.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, B9E4D3378240803214365C654C29505F6ABE14E7), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:07.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), opTime: { ts: Timestamp(1567578903, 1), t: 1 }, wallTime: new Date(1567578903584) }
2019-09-04T06:35:07.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578903, 1), signature: { hash: BinData(0, B9E4D3378240803214365C654C29505F6ABE14E7), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.086+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:07.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.186+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:07.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:35:07.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:35:07.237+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.286+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:07.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.386+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:07.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.486+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:07.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.586+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:07.594+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:07.594+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:07.594+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578903, 1)
2019-09-04T06:35:07.594+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22245
2019-09-04T06:35:07.594+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:07.594+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:07.594+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22245
2019-09-04T06:35:07.595+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22248
2019-09-04T06:35:07.595+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22248
2019-09-04T06:35:07.595+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578903, 1), t: 1 }({ ts: Timestamp(1567578903, 1), t: 1 })
2019-09-04T06:35:07.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.686+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:07.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.737+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.787+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:07.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.887+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:07.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:07.987+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:07.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:07.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:08.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.087+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:08.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.187+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:08.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578906, 1), signature: { hash: BinData(0, E74EC27B39B67BF35307DD3E1FD7DFF515B33F3F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:08.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:35:08.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578906, 1), signature: { hash: BinData(0, E74EC27B39B67BF35307DD3E1FD7DFF515B33F3F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:08.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578906, 1), signature: { hash: BinData(0, E74EC27B39B67BF35307DD3E1FD7DFF515B33F3F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:08.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), opTime: { ts: Timestamp(1567578903, 1), t: 1 }, wallTime: new Date(1567578903584) }
2019-09-04T06:35:08.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578906, 1), signature: { hash: BinData(0, E74EC27B39B67BF35307DD3E1FD7DFF515B33F3F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:35:08.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:35:08.236+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.287+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:08.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.387+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:08.406+0000 D2 ASIO [RS] Request 1540 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578908, 1), t: 1, h: 0, v: 2, op:
"u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578908401), o: { $v: 1, $set: { ping: new Date(1567578908401) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpApplied: { ts: Timestamp(1567578908, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578908, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 1) } 2019-09-04T06:35:08.406+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578908, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578908401), o: { $v: 1, $set: { ping: new Date(1567578908401) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpApplied: { ts: Timestamp(1567578908, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578908, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:08.406+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:08.406+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578908, 1) and ending at ts: Timestamp(1567578908, 1) 2019-09-04T06:35:08.406+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:17.173+0000 2019-09-04T06:35:08.406+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:19.135+0000 2019-09-04T06:35:08.406+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:08.406+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:36.839+0000 2019-09-04T06:35:08.406+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578908, 1), t: 1 } 2019-09-04T06:35:08.406+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:08.406+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:08.406+0000 D3 STORAGE 
[ReplBatcher] begin_transaction on local snapshot Timestamp(1567578903, 1) 2019-09-04T06:35:08.406+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22278 2019-09-04T06:35:08.406+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:08.406+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:08.406+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22278 2019-09-04T06:35:08.406+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:08.406+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:35:08.406+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:08.406+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578903, 1) 2019-09-04T06:35:08.406+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22281 2019-09-04T06:35:08.406+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578908, 1) } 2019-09-04T06:35:08.406+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:08.406+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:08.406+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22281 2019-09-04T06:35:08.406+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22249 2019-09-04T06:35:08.407+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22249 2019-09-04T06:35:08.407+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22284 2019-09-04T06:35:08.407+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22284 2019-09-04T06:35:08.407+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:08.407+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 22286 2019-09-04T06:35:08.407+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578908, 1) 2019-09-04T06:35:08.407+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578908, 1) 2019-09-04T06:35:08.407+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 22286 2019-09-04T06:35:08.407+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:08.407+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:35:08.407+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22285 2019-09-04T06:35:08.407+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 
22285 2019-09-04T06:35:08.407+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22288 2019-09-04T06:35:08.407+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22288 2019-09-04T06:35:08.407+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578908, 1), t: 1 }({ ts: Timestamp(1567578908, 1), t: 1 }) 2019-09-04T06:35:08.407+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578908, 1) 2019-09-04T06:35:08.407+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22289 2019-09-04T06:35:08.407+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578908, 1) } } ] } sort: {} projection: {} 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578908, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578908, 1) || First: notFirst: full path: ts 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578908, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578908, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578908, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578908, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:08.407+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22289 2019-09-04T06:35:08.407+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:08.407+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:08.407+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578908, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578908401), o: { $v: 1, $set: { ping: new Date(1567578908401) } } }, oplog application mode: Secondary 2019-09-04T06:35:08.407+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578908, 1) 2019-09-04T06:35:08.407+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 22291 2019-09-04T06:35:08.407+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "ConfigServer" } 2019-09-04T06:35:08.407+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:35:08.407+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 22291 2019-09-04T06:35:08.407+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:08.407+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578908, 1), t: 1 }({ ts: Timestamp(1567578908, 1), t: 1 }) 2019-09-04T06:35:08.407+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578908, 1) 2019-09-04T06:35:08.407+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22290 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
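[editor's note] The D5 QUERY lines above and immediately below this note are one planner trace (it resumes right after this note): while persisting the appliedThrough/minvalid bookkeeping, rsSync-0 runs filters against local.replset.minvalid, which carries only the default _id index, so every candidate plan rates "0 indexed solutions" and the planner falls back to a COLLSCAN (harmless here, since that collection holds a single document). A minimal sketch of reproducing the same plan with PyMongo; the connection string is hypothetical, and PyMongo itself is an assumption (any driver's explain works):

    # Sketch: show that the minvalid bookkeeping filter seen in this trace
    # can only be answered by a collection scan (only the _id index exists).
    # Assumes a reachable mongod at localhost:27019 (hypothetical host/port).
    from pymongo import MongoClient
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://localhost:27019")
    minvalid = client.local["replset.minvalid"]

    plan = minvalid.find({
        "$or": [
            {"t": {"$lt": 1}},
            {"t": 1, "ts": {"$lt": Timestamp(1567578908, 1)}},
        ]
    }).explain()

    # Expect {'stage': 'COLLSCAN', ...}, matching the planner trace above.
    print(plan["queryPlanner"]["winningPlan"])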
2019-09-04T06:35:08.407+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:08.407+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:08.407+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22290 2019-09-04T06:35:08.407+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578908, 1) 2019-09-04T06:35:08.408+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:08.408+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22294 2019-09-04T06:35:08.408+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578908, 1), t: 1 }, appliedWallTime: new Date(1567578908401), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:08.408+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22294 2019-09-04T06:35:08.408+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578908, 1), t: 1 }({ ts: Timestamp(1567578908, 1), t: 1 }) 2019-09-04T06:35:08.408+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1545 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:38.408+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578908, 1), t: 1 }, appliedWallTime: new Date(1567578908401), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:08.408+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.407+0000 2019-09-04T06:35:08.408+0000 D2 ASIO [RS] Request 1545 finished with response: 
{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578908, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 1) } 2019-09-04T06:35:08.408+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578903, 1), t: 1 }, lastCommittedWall: new Date(1567578903584), lastOpVisible: { ts: Timestamp(1567578903, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578903, 1), $clusterTime: { clusterTime: Timestamp(1567578908, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:08.408+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:08.408+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.408+0000 2019-09-04T06:35:08.408+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578908, 1), t: 1 } 2019-09-04T06:35:08.408+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1546 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:18.408+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578903, 1), t: 1 } } 2019-09-04T06:35:08.408+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.408+0000 2019-09-04T06:35:08.409+0000 D2 ASIO [RS] Request 1546 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 1), t: 1 }, lastCommittedWall: new Date(1567578908401), lastOpVisible: { ts: Timestamp(1567578908, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578908, 1), t: 1 }, lastCommittedWall: new Date(1567578908401), lastOpApplied: { ts: Timestamp(1567578908, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 1), $clusterTime: { clusterTime: Timestamp(1567578908, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 1) } 2019-09-04T06:35:08.409+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 1), t: 1 }, 
lastCommittedWall: new Date(1567578908401), lastOpVisible: { ts: Timestamp(1567578908, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578908, 1), t: 1 }, lastCommittedWall: new Date(1567578908401), lastOpApplied: { ts: Timestamp(1567578908, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 1), $clusterTime: { clusterTime: Timestamp(1567578908, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:08.409+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:08.409+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:08.409+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578903, 1) 2019-09-04T06:35:08.409+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:19.135+0000 2019-09-04T06:35:08.409+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:19.863+0000 2019-09-04T06:35:08.409+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1547 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:18.409+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578908, 1), t: 1 } } 2019-09-04T06:35:08.409+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.408+0000 2019-09-04T06:35:08.409+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:35:08.409+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn477] 
waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000 2019-09-04T06:35:08.409+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:08.409+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000 2019-09-04T06:35:08.409+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:36.839+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000 2019-09-04T06:35:08.409+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:08.409+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 
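[editor's note] The burst of paired "Got notified of new snapshot" / "waitUntilOpTime: waiting for a new snapshot until <deadline>" lines above and below shows dozens of client connections (conn423 through conn479) parked in waitUntilOpTime, presumably read- or write-concern waits keyed on a cluster time: each advance of the stable snapshot wakes every waiter, which checks its target optime and re-arms with its own deadline. A self-contained stdlib sketch for extracting those per-connection deadlines from a log excerpt (the sample line is copied verbatim from this log):

    # Sketch: tally waitUntilOpTime deadlines per connection from mongod log text.
    import re

    sample = (
        "2019-09-04T06:35:08.409+0000 D3 REPL [conn459] waitUntilOpTime: "
        "waiting for a new snapshot until 2019-09-04T06:35:12.473+0000"
    )

    pattern = re.compile(
        r"\[(conn\d+)\] waitUntilOpTime: waiting for a new snapshot until (\S+)"
    )

    deadlines = {conn: deadline for conn, deadline in pattern.findall(sample)}
    print(deadlines)  # {'conn459': '2019-09-04T06:35:12.473+0000'}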
2019-09-04T06:35:08.409+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578908, 1), t: 1 }, 2019-09-04T06:35:08.401+0000 2019-09-04T06:35:08.409+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:08.409+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 1), t: 1 }, durableWallTime: new Date(1567578908401), appliedOpTime: { ts: Timestamp(1567578908, 1), t: 1 }, appliedWallTime: new Date(1567578908401), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 1), t: 1 }, lastCommittedWall: new Date(1567578908401), lastOpVisible: { ts: Timestamp(1567578908, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:08.409+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1548 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:38.409+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 1), t: 1 }, durableWallTime: new Date(1567578908401), appliedOpTime: { ts: Timestamp(1567578908, 1), t: 1 }, appliedWallTime: new Date(1567578908401), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 2, cfgver: 2 } ], $replData: { term: 1, 
lastOpCommitted: { ts: Timestamp(1567578908, 1), t: 1 }, lastCommittedWall: new Date(1567578908401), lastOpVisible: { ts: Timestamp(1567578908, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:08.410+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.408+0000 2019-09-04T06:35:08.410+0000 D2 ASIO [RS] Request 1548 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 1), t: 1 }, lastCommittedWall: new Date(1567578908401), lastOpVisible: { ts: Timestamp(1567578908, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 1), $clusterTime: { clusterTime: Timestamp(1567578908, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 1) } 2019-09-04T06:35:08.410+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 1), t: 1 }, lastCommittedWall: new Date(1567578908401), lastOpVisible: { ts: Timestamp(1567578908, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 1), $clusterTime: { clusterTime: Timestamp(1567578908, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:08.410+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:08.410+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.408+0000 2019-09-04T06:35:08.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:08.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:08.487+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:08.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:08.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:08.506+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578908, 1) 2019-09-04T06:35:08.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:08.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:08.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:08.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:08.587+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:08.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: 
"admin" } 2019-09-04T06:35:08.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:08.648+0000 D2 ASIO [RS] Request 1547 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578908, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578908646), o: { $v: 1, $set: { ping: new Date(1567578908646) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 1), t: 1 }, lastCommittedWall: new Date(1567578908401), lastOpVisible: { ts: Timestamp(1567578908, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578908, 1), t: 1 }, lastCommittedWall: new Date(1567578908401), lastOpApplied: { ts: Timestamp(1567578908, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 1), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:08.648+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578908, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578908646), o: { $v: 1, $set: { ping: new Date(1567578908646) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 1), t: 1 }, lastCommittedWall: new Date(1567578908401), lastOpVisible: { ts: Timestamp(1567578908, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578908, 1), t: 1 }, lastCommittedWall: new Date(1567578908401), lastOpApplied: { ts: Timestamp(1567578908, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 1), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:08.648+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:08.648+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578908, 2) and ending at ts: Timestamp(1567578908, 2) 2019-09-04T06:35:08.648+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:19.863+0000 2019-09-04T06:35:08.648+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:19.910+0000 2019-09-04T06:35:08.648+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:08.648+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: 
Timestamp(1567578908, 2), t: 1 } 2019-09-04T06:35:08.648+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:36.839+0000 2019-09-04T06:35:08.648+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:08.648+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:08.648+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578908, 1) 2019-09-04T06:35:08.648+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22303 2019-09-04T06:35:08.648+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:08.648+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:08.648+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22303 2019-09-04T06:35:08.648+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:08.648+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:08.648+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578908, 1) 2019-09-04T06:35:08.648+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22306 2019-09-04T06:35:08.648+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:35:08.648+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:08.648+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:08.648+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578908, 2) } 2019-09-04T06:35:08.648+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22306 2019-09-04T06:35:08.648+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22295 2019-09-04T06:35:08.648+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22295 2019-09-04T06:35:08.648+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22309 2019-09-04T06:35:08.648+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22309 2019-09-04T06:35:08.648+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:08.648+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 22311 2019-09-04T06:35:08.648+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578908, 2) 2019-09-04T06:35:08.648+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578908, 2) 2019-09-04T06:35:08.648+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 22311 
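[editor's note] The cycle that just completed is the secondary's standard apply pipeline for one batch: the oplog fetcher hands the config.lockpings ping update at Timestamp(1567578908, 2) to the ReplBatcher; rsSync-0 advances the oplog truncate-after point; a repl-writer worker inserts the raw entry into local.oplog.rs at that timestamp; the truncate point is then reset to Timestamp(0, 0) and minvalid is raised (lines below). The fetched entry itself (op: "u", with o2 selecting the _id and o carrying the $set) corresponds, as a client write against the primary, to roughly the following; the connection string is hypothetical and PyMongo is an assumption:

    # Sketch: the client-side update that the fetched oplog entry encodes.
    # The _id and ping values are the ones in the log (wall time 1567578908646 ms).
    from datetime import datetime, timezone
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")  # hypothetical primary
    lockpings = client.config.lockpings

    result = lockpings.update_one(
        {"_id": "cmodb810.togewa.com:27018:1566460779:1951479814477371466"},
        {"$set": {"ping": datetime(2019, 9, 4, 6, 35, 8, 646000,
                                   tzinfo=timezone.utc)}},
    )
    # The log's UpdateResult shows numMatched: 1, numDocsModified: 1.
    print(result.matched_count, result.modified_count)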
2019-09-04T06:35:08.648+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:08.648+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:35:08.648+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22310
2019-09-04T06:35:08.648+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22310
2019-09-04T06:35:08.648+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22313
2019-09-04T06:35:08.648+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22313
2019-09-04T06:35:08.649+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578908, 2), t: 1 }({ ts: Timestamp(1567578908, 2), t: 1 })
2019-09-04T06:35:08.649+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578908, 2)
2019-09-04T06:35:08.649+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22314
2019-09-04T06:35:08.649+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578908, 2) } } ] } sort: {} projection: {}
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578908, 2) Sort: {} Proj: {} =============================
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578908, 2) || First: notFirst: full path: ts
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578908, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
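[editor's note] (The planner trace resumes directly below this note.) The replSetUpdatePosition payloads this node streams to its sync source cmodb804.togewa.com:27019 (RemoteCommand 1548 above and 1549 below) carry the node's last-known durable/applied optimes for every member. The spread between appliedWallTime values gives a rough view of apply lag, with one caveat: positions for remote members, including the primary (memberId 0), are only as fresh as the last heartbeat or position update, so their apparent lag can be stale bookkeeping rather than real delay. A small sketch over the epoch-millisecond wall times taken verbatim from RemoteCommand 1549 below:

    # Sketch: rough per-member apply lag from one replSetUpdatePosition payload.
    # appliedWallTime values (ms since epoch) are the ones reported in this log.
    applied_wall_ms = {
        0: 1567578903584,  # memberId 0, the primary, as last known to this node
        1: 1567578908646,  # memberId 1, this node
        2: 1567578903584,  # memberId 2
    }

    newest = max(applied_wall_ms.values())
    for member, wall in sorted(applied_wall_ms.items()):
        print(f"member {member}: {(newest - wall) / 1000:.3f}s behind newest apply")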
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578908, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578908, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578908, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:08.649+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22314
2019-09-04T06:35:08.649+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:08.649+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:08.649+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578908, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578908646), o: { $v: 1, $set: { ping: new Date(1567578908646) } } }, oplog application mode: Secondary
2019-09-04T06:35:08.649+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578908, 2)
2019-09-04T06:35:08.649+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 22316
2019-09-04T06:35:08.649+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }
2019-09-04T06:35:08.649+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:35:08.649+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 22316
2019-09-04T06:35:08.649+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:08.649+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578908, 2), t: 1 }({ ts: Timestamp(1567578908, 2), t: 1 })
2019-09-04T06:35:08.649+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578908, 2)
2019-09-04T06:35:08.649+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22315
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:08.649+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:08.649+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:08.649+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22315
2019-09-04T06:35:08.649+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578908, 2)
2019-09-04T06:35:08.649+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:08.649+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22319
2019-09-04T06:35:08.649+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 1), t: 1 }, durableWallTime: new Date(1567578908401), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 1), t: 1 }, lastCommittedWall: new Date(1567578908401), lastOpVisible: { ts: Timestamp(1567578908, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:08.649+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1549 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:38.649+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 1), t: 1 }, durableWallTime: new Date(1567578908401), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 1), t: 1 }, lastCommittedWall: new Date(1567578908401), lastOpVisible: { ts: Timestamp(1567578908, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:08.649+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.649+0000
2019-09-04T06:35:08.649+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22319
2019-09-04T06:35:08.649+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578908, 2), t: 1 }({ ts: Timestamp(1567578908, 2), t: 1 })
2019-09-04T06:35:08.650+0000 D2 ASIO [RS] Request 1549 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) }
2019-09-04T06:35:08.650+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:08.650+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:08.650+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.650+0000
2019-09-04T06:35:08.650+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578908, 2), t: 1 }
2019-09-04T06:35:08.650+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1550 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:18.650+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578908, 1), t: 1 } }
2019-09-04T06:35:08.650+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.650+0000
2019-09-04T06:35:08.650+0000 D2 ASIO [RS] Request 1550 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpApplied: { ts: Timestamp(1567578908, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) }
2019-09-04T06:35:08.650+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpApplied: { ts: Timestamp(1567578908, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:08.650+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:08.650+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:35:08.650+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.650+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.650+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578903, 2)
2019-09-04T06:35:08.650+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:19.910+0000
2019-09-04T06:35:08.650+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:19.305+0000
2019-09-04T06:35:08.650+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:08.650+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:36.839+0000
2019-09-04T06:35:08.650+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1551 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:18.650+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }
2019-09-04T06:35:08.650+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000
2019-09-04T06:35:08.651+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.650+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn448] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn448] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.625+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn462] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
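The getMore calls against local.oplog.rs above are the secondary's oplog fetcher tailing its sync source with an awaitData cursor (maxTimeMS: 5000 between empty batches). A minimal pymongo sketch of the same tailing pattern follows; the URI and field access are illustrative assumptions, not taken from this cluster:

from pymongo import MongoClient, CursorType

# directConnection pins the client to one member; the host is a placeholder.
client = MongoClient("mongodb://cmodb803.togewa.com:27019/", directConnection=True)
oplog = client.local.oplog.rs

# Start from the newest entry, then tail everything that arrives after it,
# the same shape as the fetcher's { ts: { $gt: <last fetched> } } loop.
last = oplog.find_one(sort=[("$natural", -1)])
cursor = oplog.find({"ts": {"$gt": last["ts"]}}, cursor_type=CursorType.TAILABLE_AWAIT)
for entry in cursor:
    print(entry["ts"], entry["op"], entry["ns"])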
2019-09-04T06:35:08.651+0000 D3 REPL [conn462] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.497+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn423] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn423] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.512+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn441] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn441] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.624+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn461] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn461] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.482+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn446] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn446] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.895+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn459] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn459] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.473+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn460] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn460] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:12.478+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn467] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn467] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.623+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn436] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn436] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.790+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn463] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn463] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:13.417+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn445] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000
2019-09-04T06:35:08.651+0000 D3 REPL [conn445] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:14.090+0000
2019-09-04T06:35:08.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.652+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:35:08.652+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:08.652+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:08.652+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1552 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:38.652+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, durableWallTime: new Date(1567578903584), appliedOpTime: { ts: Timestamp(1567578903, 1), t: 1 }, appliedWallTime: new Date(1567578903584), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:08.652+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.650+0000
2019-09-04T06:35:08.652+0000 D2 ASIO [RS] Request 1552 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) }
2019-09-04T06:35:08.652+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:08.652+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:08.652+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.650+0000
2019-09-04T06:35:08.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.688+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:08.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
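The replSetUpdatePosition traffic above is this secondary reporting per-member durable and applied optimes to its sync source. The same numbers can be read from any member with replSetGetStatus; a minimal pymongo sketch with a placeholder URI, not taken from this log:

from pymongo import MongoClient

# directConnection pins the client to this one member instead of
# discovering the whole replica set.
client = MongoClient("mongodb://cmodb803.togewa.com:27019/", directConnection=True)

status = client.admin.command("replSetGetStatus")
for m in status["members"]:
    # optimeDate roughly corresponds to the appliedWallTime values that
    # the Reporter serializes into replSetUpdatePosition above.
    print(m["name"], m["stateStr"], m.get("optimeDate"))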
2019-09-04T06:35:08.736+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.748+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578908, 2)
2019-09-04T06:35:08.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.788+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:08.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:08.839+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:35:07.063+0000
2019-09-04T06:35:08.839+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:35:08.234+0000
2019-09-04T06:35:08.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:08.839+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:35:07.063+0000
2019-09-04T06:35:08.839+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:35:17.063+0000
2019-09-04T06:35:08.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.839+0000
2019-09-04T06:35:08.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1553) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:08.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1553 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:18.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:08.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.839+0000
2019-09-04T06:35:08.839+0000 D2 ASIO [Replication] Request 1553 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) }
2019-09-04T06:35:08.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:35:08.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:08.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1553) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) }
2019-09-04T06:35:08.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:35:08.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:35:19.305+0000
2019-09-04T06:35:08.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:35:19.528+0000
2019-09-04T06:35:08.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:35:08.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:10.839Z
2019-09-04T06:35:08.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:08.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.839+0000
2019-09-04T06:35:08.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.839+0000
2019-09-04T06:35:08.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:08.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1554) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:08.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1554 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:18.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:08.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.839+0000
2019-09-04T06:35:08.840+0000 D2 ASIO [Replication] Request 1554 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) }
2019-09-04T06:35:08.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:08.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:08.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1554) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) }
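The steady isMaster chatter interleaved with these heartbeats comes from drivers and mongos processes monitoring this node. The same topology view can be fetched by hand; a minimal pymongo sketch with a placeholder URI (newer servers also accept the equivalent hello command):

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/", directConnection=True)

# Matches the { isMaster: 1 } commands that conn23, conn59, conn31, ...
# issue repeatedly in this log.
reply = client.admin.command("isMaster")
print(reply["ismaster"], reply.get("secondary"), reply.get("primary"))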
2019-09-04T06:35:08.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:35:08.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:10.840Z
2019-09-04T06:35:08.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.839+0000
2019-09-04T06:35:08.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.888+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:08.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:08.988+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:08.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:08.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:09.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:09.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:35:09.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:09.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:09.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646) }
2019-09-04T06:35:09.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.088+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:09.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.188+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:09.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:35:09.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:35:09.237+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.288+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:09.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.388+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:09.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.488+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:09.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.537+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" }
2019-09-04T06:35:09.537+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } } }
2019-09-04T06:35:09.537+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none
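The conn61 find just above reads config.settings with readConcern { level: "majority", afterOpTime: ... } and a nearest read preference, which is why the server first waits for a 'committed' snapshot before answering (it then uses an EOF plan, as the next entries show, because config.settings does not exist here). A minimal pymongo sketch of the same kind of majority read, with a placeholder URI; the driver exposes the read-concern level directly, while afterOpTime-style gating is what causally consistent sessions provide:

from pymongo import MongoClient, ReadPreference
from pymongo.read_concern import ReadConcern

client = MongoClient("mongodb://cmodb803.togewa.com:27019/", directConnection=True)

config_db = client.get_database(
    "config",
    read_concern=ReadConcern("majority"),    # read only majority-committed data
    read_preference=ReadPreference.NEAREST,  # matches $readPreference above
)

# Same document the balancer-settings probe asks for.
print(config_db.settings.find_one({"_id": "balancer"}))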
2019-09-04T06:35:09.537+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578908, 2)
2019-09-04T06:35:09.537+0000 D2 QUERY [conn61] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1
2019-09-04T06:35:09.537+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:35:09.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.588+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:09.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.648+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:09.648+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:09.648+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578908, 2)
2019-09-04T06:35:09.648+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22356
2019-09-04T06:35:09.648+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:09.648+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:09.648+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22356
2019-09-04T06:35:09.650+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22359
2019-09-04T06:35:09.650+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22359
2019-09-04T06:35:09.650+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578908, 2), t: 1 }({ ts: Timestamp(1567578908, 2), t: 1 })
2019-09-04T06:35:09.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.689+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:09.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.737+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.768+0000 D2 COMMAND [conn61] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" }
2019-09-04T06:35:09.768+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } } }
2019-09-04T06:35:09.768+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:35:09.768+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578908, 2)
2019-09-04T06:35:09.768+0000 D5 QUERY [conn61] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:35:09.768+0000 D5 QUERY [conn61] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:35:09.768+0000 D5 QUERY [conn61] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:35:09.768+0000 D5 QUERY [conn61] Rated tree: $and
2019-09-04T06:35:09.768+0000 D5 QUERY [conn61] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:09.768+0000 D5 QUERY [conn61] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:09.768+0000 D2 QUERY [conn61] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:09.768+0000 D3 STORAGE [conn61] WT begin_transaction for snapshot id 22366
2019-09-04T06:35:09.768+0000 D3 STORAGE [conn61] WT rollback_transaction for snapshot id 22366
2019-09-04T06:35:09.768+0000 I COMMAND [conn61] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:35:09.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.789+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:09.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.889+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:09.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:09.989+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:09.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:09.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:10.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:10.000+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:35:10.000+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:35:10.002+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 1ms
2019-09-04T06:35:10.008+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:35:10.008+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:35:10.009+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:35:10.010+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:35:10.010+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456
2019-09-04T06:35:10.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:35:10.010+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:35:10.010+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:10.011+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:35:10.011+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:35:10.011+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:35:10.011+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:35:10.011+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:10.011+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} =============================
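conn90's saslStart/saslContinue exchange above is a SCRAM-SHA-1 handshake for the dba_root principal, followed by a monitoring batch (serverStatus, replSetGetStatus, a jumbo-chunk count). Drivers run the same handshake internally; a minimal pymongo sketch with placeholder credentials, not taken from this log:

from pymongo import MongoClient

client = MongoClient(
    "mongodb://cmodb803.togewa.com:27019/",
    username="dba_root",            # principal seen in the log
    password="not-the-real-one",    # placeholder
    authSource="admin",
    authMechanism="SCRAM-SHA-1",    # mechanism negotiated above
    directConnection=True,
)

# The driver performs saslStart/saslContinue lazily on first use.
print(client.admin.command("serverStatus")["uptime"])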
2019-09-04T06:35:10.011+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:35:10.011+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:35:10.011+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:35:10.011+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:35:10.011+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:35:10.011+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:35:10.011+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:10.011+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:10.011+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:10.011+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578908, 2)
2019-09-04T06:35:10.011+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22380
2019-09-04T06:35:10.011+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22380
2019-09-04T06:35:10.011+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:10.012+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:35:10.012+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:35:10.012+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:35:10.012+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:10.012+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} =============================
2019-09-04T06:35:10.012+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:35:10.012+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:35:10.012+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578908, 2)
2019-09-04T06:35:10.012+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22383
2019-09-04T06:35:10.012+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22383
2019-09-04T06:35:10.012+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:10.012+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:35:10.012+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:10.012+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} =============================
2019-09-04T06:35:10.012+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:35:10.012+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:35:10.012+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578908, 2)
2019-09-04T06:35:10.012+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22385
2019-09-04T06:35:10.012+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22385
2019-09-04T06:35:10.012+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:10.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:35:10.014+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:35:10.014+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2019-09-04T06:35:10.014+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:35:10.014+0000 D2 COMMAND [conn90] command: listDatabases
2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22388
2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:35:10.014+0000 D3 STORAGE
[conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22388 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22389 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22389 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22390 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22390 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22391 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22391 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22392 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 
2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22392 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22393 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22393 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22394 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:35:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22394 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22395 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, 
runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22395 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22396 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22396 
2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22397 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22397 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22398 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: 
BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22398 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22399 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22399 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22400 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: 
true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22400 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22401 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22401 
2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22402 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22402 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22403 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 
2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22403 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22404 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22404 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22405 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22405 
2019-09-04T06:35:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22406 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22406 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22407 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22407 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22408 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] 
looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22408 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22409 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22409 2019-09-04T06:35:10.016+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:35:10.016+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22411 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22411 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22412 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22412 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22413 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22413 2019-09-04T06:35:10.016+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:10.016+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, 
$db: "config" } 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22415 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22415 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22416 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22416 2019-09-04T06:35:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22417 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22417 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22418 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22418 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22419 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22419 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22420 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22420 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22421 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22421 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22422 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22422 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22423 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22423 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22424 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22424 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22425 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22425 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22426 2019-09-04T06:35:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22426 2019-09-04T06:35:10.017+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:10.018+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:35:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22428 2019-09-04T06:35:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22428 2019-09-04T06:35:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22429 2019-09-04T06:35:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22429 2019-09-04T06:35:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22430 2019-09-04T06:35:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22430 2019-09-04T06:35:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22431 
2019-09-04T06:35:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22431 2019-09-04T06:35:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22432 2019-09-04T06:35:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22432 2019-09-04T06:35:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22433 2019-09-04T06:35:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22433 2019-09-04T06:35:10.018+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:10.043+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.043+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.055+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.056+0000 I COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.089+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:10.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.189+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:10.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:10.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:10.234+0000 D2 REPL_HB [conn28] Received heartbeat request from 
cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:10.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:10.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646) } 2019-09-04T06:35:10.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:10.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:10.237+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.289+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:10.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.389+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
2019-09-04T06:35:10.464+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578903, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578906, 1), signature: { hash: BinData(0, E74EC27B39B67BF35307DD3E1FD7DFF515B33F3F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578903, 1), t: 1 } }, $db: "config" } 2019-09-04T06:35:10.464+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578903, 1), t: 1 } } } 2019-09-04T06:35:10.464+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:35:10.464+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578903, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578906, 1), signature: { hash: BinData(0, E74EC27B39B67BF35307DD3E1FD7DFF515B33F3F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578903, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578908, 2) 2019-09-04T06:35:10.464+0000 D2 QUERY [conn71] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:35:10.465+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578903, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578906, 1), signature: { hash: BinData(0, E74EC27B39B67BF35307DD3E1FD7DFF515B33F3F), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578903, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:35:10.465+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" } 2019-09-04T06:35:10.465+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } } } 2019-09-04T06:35:10.465+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:35:10.465+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, 
limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578908, 2) 2019-09-04T06:35:10.465+0000 D2 QUERY [conn71] Collection config.settings does not exist. Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:35:10.465+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:35:10.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.489+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:10.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.589+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:10.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.648+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:10.649+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:10.649+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578908, 2) 2019-09-04T06:35:10.649+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22456 2019-09-04T06:35:10.649+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:10.649+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: 
UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:10.649+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22456 2019-09-04T06:35:10.650+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22459 2019-09-04T06:35:10.650+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22459 2019-09-04T06:35:10.650+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578908, 2), t: 1 }({ ts: Timestamp(1567578908, 2), t: 1 }) 2019-09-04T06:35:10.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.690+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:10.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.721+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" } 2019-09-04T06:35:10.721+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } } } 2019-09-04T06:35:10.721+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:35:10.722+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578908, 2) 2019-09-04T06:35:10.722+0000 D2 QUERY [conn72] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:35:10.722+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:35:10.737+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.790+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:10.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:10.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1555) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:10.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1555 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:20.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:10.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.839+0000 2019-09-04T06:35:10.839+0000 D2 ASIO [Replication] Request 1555 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 
-1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:10.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:10.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:10.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1555) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:10.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:10.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:35:19.528+0000 2019-09-04T06:35:10.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:35:22.300+0000 2019-09-04T06:35:10.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:10.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:12.839Z 2019-09-04T06:35:10.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:10.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:40.839+0000 2019-09-04T06:35:10.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:40.839+0000 2019-09-04T06:35:10.840+0000 D3 EXECUTOR [replexec-4] Executing a task on 
behalf of pool replexec 2019-09-04T06:35:10.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1556) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:10.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1556 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:20.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:10.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:40.839+0000 2019-09-04T06:35:10.840+0000 D2 ASIO [Replication] Request 1556 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:10.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:10.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:10.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1556) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:10.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:10.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:12.840Z 2019-09-04T06:35:10.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:40.839+0000 2019-09-04T06:35:10.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.890+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:10.944+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.944+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:10.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:10.990+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:11.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:11.040+0000 D2 COMMAND [conn72] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" } 2019-09-04T06:35:11.040+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } } } 2019-09-04T06:35:11.040+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:35:11.040+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578908, 2) 2019-09-04T06:35:11.040+0000 D5 QUERY [conn72] Beginning planning... 
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:11.040+0000 D5 QUERY [conn72] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:35:11.040+0000 D5 QUERY [conn72] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:35:11.040+0000 D5 QUERY [conn72] Rated tree: $and
2019-09-04T06:35:11.040+0000 D5 QUERY [conn72] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:11.040+0000 D5 QUERY [conn72] Planner: outputting a collscan: COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:11.040+0000 D2 QUERY [conn72] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:11.040+0000 D3 STORAGE [conn72] WT begin_transaction for snapshot id 22473
2019-09-04T06:35:11.040+0000 D3 STORAGE [conn72] WT rollback_transaction for snapshot id 22473
2019-09-04T06:35:11.040+0000 I COMMAND [conn72] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:35:11.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:11.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:11.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:11.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:35:11.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:11.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 
407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:11.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646) } 2019-09-04T06:35:11.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.090+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:11.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.190+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:11.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:11.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:11.237+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.273+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.290+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:11.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.390+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:11.490+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:11.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.569+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.590+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:11.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.649+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:11.649+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:11.649+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578908, 2) 2019-09-04T06:35:11.649+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22495 2019-09-04T06:35:11.649+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: 
"local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:11.649+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:11.649+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22495 2019-09-04T06:35:11.650+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22498 2019-09-04T06:35:11.650+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22498 2019-09-04T06:35:11.650+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578908, 2), t: 1 }({ ts: Timestamp(1567578908, 2), t: 1 }) 2019-09-04T06:35:11.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.691+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:11.692+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.692+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.737+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.791+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:11.796+0000 D2 COMMAND [conn81] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.796+0000 I COMMAND [conn81] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.797+0000 D2 COMMAND [conn81] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578895, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578895, 1), t: 1 } }, $db: "config" } 2019-09-04T06:35:11.797+0000 D1 COMMAND [conn81] Waiting for 'committed' snapshot to be 
available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578895, 1), t: 1 } } }
2019-09-04T06:35:11.797+0000 D3 STORAGE [conn81] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:35:11.797+0000 D1 COMMAND [conn81] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578895, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578895, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578908, 2)
2019-09-04T06:35:11.797+0000 D5 QUERY [conn81] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:11.797+0000 D5 QUERY [conn81] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:35:11.797+0000 D5 QUERY [conn81] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:35:11.797+0000 D5 QUERY [conn81] Rated tree: $and
2019-09-04T06:35:11.797+0000 D5 QUERY [conn81] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:11.797+0000 D5 QUERY [conn81] Planner: outputting a collscan: COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:11.797+0000 D2 QUERY [conn81] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:11.797+0000 D3 STORAGE [conn81] WT begin_transaction for snapshot id 22507 2019-09-04T06:35:11.797+0000 D3 STORAGE [conn81] WT rollback_transaction for snapshot id 22507 2019-09-04T06:35:11.797+0000 I COMMAND [conn81] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578895, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 2), signature: { hash: BinData(0, 407E9088C51383903635FCA71CE58900B8B997EA), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578895, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:35:11.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:11.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:11.891+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:11.991+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:12.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:12.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.091+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:12.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.191+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:12.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.193+0000 I COMMAND [conn75] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578909, 1), signature: { hash: BinData(0, 88BBA81F6981FCEBAF576F649AD7C0D7CE984A7A), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:12.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:12.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578909, 1), signature: { hash: BinData(0, 88BBA81F6981FCEBAF576F649AD7C0D7CE984A7A), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:12.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578909, 1), signature: { hash: BinData(0, 88BBA81F6981FCEBAF576F649AD7C0D7CE984A7A), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:12.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646) } 2019-09-04T06:35:12.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578909, 1), signature: { hash: BinData(0, 88BBA81F6981FCEBAF576F649AD7C0D7CE984A7A), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:12.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:12.237+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.291+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:12.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.391+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:12.462+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52334 #484 (86 connections now open) 2019-09-04T06:35:12.462+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:12.462+0000 D2 COMMAND [conn484] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:12.462+0000 I NETWORK [conn484] received client metadata from 10.108.2.73:52334 conn484: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:12.462+0000 I COMMAND [conn484] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:12.477+0000 I COMMAND [conn459] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:35:12.477+0000 D1 - [conn459] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:35:12.477+0000 W - [conn459] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:12.482+0000 I COMMAND [conn460] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578875, 1), signature: { hash: BinData(0, 8CDC5DAAAA5AA59CBD4286129FA24BB0384E9F7D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:35:12.482+0000 D1 - [conn460] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:35:12.482+0000 W - [conn460] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:12.482+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34424 #485 (87 connections now open)
2019-09-04T06:35:12.482+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:35:12.482+0000 D2 COMMAND [conn485] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:35:12.482+0000 I NETWORK [conn485] received client metadata from 10.108.2.57:34424 conn485: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:35:12.482+0000 I COMMAND [conn485] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:35:12.485+0000 I COMMAND [conn461] Command on database admin timed out waiting for read concern to be satisfied.
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578872, 1), signature: { hash: BinData(0, 4DBDABAE602CF810BD3D716DCE9BD15DA1394F1C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:12.485+0000 D1 - [conn461] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:12.485+0000 W - [conn461] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:12.491+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:12.494+0000 I - [conn459] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"}
,{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : 
"/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 
2019-09-04T06:35:12.494+0000 D1 COMMAND [conn459] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:12.494+0000 D1 - [conn459] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:35:12.494+0000 W - [conn459] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:12.499+0000 I COMMAND [conn462] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578878, 1), signature: { hash: BinData(0, 9EEAEBC19ED13CB91FBBE628C791550FD6DDF6FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:35:12.499+0000 D1 - [conn462] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:35:12.499+0000 W - [conn462] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:12.510+0000 I - [conn460] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:12.511+0000 D1 COMMAND [conn460] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578875, 1), signature: { hash: BinData(0, 8CDC5DAAAA5AA59CBD4286129FA24BB0384E9F7D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:12.511+0000 D1 - [conn460] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:12.511+0000 W - [conn460] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:12.515+0000 I COMMAND [conn423] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 191AE0EF5D8E856F2D8CCC65C75DFCD6EDF25A90), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:12.515+0000 D1 - [conn423] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:12.515+0000 W - [conn423] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:12.527+0000 I - [conn461] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:12.527+0000 D1 COMMAND [conn461] 
2019-09-04T06:35:12.527+0000 D1 COMMAND [conn461] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578872, 1), signature: { hash: BinData(0, 4DBDABAE602CF810BD3D716DCE9BD15DA1394F1C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:12.527+0000 D1 - [conn461] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:35:12.527+0000 W - [conn461] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:12.547+0000 I - [conn459] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextW
ithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" 
}, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:35:12.547+0000 W COMMAND [conn459] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:35:12.547+0000 I COMMAND [conn459] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms
2019-09-04T06:35:12.547+0000 D2 NETWORK [conn459] Session from 10.108.2.73:52314 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:35:12.547+0000 I NETWORK [conn459] end connection 10.108.2.73:52314 (86 connections now open)
2019-09-04T06:35:12.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:12.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:12.564+0000 I - [conn423] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
[... backtrace JSON, processInfo, shared-library map, and symbolized frames identical to the conn460 trace above; omitted ...]
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:12.564+0000 D1 COMMAND [conn423] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 191AE0EF5D8E856F2D8CCC65C75DFCD6EDF25A90), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:12.564+0000 D1 - [conn423] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:12.564+0000 W - [conn423] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:12.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.584+0000 I - [conn460] 0x56174b707c81 
----- END BACKTRACE -----
2019-09-04T06:35:12.584+0000 W COMMAND [conn460] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:35:12.584+0000 I COMMAND [conn460] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578875, 1), signature: { hash: BinData(0, 8CDC5DAAAA5AA59CBD4286129FA24BB0384E9F7D), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30042ms
2019-09-04T06:35:12.584+0000 D2 NETWORK [conn460] Session from 10.108.2.74:51948 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:35:12.584+0000 I NETWORK [conn460] end connection 10.108.2.74:51948 (85 connections now open)
2019-09-04T06:35:12.591+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:12.604+0000 I - [conn423] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
[... backtrace JSON, processInfo, shared-library map, and symbolized frames identical to the conn459 trace above; omitted ...]
hine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : 
"/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:12.604+0000 W COMMAND [conn423] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:12.604+0000 I COMMAND [conn423] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578873, 1), signature: { hash: BinData(0, 191AE0EF5D8E856F2D8CCC65C75DFCD6EDF25A90), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30061ms 2019-09-04T06:35:12.604+0000 D2 NETWORK [conn423] Session from 10.108.2.60:44974 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:12.604+0000 I NETWORK [conn423] end connection 10.108.2.60:44974 (84 connections now open) 2019-09-04T06:35:12.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.620+0000 I - [conn462] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:12.620+0000 D1 COMMAND [conn462] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578878, 1), signature: { hash: BinData(0, 9EEAEBC19ED13CB91FBBE628C791550FD6DDF6FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:12.620+0000 D1 - [conn462] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:12.620+0000 W - [conn462] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:12.640+0000 I - [conn461] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:12.640+0000 W COMMAND [conn461] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:12.640+0000 I COMMAND [conn461] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578872, 1), signature: { hash: BinData(0, 4DBDABAE602CF810BD3D716DCE9BD15DA1394F1C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30055ms 2019-09-04T06:35:12.640+0000 D2 NETWORK [conn461] Session from 10.108.2.72:45904 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:12.640+0000 I NETWORK [conn461] end connection 10.108.2.72:45904 (83 connections now open) 2019-09-04T06:35:12.649+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:12.649+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:12.649+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578908, 2) 2019-09-04T06:35:12.649+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22532 2019-09-04T06:35:12.649+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:12.649+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:12.649+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22532 2019-09-04T06:35:12.650+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22535 2019-09-04T06:35:12.650+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22535 2019-09-04T06:35:12.650+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578908, 2), t: 1 }({ ts: Timestamp(1567578908, 2), t: 1 }) 2019-09-04T06:35:12.660+0000 I - [conn462] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:12.660+0000 W COMMAND [conn462] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:12.660+0000 I COMMAND [conn462] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578878, 1), signature: { hash: BinData(0, 9EEAEBC19ED13CB91FBBE628C791550FD6DDF6FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30133ms 2019-09-04T06:35:12.660+0000 D2 NETWORK [conn462] Session from 10.108.2.57:34410 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:12.660+0000 I NETWORK [conn462] end connection 10.108.2.57:34410 (82 connections now open) 2019-09-04T06:35:12.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.668+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36848 #486 (83 connections now open) 2019-09-04T06:35:12.668+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:12.668+0000 D2 COMMAND [conn486] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:12.669+0000 I NETWORK [conn486] received client metadata from 10.108.2.55:36848 conn486: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:12.669+0000 I COMMAND [conn486] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:12.669+0000 D2 COMMAND [conn486] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 71EB50998B0ED96CFCE295A2D5D129D8C4E11CBE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:12.669+0000 D1 REPL [conn486] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578908, 2), t: 1 } 2019-09-04T06:35:12.669+0000 D3 REPL [conn486] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.679+0000 
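
The BEGIN/END BACKTRACE blobs above are all the same failure: per the "User Assertion: MaxTimeMSExpired ... src/mongo/db/concurrency/lock_state.cpp 884" record, CurOp::completeAndLogOperation tries to take the GlobalLock in order to gather storage statistics for the slow-operation log entry, and that lock acquisition itself hits the expired deadline (hence the "Unable to gather storage statistics for a slow operation" warnings). The frame names are Itanium-mangled C++ symbols; a minimal sketch for reading them, assuming binutils' c++filt is on PATH, using one frame copied from the blobs above:

```python
# Demangle the "s" entries of a mongod BEGIN/END BACKTRACE JSON blob.
# Assumes binutils' c++filt is installed; the frame shape matches the blobs above.
import json
import subprocess

blob = ('{"backtrace":[{"b":"561748F88000","o":"2658452",'
        '"s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextE'
        'NS_10ResourceIdENS_8LockModeENS_6Date_tE"}]}')
symbols = [f["s"] for f in json.loads(blob)["backtrace"] if "s" in f]
result = subprocess.run(["c++filt"], input="\n".join(symbols),
                        capture_output=True, text=True, check=True)
print(result.stdout)
# mongo::LockerImpl::lock(mongo::OperationContext*, mongo::ResourceId,
#                         mongo::LockMode, mongo::Date_t)
```

The "b" and "o" fields are the module base address and the offset into it; frames with no "s" (for example o:"C90D34") can only be resolved with debug symbols matching the buildId in processInfo.somap.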
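Every one of the timed-out commands is also the same internal read: a cluster node fetching the HMAC signing keys used for $clusterTime validation from admin.system.keys with maxTimeMS: 30000, failing after roughly 30s with errName:MaxTimeMSExpired errCode:50. A minimal sketch of the same query shape from a driver, assuming pymongo and the host/port this log was taken on (the $replData, afterOpTime, and $configServerState fields are added by the server-to-server client and are not sent by drivers):

```python
# Issue the logged find against the config server and observe the server-side
# time limit. Host/port come from this log; adjust for your deployment.
from bson import Timestamp
from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout

client = MongoClient("cmodb803.togewa.com", 27019)
keys = client.admin["system.keys"]

try:
    docs = list(keys.find(
        {"purpose": "HMAC", "expiresAt": {"$gt": Timestamp(1579858365, 0)}},
        sort=[("expiresAt", 1)],
        max_time_ms=30000,  # same 30s budget as the logged maxTimeMS
    ))
except ExecutionTimeout as exc:
    # PyMongo maps errName:MaxTimeMSExpired (code 50) to ExecutionTimeout.
    print("operation exceeded time limit:", exc)
```

A healthy config server answers this in milliseconds (compare the 0ms isMaster entries interleaved above), so the 30-second failures point at the read-concern wait rather than at the query itself.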
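The waitUntilOpTime lines just above show where those 30 seconds go: the command asks for readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, while this node's committed snapshot is { ts: Timestamp(1567578908, 2), t: 1 }. The requested optime carries term 92 but the local snapshot is from term 1, so even though the snapshot's timestamp is newer, the wait evidently never completes before maxTimeMS fires. afterOpTime is internal-only; the closest public analog is a causally consistent session, sketched here under the same host/port assumption:

```python
# Public-API analog of the waitUntilOpTime gate seen above: a causally
# consistent session plus majority read concern blocks the read until this
# node's majority snapshot covers the given operation time.
from bson import Timestamp
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern

client = MongoClient("cmodb803.togewa.com", 27019)
keys = client.admin.get_collection("system.keys",
                                   read_concern=ReadConcern("majority"))

with client.start_session(causal_consistency=True) as session:
    # Claim we already observed this operation time; the next read waits
    # (bounded by max_time_ms) until it is majority-committed locally.
    session.advance_operation_time(Timestamp(1566459168, 1))
    docs = list(keys.find({"purpose": "HMAC"},
                          session=session, max_time_ms=30000))
```

In the state this log captures, that wait cannot be satisfied, so the call above would raise ExecutionTimeout exactly like the internal clients do.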
2019-09-04T06:35:12.673+0000 D2 COMMAND [conn471] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 71EB50998B0ED96CFCE295A2D5D129D8C4E11CBE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:12.673+0000 D1 REPL [conn471] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578908, 2), t: 1 } 2019-09-04T06:35:12.673+0000 D3 REPL [conn471] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.683+0000 2019-09-04T06:35:12.692+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:12.692+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.724+0000 D2 COMMAND [conn458] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 71EB50998B0ED96CFCE295A2D5D129D8C4E11CBE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:12.724+0000 D1 REPL [conn458] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578908, 2), t: 1 } 2019-09-04T06:35:12.724+0000 D3 REPL [conn458] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.734+0000 2019-09-04T06:35:12.736+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.792+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:12.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { 
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:12.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1557) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:12.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1557 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:22.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:12.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:40.839+0000 2019-09-04T06:35:12.839+0000 D2 ASIO [Replication] Request 1557 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:12.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:12.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:12.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1557) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), 
lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:12.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:12.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:35:22.300+0000 2019-09-04T06:35:12.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:35:23.849+0000 2019-09-04T06:35:12.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:12.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:14.839Z 2019-09-04T06:35:12.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:12.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:42.839+0000 2019-09-04T06:35:12.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:42.839+0000 2019-09-04T06:35:12.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:12.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1558) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:12.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1558 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:22.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:12.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:42.839+0000 2019-09-04T06:35:12.840+0000 D2 ASIO [Replication] Request 1558 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:12.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: 
Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:12.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:12.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1558) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:12.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:12.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:14.840Z 2019-09-04T06:35:12.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:42.839+0000 2019-09-04T06:35:12.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:12.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:12.892+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:12.992+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:13.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:13.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: 
{ hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:13.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:13.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:13.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:13.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646) } 2019-09-04T06:35:13.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.069+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.092+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:13.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.192+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:13.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:13.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:13.236+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.292+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:13.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.392+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:13.404+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36712 #487 (84 connections now open) 2019-09-04T06:35:13.404+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:13.404+0000 D2 COMMAND [conn487] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:13.404+0000 I NETWORK [conn487] received client metadata from 10.108.2.45:36712 conn487: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:13.404+0000 I COMMAND [conn487] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:13.422+0000 I COMMAND [conn463] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578878, 1), signature: { hash: BinData(0, 9EEAEBC19ED13CB91FBBE628C791550FD6DDF6FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:13.422+0000 D1 - [conn463] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:13.422+0000 W - [conn463] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:13.439+0000 I - [conn463] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:13.439+0000 D1 COMMAND [conn463] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578878, 1), signature: { hash: BinData(0, 9EEAEBC19ED13CB91FBBE628C791550FD6DDF6FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:13.439+0000 D1 - [conn463] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:13.439+0000 W - [conn463] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:13.459+0000 I - [conn463] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"5617
48F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : 
"BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:13.459+0000 W COMMAND [conn463] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:35:13.459+0000 I COMMAND [conn463] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578878, 1), signature: { hash: BinData(0, 9EEAEBC19ED13CB91FBBE628C791550FD6DDF6FC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:35:13.459+0000 D2 NETWORK [conn463] Session from 10.108.2.45:36690 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:13.459+0000 I NETWORK [conn463] end connection 10.108.2.45:36690 (83 connections now open) 2019-09-04T06:35:13.492+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:13.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.593+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:13.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.649+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:13.649+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:13.649+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578908, 2) 2019-09-04T06:35:13.649+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22569 2019-09-04T06:35:13.649+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid:
UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:13.649+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:13.649+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22569 2019-09-04T06:35:13.650+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22572 2019-09-04T06:35:13.650+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22572 2019-09-04T06:35:13.650+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578908, 2), t: 1 }({ ts: Timestamp(1567578908, 2), t: 1 }) 2019-09-04T06:35:13.650+0000 D2 ASIO [RS] Request 1551 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpApplied: { ts: Timestamp(1567578908, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:13.650+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpApplied: { ts: Timestamp(1567578908, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:13.650+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:13.650+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:13.650+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:23.849+0000 2019-09-04T06:35:13.650+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:24.695+0000 2019-09-04T06:35:13.650+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:13.650+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:42.839+0000 
2019-09-04T06:35:13.650+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1559 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:23.650+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578908, 2), t: 1 } } 2019-09-04T06:35:13.650+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:13.652+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:13.652+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:13.652+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1560 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:43.652+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:13.652+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:13.652+0000 D2 ASIO [RS] Request 1560 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:13.652+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:13.652+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:13.652+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:13.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.693+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.693+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:13.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.737+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.793+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:13.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:35:13.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:13.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:13.893+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:13.901+0000 I COMMAND [conn446] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578881, 1), signature: { hash: BinData(0, 51070AB17CE4E1EB2E629A93F6E8B1A003D9C311), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:35:13.901+0000 D1 - [conn446] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:13.901+0000 W - [conn446] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:13.918+0000 I - [conn446] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19Servic
eStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:13.918+0000 D1 COMMAND [conn446] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578881, 1), signature: { hash: BinData(0, 51070AB17CE4E1EB2E629A93F6E8B1A003D9C311), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:13.918+0000 D1 - [conn446] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:13.918+0000 W - [conn446] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:13.938+0000 I - [conn446] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:13.938+0000 W COMMAND [conn446] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:13.938+0000 I COMMAND [conn446] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578881, 1), signature: { hash: BinData(0, 51070AB17CE4E1EB2E629A93F6E8B1A003D9C311), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms 2019-09-04T06:35:13.938+0000 D2 NETWORK [conn446] Session from 10.108.2.59:48506 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:13.938+0000 I NETWORK [conn446] end connection 10.108.2.59:48506 (82 connections now open) 2019-09-04T06:35:13.993+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:14.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:14.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.092+0000 I COMMAND [conn445] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:14.092+0000 D1 - [conn445] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:14.092+0000 W - [conn445] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:14.093+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:14.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.109+0000 I - [conn445] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:14.109+0000 D1 COMMAND [conn445] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:14.109+0000 D1 - [conn445] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:14.109+0000 W - [conn445] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:14.130+0000 I - [conn445] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:14.130+0000 W COMMAND [conn445] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:14.130+0000 I COMMAND [conn445] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:35:14.130+0000 D2 NETWORK [conn445] Session from 10.108.2.52:47328 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:14.130+0000 I NETWORK [conn445] end connection 10.108.2.52:47328 (81 connections now open) 2019-09-04T06:35:14.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.193+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:14.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:14.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:14.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:14.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:14.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646) } 2019-09-04T06:35:14.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { 
replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:14.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:14.237+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.293+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:14.298+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49372 #488 (82 connections now open) 2019-09-04T06:35:14.298+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:14.299+0000 D2 COMMAND [conn488] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:14.299+0000 I NETWORK [conn488] received client metadata from 10.108.2.54:49372 conn488: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:14.299+0000 I COMMAND [conn488] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:14.299+0000 D2 COMMAND [conn488] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 1), signature: { hash: BinData(0, EF3ADB01B4FC3197228A5D8B8AD09E7DD93915A6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:14.299+0000 D1 REPL [conn488] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578908, 2), t: 1 } 2019-09-04T06:35:14.299+0000 D3 REPL [conn488] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:44.309+0000 2019-09-04T06:35:14.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.393+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:14.494+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:14.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.594+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:14.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.649+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:14.649+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:14.649+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578908, 2) 2019-09-04T06:35:14.649+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22604 2019-09-04T06:35:14.650+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:14.650+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:14.650+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22604 2019-09-04T06:35:14.650+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22607 2019-09-04T06:35:14.650+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22607 2019-09-04T06:35:14.650+0000 D3 REPL 
[rsSync-0] returning minvalid: { ts: Timestamp(1567578908, 2), t: 1 }({ ts: Timestamp(1567578908, 2), t: 1 }) 2019-09-04T06:35:14.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.693+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.694+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:14.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.737+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.785+0000 I NETWORK [listener] connection accepted from 10.108.2.62:53610 #489 (83 connections now open) 2019-09-04T06:35:14.785+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:14.785+0000 D2 COMMAND [conn489] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:14.785+0000 I NETWORK [conn489] received client metadata from 10.108.2.62:53610 conn489: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:14.785+0000 I COMMAND [conn489] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:14.794+0000 I COMMAND [conn436] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:14.794+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:14.794+0000 D1 - [conn436] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:14.794+0000 W - [conn436] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:14.811+0000 I - [conn436] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"}
,{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : 
"/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 
2019-09-04T06:35:14.811+0000 D1 COMMAND [conn436] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:14.811+0000 D1 - [conn436] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:14.811+0000 W - [conn436] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:14.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.831+0000 I - [conn436] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_1
1ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", 
"elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:14.831+0000 W COMMAND [conn436] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:35:14.831+0000 I COMMAND [conn436] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:35:14.831+0000 D2 NETWORK [conn436] Session from 10.108.2.62:53574 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:14.831+0000 I NETWORK [conn436] end connection 10.108.2.62:53574 (82 connections now open) 2019-09-04T06:35:14.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:14.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1561) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:14.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1561 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:24.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:14.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:42.839+0000 2019-09-04T06:35:14.839+0000 D2 ASIO [Replication] Request 1561 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 },
durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:14.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:14.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:14.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1561) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:14.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:14.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:35:24.695+0000 2019-09-04T06:35:14.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:35:26.204+0000 2019-09-04T06:35:14.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:14.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:16.839Z 2019-09-04T06:35:14.839+0000 D3 
EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:14.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:44.839+0000 2019-09-04T06:35:14.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:44.839+0000 2019-09-04T06:35:14.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:14.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1562) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:14.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1562 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:24.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:14.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:44.839+0000 2019-09-04T06:35:14.840+0000 D2 ASIO [Replication] Request 1562 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:14.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:14.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:14.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1562) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new 
Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:14.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:14.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:16.840Z 2019-09-04T06:35:14.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:44.839+0000 2019-09-04T06:35:14.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:14.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:14.894+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:14.994+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:15.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:15.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:15.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:15.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:15.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:15.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, 
state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646) } 2019-09-04T06:35:15.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.094+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:15.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.195+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:15.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:15.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:15.237+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.295+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:15.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.395+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:15.495+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:15.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.595+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:15.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.650+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:15.650+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:15.650+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578908, 2) 2019-09-04T06:35:15.650+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22638 2019-09-04T06:35:15.650+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:15.650+0000 D3 
STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:15.650+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22638 2019-09-04T06:35:15.651+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22641 2019-09-04T06:35:15.651+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22641 2019-09-04T06:35:15.651+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578908, 2), t: 1 }({ ts: Timestamp(1567578908, 2), t: 1 }) 2019-09-04T06:35:15.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.693+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.695+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:15.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.737+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.795+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:15.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:15.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:15.895+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:15.996+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:16.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:16.052+0000 D2 COMMAND [conn58] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.069+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.096+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:16.108+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.108+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.196+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:16.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:16.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:16.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:16.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:16.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646) } 2019-09-04T06:35:16.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, 
term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:16.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:16.237+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.296+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:16.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.391+0000 D2 COMMAND [conn125] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:35:16.391+0000 I COMMAND [conn125] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.396+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:16.403+0000 D2 COMMAND [conn178] run command config.$cmd { ping: 1.0, lsid: { id: UUID("4ca3bc30-0f16-4335-a15f-3e7d48b5566e") }, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:35:16.403+0000 I COMMAND [conn178] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("4ca3bc30-0f16-4335-a15f-3e7d48b5566e") }, $clusterTime: { clusterTime: Timestamp(1567578852, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.496+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:16.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { 
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.596+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:16.608+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.608+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.650+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:16.650+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:16.650+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578908, 2) 2019-09-04T06:35:16.650+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22672 2019-09-04T06:35:16.650+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:16.650+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:16.650+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22672 2019-09-04T06:35:16.651+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22675 2019-09-04T06:35:16.651+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22675 2019-09-04T06:35:16.651+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578908, 2), t: 1 }({ ts: Timestamp(1567578908, 2), t: 1 }) 2019-09-04T06:35:16.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.693+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.696+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:16.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.737+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 
0ms 2019-09-04T06:35:16.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.797+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:16.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:16.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1563) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:16.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1563 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:26.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:16.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:44.839+0000 2019-09-04T06:35:16.839+0000 D2 ASIO [Replication] Request 1563 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:16.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:16.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 
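The REPL_HB traffic above follows a fixed request/response pattern: every "Sending heartbeat (requestId: N) to <host>" is matched by a "Received response to heartbeat (requestId: N) from <host>", rescheduled roughly every two seconds per member. A minimal sketch that pairs the two sides by requestId and prints heartbeat round-trip times, assuming the on-disk log keeps one entry per line and the 4.2-era plain-text format shown here:

#!/usr/bin/env python3
"""Pair REPL_HB send/receive entries by requestId and print the
round-trip time of each heartbeat. Log path and line format are
assumptions based on the entries above."""
import re
from datetime import datetime

LOG = "/var/log/mongodb/mongod.log"
TS = "%Y-%m-%dT%H:%M:%S.%f%z"  # e.g. 2019-09-04T06:35:16.839+0000

send = re.compile(r"^(\S+) D2 REPL_HB .*Sending heartbeat \(requestId: (\d+)\) to (\S+),")
recv = re.compile(r"^(\S+) D2 REPL_HB .*Received response to heartbeat \(requestId: (\d+)\) from (\S+),")

pending = {}  # requestId -> (send time, target host)
with open(LOG) as f:
    for line in f:
        if m := send.match(line):
            pending[m[2]] = (datetime.strptime(m[1], TS), m[3])
        elif (m := recv.match(line)) and m[2] in pending:
            t0, target = pending.pop(m[2])
            rtt = datetime.strptime(m[1], TS) - t0
            print(f"hb {m[2]} -> {target}: {rtt.total_seconds() * 1000:.1f} ms")

Request 1563 above, for instance, was sent and answered within the same millisecond tick (06:35:16.839), so a sudden jump in these numbers is a quicker lag signal than scanning the raw stream.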
2019-09-04T06:35:16.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1563) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:16.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:16.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:35:26.204+0000 2019-09-04T06:35:16.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:35:27.482+0000 2019-09-04T06:35:16.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:16.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:18.839Z 2019-09-04T06:35:16.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:16.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:46.839+0000 2019-09-04T06:35:16.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:46.839+0000 2019-09-04T06:35:16.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:16.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1564) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:16.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1564 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:26.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:16.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:46.839+0000 2019-09-04T06:35:16.840+0000 D2 ASIO [Replication] Request 1564 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:16.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:16.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:16.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1564) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578908, 2) } 2019-09-04T06:35:16.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:16.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:18.840Z 2019-09-04T06:35:16.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:46.839+0000 2019-09-04T06:35:16.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:16.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:16.897+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:16.997+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:17.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, 
provided timestamp: none 2019-09-04T06:35:17.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.063+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:17.063+0000 D3 REPL [replexec-4] memberData lastupdate is: 2019-09-04T06:35:16.839+0000 2019-09-04T06:35:17.063+0000 D3 REPL [replexec-4] memberData lastupdate is: 2019-09-04T06:35:16.840+0000 2019-09-04T06:35:17.063+0000 D3 REPL [replexec-4] stalest member MemberId(0) date: 2019-09-04T06:35:16.839+0000 2019-09-04T06:35:17.063+0000 D3 REPL [replexec-4] scheduling next check at 2019-09-04T06:35:26.839+0000 2019-09-04T06:35:17.063+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:46.839+0000 2019-09-04T06:35:17.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:17.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:17.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:17.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:17.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), opTime: { ts: Timestamp(1567578908, 2), t: 1 }, wallTime: new Date(1567578908646) } 2019-09-04T06:35:17.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, B3639DDE39BF5610E87CAD0AFF3B18A684E8ED2F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.097+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:17.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.162+0000 I COMMAND 
[conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.197+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:17.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:17.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:17.237+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.297+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:17.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.397+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:17.498+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:17.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.598+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:17.613+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52328 #490 (83 connections now open) 2019-09-04T06:35:17.613+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:17.613+0000 D2 COMMAND [conn490] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" 
} 2019-09-04T06:35:17.613+0000 I NETWORK [conn490] received client metadata from 10.108.2.58:52328 conn490: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:17.613+0000 I COMMAND [conn490] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:17.617+0000 I NETWORK [listener] connection accepted from 10.108.2.51:59324 #491 (84 connections now open) 2019-09-04T06:35:17.617+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:17.617+0000 D2 COMMAND [conn491] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:17.617+0000 I NETWORK [conn491] received client metadata from 10.108.2.51:59324 conn491: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:17.617+0000 I COMMAND [conn491] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:17.626+0000 I COMMAND [conn448] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578885, 1), signature: { hash: BinData(0, 3250ABA29FA1BAB080F5BF7C803FB52479895B2B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:35:17.626+0000 I COMMAND [conn441] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 95740D6E9D55C70552F443700C129E7BD46EEBF9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:17.626+0000 D1 - [conn448] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:17.626+0000 D1 - [conn441] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:17.626+0000 W - [conn448] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:17.626+0000 W - [conn441] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:17.632+0000 I COMMAND [conn467] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578883, 1), signature: { hash: BinData(0, 01F6D33FDEDCF76CD13D7205FDE1395C03188BC4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:17.632+0000 D1 - [conn467] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:17.632+0000 W - [conn467] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:17.642+0000 I - [conn441] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:17.643+0000 D1 COMMAND [conn441] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 95740D6E9D55C70552F443700C129E7BD46EEBF9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:17.643+0000 D1 - [conn441] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:17.643+0000 W - [conn441] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:17.650+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:17.650+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:17.650+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578908, 
2) 2019-09-04T06:35:17.650+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22702 2019-09-04T06:35:17.650+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:17.650+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:17.650+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22702 2019-09-04T06:35:17.651+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22705 2019-09-04T06:35:17.651+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22705 2019-09-04T06:35:17.651+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578908, 2), t: 1 }({ ts: Timestamp(1567578908, 2), t: 1 }) 2019-09-04T06:35:17.659+0000 I - [conn448] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F
2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : 
"45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] 
mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:17.659+0000 D1 COMMAND [conn448] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578885, 1), signature: { hash: BinData(0, 3250ABA29FA1BAB080F5BF7C803FB52479895B2B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:17.659+0000 D1 - [conn448] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:17.659+0000 W - [conn448] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:17.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.679+0000 I - [conn441] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:17.679+0000 W COMMAND [conn441] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:17.679+0000 I COMMAND [conn441] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578879, 1), signature: { hash: BinData(0, 95740D6E9D55C70552F443700C129E7BD46EEBF9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:35:17.679+0000 D2 NETWORK [conn441] Session from 10.108.2.51:59288 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:17.679+0000 I NETWORK [conn441] end connection 10.108.2.51:59288 (83 connections now open) 2019-09-04T06:35:17.687+0000 D2 ASIO [RS] Request 1559 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578917, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578917686), o: { $v: 1, $set: { ping: new Date(1567578917685) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpApplied: { ts: Timestamp(1567578917, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578917, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 1) } 2019-09-04T06:35:17.687+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578917, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578917686), o: { $v: 1, $set: { ping: new Date(1567578917685) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpVisible: { ts: Timestamp(1567578908, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578908, 2), t: 1 }, lastCommittedWall: new Date(1567578908646), lastOpApplied: { ts: Timestamp(1567578917, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578908, 2), $clusterTime: { clusterTime: Timestamp(1567578917, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:17.687+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:17.687+0000 D2 REPL 
[replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578917, 1) and ending at ts: Timestamp(1567578917, 1) 2019-09-04T06:35:17.687+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:27.482+0000 2019-09-04T06:35:17.687+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:28.305+0000 2019-09-04T06:35:17.687+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578917, 1), t: 1 } 2019-09-04T06:35:17.687+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:17.687+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:17.687+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578908, 2) 2019-09-04T06:35:17.687+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22709 2019-09-04T06:35:17.687+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:17.687+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:17.687+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22709 2019-09-04T06:35:17.688+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:17.688+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:17.688+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578908, 2) 2019-09-04T06:35:17.688+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22712 2019-09-04T06:35:17.688+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:17.688+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:17.688+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22712 2019-09-04T06:35:17.688+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:35:17.688+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:17.688+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:46.839+0000 2019-09-04T06:35:17.689+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578917, 1), t: 1 } 2019-09-04T06:35:17.689+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1565 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:27.689+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: 
Timestamp(1567578908, 2), t: 1 } } 2019-09-04T06:35:17.690+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:17.693+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.696+0000 I - [conn467] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : 
"9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" 
}, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:17.696+0000 D1 COMMAND [conn467] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578883, 1), signature: { hash: BinData(0, 01F6D33FDEDCF76CD13D7205FDE1395C03188BC4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 
1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:17.698+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:17.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.708+0000 D2 ASIO [RS] Request 1565 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpApplied: { ts: Timestamp(1567578917, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 1), $clusterTime: { clusterTime: Timestamp(1567578917, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 1) } 2019-09-04T06:35:17.708+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpApplied: { ts: Timestamp(1567578917, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 1), $clusterTime: { clusterTime: Timestamp(1567578917, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:17.708+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:17.708+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:17.708+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578917, 1), t: 1 }, 2019-09-04T06:35:17.686+0000 2019-09-04T06:35:17.708+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000 2019-09-04T06:35:17.708+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:28.305+0000 2019-09-04T06:35:17.708+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:27.725+0000 2019-09-04T06:35:17.708+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1566 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:27.708+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578917, 1), t: 1 } } 2019-09-04T06:35:17.708+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest 
retirement date is 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn486] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn486] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.679+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn471] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn471] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.683+0000 2019-09-04T06:35:17.708+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:17.708+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:46.839+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn488] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn488] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:44.309+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn458] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn458] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.734+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000 
2019-09-04T06:35:17.708+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000 2019-09-04T06:35:17.708+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578908, 2), t: 1 }, 2019-09-04T06:35:08.646+0000 2019-09-04T06:35:17.709+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000 2019-09-04T06:35:17.716+0000 I - [conn448] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceState
Machine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" 
}, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] 
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:17.716+0000 W COMMAND [conn448] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:17.716+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578917, 1) } 2019-09-04T06:35:17.716+0000 I COMMAND [conn448] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578885, 1), signature: { hash: BinData(0, 3250ABA29FA1BAB080F5BF7C803FB52479895B2B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30044ms 2019-09-04T06:35:17.716+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22706 2019-09-04T06:35:17.716+0000 D2 NETWORK [conn448] Session from 10.108.2.46:41134 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:17.716+0000 I NETWORK [conn448] end connection 10.108.2.46:41134 (82 connections now open) 2019-09-04T06:35:17.716+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22706 2019-09-04T06:35:17.716+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22717 2019-09-04T06:35:17.716+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22717 2019-09-04T06:35:17.716+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:17.716+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 22719 2019-09-04T06:35:17.716+0000 D4 STORAGE [repl-writer-worker-1] inserting record with timestamp Timestamp(1567578917, 1) 2019-09-04T06:35:17.716+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578917, 1) 2019-09-04T06:35:17.716+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 22719 2019-09-04T06:35:17.716+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:17.716+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:35:17.716+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22718 2019-09-04T06:35:17.716+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22718 2019-09-04T06:35:17.716+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22721 2019-09-04T06:35:17.716+0000 D3 STORAGE [rsSync-0] WT 
rollback_transaction for snapshot id 22721 2019-09-04T06:35:17.716+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578917, 1), t: 1 }({ ts: Timestamp(1567578917, 1), t: 1 }) 2019-09-04T06:35:17.716+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578917, 1) 2019-09-04T06:35:17.716+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22722 2019-09-04T06:35:17.716+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578917, 1) } } ] } sort: {} projection: {} 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578917, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578917, 1) || First: notFirst: full path: ts 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578917, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578917, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578917, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:17.716+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578917, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:17.716+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22722 2019-09-04T06:35:17.716+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:17.716+0000 D3 STORAGE [repl-writer-worker-15] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:17.717+0000 D3 REPL [repl-writer-worker-15] applying op: { ts: Timestamp(1567578917, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578917686), o: { $v: 1, $set: { ping: new Date(1567578917685) } } }, oplog application mode: Secondary 2019-09-04T06:35:17.717+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578917, 1) 2019-09-04T06:35:17.717+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 22724 2019-09-04T06:35:17.717+0000 D2 QUERY [repl-writer-worker-15] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" } 2019-09-04T06:35:17.717+0000 D4 WRITE [repl-writer-worker-15] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:35:17.717+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 22724 2019-09-04T06:35:17.717+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:17.717+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578917, 1), t: 1 }({ ts: Timestamp(1567578917, 1), t: 1 }) 2019-09-04T06:35:17.717+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.717+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578917, 1) 2019-09-04T06:35:17.717+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22723 2019-09-04T06:35:17.717+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:35:17.717+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:17.717+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:35:17.717+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:17.717+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:17.717+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:17.717+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22723 2019-09-04T06:35:17.717+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578917, 1) 2019-09-04T06:35:17.717+0000 D2 REPL [rsSync-0] Setting replication's stable optime to { ts: Timestamp(1567578917, 1), t: 1 }, 2019-09-04T06:35:17.686+0000 2019-09-04T06:35:17.717+0000 D2 STORAGE [rsSync-0] oldest_timestamp set to Timestamp(1567578912, 1) 2019-09-04T06:35:17.717+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578917, 1), t: 1 }, 2019-09-04T06:35:17.686+0000 2019-09-04T06:35:17.717+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000 2019-09-04T06:35:17.717+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:17.717+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22727 2019-09-04T06:35:17.717+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22727 2019-09-04T06:35:17.717+0000 D3 REPL [conn486] Got notified of new snapshot: { ts: Timestamp(1567578917, 1), t: 1 }, 2019-09-04T06:35:17.686+0000 2019-09-04T06:35:17.717+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578917, 1), t: 1 }({ ts: Timestamp(1567578917, 1), t: 1 }) 2019-09-04T06:35:17.717+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578917, 1), t: 1 }, appliedWallTime: new Date(1567578917686), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:17.717+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1567 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:47.717+0000 cmd:{ replSetUpdatePosition: 1, 
optimes: [ { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578917, 1), t: 1 }, appliedWallTime: new Date(1567578917686), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:17.717+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:17.717+0000 D3 REPL [conn486] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.679+0000 2019-09-04T06:35:17.717+0000 D3 REPL [conn458] Got notified of new snapshot: { ts: Timestamp(1567578917, 1), t: 1 }, 2019-09-04T06:35:17.686+0000 2019-09-04T06:35:17.717+0000 D3 REPL [conn458] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.734+0000 2019-09-04T06:35:17.717+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578917, 1), t: 1 }, 2019-09-04T06:35:17.686+0000 2019-09-04T06:35:17.717+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000 2019-09-04T06:35:17.717+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.717+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578917, 1), t: 1 }, 2019-09-04T06:35:17.686+0000 2019-09-04T06:35:17.717+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:17.717+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: Timestamp(1567578917, 1), t: 1 }, 2019-09-04T06:35:17.686+0000 2019-09-04T06:35:17.717+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000 2019-09-04T06:35:17.717+0000 D2 ASIO [RS] Request 1567 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 1), $clusterTime: { clusterTime: Timestamp(1567578917, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 1) } 2019-09-04T06:35:17.717+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578917, 1), t: 1 }, 2019-09-04T06:35:17.686+0000 2019-09-04T06:35:17.717+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000 2019-09-04T06:35:17.717+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- 
cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 1), $clusterTime: { clusterTime: Timestamp(1567578917, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:17.717+0000 D1 - [conn467] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:17.717+0000 W - [conn467] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:17.717+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:17.717+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:17.717+0000 D3 REPL [conn471] Got notified of new snapshot: { ts: Timestamp(1567578917, 1), t: 1 }, 2019-09-04T06:35:17.686+0000 2019-09-04T06:35:17.719+0000 D3 REPL [conn471] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.683+0000 2019-09-04T06:35:17.720+0000 I NETWORK [listener] connection accepted from 10.108.2.53:50874 #492 (83 connections now open) 2019-09-04T06:35:17.720+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:17.720+0000 D2 COMMAND [conn492] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:17.720+0000 I NETWORK [conn492] received client metadata from 10.108.2.53:50874 conn492: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:17.720+0000 I COMMAND [conn492] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:17.721+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578917, 1), t: 1 }, 2019-09-04T06:35:17.686+0000 2019-09-04T06:35:17.721+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000 2019-09-04T06:35:17.723+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578917, 1), t: 1 }, 2019-09-04T06:35:17.686+0000 2019-09-04T06:35:17.723+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 
2019-09-04T06:35:17.724+0000 D3 REPL [conn466] Got notified of new snapshot: { ts: Timestamp(1567578917, 1), t: 1 }, 2019-09-04T06:35:17.686+0000 2019-09-04T06:35:17.724+0000 D3 REPL [conn466] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:17.728+0000 2019-09-04T06:35:17.725+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578917, 1), t: 1 }, 2019-09-04T06:35:17.686+0000 2019-09-04T06:35:17.725+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:17.727+0000 D3 REPL [conn488] Got notified of new snapshot: { ts: Timestamp(1567578917, 1), t: 1 }, 2019-09-04T06:35:17.686+0000 2019-09-04T06:35:17.727+0000 D3 REPL [conn488] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:44.309+0000 2019-09-04T06:35:17.727+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: Timestamp(1567578917, 1), t: 1 }, 2019-09-04T06:35:17.686+0000 2019-09-04T06:35:17.727+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000 2019-09-04T06:35:17.728+0000 I COMMAND [conn466] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:17.728+0000 D1 - [conn466] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:17.728+0000 W - [conn466] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:17.737+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.737+0000 I - [conn467] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:17.737+0000 W COMMAND [conn467] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:17.737+0000 I COMMAND [conn467] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578883, 1), signature: { hash: BinData(0, 01F6D33FDEDCF76CD13D7205FDE1395C03188BC4), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30082ms 2019-09-04T06:35:17.738+0000 D2 NETWORK [conn467] Session from 10.108.2.58:52308 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:17.738+0000 I NETWORK [conn467] end connection 10.108.2.58:52308 (82 connections now open) 2019-09-04T06:35:17.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.754+0000 I - [conn466] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11Thre
adGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" 
: "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] 
mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:17.754+0000 D1 COMMAND [conn466] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:17.754+0000 D1 - [conn466] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:17.754+0000 W - [conn466] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:17.762+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:35:17.762+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:17.762+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578917, 1), t: 1 }, durableWallTime: new Date(1567578917686), appliedOpTime: { ts: Timestamp(1567578917, 1), t: 1 }, appliedWallTime: new Date(1567578917686), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:17.762+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1568 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:47.762+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578917, 1), t: 1 }, durableWallTime: new Date(1567578917686), appliedOpTime: { ts: Timestamp(1567578917, 1), t: 1 }, appliedWallTime: new Date(1567578917686), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, 
replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:17.762+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:17.762+0000 D2 ASIO [RS] Request 1568 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 1), $clusterTime: { clusterTime: Timestamp(1567578917, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 1) } 2019-09-04T06:35:17.762+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 1), $clusterTime: { clusterTime: Timestamp(1567578917, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:17.762+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:17.762+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:17.774+0000 I - [conn466] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:17.774+0000 W COMMAND [conn466] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:17.774+0000 I COMMAND [conn466] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30035ms 2019-09-04T06:35:17.774+0000 D2 NETWORK [conn466] Session from 10.108.2.53:50858 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:17.774+0000 I NETWORK [conn466] end connection 10.108.2.53:50858 (81 connections now open) 2019-09-04T06:35:17.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.798+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:17.813+0000 D2 COMMAND [conn481] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578909, 1), signature: { hash: BinData(0, 924C09A60358AABC6457CA62E6B62DFD7CFA8AC5), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:17.813+0000 D1 REPL [conn481] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578917, 1), t: 1 } 2019-09-04T06:35:17.813+0000 D3 REPL [conn481] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.823+0000 2019-09-04T06:35:17.815+0000 I NETWORK [listener] connection accepted from 10.108.2.46:41184 #493 (82 connections now open) 2019-09-04T06:35:17.815+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:17.815+0000 D2 COMMAND [conn493] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:17.815+0000 I NETWORK [conn493] received client metadata from 10.108.2.46:41184 conn493: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:17.815+0000 I COMMAND [conn493] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 
numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:17.815+0000 D2 COMMAND [conn493] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578915, 1), signature: { hash: BinData(0, 3ABBA556EFC99C049EC08DFC000AC593457278AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:35:17.815+0000 D1 REPL [conn493] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578917, 1), t: 1 } 2019-09-04T06:35:17.815+0000 D3 REPL [conn493] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.825+0000 2019-09-04T06:35:17.816+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578917, 1) 2019-09-04T06:35:17.828+0000 D2 COMMAND [conn480] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578915, 1), signature: { hash: BinData(0, 3ABBA556EFC99C049EC08DFC000AC593457278AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:17.828+0000 D1 REPL [conn480] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578917, 1), t: 1 } 2019-09-04T06:35:17.828+0000 D3 REPL [conn480] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.838+0000 2019-09-04T06:35:17.836+0000 D2 COMMAND [conn465] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578915, 1), signature: { hash: BinData(0, 3ABBA556EFC99C049EC08DFC000AC593457278AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:17.836+0000 D1 REPL [conn465] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578917, 1), t: 1 } 2019-09-04T06:35:17.836+0000 D3 REPL [conn465] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.846+0000 2019-09-04T06:35:17.836+0000 D2 COMMAND [conn487] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 1), signature: { hash: BinData(0, EF3ADB01B4FC3197228A5D8B8AD09E7DD93915A6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), 
t: 92 } }, $db: "admin" } 2019-09-04T06:35:17.836+0000 D1 REPL [conn487] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578917, 1), t: 1 } 2019-09-04T06:35:17.836+0000 D3 REPL [conn487] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.846+0000 2019-09-04T06:35:17.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.898+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:17.907+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:17.908+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:17.953+0000 D2 ASIO [RS] Request 1566 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578917, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578917949), o: { $v: 1, $set: { ping: new Date(1567578917943) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpApplied: { ts: Timestamp(1567578917, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 1), $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } 2019-09-04T06:35:17.953+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578917, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578917949), o: { $v: 1, $set: { ping: new Date(1567578917943) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpApplied: { ts: Timestamp(1567578917, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 1), $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: 
Timestamp(1567578917, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:17.953+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:17.953+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578917, 2) and ending at ts: Timestamp(1567578917, 2) 2019-09-04T06:35:17.953+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:27.725+0000 2019-09-04T06:35:17.953+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:28.816+0000 2019-09-04T06:35:17.953+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:17.953+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:46.839+0000 2019-09-04T06:35:17.953+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578917, 2), t: 1 } 2019-09-04T06:35:17.953+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:17.953+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:17.953+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578917, 1) 2019-09-04T06:35:17.953+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22743 2019-09-04T06:35:17.953+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:17.953+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:17.953+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22743 2019-09-04T06:35:17.953+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:17.953+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:35:17.953+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:17.953+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578917, 2) } 2019-09-04T06:35:17.953+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578917, 1) 2019-09-04T06:35:17.953+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22746 2019-09-04T06:35:17.953+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:17.953+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:17.953+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22746 2019-09-04T06:35:17.953+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22728 2019-09-04T06:35:17.954+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 
22728
2019-09-04T06:35:17.954+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22749
2019-09-04T06:35:17.954+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22749
2019-09-04T06:35:17.954+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:17.954+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 22751
2019-09-04T06:35:17.954+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578917, 2)
2019-09-04T06:35:17.954+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578917, 2)
2019-09-04T06:35:17.954+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 22751
2019-09-04T06:35:17.954+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:17.954+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:35:17.954+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22750
2019-09-04T06:35:17.954+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22750
2019-09-04T06:35:17.954+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22753
2019-09-04T06:35:17.954+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22753
2019-09-04T06:35:17.954+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578917, 2), t: 1 }({ ts: Timestamp(1567578917, 2), t: 1 })
2019-09-04T06:35:17.954+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578917, 2)
2019-09-04T06:35:17.954+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22754
2019-09-04T06:35:17.954+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578917, 2) } } ] } sort: {} projection: {}
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578917, 2)
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578917, 2) || First: notFirst: full path: ts
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578917, 2)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578917, 2)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578917, 2) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
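The subplanner entries above show why these minvalid reads always collection-scan: local.replset.minvalid carries only the mandatory _id_ index, so no branch of the $or over t/ts rates an indexed solution and each child falls back to COLLSCAN, which is harmless for a single-document metadata collection. For illustration only, a minimal pymongo sketch of the same predicate; the address, client variable names, and the act of querying from an external client are assumptions, not taken from this log (only the namespace and predicate are):

    # Hypothetical reproduction of the minvalid predicate logged above.
    from bson import Timestamp
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27019")  # placeholder address
    minvalid = client["local"]["replset.minvalid"]

    # Same shape as the logged query: documents from an older term, or
    # term-1 documents with ts below the batch's last applied optime.
    predicate = {"$or": [
        {"t": {"$lt": 1}},
        {"t": 1, "ts": {"$lt": Timestamp(1567578917, 2)}},
    ]}

    # With only the _id_ index present, explain() reports a COLLSCAN
    # winning plan, matching "Planner: outputted 0 indexed solutions".
    plan = minvalid.find(predicate).explain()
    print(plan["queryPlanner"]["winningPlan"])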
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578917, 2)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:17.954+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22754
2019-09-04T06:35:17.954+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:17.954+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:17.954+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578917, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578917949), o: { $v: 1, $set: { ping: new Date(1567578917943) } } }, oplog application mode: Secondary
2019-09-04T06:35:17.954+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578917, 2)
2019-09-04T06:35:17.954+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 22756
2019-09-04T06:35:17.954+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }
2019-09-04T06:35:17.954+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:35:17.954+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 22756
2019-09-04T06:35:17.954+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:17.954+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578917, 2), t: 1 }({ ts: Timestamp(1567578917, 2), t: 1 })
2019-09-04T06:35:17.954+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578917, 2)
2019-09-04T06:35:17.954+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22755
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:17.954+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:17.954+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:17.954+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 22755 2019-09-04T06:35:17.954+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578917, 2) 2019-09-04T06:35:17.954+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:17.954+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22759 2019-09-04T06:35:17.954+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578917, 1), t: 1 }, durableWallTime: new Date(1567578917686), appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:17.954+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22759 2019-09-04T06:35:17.955+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1569 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:47.955+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578917, 1), t: 1 }, durableWallTime: new Date(1567578917686), appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:17.955+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578917, 2), t: 1 }({ ts: Timestamp(1567578917, 2), t: 1 }) 2019-09-04T06:35:17.955+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:47.954+0000 2019-09-04T06:35:17.955+0000 D2 ASIO [RS] Request 1569 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 1), $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } 2019-09-04T06:35:17.955+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 1), $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:17.955+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:17.955+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:47.955+0000 2019-09-04T06:35:17.955+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578917, 2), t: 1 } 2019-09-04T06:35:17.955+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1570 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:27.955+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578917, 1), t: 1 } } 2019-09-04T06:35:17.955+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:47.955+0000 2019-09-04T06:35:17.956+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:35:17.956+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:17.956+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:17.956+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1571 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:47.956+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { 
ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, durableWallTime: new Date(1567578908646), appliedOpTime: { ts: Timestamp(1567578908, 2), t: 1 }, appliedWallTime: new Date(1567578908646), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:17.956+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:47.955+0000 2019-09-04T06:35:17.956+0000 D2 ASIO [RS] Request 1571 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 1), $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } 2019-09-04T06:35:17.956+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 1), t: 1 }, lastCommittedWall: new Date(1567578917686), lastOpVisible: { ts: Timestamp(1567578917, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 1), $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:17.956+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:17.956+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:47.955+0000 2019-09-04T06:35:17.957+0000 D2 ASIO [RS] Request 1570 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpApplied: { ts: Timestamp(1567578917, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } 2019-09-04T06:35:17.957+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpApplied: { ts: Timestamp(1567578917, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:17.957+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:17.957+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:17.957+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578912, 2) 2019-09-04T06:35:17.957+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:28.816+0000 2019-09-04T06:35:17.957+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:28.037+0000 2019-09-04T06:35:17.957+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:17.957+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:46.839+0000 2019-09-04T06:35:17.957+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1572 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:27.957+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578917, 2), t: 1 } } 2019-09-04T06:35:17.957+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:47.955+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn486] Got notified of new snapshot: { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn486] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.679+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn473] Got notified of new snapshot: { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn473] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn476] Got notified of new snapshot: { ts: 
Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn476] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.767+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn477] Got notified of new snapshot: { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn477] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:22.595+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn474] Got notified of new snapshot: { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn474] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn481] Got notified of new snapshot: { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn481] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.823+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn480] Got notified of new snapshot: { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn480] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.838+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn493] Got notified of new snapshot: { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn493] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.825+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn475] Got notified of new snapshot: { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn475] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.753+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn458] Got notified of new snapshot: { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn458] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.734+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn478] Got notified of new snapshot: { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn478] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:24.152+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn471] Got notified of new snapshot: { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn471] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.683+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn472] Got notified of new snapshot: { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn472] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:21.661+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn488] Got notified of new snapshot: { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn488] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:44.309+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn457] Got notified of new snapshot: { ts: 
Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn457] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:25.059+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn487] Got notified of new snapshot: { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn487] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.846+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn465] Got notified of new snapshot: { ts: Timestamp(1567578917, 2), t: 1 }, 2019-09-04T06:35:17.949+0000 2019-09-04T06:35:17.957+0000 D3 REPL [conn465] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.846+0000 2019-09-04T06:35:17.998+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:18.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:18.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.053+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578917, 2) 2019-09-04T06:35:18.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.098+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:18.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.198+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:18.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 591599C827DFAC23F7434220FBEB5F1338DA9F08), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:18.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:18.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 591599C827DFAC23F7434220FBEB5F1338DA9F08), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:18.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 
2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 591599C827DFAC23F7434220FBEB5F1338DA9F08), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:18.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949) } 2019-09-04T06:35:18.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 591599C827DFAC23F7434220FBEB5F1338DA9F08), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:18.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:18.237+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.298+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:18.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.399+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:18.407+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.407+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.499+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:18.521+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:35:18.521+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:35:18.522+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:18.522+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:35:18.552+0000 D2 COMMAND [conn58] run command 
admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.599+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:18.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.693+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.699+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:18.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.737+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.799+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:18.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:18.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1573) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:18.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1573 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:28.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:18.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:46.839+0000 2019-09-04T06:35:18.839+0000 D2 ASIO [Replication] Request 1573 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } 2019-09-04T06:35:18.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:18.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:18.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1573) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } 2019-09-04T06:35:18.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:18.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:35:28.037+0000 2019-09-04T06:35:18.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:35:28.997+0000 2019-09-04T06:35:18.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:18.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:20.839Z 2019-09-04T06:35:18.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:18.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:48.839+0000 2019-09-04T06:35:18.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:48.839+0000 2019-09-04T06:35:18.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:18.840+0000 D2 
REPL_HB [replexec-4] Sending heartbeat (requestId: 1574) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:18.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1574 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:28.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:18.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:48.839+0000 2019-09-04T06:35:18.840+0000 D2 ASIO [Replication] Request 1574 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } 2019-09-04T06:35:18.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:18.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:18.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1574) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } 2019-09-04T06:35:18.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:18.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:20.840Z 2019-09-04T06:35:18.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:48.839+0000 2019-09-04T06:35:18.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:18.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:18.899+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:18.954+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:18.954+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:18.954+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578917, 2) 2019-09-04T06:35:18.954+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22785 2019-09-04T06:35:18.954+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:18.954+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:18.954+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22785 2019-09-04T06:35:18.954+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/82--6194257481163143499 -> { numRecords: 4, dataSize: 428 } 2019-09-04T06:35:18.954+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/42--6194257481163143499 -> { numRecords: 2, dataSize: 306 } 2019-09-04T06:35:18.954+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/28--6194257481163143499 -> { numRecords: 24, dataSize: 2000 } 2019-09-04T06:35:18.954+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/26--6194257481163143499 -> { numRecords: 9, dataSize: 3031 } 2019-09-04T06:35:18.954+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:local/collection/16--6194257481163143499 -> { numRecords: 1598, dataSize: 360352 } 2019-09-04T06:35:18.954+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer flush took 70 µs 2019-09-04T06:35:18.955+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22788 2019-09-04T06:35:18.955+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22788 2019-09-04T06:35:18.955+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578917, 2), t: 1 }({ ts: Timestamp(1567578917, 2), t: 1 }) 2019-09-04T06:35:19.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:19.008+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
2019-09-04T06:35:19.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:19.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:19.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 591599C827DFAC23F7434220FBEB5F1338DA9F08), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:19.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:19.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 591599C827DFAC23F7434220FBEB5F1338DA9F08), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:19.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 591599C827DFAC23F7434220FBEB5F1338DA9F08), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:19.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949) } 2019-09-04T06:35:19.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 591599C827DFAC23F7434220FBEB5F1338DA9F08), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:19.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:19.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:19.108+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:19.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:19.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:19.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:19.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:19.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:19.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:35:19.208+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:19.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:19.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:19.237+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:19.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:19.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:19.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:19.308+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:19.317+0000 D3 STORAGE [FreeMonProcessor] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:19.321+0000 D3 INDEX [TTLMonitor] thread awake 2019-09-04T06:35:19.322+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms 2019-09-04T06:35:19.322+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms 2019-09-04T06:35:19.322+0000 D2 - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager 2019-09-04T06:35:19.322+0000 D3 COMMAND [PeriodicTaskRunner] task: UnusedLockCleaner took: 0ms 2019-09-04T06:35:19.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions. 2019-09-04T06:35:19.370+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0003 2019-09-04T06:35:19.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry 2019-09-04T06:35:19.370+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1576 -- target:[cmodb812.togewa.com:27018] db:admin expDate:2019-09-04T06:35:24.370+0000 cmd:{ isMaster: 1 } 2019-09-04T06:35:19.370+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1577 -- target:[cmodb813.togewa.com:27018] db:admin expDate:2019-09-04T06:35:24.370+0000 cmd:{ isMaster: 1 } 2019-09-04T06:35:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:35:19.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } 2019-09-04T06:35:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:19.370+0000 D5 QUERY [shard-registry-reload] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shardsTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:35:19.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:35:19.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:35:19.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and 2019-09-04T06:35:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions. 
2019-09-04T06:35:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:19.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:19.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578917, 2) 2019-09-04T06:35:19.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 22805 2019-09-04T06:35:19.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 22805 2019-09-04T06:35:19.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:756 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:35:19.370+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1576 finished with response: { hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb812.togewa.com:27018", me: "cmodb812.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578916, 1), t: 1 }, lastWriteDate: new Date(1567578916000), majorityOpTime: { ts: Timestamp(1567578916, 1), t: 1 }, majorityWriteDate: new Date(1567578916000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578919370), logicalSessionTimeoutMinutes: 30, connectionId: 21713, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578916, 1), $configServerState: { opTime: { ts: Timestamp(1567578917, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578916, 1) } 2019-09-04T06:35:19.370+0000 D1 SHARDING [shard-registry-reload] found 4 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578917, 2), t: 1 } 2019-09-04T06:35:19.370+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb812.togewa.com:27018", me: "cmodb812.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578916, 1), t: 1 }, lastWriteDate: new Date(1567578916000), majorityOpTime: { ts: Timestamp(1567578916, 1), t: 1 }, majorityWriteDate: new Date(1567578916000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578919370), logicalSessionTimeoutMinutes: 30, connectionId: 21713, minWireVersion: 
0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578916, 1), $configServerState: { opTime: { ts: Timestamp(1567578917, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578916, 1) } target: cmodb812.togewa.com:27018 2019-09-04T06:35:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:35:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:35:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:35:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:35:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:35:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:35:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018 2019-09-04T06:35:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0003, with CS shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018 2019-09-04T06:35:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS 2019-09-04T06:35:19.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:19.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:19.374+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1577 finished with response: { hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb812.togewa.com:27018", me: "cmodb813.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578916, 1), t: 1 }, lastWriteDate: new Date(1567578916000), majorityOpTime: { ts: Timestamp(1567578916, 1), t: 1 }, majorityWriteDate: new Date(1567578916000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578919368), logicalSessionTimeoutMinutes: 30, connectionId: 13308, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578916, 1), $configServerState: { opTime: { ts: Timestamp(1567578917, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578916, 1) } 2019-09-04T06:35:19.374+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], 
arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb812.togewa.com:27018", me: "cmodb813.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578916, 1), t: 1 }, lastWriteDate: new Date(1567578916000), majorityOpTime: { ts: Timestamp(1567578916, 1), t: 1 }, majorityWriteDate: new Date(1567578916000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578919368), logicalSessionTimeoutMinutes: 30, connectionId: 13308, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578916, 1), $configServerState: { opTime: { ts: Timestamp(1567578917, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578917, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578916, 1) } target: cmodb813.togewa.com:27018 2019-09-04T06:35:19.374+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0003 took 4ms 2019-09-04T06:35:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000 2019-09-04T06:35:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1579 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:35:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:35:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1580 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:35:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:35:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002 2019-09-04T06:35:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1581 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:35:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:35:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1582 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:35:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:35:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001 2019-09-04T06:35:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1583 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:35:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:35:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1584 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:35:24.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:35:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1581 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578909, 1), t: 1 }, lastWriteDate: new Date(1567578909000), majorityOpTime: { ts: Timestamp(1567578909, 1), t: 1 }, 
majorityWriteDate: new Date(1567578909000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578919386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578909, 1), $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578916, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578909, 1) } 2019-09-04T06:35:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578909, 1), t: 1 }, lastWriteDate: new Date(1567578909000), majorityOpTime: { ts: Timestamp(1567578909, 1), t: 1 }, majorityWriteDate: new Date(1567578909000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578919386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578909, 1), $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578916, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578909, 1) } target: cmodb810.togewa.com:27018 2019-09-04T06:35:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1580 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578913, 1), t: 1 }, lastWriteDate: new Date(1567578913000), majorityOpTime: { ts: Timestamp(1567578913, 1), t: 1 }, majorityWriteDate: new Date(1567578913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578919385), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578913, 1), $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578916, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578913, 1) } 2019-09-04T06:35:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ 
"cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578913, 1), t: 1 }, lastWriteDate: new Date(1567578913000), majorityOpTime: { ts: Timestamp(1567578913, 1), t: 1 }, majorityWriteDate: new Date(1567578913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578919385), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578913, 1), $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578916, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578913, 1) } target: cmodb807.togewa.com:27018 2019-09-04T06:35:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1583 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578917, 1), t: 1 }, lastWriteDate: new Date(1567578917000), majorityOpTime: { ts: Timestamp(1567578917, 1), t: 1 }, majorityWriteDate: new Date(1567578917000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578919386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578917, 1), $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578917, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 1) } 2019-09-04T06:35:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578917, 1), t: 1 }, lastWriteDate: new Date(1567578917000), majorityOpTime: { ts: Timestamp(1567578917, 1), t: 1 }, majorityWriteDate: new Date(1567578917000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578919386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578917, 1), $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $clusterTime: { clusterTime: 
Timestamp(1567578917, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 1) } target: cmodb808.togewa.com:27018 2019-09-04T06:35:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1579 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578913, 1), t: 1 }, lastWriteDate: new Date(1567578913000), majorityOpTime: { ts: Timestamp(1567578913, 1), t: 1 }, majorityWriteDate: new Date(1567578913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578919385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578913, 1), $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578916, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578913, 1) } 2019-09-04T06:35:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578913, 1), t: 1 }, lastWriteDate: new Date(1567578913000), majorityOpTime: { ts: Timestamp(1567578913, 1), t: 1 }, majorityWriteDate: new Date(1567578913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578919385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578913, 1), $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578916, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578913, 1) } target: cmodb806.togewa.com:27018 2019-09-04T06:35:19.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms 2019-09-04T06:35:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1584 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578917, 1), t: 1 }, lastWriteDate: new Date(1567578917000), majorityOpTime: { ts: Timestamp(1567578917, 1), t: 1 }, majorityWriteDate: new Date(1567578917000) }, maxBsonObjectSize: 16777216, 
maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578919386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 1), $configServerState: { opTime: { ts: Timestamp(1567578900, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578917, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 1) } 2019-09-04T06:35:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578917, 1), t: 1 }, lastWriteDate: new Date(1567578917000), majorityOpTime: { ts: Timestamp(1567578917, 1), t: 1 }, majorityWriteDate: new Date(1567578917000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578919386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 1), $configServerState: { opTime: { ts: Timestamp(1567578900, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578917, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 1) } target: cmodb809.togewa.com:27018 2019-09-04T06:35:19.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms 2019-09-04T06:35:19.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1582 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578909, 1), t: 1 }, lastWriteDate: new Date(1567578909000), majorityOpTime: { ts: Timestamp(1567578909, 1), t: 1 }, majorityWriteDate: new Date(1567578909000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578919386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578909, 1), $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578916, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578909, 1) } 2019-09-04T06:35:19.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ 
"cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578909, 1), t: 1 }, lastWriteDate: new Date(1567578909000), majorityOpTime: { ts: Timestamp(1567578909, 1), t: 1 }, majorityWriteDate: new Date(1567578909000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578919386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578909, 1), $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578916, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578909, 1) } target: cmodb811.togewa.com:27018 2019-09-04T06:35:19.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms 2019-09-04T06:35:19.408+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:19.508+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:19.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:19.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:19.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:19.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:19.608+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:19.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:19.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:19.662+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578919662) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } 2019-09-04T06:35:19.662+0000 D4 - [replSetDistLockPinger] Taking ticket. 
Available: 1000000000 2019-09-04T06:35:19.662+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178 2019-09-04T06:35:19.662+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:35:19.681+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 
3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : 
"7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xADB043) [0x561749a63043] mongod(+0x13B2606) [0x56174a33a606] mongod(+0x13B3A55) [0x56174a33ba55] mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894] mongod(+0x10FA899) [0x56174a082899] mongod(+0x10FBF53) [0x56174a083f53] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(+0x1FBD2EE) [0x56174af452ee] mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa] mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2] mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b] mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e] mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc] mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1] mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:19.681+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. 
OpTime: { ts: Timestamp(1567578917, 2), t: 1 }, write concern: { w: "majority", wtimeout: 15000 } 2019-09-04T06:35:19.681+0000 D4 STORAGE [replSetDistLockPinger] flushed journal 2019-09-04T06:35:19.681+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578919662) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:35:19.681+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578919662) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 19ms 2019-09-04T06:35:19.693+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:19.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:19.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:19.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:19.709+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:19.737+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:19.737+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:19.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:19.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:19.809+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:19.814+0000 D3 STORAGE [WTCheckpointThread] setting timestamp read source: 6, provided timestamp: Timestamp(1567578917, 2) 2019-09-04T06:35:19.814+0000 D3 STORAGE [WTCheckpointThread] WT begin_transaction for snapshot id 22816 2019-09-04T06:35:19.814+0000 D3 STORAGE [WTCheckpointThread] WT rollback_transaction for snapshot id 22816 2019-09-04T06:35:19.814+0000 D2 RECOVERY [WTCheckpointThread] Performing stable checkpoint. 
StableTimestamp: Timestamp(1567578917, 2), OplogNeededForRollback: Timestamp(1567578917, 2) 2019-09-04T06:35:19.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:19.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:19.926+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:19.954+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:19.954+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:19.954+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578917, 2) 2019-09-04T06:35:19.954+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22819 2019-09-04T06:35:19.954+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:19.954+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:19.954+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22819 2019-09-04T06:35:19.955+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22822 2019-09-04T06:35:19.955+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22822 2019-09-04T06:35:19.955+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578917, 2), t: 1 }({ ts: Timestamp(1567578917, 2), t: 1 }) 2019-09-04T06:35:20.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:20.002+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:35:20.002+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:35:20.002+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:35:20.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:35:20.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:35:20.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:35:20.011+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:35:20.011+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:35:20.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:35:20.011+0000 D2 COMMAND [conn90] run command admin.$cmd { 
serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:35:20.012+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:20.012+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:35:20.012+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:35:20.012+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:35:20.014+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:35:20.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:20.014+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:20.014+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:35:20.014+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:35:20.014+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:35:20.014+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:35:20.014+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:35:20.014+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:35:20.014+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:20.014+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:20.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:20.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578917, 2)
2019-09-04T06:35:20.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22830
2019-09-04T06:35:20.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22830
2019-09-04T06:35:20.014+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:20.014+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:35:20.014+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:35:20.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:35:20.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:20.014+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:35:20.014+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:35:20.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578917, 2)
2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22833
2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22833
2019-09-04T06:35:20.015+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:20.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:20.015+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:35:20.015+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:35:20.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578917, 2)
2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22835
2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22835
2019-09-04T06:35:20.015+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:570 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:20.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:35:20.015+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist.
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:35:20.015+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:35:20.015+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:20.015+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22838 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22838 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22839 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22839 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22840 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:35:20.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22840 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22841 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22841 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22842 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22842 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22843 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
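
The config.lockpings catalog entry above is the collection the replSetDistLockPinger thread was upserting into when it hit the NotMaster error earlier in this log (findAndModify on { _id: "ConfigServer" } with w: "majority", wtimeout: 15000). A minimal PyMongo sketch of the equivalent client-side operation; the connection string, and the choice of a driver at all, are illustrative assumptions, not something taken from the log:

from datetime import datetime, timezone

from pymongo import MongoClient, ReturnDocument
from pymongo.errors import PyMongoError
from pymongo.write_concern import WriteConcern

# Assumed seed address; the log's config replica set is "configrs".
client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")
lockpings = client["config"].get_collection(
    "lockpings",
    write_concern=WriteConcern(w="majority", wtimeout=15000),
)

try:
    # Same command shape as the logged findAndModify: query { _id:
    # "ConfigServer" }, update { $set: { ping: <now> } }, upsert: true.
    doc = lockpings.find_one_and_update(
        {"_id": "ConfigServer"},
        {"$set": {"ping": datetime.now(timezone.utc)}},
        upsert=True,
        return_document=ReturnDocument.AFTER,
    )
    print("ping recorded:", doc)
except PyMongoError as exc:
    # If the write lands on a node that is not primary (e.g. via a direct
    # connection to a secondary), the server answers exactly like the log:
    # NotMaster (code 10107), "Not primary while running findAndModify
    # command on collection config.lockpings".
    print("write failed:", exc)
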
2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22843 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22844 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22844 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22845 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22845 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22846 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22846 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22847 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22847 
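
The paired "fetched CCE metadata" / "returning metadata" entries are the storage layer's view of each collection's catalog record; the same index specifications (v, key, name, unique) they contain are visible from any client. A short sketch, assuming the same placeholder config-server address as above:

from pymongo import MongoClient

# Assumed address; any configrs member would serve these reads.
client = MongoClient("mongodb://cmodb803.togewa.com:27019")
config_db = client["config"]

for name in ("locks", "version", "collections"):
    print(f"config.{name}:")
    for spec in config_db[name].list_indexes():
        # Each spec mirrors one element of the catalog's "indexes" array
        # above, e.g. { v: 2, key: { ts: 1 }, name: "ts_1", ... }.
        unique = "unique" if spec.get("unique") else ""
        print("  ", spec["name"], dict(spec["key"]), unique)
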
2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22848 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
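
As the catalog record above shows, config.chunks carries three unique indexes (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1) plus _id_, and none of them leads with the jumbo field; that is why the count on { jumbo: true } a moment earlier planned a COLLSCAN ("Planner: outputted 0 indexed solutions."). A sketch of that monitoring query and its explain output, again under the assumed connection string:

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019")  # assumed address
config_db = client["config"]

# The query the log shows conn90 running: count jumbo chunks.
print("jumbo chunks:", config_db["chunks"].count_documents({"jumbo": True}))

# explain confirms the collection scan the planner chose above.
plan = config_db.command("explain", {"count": "chunks", "query": {"jumbo": True}})
print(plan["queryPlanner"]["winningPlan"])
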
2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22848 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22849 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22849 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22850 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22850 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22851 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22851 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22852 2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
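
config.shards, whose catalog entry appears above, stores each shard's connection string in the form "setName/host1,host2"; the ReplicaSetMonitor entries at the top of this excerpt are the server refreshing that topology with per-host isMaster calls (the pre-4.4 spelling of hello). A client-side sketch of the same probe, assuming a recent PyMongo and the placeholder addressing used in the earlier examples:

from pymongo import MongoClient

config_client = MongoClient("mongodb://cmodb803.togewa.com:27019")  # assumed

for shard in config_client["config"]["shards"].find():
    set_name, _, seeds = shard["host"].partition("/")
    for host in seeds.split(","):
        # directConnection pins the probe to one member, like the
        # per-host isMaster requests in the monitor's refresh.
        member = MongoClient(f"mongodb://{host}/?directConnection=true")
        reply = member["admin"].command("isMaster")
        # Mirrors the logged response fields: setName, ismaster, primary.
        role = "PRIMARY" if reply.get("ismaster") else "SECONDARY/OTHER"
        print(set_name, host, role, "->", reply.get("primary"))
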
2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22852
2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22853
2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22853
2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:35:20.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22854
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22854
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22855
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22855
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22856
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22856
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22857
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22857
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22858
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22858
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22859
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22859
2019-09-04T06:35:20.017+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:35:20.017+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22861
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22861
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22862
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22862
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22863
2019-09-04T06:35:20.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22863
2019-09-04T06:35:20.017+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:20.018+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22865
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22865
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22866
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22866
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22867
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22867
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22868
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22868
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22869
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22869
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22870
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22870
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22871
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22871
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22872
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22872
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22873
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22873
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22874
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22874
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22875
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22875
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22876
2019-09-04T06:35:20.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22876
2019-09-04T06:35:20.018+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:20.033+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:20.037+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:35:20.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22878
2019-09-04T06:35:20.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22878
2019-09-04T06:35:20.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22879
2019-09-04T06:35:20.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22879
2019-09-04T06:35:20.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22880
2019-09-04T06:35:20.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22880
2019-09-04T06:35:20.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22881
2019-09-04T06:35:20.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22881
2019-09-04T06:35:20.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22882
2019-09-04T06:35:20.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22882
2019-09-04T06:35:20.037+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 22883
2019-09-04T06:35:20.037+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 22883
2019-09-04T06:35:20.037+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:20.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:20.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
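The entries above show conn90 serving a listDatabases followed by one dbStats per database (admin, config, local), all tagged secondaryPreferred; the D3 STORAGE lines before them are the server walking its catalog to answer. A minimal PyMongo sketch of the client side that would produce this command sequence follows; the host name is taken from this deployment's log, and the monitoring-style loop is an assumption, not something the log confirms about the actual client:

```python
# Sketch: issue listDatabases, then dbStats per database, with
# secondaryPreferred reads, mirroring conn90's logged commands.
from pymongo import MongoClient
from pymongo.read_preferences import ReadPreference

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

# listDatabases with a secondaryPreferred read preference.
dbs = client.admin.command(
    "listDatabases", read_preference=ReadPreference.SECONDARY_PREFERRED
)

# One dbStats per database, matching the admin/config/local sequence above.
for entry in dbs["databases"]:
    stats = client[entry["name"]].command(
        "dbStats", read_preference=ReadPreference.SECONDARY_PREFERRED
    )
    print(entry["name"], stats["objects"], stats["dataSize"])
```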
2019-09-04T06:35:20.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:20.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:20.133+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:20.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:20.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:20.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:20.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:20.206+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:20.206+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:20.233+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:20.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 97C2C2312D10260D26476B00DE9AFE3DE681A90D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:20.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:35:20.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 97C2C2312D10260D26476B00DE9AFE3DE681A90D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:20.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 97C2C2312D10260D26476B00DE9AFE3DE681A90D), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:20.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949) }
2019-09-04T06:35:20.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 97C2C2312D10260D26476B00DE9AFE3DE681A90D), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:20.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:35:20.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999999 Now: 1000000000
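The replSetHeartbeat exchange above is internal replica-set traffic; an application cannot (and should not) send it directly. The closest client-visible equivalent is replSetGetStatus, which reports the same per-member state and opTime fields the heartbeat response carries. A minimal sketch, assuming a direct connection to the member from this log:

```python
# Sketch: inspect member state/optimes via replSetGetStatus, the
# client-facing counterpart of the internal heartbeats logged above.
from pymongo import MongoClient

# directConnection pins the client to this one member rather than
# discovering the whole "configrs" set (an assumed choice here).
client = MongoClient("mongodb://cmodb803.togewa.com:27019/", directConnection=True)

status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    # state/optime mirror the heartbeat response fields seen in the log.
    print(member["name"], member["stateStr"], member["optime"]["ts"])
```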
2019-09-04T06:35:20.236+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:20.237+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:20.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:20.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:20.333+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:20.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:20.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:20.433+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:20.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:20.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:20.534+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:20.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:20.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:20.569+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:20.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:20.634+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:20.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:20.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:20.693+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:20.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:20.706+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:20.706+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:20.734+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:20.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:20.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:20.834+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:20.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:20.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1589) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:20.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1589 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:30.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:20.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:48.839+0000
2019-09-04T06:35:20.839+0000 D2 ASIO [Replication] Request 1589 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) }
2019-09-04T06:35:20.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:35:20.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:20.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1589) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) }
2019-09-04T06:35:20.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:35:20.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:35:28.997+0000
2019-09-04T06:35:20.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:35:30.963+0000
2019-09-04T06:35:20.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:35:20.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:22.839Z
2019-09-04T06:35:20.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:20.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:50.839+0000
2019-09-04T06:35:20.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:50.839+0000
2019-09-04T06:35:20.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:20.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1590) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:20.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1590 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:30.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:20.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:50.839+0000
2019-09-04T06:35:20.840+0000 D2 ASIO [Replication] Request 1590 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) }
2019-09-04T06:35:20.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:20.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:20.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1590) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) }
2019-09-04T06:35:20.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:35:20.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:22.840Z
2019-09-04T06:35:20.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:50.839+0000
2019-09-04T06:35:20.853+0000 I NETWORK [listener] connection accepted from 10.108.2.15:39312 #494 (83 connections now open)
2019-09-04T06:35:20.853+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:35:20.854+0000 D2 COMMAND [conn494] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:35:20.854+0000 I NETWORK [conn494] received client metadata from 10.108.2.15:39312 conn494: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:35:20.854+0000 I COMMAND [conn494] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 8, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
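The isMaster entries for conn494 above carry the connecting driver's client metadata (driver name/version, OS), which the server echoes in the "received client metadata" line; tools such as robo3t later in this log additionally set an appName. A sketch of supplying that metadata from an application; "example-app" is an illustrative name, not one taken from this log:

```python
# Sketch: the driver sends a client document (driver, os, appname) in
# its handshake; the server logs it as "received client metadata".
from pymongo import MongoClient

client = MongoClient(
    "mongodb://cmodb803.togewa.com:27019/",
    appname="example-app",  # appears as appName: "example-app" in command log lines
)
client.admin.command("ping")  # forces a connection, hence a handshake
```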
2019-09-04T06:35:20.854+0000 D2 COMMAND [conn494] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578890, 1), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5b28a216ea01508ac5d4'), operName: "", parentOperId: "5d6f5b28a216ea01508ac5d3" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578890, 1), t: 1 } }, $db: "config" }
2019-09-04T06:35:20.854+0000 D1 TRACKING [conn494] Cmd: find, TrackingId: 5d6f5b28a216ea01508ac5d3|5d6f5b28a216ea01508ac5d4
2019-09-04T06:35:20.854+0000 D1 COMMAND [conn494] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578890, 1), t: 1 } } }
2019-09-04T06:35:20.854+0000 D3 STORAGE [conn494] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:35:20.854+0000 D1 COMMAND [conn494] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578890, 1), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5b28a216ea01508ac5d4'), operName: "", parentOperId: "5d6f5b28a216ea01508ac5d3" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578890, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578917, 2)
2019-09-04T06:35:20.854+0000 D5 QUERY [conn494] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:20.854+0000 D5 QUERY [conn494] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:35:20.854+0000 D5 QUERY [conn494] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:35:20.854+0000 D5 QUERY [conn494] Rated tree: $and
2019-09-04T06:35:20.854+0000 D5 QUERY [conn494] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:20.854+0000 D5 QUERY [conn494] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:20.854+0000 D2 QUERY [conn494] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:20.854+0000 D3 STORAGE [conn494] WT begin_transaction for snapshot id 22903
2019-09-04T06:35:20.854+0000 D3 STORAGE [conn494] WT rollback_transaction for snapshot id 22903
2019-09-04T06:35:20.854+0000 I COMMAND [conn494] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578890, 1), t: 1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d6f5b28a216ea01508ac5d4'), operName: "", parentOperId: "5d6f5b28a216ea01508ac5d3" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 8FEA89271D837BB68B98FE2323CB5C3FDECD39B3), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578890, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:35:20.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:20.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:20.934+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:20.954+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:20.954+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:20.954+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578917, 2)
2019-09-04T06:35:20.954+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22906
2019-09-04T06:35:20.954+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:20.954+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:20.954+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22906
2019-09-04T06:35:20.955+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22909
2019-09-04T06:35:20.955+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22909
2019-09-04T06:35:20.955+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578917, 2), t: 1 }({ ts: Timestamp(1567578917, 2), t: 1 })
2019-09-04T06:35:20.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:20.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:21.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:21.034+0000 D4 STORAGE [WTJournalFlusher] flushed journal
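The find on config.shards above runs with readConcern majority, read preference nearest, and a 30-second maxTimeMS; its empty filter forces the COLLSCAN the planner logs (docsExamined:4, nreturned:4). A rough PyMongo equivalent of that read follows; note that the afterOpTime field in the logged command is injected internally by the sharding protocol and is not part of the public driver API, so this sketch only reproduces the majority/nearest/maxTimeMS parts:

```python
# Sketch: read config.shards with majority read concern and nearest
# read preference, mirroring the conn494 find logged above.
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern
from pymongo.read_preferences import ReadPreference

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

shards = client.get_database(
    "config",
    read_concern=ReadConcern("majority"),
    read_preference=ReadPreference.NEAREST,
).shards

# Empty filter => no usable index => COLLSCAN, as in the plan summary.
for doc in shards.find({}, max_time_ms=30000):
    print(doc["_id"], doc["host"])
```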
$db: "admin" } 2019-09-04T06:35:21.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:21.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 97C2C2312D10260D26476B00DE9AFE3DE681A90D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:21.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:21.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 97C2C2312D10260D26476B00DE9AFE3DE681A90D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:21.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 97C2C2312D10260D26476B00DE9AFE3DE681A90D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:21.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949) } 2019-09-04T06:35:21.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 97C2C2312D10260D26476B00DE9AFE3DE681A90D), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:21.115+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35870 #495 (84 connections now open) 2019-09-04T06:35:21.115+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:21.115+0000 D2 COMMAND [conn495] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:21.115+0000 I NETWORK [conn495] received client metadata from 10.108.2.56:35870 conn495: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:21.115+0000 I COMMAND [conn495] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", 
architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:21.118+0000 D2 COMMAND [conn495] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 71EB50998B0ED96CFCE295A2D5D129D8C4E11CBE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:21.118+0000 D1 REPL [conn495] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578917, 2), t: 1 } 2019-09-04T06:35:21.118+0000 D3 REPL [conn495] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.128+0000 2019-09-04T06:35:21.134+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:21.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:21.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:21.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:21.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:21.235+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:21.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:21.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:21.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:21.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:21.335+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:21.350+0000 D2 COMMAND [conn470] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:21.350+0000 I COMMAND [conn470] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:21.373+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:21.373+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:21.435+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:21.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:21.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:21.535+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:21.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:21.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:21.634+0000 D2 COMMAND [conn482] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578918, 1), signature: { hash: BinData(0, 1AA7EB05C40215A8E8D27C963B2C5C048A4F65D3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:21.634+0000 D1 REPL [conn482] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578917, 2), t: 1 } 2019-09-04T06:35:21.634+0000 D3 REPL [conn482] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.644+0000 2019-09-04T06:35:21.635+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:21.648+0000 D2 COMMAND [conn126] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:35:21.648+0000 I COMMAND [conn126] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:21.650+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42294 #496 (85 connections now open) 2019-09-04T06:35:21.650+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:21.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:21.650+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45934 #497 (86 connections now open) 2019-09-04T06:35:21.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:21.650+0000 D2 COMMAND [conn497] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: 
"Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:21.650+0000 D2 COMMAND [conn496] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:21.650+0000 I NETWORK [conn497] received client metadata from 10.108.2.72:45934 conn497: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:21.650+0000 I NETWORK [conn496] received client metadata from 10.108.2.48:42294 conn496: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:21.650+0000 I COMMAND [conn496] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:21.650+0000 I COMMAND [conn497] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:21.650+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:21.650+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:21.651+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:21.651+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49378 #498 (87 connections now open) 2019-09-04T06:35:21.651+0000 D2 COMMAND [conn490] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578913, 1), signature: { hash: BinData(0, 20276E4D7DE2C0C2B6B8CAF955AF3D33107689B9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:21.651+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:21.651+0000 D1 REPL [conn490] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578917, 2), t: 1 } 2019-09-04T06:35:21.651+0000 D3 REPL [conn490] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:21.651+0000 D2 COMMAND [conn498] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:21.651+0000 I NETWORK [conn498] received client metadata from 10.108.2.54:49378 conn498: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:21.651+0000 I COMMAND [conn498] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:21.651+0000 D2 COMMAND [conn498] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578918, 1), signature: { hash: BinData(0, 1AA7EB05C40215A8E8D27C963B2C5C048A4F65D3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:21.651+0000 D1 REPL [conn498] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578917, 2), t: 1 } 2019-09-04T06:35:21.651+0000 D3 REPL [conn498] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:21.651+0000 D2 COMMAND [conn484] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 6CEB6570F5B99291340499417CC6D1FF61799D1C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:35:21.652+0000 D1 REPL [conn484] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578917, 2), t: 1 } 2019-09-04T06:35:21.652+0000 D3 REPL [conn484] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 
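The waitUntilOpTime lines above show the server holding each read until its committed snapshot reaches the opTime the requester asked for via afterOpTime. That field is part of the internal config-server protocol, but the application-facing analogue is a causally consistent session, where the driver attaches afterClusterTime automatically. A minimal sketch, under the assumption that the reader only wants the client-visible behavior:

```python
# Sketch: causally consistent session reads wait, like waitUntilOpTime
# above, until the server can satisfy the session's operationTime.
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

with client.start_session(causal_consistency=True) as session:
    # Reads in this session are ordered after prior acknowledged
    # operations observed through the same session.
    doc = client.config.settings.find_one({"_id": "balancer"}, session=session)
    print(doc)
```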
2019-09-04T06:35:21.659+0000 D2 COMMAND [conn182] run command config.$cmd { ping: 1.0, lsid: { id: UUID("02492cc9-cb3a-4cd4-9c2e-0d7430e82ce2") }, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:35:21.659+0000 I COMMAND [conn182] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("02492cc9-cb3a-4cd4-9c2e-0d7430e82ce2") }, $clusterTime: { clusterTime: Timestamp(1567578860, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:21.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:21.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:21.664+0000 I COMMAND [conn473] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:35:21.664+0000 I COMMAND [conn472] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578888, 1), signature: { hash: BinData(0, 909ECCA09915F7DAC68D518238A63EF9362BC3C6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:35:21.664+0000 D1 - [conn473] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:35:21.664+0000 D1 - [conn472] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:35:21.664+0000 W - [conn473] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:21.664+0000 W - [conn472] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
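For conn472 and conn473 above (and conn474 just below), the 30-second maxTimeMS expired while the server was still waiting for the requested read concern, so the command fails with MaxTimeMSExpired (error code 50). On the client side, PyMongo surfaces that server error as ExecutionTimeout; a minimal sketch of handling it:

```python
# Sketch: a server-side maxTimeMS expiry, as logged above, surfaces in
# PyMongo as ExecutionTimeout (server error MaxTimeMSExpired, code 50).
from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")

try:
    client.config.settings.find_one({"_id": "balancer"}, max_time_ms=30000)
except ExecutionTimeout as exc:
    # Corresponds to the log's "operation exceeded time limit".
    print("server-side time limit hit:", exc)
```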
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, EB35739D9E23DC6FDDF7730B34A04CDF748BD46F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:21.665+0000 D1 - [conn474] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:21.665+0000 W - [conn474] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:21.682+0000 I - [conn473] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:21.682+0000 D1 COMMAND [conn473] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:21.682+0000 D1 - [conn473] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:21.682+0000 W - [conn473] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:21.692+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:21.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:21.716+0000 I - [conn473] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15Ser
viceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : 
"084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:21.716+0000 W COMMAND [conn473] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:21.716+0000 I COMMAND [conn473] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:35:21.716+0000 D2 NETWORK [conn473] Session from 10.108.2.72:45912 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:21.716+0000 I NETWORK [conn473] end connection 10.108.2.72:45912 (86 connections now open) 2019-09-04T06:35:21.724+0000 I - [conn474] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:21.724+0000 D1 COMMAND [conn474] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, EB35739D9E23DC6FDDF7730B34A04CDF748BD46F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:21.724+0000 D1 - [conn474] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:21.724+0000 W - [conn474] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:21.735+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:21.738+0000 I - [conn472] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 
0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", 
"path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", 
"elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:21.738+0000 D1 COMMAND [conn472] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578888, 1), signature: { hash: BinData(0, 909ECCA09915F7DAC68D518238A63EF9362BC3C6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:21.738+0000 D1 - [conn472] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:21.738+0000 W - [conn472] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:21.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:21.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:35:21.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47368 #499 (87 connections now open)
2019-09-04T06:35:21.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:35:21.743+0000 D2 COMMAND [conn499] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:35:21.743+0000 I NETWORK [conn499] received client metadata from 10.108.2.52:47368 conn499: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:35:21.743+0000 I COMMAND [conn499] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:35:21.756+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48542 #500 (88 connections now open)
2019-09-04T06:35:21.756+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:35:21.756+0000 D2 COMMAND [conn500] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:35:21.756+0000 I NETWORK [conn500] received client metadata from 10.108.2.59:48542 conn500: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:35:21.756+0000 I COMMAND [conn500] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:35:21.756+0000 I COMMAND [conn475] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:35:21.756+0000 D1 - [conn475] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:35:21.757+0000 W - [conn475] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:21.771+0000 I COMMAND [conn476] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578881, 1), signature: { hash: BinData(0, 51070AB17CE4E1EB2E629A93F6E8B1A003D9C311), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:35:21.771+0000 D1 - [conn476] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:35:21.771+0000 W - [conn476] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:21.780+0000 I - [conn474] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:21.780+0000 W COMMAND [conn474] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:21.780+0000 I COMMAND [conn474] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, EB35739D9E23DC6FDDF7730B34A04CDF748BD46F), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30072ms 2019-09-04T06:35:21.780+0000 D2 NETWORK [conn474] Session from 10.108.2.48:42278 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:21.780+0000 I NETWORK [conn474] end connection 10.108.2.48:42278 (87 connections now open) 2019-09-04T06:35:21.798+0000 I - [conn476] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:21.798+0000 D1 COMMAND [conn476] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { 
level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578881, 1), signature: { hash: BinData(0, 51070AB17CE4E1EB2E629A93F6E8B1A003D9C311), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:21.798+0000 D1 - [conn476] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:21.798+0000 W - [conn476] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:21.798+0000 I - [conn472] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCall
backENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" 
: 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:21.799+0000 W COMMAND [conn472] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:21.799+0000 I COMMAND [conn472] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578888, 1), signature: { hash: BinData(0, 909ECCA09915F7DAC68D518238A63EF9362BC3C6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30087ms 2019-09-04T06:35:21.799+0000 D2 NETWORK [conn472] Session from 10.108.2.54:49358 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:21.799+0000 I NETWORK [conn472] end connection 10.108.2.54:49358 (86 connections now open) 2019-09-04T06:35:21.823+0000 I - [conn476] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
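The timed-out command is easy to replay from a driver. A minimal sketch using pymongo, with a hypothetical connection string standing in for the real config server; the filter, majority read concern, and 30000 ms budget are copied from the conn472 command line above:

    # Minimal sketch (assumed host and port, not from the log): replay the
    # balancer settings read that keeps timing out above.
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://config-server:27019")  # hypothetical address
    settings = client["config"].get_collection(
        "settings", read_concern=ReadConcern("majority"))

    try:
        # mirrors: find "settings", filter { _id: "balancer" }, maxTimeMS 30000
        doc = settings.find_one({"_id": "balancer"}, max_time_ms=30000)
        print(doc)
    except ExecutionTimeout:
        # the server-side analogue is errName:MaxTimeMSExpired in the log
        print("MaxTimeMSExpired: operation exceeded time limit")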
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:21.823+0000 W COMMAND [conn476] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:21.823+0000 I COMMAND [conn476] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578881, 1), signature: { hash: BinData(0, 51070AB17CE4E1EB2E629A93F6E8B1A003D9C311), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30041ms 2019-09-04T06:35:21.823+0000 D2 NETWORK [conn476] Session from 10.108.2.59:48524 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:21.823+0000 I NETWORK [conn476] end connection 10.108.2.59:48524 (85 connections now open) 2019-09-04T06:35:21.835+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:21.837+0000 I - [conn475] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", 
"gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { 
"b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:21.837+0000 D1 COMMAND [conn475] assertion while executing command 'find' on database 
'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:21.837+0000 D1 - [conn475] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:21.837+0000 W - [conn475] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:21.858+0000 I - [conn475] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9Owners
hipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : 
"DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) 
[0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:21.859+0000 W COMMAND [conn475] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:35:21.859+0000 I COMMAND [conn475] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578882, 1), signature: { hash: BinData(0, D5DE159C203210085B53C66F62E0C0DF973FF589), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30094ms 2019-09-04T06:35:21.859+0000 D2 NETWORK [conn475] Session from 10.108.2.52:47352 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:21.859+0000 I NETWORK [conn475] end connection 10.108.2.52:47352 (84 connections now open) 2019-09-04T06:35:21.873+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:21.873+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:21.935+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:21.955+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:21.955+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:21.955+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578917, 2) 2019-09-04T06:35:21.955+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22942 2019-09-04T06:35:21.955+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:21.955+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:21.955+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22942 2019-09-04T06:35:21.955+0000 D3 STORAGE [rsSync-0] WT begin_transaction for 
snapshot id 22945 2019-09-04T06:35:21.955+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22945 2019-09-04T06:35:21.955+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578917, 2), t: 1 }({ ts: Timestamp(1567578917, 2), t: 1 }) 2019-09-04T06:35:21.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:21.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:22.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:22.036+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:22.043+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50308 #501 (85 connections now open) 2019-09-04T06:35:22.043+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:22.043+0000 D2 COMMAND [conn501] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:22.043+0000 I NETWORK [conn501] received client metadata from 10.108.2.50:50308 conn501: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:22.043+0000 I COMMAND [conn501] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:22.044+0000 D2 COMMAND [conn501] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 5F52DEB026CAAD2DBA516440B34A022BEC413848), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:22.044+0000 D1 REPL [conn501] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578917, 2), t: 1 } 2019-09-04T06:35:22.044+0000 D3 REPL [conn501] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:52.054+0000 2019-09-04T06:35:22.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:22.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:22.136+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:22.150+0000 D2 COMMAND [conn23] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:35:22.150+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:22.150+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:22.150+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:22.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:22.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:22.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:22.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:22.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, FEFCA1354990A7C182CDB71739BFAF6161415093), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:22.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:22.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, FEFCA1354990A7C182CDB71739BFAF6161415093), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:22.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, FEFCA1354990A7C182CDB71739BFAF6161415093), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:22.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949) } 2019-09-04T06:35:22.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, FEFCA1354990A7C182CDB71739BFAF6161415093), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:22.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:22.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:22.236+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:22.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:22.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:22.336+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:22.436+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:22.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:22.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:22.536+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:22.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:22.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:22.585+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51974 #502 (86 connections now open) 2019-09-04T06:35:22.585+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:22.585+0000 D2 COMMAND [conn502] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:22.585+0000 I NETWORK [conn502] received client metadata from 10.108.2.74:51974 conn502: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:22.585+0000 I COMMAND [conn502] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:22.600+0000 I COMMAND [conn477] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578885, 1), signature: { hash: BinData(0, 3250ABA29FA1BAB080F5BF7C803FB52479895B2B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:35:22.600+0000 D1 - [conn477] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:22.600+0000 W - [conn477] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:22.618+0000 I - [conn477] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:22.618+0000 D1 COMMAND [conn477] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578885, 1), signature: { hash: BinData(0, 3250ABA29FA1BAB080F5BF7C803FB52479895B2B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:22.618+0000 D1 - [conn477] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:22.618+0000 W - [conn477] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:22.636+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:22.639+0000 I - [conn477] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15Servic
eExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : 
"/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:22.639+0000 W COMMAND [conn477] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:35:22.639+0000 I COMMAND [conn477] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578885, 1), signature: { hash: BinData(0, 3250ABA29FA1BAB080F5BF7C803FB52479895B2B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms 2019-09-04T06:35:22.639+0000 D2 NETWORK [conn477] Session from 10.108.2.74:51954 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:22.639+0000 I NETWORK [conn477] end connection 10.108.2.74:51954 (85 connections now open) 2019-09-04T06:35:22.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:22.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:22.693+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:22.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:22.736+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:22.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:22.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:22.837+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:22.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:22.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1591) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:22.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1591 -- target:[cmodb802.togewa.com:27019] db:admin 
expDate:2019-09-04T06:35:32.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:22.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:50.839+0000 2019-09-04T06:35:22.839+0000 D2 ASIO [Replication] Request 1591 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } 2019-09-04T06:35:22.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:22.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:22.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1591) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } 2019-09-04T06:35:22.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due 
to heartbeat from primary 2019-09-04T06:35:22.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:35:30.963+0000 2019-09-04T06:35:22.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:35:34.087+0000 2019-09-04T06:35:22.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:22.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:24.839Z 2019-09-04T06:35:22.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:22.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:52.839+0000 2019-09-04T06:35:22.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:52.839+0000 2019-09-04T06:35:22.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:22.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1592) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:22.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1592 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:32.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:22.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:52.839+0000 2019-09-04T06:35:22.840+0000 D2 ASIO [Replication] Request 1592 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } 2019-09-04T06:35:22.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:22.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:22.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1592) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } 2019-09-04T06:35:22.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:22.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:24.840Z 2019-09-04T06:35:22.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:52.839+0000 2019-09-04T06:35:22.937+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:22.955+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:22.955+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:22.955+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578917, 2) 2019-09-04T06:35:22.955+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22965 2019-09-04T06:35:22.955+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:22.955+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:22.955+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22965 2019-09-04T06:35:22.955+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22968 2019-09-04T06:35:22.955+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22968 2019-09-04T06:35:22.955+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578917, 2), t: 1 }({ ts: Timestamp(1567578917, 2), t: 1 }) 2019-09-04T06:35:22.956+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:22.956+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), 
appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:22.956+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1593 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:52.956+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:22.956+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:47.955+0000 2019-09-04T06:35:22.956+0000 D2 ASIO [RS] Request 1593 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } 2019-09-04T06:35:22.956+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:22.956+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:22.956+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:47.955+0000 2019-09-04T06:35:22.956+0000 D2 ASIO [RS] Request 1572 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpApplied: { ts: Timestamp(1567578917, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } 2019-09-04T06:35:22.956+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpApplied: { ts: Timestamp(1567578917, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:22.956+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:22.956+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:22.956+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:34.087+0000 2019-09-04T06:35:22.956+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:34.246+0000 2019-09-04T06:35:22.956+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1594 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:32.956+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578917, 2), t: 1 } } 2019-09-04T06:35:22.956+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:47.955+0000 2019-09-04T06:35:22.956+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:22.956+0000 D3 EXECUTOR [replexec-4] Not reaping 
because the earliest retirement date is 2019-09-04T06:35:52.839+0000
2019-09-04T06:35:22.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:22.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:23.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:23.037+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:23.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:23.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:23.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, FEFCA1354990A7C182CDB71739BFAF6161415093), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:23.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:35:23.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, FEFCA1354990A7C182CDB71739BFAF6161415093), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:23.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, FEFCA1354990A7C182CDB71739BFAF6161415093), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:23.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949) }
2019-09-04T06:35:23.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, FEFCA1354990A7C182CDB71739BFAF6161415093), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:23.137+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:23.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:23.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:23.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:23.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:23.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:35:23.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:35:23.237+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:23.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:23.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:23.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:23.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:23.337+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:23.437+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:23.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:23.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:23.537+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:23.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:23.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:23.638+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:23.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:23.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:23.693+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:23.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:23.738+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:23.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:23.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:23.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:23.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:23.838+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:23.938+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:23.955+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:23.955+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:23.955+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578917, 2)
2019-09-04T06:35:23.955+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 22986
2019-09-04T06:35:23.955+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:23.955+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:23.955+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 22986
2019-09-04T06:35:23.956+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 22989
2019-09-04T06:35:23.956+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 22989
2019-09-04T06:35:23.956+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578917, 2), t: 1 }({ ts: Timestamp(1567578917, 2), t: 1 })
2019-09-04T06:35:23.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:23.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:24.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:24.038+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:24.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:24.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:24.138+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:24.158+0000 I COMMAND [conn478] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578885, 1), signature: { hash: BinData(0, 3250ABA29FA1BAB080F5BF7C803FB52479895B2B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:35:24.158+0000 D1 - [conn478] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:35:24.158+0000 W - [conn478] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:24.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:24.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:24.175+0000 I - [conn478] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:24.175+0000 D1 COMMAND [conn478] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578885, 1), signature: { hash: BinData(0, 3250ABA29FA1BAB080F5BF7C803FB52479895B2B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:24.175+0000 D1 - [conn478] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:24.175+0000 W - [conn478] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:24.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:24.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:24.195+0000 I - [conn478] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 
0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, 
"buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : 
"B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 
2019-09-04T06:35:24.195+0000 W COMMAND [conn478] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:35:24.195+0000 I COMMAND [conn478] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578885, 1), signature: { hash: BinData(0, 3250ABA29FA1BAB080F5BF7C803FB52479895B2B), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30033ms
2019-09-04T06:35:24.195+0000 D2 NETWORK [conn478] Session from 10.108.2.46:41168 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:35:24.195+0000 I NETWORK [conn478] end connection 10.108.2.46:41168 (84 connections now open)
2019-09-04T06:35:24.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, FEFCA1354990A7C182CDB71739BFAF6161415093), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:24.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:35:24.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, FEFCA1354990A7C182CDB71739BFAF6161415093), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:24.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, FEFCA1354990A7C182CDB71739BFAF6161415093), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:24.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949) }
2019-09-04T06:35:24.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, FEFCA1354990A7C182CDB71739BFAF6161415093), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:24.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:35:24.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:35:24.238+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:24.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:24.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:24.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:24.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:24.338+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:24.438+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:24.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:24.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:24.539+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:24.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:24.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:24.639+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:24.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:24.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:24.693+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:24.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:24.739+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:24.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:24.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:24.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:24.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:24.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:24.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1595) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:24.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1595 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:34.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:24.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:52.839+0000
2019-09-04T06:35:24.839+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:24.839+0000 D2 ASIO [Replication] Request 1595 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) }
2019-09-04T06:35:24.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:35:24.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:24.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1595) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) }
2019-09-04T06:35:24.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary
2019-09-04T06:35:24.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:35:34.246+0000
2019-09-04T06:35:24.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:35:35.459+0000
2019-09-04T06:35:24.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:35:24.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:26.839Z
2019-09-04T06:35:24.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:24.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:54.839+0000
2019-09-04T06:35:24.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:54.839+0000
2019-09-04T06:35:24.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:24.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1596) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:24.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1596 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:34.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:24.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:54.839+0000
2019-09-04T06:35:24.840+0000 D2 ASIO [Replication] Request 1596 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) }
2019-09-04T06:35:24.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:24.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:24.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1596) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578917, 2) }
2019-09-04T06:35:24.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:35:24.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:26.840Z
2019-09-04T06:35:24.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:54.839+0000
2019-09-04T06:35:24.939+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:24.956+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:24.956+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:24.956+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578917, 2)
2019-09-04T06:35:24.956+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23007
2019-09-04T06:35:24.956+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:24.956+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:24.956+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23007
2019-09-04T06:35:24.956+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23010
2019-09-04T06:35:24.956+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23010
2019-09-04T06:35:24.956+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578917, 2), t: 1 }({ ts: Timestamp(1567578917, 2), t: 1 })
2019-09-04T06:35:24.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:24.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:25.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:25.039+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:25.049+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:25.049+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:25.049+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36858 #503 (85 connections now open)
2019-09-04T06:35:25.049+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:35:25.050+0000 D2 COMMAND [conn503] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:35:25.050+0000 I NETWORK [conn503] received client metadata from 10.108.2.55:36858 conn503: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:35:25.050+0000 I COMMAND [conn503] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:35:25.050+0000 D2 COMMAND [conn503] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578921, 1), signature: { hash: BinData(0, F00E7B6B729CE76FBA9DE1FC1A42C69A8B06DDE7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:35:25.050+0000 D1 REPL [conn503] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578917, 2), t: 1 }
2019-09-04T06:35:25.050+0000 D3 REPL [conn503] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:55.060+0000
2019-09-04T06:35:25.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:25.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:25.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, FEFCA1354990A7C182CDB71739BFAF6161415093), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:25.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:35:25.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, FEFCA1354990A7C182CDB71739BFAF6161415093), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:25.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, FEFCA1354990A7C182CDB71739BFAF6161415093), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:25.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), opTime: { ts: Timestamp(1567578917, 2), t: 1 }, wallTime: new Date(1567578917949) }
2019-09-04T06:35:25.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, FEFCA1354990A7C182CDB71739BFAF6161415093), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:25.066+0000 I COMMAND [conn457] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578891, 1), signature: { hash: BinData(0, AAD839FC04C28C69C88AE327F60F4597EE716274), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:35:25.066+0000 D1 - [conn457] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:35:25.066+0000 W - [conn457] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:25.083+0000 I - [conn457] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:25.083+0000 D1 COMMAND [conn457] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578891, 1), signature: { hash: BinData(0, AAD839FC04C28C69C88AE327F60F4597EE716274), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:25.083+0000 D1 - [conn457] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:25.083+0000 W - [conn457] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:25.103+0000 I - [conn457] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:25.103+0000 W COMMAND [conn457] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:25.103+0000 I COMMAND [conn457] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578891, 1), signature: { hash: BinData(0, AAD839FC04C28C69C88AE327F60F4597EE716274), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30034ms 2019-09-04T06:35:25.103+0000 D2 NETWORK [conn457] Session from 10.108.2.55:36830 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:25.103+0000 I NETWORK [conn457] end connection 10.108.2.55:36830 (84 connections now open) 2019-09-04T06:35:25.139+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:25.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:25.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:25.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:25.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:25.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:25.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:25.239+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:25.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:25.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:25.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:25.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:25.339+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:25.440+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:25.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:25.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:25.540+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:25.549+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:25.549+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:25.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:25.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:25.640+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:25.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:25.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms 2019-09-04T06:35:25.692+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:25.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:25.740+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:25.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:25.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:25.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:25.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:25.777+0000 D2 ASIO [RS] Request 1594 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578925, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578925776), o: { $v: 1, $set: { ping: new Date(1567578925775) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpApplied: { ts: Timestamp(1567578925, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } 2019-09-04T06:35:25.777+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578925, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578925776), o: { $v: 1, $set: { ping: new Date(1567578925775) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpApplied: { ts: Timestamp(1567578925, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578917, 2), $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } target: cmodb804.togewa.com:27019 
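The conn457 failure above is the pattern worth pulling out of this log: a find on config.settings (the balancer document) carries readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, yet every replication record here shows term 1, so the read-concern wait apparently cannot complete and the command dies with MaxTimeMSExpired after 30034ms of its 30000ms budget. Records like that are easier to survey mechanically than by scrolling. A minimal sketch, assuming a 4.2-style plain-text logfile (the structured JSON format only arrives in 4.4) at the path from this deployment's config and an illustrative 1000ms threshold:

```python
import re

# One 4.2 log record: timestamp, severity, component, [context], message,
# optionally ending in the operation's duration, e.g. " 30034ms".
RECORD = re.compile(
    r"^(?P<ts>\S+) (?P<sev>\S+) (?P<comp>\S+)\s+\[(?P<ctx>[^\]]+)\] "
    r"(?P<msg>.*?)(?: (?P<ms>\d+)ms)?$"
)
ERR = re.compile(r"errName:(?P<err>\S+)")

def slow_commands(path, threshold_ms=1000):
    """Yield (timestamp, context, millis, error name) for slow COMMAND records."""
    with open(path) as fh:
        for line in fh:
            m = RECORD.match(line.rstrip())
            if not m or m["comp"] != "COMMAND" or m["ms"] is None:
                continue
            if int(m["ms"]) >= threshold_ms:
                err = ERR.search(m["msg"])
                yield m["ts"], m["ctx"], int(m["ms"]), err["err"] if err else None

for ts, ctx, ms, err in slow_commands("/var/log/mongodb/mongod.log"):
    print(ts, ctx, f"{ms}ms", err or "ok")
```

Run against this file it would surface the 30034ms MaxTimeMSExpired find above while skipping the 0ms isMaster noise.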
2019-09-04T06:35:25.777+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:25.777+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578925, 1) and ending at ts: Timestamp(1567578925, 1) 2019-09-04T06:35:25.777+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:35.459+0000 2019-09-04T06:35:25.777+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:36.845+0000 2019-09-04T06:35:25.777+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:25.777+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:54.839+0000 2019-09-04T06:35:25.777+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:25.777+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:25.777+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578917, 2) 2019-09-04T06:35:25.777+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23032 2019-09-04T06:35:25.777+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:25.777+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:25.777+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23032 2019-09-04T06:35:25.777+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:25.777+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:25.777+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:35:25.777+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578917, 2) 2019-09-04T06:35:25.778+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23035 2019-09-04T06:35:25.778+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578925, 1) } 2019-09-04T06:35:25.777+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578925, 1), t: 1 } 2019-09-04T06:35:25.778+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:25.778+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:25.778+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23035 2019-09-04T06:35:25.778+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23011 2019-09-04T06:35:25.778+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23011 2019-09-04T06:35:25.778+0000 D3 STORAGE [rsSync-0] WT 
begin_transaction for snapshot id 23038 2019-09-04T06:35:25.778+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23038 2019-09-04T06:35:25.778+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:25.778+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 23040 2019-09-04T06:35:25.778+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578925, 1) 2019-09-04T06:35:25.778+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578925, 1) 2019-09-04T06:35:25.778+0000 D2 STORAGE [repl-writer-worker-11] WiredTigerSizeStorer::store Marking table:local/collection/16--6194257481163143499 dirty, numRecords: 1599, dataSize: 360588, use_count: 3 2019-09-04T06:35:25.778+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 23040 2019-09-04T06:35:25.778+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:25.778+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:35:25.778+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23039 2019-09-04T06:35:25.778+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23039 2019-09-04T06:35:25.778+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23042 2019-09-04T06:35:25.778+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23042 2019-09-04T06:35:25.778+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578925, 1), t: 1 }({ ts: Timestamp(1567578925, 1), t: 1 }) 2019-09-04T06:35:25.778+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578925, 1) 2019-09-04T06:35:25.778+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23043 2019-09-04T06:35:25.778+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578925, 1) } } ] } sort: {} projection: {} 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578925, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578925, 1) || First: notFirst: full path: ts 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
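The D5 QUERY narration above is the subplanner taking the minvalid bookkeeping query apart: it plans the first of the two $or branches against the lone _id index and ends with "outputted 0 indexed solutions"; the second branch and the combined $or, which follow below, end the same way. The same query shape can be replayed from a client to inspect the plan it settles on. A minimal sketch, assuming PyMongo >= 3.12 (for directConnection), an illustrative host from this cluster, and a connection permitted to read the internal local database:

```python
# Field names follow the records above: t = term, ts = oplog timestamp.
from bson.timestamp import Timestamp
from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                     directConnection=True,
                     readPreference="secondaryPreferred")
minvalid = client.local["replset.minvalid"]

query = {"$or": [{"t": {"$lt": 1}},
                 {"t": 1, "ts": {"$lt": Timestamp(1567578925, 1)}}]}

# With only the _id index present, neither $or branch is indexable, so
# the winning plan is the collection-scan fallback (wrapped in a SUBPLAN
# stage for the $or) that the trace above narrates.
print(minvalid.find(query).explain()["queryPlanner"]["winningPlan"])
```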
2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578925, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578925, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578925, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
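Each pass ends with "Planner: outputted 0 indexed solutions." for the same reason: local.replset.minvalid carries only its default _id index, and neither t nor ts is covered, so the collection scan printed below is the only candidate plan. Purely to illustrate the subplanner mechanism, not as something to do to internal replication state, a compound index over the rated fields would give each branch an indexed solution. This reuses minvalid and query from the sketch above:

```python
# Illustration only: replset.minvalid is internal replication state;
# indexing it on a live node is not a recommended operation.
minvalid.create_index([("t", 1), ("ts", 1)])

# Each $or branch is now satisfiable by { t: 1, ts: 1 }, so the
# subplanner can emit indexed solutions instead of collection scans.
print(minvalid.find(query).explain()["queryPlanner"]["winningPlan"])
```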
2019-09-04T06:35:25.778+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578925, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:25.778+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23043 2019-09-04T06:35:25.778+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:25.778+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:25.778+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578925, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578925776), o: { $v: 1, $set: { ping: new Date(1567578925775) } } }, oplog application mode: Secondary 2019-09-04T06:35:25.778+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578925, 1) 2019-09-04T06:35:25.778+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 23045 2019-09-04T06:35:25.778+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" } 2019-09-04T06:35:25.778+0000 D2 STORAGE [repl-writer-worker-7] WiredTigerSizeStorer::store Marking table:config/collection/28--6194257481163143499 dirty, numRecords: 24, dataSize: 2000, use_count: 3 2019-09-04T06:35:25.778+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:35:25.778+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 23045 2019-09-04T06:35:25.778+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:25.779+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578925, 1), t: 1 }({ ts: Timestamp(1567578925, 1), t: 1 }) 2019-09-04T06:35:25.779+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578925, 1) 2019-09-04T06:35:25.779+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23044 2019-09-04T06:35:25.779+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:35:25.779+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:25.779+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:35:25.779+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:25.779+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:25.779+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:25.779+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23044 2019-09-04T06:35:25.779+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578925, 1) 2019-09-04T06:35:25.779+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23048 2019-09-04T06:35:25.779+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23048 2019-09-04T06:35:25.779+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578925, 1), t: 1 }({ ts: Timestamp(1567578925, 1), t: 1 }) 2019-09-04T06:35:25.779+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:25.779+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), appliedOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, appliedWallTime: new Date(1567578925776), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:25.779+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1597 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:55.779+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), appliedOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, appliedWallTime: new Date(1567578925776), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578917, 2), t: 1 }, lastCommittedWall: new Date(1567578917949), lastOpVisible: { ts: Timestamp(1567578917, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:25.779+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:25.779+0000 D2 ASIO [RS] Request 1597 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } 2019-09-04T06:35:25.779+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:25.779+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:25.779+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:25.780+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578925, 1), t: 1 } 2019-09-04T06:35:25.780+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1598 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:35.780+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578917, 2), t: 1 } } 2019-09-04T06:35:25.780+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:25.780+0000 D2 ASIO [RS] Request 1598 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpApplied: { ts: Timestamp(1567578925, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } 2019-09-04T06:35:25.780+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new 
Date(1567578925776), lastOpApplied: { ts: Timestamp(1567578925, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:25.780+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:25.780+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:25.780+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578920, 1) 2019-09-04T06:35:25.780+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:36.845+0000 2019-09-04T06:35:25.780+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:36.816+0000 2019-09-04T06:35:25.780+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:25.780+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:54.839+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn481] Got notified of new snapshot: { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1599 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:35.780+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578925, 1), t: 1 } } 2019-09-04T06:35:25.780+0000 D3 REPL [conn481] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.823+0000 2019-09-04T06:35:25.780+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn493] Got notified of new snapshot: { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn493] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.825+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn458] Got notified of new snapshot: { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn458] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.734+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn495] Got notified of new snapshot: { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn495] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.128+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn482] Got notified of new snapshot: { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn482] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.644+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn490] Got notified of new snapshot: { ts: 
Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn490] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn484] Got notified of new snapshot: { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn484] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn501] Got notified of new snapshot: { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn501] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:52.054+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn503] Got notified of new snapshot: { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn503] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:55.060+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn486] Got notified of new snapshot: { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn486] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.679+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn480] Got notified of new snapshot: { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn480] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.838+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn479] Got notified of new snapshot: { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn479] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:26.309+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn488] Got notified of new snapshot: { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn488] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:44.309+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn487] Got notified of new snapshot: { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn487] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.846+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn471] Got notified of new snapshot: { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn471] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.683+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn465] Got notified of new snapshot: { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn465] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.846+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn498] Got notified of new snapshot: { ts: Timestamp(1567578925, 1), t: 1 }, 2019-09-04T06:35:25.776+0000 2019-09-04T06:35:25.780+0000 D3 REPL [conn498] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:25.781+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:35:25.781+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:25.781+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { 
durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), appliedOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, appliedWallTime: new Date(1567578925776), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:25.781+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1600 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:55.781+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), appliedOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, appliedWallTime: new Date(1567578925776), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, durableWallTime: new Date(1567578917949), appliedOpTime: { ts: Timestamp(1567578917, 2), t: 1 }, appliedWallTime: new Date(1567578917949), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:25.781+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:25.782+0000 D2 ASIO [RS] Request 1600 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } 2019-09-04T06:35:25.782+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: 
Timestamp(1567578925, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:25.782+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:25.782+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:25.840+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:25.878+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578925, 1) 2019-09-04T06:35:25.940+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:25.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:25.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:26.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:26.040+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:26.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:26.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:26.140+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:26.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:26.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:26.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:26.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:26.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 4126E13300A05D574C344B868C0A61E99A641B46), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:26.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:26.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 4126E13300A05D574C344B868C0A61E99A641B46), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:26.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 4126E13300A05D574C344B868C0A61E99A641B46), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:26.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: 
Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), opTime: { ts: Timestamp(1567578925, 1), t: 1 }, wallTime: new Date(1567578925776) } 2019-09-04T06:35:26.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 4126E13300A05D574C344B868C0A61E99A641B46), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:26.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:26.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:26.241+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:26.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:26.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:26.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:26.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:26.299+0000 I NETWORK [listener] connection accepted from 10.108.2.60:45038 #504 (85 connections now open) 2019-09-04T06:35:26.299+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:26.299+0000 D2 COMMAND [conn504] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:26.299+0000 I NETWORK [conn504] received client metadata from 10.108.2.60:45038 conn504: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:26.299+0000 I COMMAND [conn504] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:26.310+0000 I COMMAND [conn479] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578893, 1), signature: { hash: BinData(0, 68688A86A60615DADD724D5C37EF1F723A2E3681), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:26.310+0000 D1 - [conn479] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:26.310+0000 W - [conn479] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:26.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:26.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:26.328+0000 I - [conn479] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"5617
48F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:26.328+0000 D1 COMMAND [conn479] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578893, 1), signature: { hash: BinData(0, 68688A86A60615DADD724D5C37EF1F723A2E3681), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:26.328+0000 D1 - [conn479] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:26.328+0000 W - [conn479] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:26.341+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:26.348+0000 I - [conn479] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23Service
ExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : 
"7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) 
[0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:26.348+0000 W COMMAND [conn479] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:26.348+0000 I COMMAND [conn479] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578893, 1), signature: { hash: BinData(0, 68688A86A60615DADD724D5C37EF1F723A2E3681), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:35:26.348+0000 D2 NETWORK [conn479] Session from 10.108.2.60:45020 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:26.348+0000 I NETWORK [conn479] end connection 10.108.2.60:45020 (84 connections now open) 2019-09-04T06:35:26.441+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:26.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:26.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:26.541+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:26.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:26.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:26.601+0000 D2 COMMAND [conn49] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578917, 1), signature: { hash: BinData(0, 591599C827DFAC23F7434220FBEB5F1338DA9F08), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" } 2019-09-04T06:35:26.601+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } } } 2019-09-04T06:35:26.601+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:35:26.601+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578917, 1), signature: { hash: BinData(0, 591599C827DFAC23F7434220FBEB5F1338DA9F08), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578925, 1) 2019-09-04T06:35:26.601+0000 D2 QUERY [conn49] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:35:26.601+0000 I COMMAND [conn49] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578908, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578917, 1), signature: { hash: BinData(0, 591599C827DFAC23F7434220FBEB5F1338DA9F08), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578908, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:35:26.641+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:26.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:26.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:26.692+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:26.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:26.695+0000 D2 COMMAND [conn49] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578925, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 4126E13300A05D574C344B868C0A61E99A641B46), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578925, 1), t: 1 } }, $db: "config" } 2019-09-04T06:35:26.695+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578925, 1), t: 1 } } } 2019-09-04T06:35:26.695+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:35:26.695+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578925, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: 
BinData(0, 4126E13300A05D574C344B868C0A61E99A641B46), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578925, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578925, 1) 2019-09-04T06:35:26.695+0000 D5 QUERY [conn49] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:26.695+0000 D5 QUERY [conn49] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:35:26.695+0000 D5 QUERY [conn49] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:35:26.695+0000 D5 QUERY [conn49] Rated tree: $and 2019-09-04T06:35:26.695+0000 D5 QUERY [conn49] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:26.695+0000 D5 QUERY [conn49] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:26.695+0000 D2 QUERY [conn49] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:26.695+0000 D3 STORAGE [conn49] WT begin_transaction for snapshot id 23067 2019-09-04T06:35:26.695+0000 D3 STORAGE [conn49] WT rollback_transaction for snapshot id 23067 2019-09-04T06:35:26.695+0000 I COMMAND [conn49] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578925, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 4126E13300A05D574C344B868C0A61E99A641B46), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578925, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:35:26.741+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:26.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:26.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:26.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:26.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:26.778+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:26.778+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:26.778+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578925, 1) 2019-09-04T06:35:26.778+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23071 2019-09-04T06:35:26.778+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns:
"local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:26.778+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:26.778+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23071 2019-09-04T06:35:26.779+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23074 2019-09-04T06:35:26.779+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23074 2019-09-04T06:35:26.779+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578925, 1), t: 1 }({ ts: Timestamp(1567578925, 1), t: 1 }) 2019-09-04T06:35:26.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:26.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:26.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:26.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:26.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1601) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:26.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1601 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:36.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:26.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:56.839+0000 2019-09-04T06:35:26.839+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:35:25.063+0000 2019-09-04T06:35:26.839+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:35:26.234+0000 2019-09-04T06:35:26.839+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:35:25.063+0000 2019-09-04T06:35:26.839+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:35:35.063+0000 2019-09-04T06:35:26.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:56.839+0000 2019-09-04T06:35:26.839+0000 D2 ASIO [Replication] Request 1601 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), opTime: { ts: Timestamp(1567578925, 1), t: 1 }, wallTime: new Date(1567578925776), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } 
2019-09-04T06:35:26.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), opTime: { ts: Timestamp(1567578925, 1), t: 1 }, wallTime: new Date(1567578925776), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:26.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:26.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1601) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), opTime: { ts: Timestamp(1567578925, 1), t: 1 }, wallTime: new Date(1567578925776), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } 2019-09-04T06:35:26.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:26.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:35:36.816+0000 2019-09-04T06:35:26.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:35:38.008+0000 2019-09-04T06:35:26.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:26.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:26.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:56.839+0000 2019-09-04T06:35:26.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:28.839Z 2019-09-04T06:35:26.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:56.839+0000 2019-09-04T06:35:26.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:26.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1602) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:26.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote 
command request: RemoteCommand 1602 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:36.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:26.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:56.839+0000 2019-09-04T06:35:26.840+0000 D2 ASIO [Replication] Request 1602 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), opTime: { ts: Timestamp(1567578925, 1), t: 1 }, wallTime: new Date(1567578925776), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } 2019-09-04T06:35:26.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), opTime: { ts: Timestamp(1567578925, 1), t: 1 }, wallTime: new Date(1567578925776), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:26.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:26.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1602) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), opTime: { ts: Timestamp(1567578925, 1), t: 1 }, wallTime: new Date(1567578925776), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } 
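
[Annotation] Most of the remaining chatter (conn5, conn13, conn26, conn58, conn59, conn60, conn75, ...) is { isMaster: 1 } polling: each mongos and monitoring client re-checks this node at sub-second intervals to track the replica-set topology. The same command can be issued by hand; a minimal sketch, assuming the logged host is reachable:

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019/", directConnection=True)
reply = client.admin.command("isMaster")

# On this node the reply describes a secondary of replica set
# "configrs", matching state: 2 in the heartbeat responses above.
print(reply["ismaster"], reply["secondary"], reply["setName"])
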
2019-09-04T06:35:26.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:26.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:28.840Z 2019-09-04T06:35:26.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:56.839+0000 2019-09-04T06:35:26.841+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:26.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:26.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:26.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:26.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:26.941+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:26.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:26.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:27.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:27.042+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:27.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:27.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:27.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 4126E13300A05D574C344B868C0A61E99A641B46), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:27.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:27.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 4126E13300A05D574C344B868C0A61E99A641B46), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:27.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 4126E13300A05D574C344B868C0A61E99A641B46), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:27.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), opTime: { ts: Timestamp(1567578925, 1), t: 1 }, wallTime: new Date(1567578925776) } 2019-09-04T06:35:27.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { 
replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 4126E13300A05D574C344B868C0A61E99A641B46), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:27.142+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:27.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:27.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:27.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:27.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:27.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:27.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:27.242+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:27.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:27.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:27.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:27.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:27.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:27.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:27.342+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:27.442+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:27.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:27.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:27.542+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:27.561+0000 I NETWORK [listener] connection accepted from 10.108.2.61:38106 #505 (85 connections now open) 2019-09-04T06:35:27.561+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:27.561+0000 D2 COMMAND [conn505] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:27.561+0000 I NETWORK [conn505] received client metadata from 10.108.2.61:38106 conn505: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 
2019-09-04T06:35:27.561+0000 I COMMAND [conn505] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:27.562+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:35:27.562+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:35:27.562+0000 D2 COMMAND [conn90] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:27.562+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:35:27.565+0000 D2 COMMAND [conn505] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578921, 1), signature: { hash: BinData(0, F00E7B6B729CE76FBA9DE1FC1A42C69A8B06DDE7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:27.565+0000 D1 REPL [conn505] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578925, 1), t: 1 } 2019-09-04T06:35:27.565+0000 D3 REPL [conn505] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:57.575+0000 2019-09-04T06:35:27.642+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:27.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:27.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:27.693+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:27.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:27.713+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:27.713+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:27.742+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:27.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:27.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:27.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:27.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:27.778+0000 D3 STORAGE [ReplBatcher] 
setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:27.778+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:27.778+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578925, 1) 2019-09-04T06:35:27.778+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23100 2019-09-04T06:35:27.778+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:27.778+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:27.778+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23100 2019-09-04T06:35:27.779+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23103 2019-09-04T06:35:27.779+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23103 2019-09-04T06:35:27.779+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578925, 1), t: 1 }({ ts: Timestamp(1567578925, 1), t: 1 }) 2019-09-04T06:35:27.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:27.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:27.843+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:27.943+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:27.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:27.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:28.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:28.043+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:28.143+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:28.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:28.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:28.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:28.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:28.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:28.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:28.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 71D640AE032DCFE6D391C60ACE98E430FF75A7E1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:28.234+0000 D2 COMMAND 
[conn28] command: replSetHeartbeat 2019-09-04T06:35:28.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 71D640AE032DCFE6D391C60ACE98E430FF75A7E1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:28.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 71D640AE032DCFE6D391C60ACE98E430FF75A7E1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:28.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), opTime: { ts: Timestamp(1567578925, 1), t: 1 }, wallTime: new Date(1567578925776) } 2019-09-04T06:35:28.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 71D640AE032DCFE6D391C60ACE98E430FF75A7E1), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:28.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:28.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:28.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:28.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:28.243+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:28.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:28.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:28.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:28.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:28.343+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:28.444+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:28.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:28.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:28.544+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:28.644+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:28.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:28.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:28.693+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:28.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:28.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:28.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:28.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:28.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:28.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:28.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:28.744+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:28.745+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46806 #506 (86 connections now open) 2019-09-04T06:35:28.745+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:28.746+0000 D2 COMMAND [conn506] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, 
saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:28.746+0000 I NETWORK [conn506] received client metadata from 10.108.2.64:46806 conn506: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:28.746+0000 I COMMAND [conn506] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:28.750+0000 D2 COMMAND [conn506] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 3DCD6584CE1F58263916AC072D4A54C85B98128C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:28.750+0000 D1 REPL [conn506] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578925, 1), t: 1 } 2019-09-04T06:35:28.750+0000 D3 REPL [conn506] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:58.760+0000 2019-09-04T06:35:28.778+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:28.778+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:28.778+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578925, 1) 2019-09-04T06:35:28.778+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23124 2019-09-04T06:35:28.778+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:28.778+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:28.778+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23124 2019-09-04T06:35:28.779+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23127 2019-09-04T06:35:28.779+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23127 2019-09-04T06:35:28.779+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578925, 1), t: 1 }({ ts: Timestamp(1567578925, 1), t: 1 }) 2019-09-04T06:35:28.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:28.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:28.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 
2019-09-04T06:35:28.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1603) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:28.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1603 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:38.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:28.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:56.839+0000 2019-09-04T06:35:28.839+0000 D2 ASIO [Replication] Request 1603 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), opTime: { ts: Timestamp(1567578925, 1), t: 1 }, wallTime: new Date(1567578925776), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } 2019-09-04T06:35:28.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), opTime: { ts: Timestamp(1567578925, 1), t: 1 }, wallTime: new Date(1567578925776), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:28.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:28.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1603) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), opTime: { ts: Timestamp(1567578925, 1), t: 1 }, wallTime: new Date(1567578925776), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: 
Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } 2019-09-04T06:35:28.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:28.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:35:38.008+0000 2019-09-04T06:35:28.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:35:38.899+0000 2019-09-04T06:35:28.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:28.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:30.839Z 2019-09-04T06:35:28.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:28.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:58.839+0000 2019-09-04T06:35:28.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:58.839+0000 2019-09-04T06:35:28.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:28.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1604) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:28.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1604 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:38.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:28.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:58.839+0000 2019-09-04T06:35:28.840+0000 D2 ASIO [Replication] Request 1604 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), opTime: { ts: Timestamp(1567578925, 1), t: 1 }, wallTime: new Date(1567578925776), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } 2019-09-04T06:35:28.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), opTime: { ts: Timestamp(1567578925, 1), t: 1 }, wallTime: new Date(1567578925776), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), 
lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:28.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:28.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1604) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), opTime: { ts: Timestamp(1567578925, 1), t: 1 }, wallTime: new Date(1567578925776), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578925, 1) } 2019-09-04T06:35:28.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:28.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:30.840Z 2019-09-04T06:35:28.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:58.839+0000 2019-09-04T06:35:28.844+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:28.944+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:28.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:28.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:29.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:29.044+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:29.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 71D640AE032DCFE6D391C60ACE98E430FF75A7E1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:29.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:29.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 71D640AE032DCFE6D391C60ACE98E430FF75A7E1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:29.063+0000 D2 REPL_HB [conn34] Processing 
heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 71D640AE032DCFE6D391C60ACE98E430FF75A7E1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:29.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), opTime: { ts: Timestamp(1567578925, 1), t: 1 }, wallTime: new Date(1567578925776) } 2019-09-04T06:35:29.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 71D640AE032DCFE6D391C60ACE98E430FF75A7E1), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:29.144+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:29.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:29.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:29.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:29.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:29.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:29.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:29.243+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:29.243+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:29.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:29.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:29.244+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:29.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:29.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:29.345+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:29.445+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:29.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:29.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:29.545+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:29.645+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:29.693+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:29.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:29.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:29.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:29.730+0000 D2 COMMAND [conn464] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 21870DB0FD441442746D1E915D108B530DCF31E8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:29.730+0000 D1 REPL [conn464] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578925, 1), t: 1 } 2019-09-04T06:35:29.730+0000 D3 REPL [conn464] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:59.740+0000 2019-09-04T06:35:29.743+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:29.743+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:29.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:29.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:29.745+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:29.778+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, 
provided timestamp: none 2019-09-04T06:35:29.779+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:29.779+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578925, 1) 2019-09-04T06:35:29.779+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23146 2019-09-04T06:35:29.779+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:29.779+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:29.779+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23146 2019-09-04T06:35:29.779+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23149 2019-09-04T06:35:29.779+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23149 2019-09-04T06:35:29.779+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578925, 1), t: 1 }({ ts: Timestamp(1567578925, 1), t: 1 }) 2019-09-04T06:35:29.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:29.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:29.845+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:29.945+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:29.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:29.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:30.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:30.002+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:35:30.002+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:35:30.002+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:35:30.008+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:35:30.008+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:35:30.009+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:35:30.009+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:35:30.009+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:35:30.009+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", 
payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:35:30.009+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:30.010+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:30.011+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:30.011+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:35:30.011+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:35:30.011+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:35:30.011+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:30.011+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:35:30.011+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:35:30.011+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:35:30.011+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:35:30.011+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:35:30.011+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:35:30.011+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:35:30.011+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:30.011+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:30.011+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:30.011+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578925, 1) 2019-09-04T06:35:30.011+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23159 2019-09-04T06:35:30.011+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23159 2019-09-04T06:35:30.011+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:30.011+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:30.011+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:35:30.012+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:35:30.012+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:30.012+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:35:30.012+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:35:30.012+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:35:30.012+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578925, 1) 2019-09-04T06:35:30.012+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23162 2019-09-04T06:35:30.012+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23162 2019-09-04T06:35:30.012+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:30.012+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:35:30.012+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:30.012+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:35:30.012+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:35:30.012+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:35:30.012+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578925, 1) 2019-09-04T06:35:30.012+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23164 2019-09-04T06:35:30.012+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23164 2019-09-04T06:35:30.012+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:30.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:35:30.014+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. 
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:35:30.014+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:35:30.014+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:30.014+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23167 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23167 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23168 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23168 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23169 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23169 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23170 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23170 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23171 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23171 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23172 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23172 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23173 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23173 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23174 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23174 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23175 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:35:30.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23175 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23176 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23176 
2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23177 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23177 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23178 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23178 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23179 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23179 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23180 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23180 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23181 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
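
NOTE: The D3 STORAGE trail here, one "looking up metadata for" lookup per collection (repeated roughly once per index), each wrapped in a short-lived WT begin_transaction/rollback_transaction pair, is the durable-catalog walk behind the listDatabases and dbStats commands that conn90 completes further down with $readPreference secondaryPreferred. A minimal pymongo sketch of the same sweep from the client side; host, port and replica-set name are taken from this log, while the script itself is an illustrative assumption, not part of the logged deployment:

    # Illustrative sketch only: replays the listDatabases + per-database
    # dbStats sweep that conn90 performs in this log. Host/port/replSet come
    # from the log; nothing here is prescribed by the server itself.
    from pymongo import MongoClient, ReadPreference

    client = MongoClient("cmodb803.togewa.com", 27019, replicaSet="configrs")

    # secondaryPreferred mirrors the $readPreference on conn90's commands.
    databases = client.admin.command(
        "listDatabases", read_preference=ReadPreference.SECONDARY_PREFERRED
    )["databases"]

    for info in databases:
        # One dbStats per database, matching the D2 COMMAND entries below;
        # on the server each run triggers the catalog lookups logged above.
        stats = client[info["name"]].command(
            "dbStats", read_preference=ReadPreference.SECONDARY_PREFERRED
        )
        print(info["name"], stats["collections"], stats["dataSize"])
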
2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23181 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23182 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23182 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23183 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23183 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23184 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23184 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23185 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 23185 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23186 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23186 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23187 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23187 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23188 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:35:30.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23188 2019-09-04T06:35:30.015+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:35:30.016+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23190 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23190 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23191 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23191 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23192 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23192 2019-09-04T06:35:30.016+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:30.016+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23194 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23194 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23195 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23195 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23196 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23196 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23197 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23197 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23198 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23198 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23199 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23199 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23200 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23200 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 23201 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23201 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23202 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23202 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23203 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23203 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23204 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23204 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23205 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23205 2019-09-04T06:35:30.016+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:30.016+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:35:30.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23207 2019-09-04T06:35:30.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23207 2019-09-04T06:35:30.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23208 2019-09-04T06:35:30.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23208 2019-09-04T06:35:30.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23209 2019-09-04T06:35:30.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23209 2019-09-04T06:35:30.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23210 2019-09-04T06:35:30.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23210 2019-09-04T06:35:30.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23211 2019-09-04T06:35:30.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23211 2019-09-04T06:35:30.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23212 2019-09-04T06:35:30.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23212 2019-09-04T06:35:30.017+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:30.045+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:30.145+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:30.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:30.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:35:30.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:30.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:30.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 71D640AE032DCFE6D391C60ACE98E430FF75A7E1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:30.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:30.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 71D640AE032DCFE6D391C60ACE98E430FF75A7E1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:30.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 71D640AE032DCFE6D391C60ACE98E430FF75A7E1), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:30.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), opTime: { ts: Timestamp(1567578925, 1), t: 1 }, wallTime: new Date(1567578925776) } 2019-09-04T06:35:30.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578927, 1), signature: { hash: BinData(0, 71D640AE032DCFE6D391C60ACE98E430FF75A7E1), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:30.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:30.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
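
NOTE: The replSetHeartbeat round-trip above is internal traffic between set members (cmodb804 sends its config version and term; the response carries this node's state, syncingTo, and durableOpTime/opTime), and drivers never issue it themselves. The closest supported view of the same state is replSetGetStatus. A hedged sketch, again reusing the hostname and replica-set name from this log:

    # Sketch, assuming the configrs topology seen in this log: read the
    # replication state that the replSetHeartbeat exchange above carries,
    # via the supported replSetGetStatus command.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019, replicaSet="configrs")
    status = client.admin.command("replSetGetStatus")

    print(status["set"], status["myState"])  # e.g. configrs / 2 (SECONDARY)
    for member in status["members"]:
        # optimeDate corresponds roughly to the wallTime field in the
        # heartbeat response logged above.
        print(member["name"], member["stateStr"], member.get("optimeDate"))
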
2019-09-04T06:35:30.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:30.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:30.246+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:30.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:30.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:30.346+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:30.422+0000 D2 COMMAND [conn496] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578930, 1), signature: { hash: BinData(0, C7B41E71C8B299289F009C76596DCCE73A882CDE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:30.423+0000 D1 REPL [conn496] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578925, 1), t: 1 } 2019-09-04T06:35:30.423+0000 D3 REPL [conn496] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.432+0000 2019-09-04T06:35:30.443+0000 D2 ASIO [RS] Request 1599 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578930, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578930440), o: { $v: 1, $set: { ping: new Date(1567578930439) } } }, { ts: Timestamp(1567578930, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578930440), o: { $v: 1, $set: { ping: new Date(1567578930438) } } }, { ts: Timestamp(1567578930, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578930440), o: { $v: 1, $set: { ping: new Date(1567578930440) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpApplied: { ts: Timestamp(1567578930, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } 2019-09-04T06:35:30.443+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: 
[ { ts: Timestamp(1567578930, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578930440), o: { $v: 1, $set: { ping: new Date(1567578930439) } } }, { ts: Timestamp(1567578930, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578930440), o: { $v: 1, $set: { ping: new Date(1567578930438) } } }, { ts: Timestamp(1567578930, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578930440), o: { $v: 1, $set: { ping: new Date(1567578930440) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpApplied: { ts: Timestamp(1567578930, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:30.443+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:30.443+0000 D2 REPL [replication-1] oplog fetcher read 3 operations from remote oplog starting at ts: Timestamp(1567578930, 1) and ending at ts: Timestamp(1567578930, 3) 2019-09-04T06:35:30.443+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:38.899+0000 2019-09-04T06:35:30.443+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:41.631+0000 2019-09-04T06:35:30.443+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:30.443+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:58.839+0000 2019-09-04T06:35:30.443+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578930, 3), t: 1 } 2019-09-04T06:35:30.443+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:30.443+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:30.443+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578925, 1) 2019-09-04T06:35:30.443+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23222 2019-09-04T06:35:30.443+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:30.443+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { 
ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:30.444+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23222 2019-09-04T06:35:30.444+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:30.444+0000 D2 REPL [rsSync-0] replication batch size is 3 2019-09-04T06:35:30.444+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:30.444+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578925, 1) 2019-09-04T06:35:30.444+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578930, 1) } 2019-09-04T06:35:30.444+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23225 2019-09-04T06:35:30.444+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:30.444+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:30.444+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23225 2019-09-04T06:35:30.444+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23150 2019-09-04T06:35:30.444+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23150 2019-09-04T06:35:30.444+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23228 2019-09-04T06:35:30.444+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23228 2019-09-04T06:35:30.444+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:30.444+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 23230 2019-09-04T06:35:30.444+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578930, 1) 2019-09-04T06:35:30.444+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578930, 1) 2019-09-04T06:35:30.444+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578930, 2) 2019-09-04T06:35:30.444+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578930, 2) 2019-09-04T06:35:30.444+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578930, 3) 2019-09-04T06:35:30.444+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578930, 3) 2019-09-04T06:35:30.444+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 23230 2019-09-04T06:35:30.444+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:30.444+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:35:30.444+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23229 2019-09-04T06:35:30.444+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23229 2019-09-04T06:35:30.444+0000 D3 STORAGE [rsSync-0] WT 
begin_transaction for snapshot id 23232 2019-09-04T06:35:30.444+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23232 2019-09-04T06:35:30.444+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578930, 3), t: 1 }({ ts: Timestamp(1567578930, 3), t: 1 }) 2019-09-04T06:35:30.444+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578930, 3) 2019-09-04T06:35:30.444+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23233 2019-09-04T06:35:30.444+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578930, 3) } } ] } sort: {} projection: {} 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and t $eq 1 ts $lt Timestamp(1567578930, 3) Sort: {} Proj: {} ============================= 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578930, 3) || First: notFirst: full path: ts 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578930, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $or $and t $eq 1 ts $lt Timestamp(1567578930, 3) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578930, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:30.444+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578930, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:30.444+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23233 2019-09-04T06:35:30.444+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:30.444+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:30.444+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:30.444+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578930, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578930440), o: { $v: 1, $set: { ping: new Date(1567578930440) } } }, oplog application mode: Secondary 2019-09-04T06:35:30.444+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:30.444+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:30.444+0000 D3 STORAGE [repl-writer-worker-3] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:30.444+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578930, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578930440), o: { $v: 1, $set: { ping: new Date(1567578930438) } } }, oplog application mode: Secondary 2019-09-04T06:35:30.444+0000 D3 REPL [repl-writer-worker-3] applying op: { ts: Timestamp(1567578930, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578930440), o: { $v: 1, $set: { ping: new Date(1567578930439) } } }, oplog application mode: Secondary 2019-09-04T06:35:30.444+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578930, 2) 2019-09-04T06:35:30.444+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578930, 1) 2019-09-04T06:35:30.444+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 23236 2019-09-04T06:35:30.444+0000 D3 
STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 23237 2019-09-04T06:35:30.444+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" } 2019-09-04T06:35:30.444+0000 D2 QUERY [repl-writer-worker-3] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" } 2019-09-04T06:35:30.444+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:35:30.444+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 23236 2019-09-04T06:35:30.444+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578930, 3) 2019-09-04T06:35:30.444+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 23235 2019-09-04T06:35:30.444+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:30.445+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" } 2019-09-04T06:35:30.445+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:35:30.445+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 23235 2019-09-04T06:35:30.445+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:30.444+0000 D4 WRITE [repl-writer-worker-3] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:35:30.445+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 23237 2019-09-04T06:35:30.445+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:30.445+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578930, 3), t: 1 }({ ts: Timestamp(1567578930, 3), t: 1 }) 2019-09-04T06:35:30.445+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578930, 3) 2019-09-04T06:35:30.445+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23234 2019-09-04T06:35:30.445+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalid Tree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:35:30.445+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:30.445+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:35:30.445+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:30.445+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:30.445+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
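
NOTE: At this point the batch of three config.lockpings updates (Timestamp(1567578930, 1) through Timestamp(1567578930, 3)) has been applied by the repl writer workers, and minvalid/appliedThrough have been advanced. The ops came off the sync source's local.oplog.rs via the tailable getMore loop visible in the RS requests around this. A client can read the same entries with a tailable cursor; a hedged pymongo sketch (connection details from this log, the retry loop is an assumed pattern):

    # Sketch: tail local.oplog.rs the way the oplog fetcher above does
    # (repeated getMores on a tailable cursor), printing update ops like the
    # three config.lockpings entries just applied. Illustrative only.
    import time
    from pymongo import MongoClient, CursorType

    client = MongoClient("cmodb803.togewa.com", 27019, replicaSet="configrs")
    oplog = client.local["oplog.rs"]

    # Start at the newest entry, mirroring "batch resetting _lastOpTimeFetched".
    last = oplog.find().sort("$natural", -1).limit(1).next()["ts"]

    while True:
        cursor = oplog.find({"ts": {"$gt": last}},
                            cursor_type=CursorType.TAILABLE_AWAIT)
        for op in cursor:
            last = op["ts"]
            # op["op"] is "u" for the ping updates seen in this log.
            print(op["ts"], op["op"], op["ns"])
        time.sleep(1)  # cursor died (e.g. capped-collection rollover); retry
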
2019-09-04T06:35:30.445+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23234 2019-09-04T06:35:30.445+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578930, 3) 2019-09-04T06:35:30.445+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23242 2019-09-04T06:35:30.445+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23242 2019-09-04T06:35:30.445+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578930, 3), t: 1 }({ ts: Timestamp(1567578930, 3), t: 1 }) 2019-09-04T06:35:30.445+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:30.445+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), appliedOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, appliedWallTime: new Date(1567578925776), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), appliedOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, appliedWallTime: new Date(1567578930440), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), appliedOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, appliedWallTime: new Date(1567578925776), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:30.445+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1605 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:00.445+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), appliedOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, appliedWallTime: new Date(1567578925776), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), appliedOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, appliedWallTime: new Date(1567578930440), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), appliedOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, appliedWallTime: new Date(1567578925776), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:30.445+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:00.445+0000 2019-09-04T06:35:30.445+0000 D2 ASIO [RS] Request 1605 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } 2019-09-04T06:35:30.445+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578925, 1), t: 1 }, lastCommittedWall: new Date(1567578925776), lastOpVisible: { ts: Timestamp(1567578925, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578925, 1), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:30.445+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:30.445+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:00.445+0000 2019-09-04T06:35:30.445+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578930, 3), t: 1 } 2019-09-04T06:35:30.445+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1606 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:40.445+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578925, 1), t: 1 } } 2019-09-04T06:35:30.446+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:00.445+0000 2019-09-04T06:35:30.448+0000 D2 ASIO [RS] Request 1606 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpApplied: { ts: Timestamp(1567578930, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } 2019-09-04T06:35:30.448+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new 
Date(1567578930440), lastOpApplied: { ts: Timestamp(1567578930, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:30.448+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:30.448+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:30.448+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578925, 3) 2019-09-04T06:35:30.448+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:41.631+0000 2019-09-04T06:35:30.448+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:40.908+0000 2019-09-04T06:35:30.448+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:30.448+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1607 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:40.448+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578930, 3), t: 1 } } 2019-09-04T06:35:30.448+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:35:58.839+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn493] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:00.445+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn493] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.825+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn503] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn503] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:55.060+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn501] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn501] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:52.054+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn498] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn498] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn487] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn487] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.846+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn465] Got notified of new snapshot: { ts: 
Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn465] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.846+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn506] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn506] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:58.760+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn464] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn464] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:59.740+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn481] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn481] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.823+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn458] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn458] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.734+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn495] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn495] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.128+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn482] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn482] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.644+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn484] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn484] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn490] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn490] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn480] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn480] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.838+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn488] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn488] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:44.309+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn471] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn471] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.683+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn486] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn486] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.679+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn505] Got notified of new snapshot: { ts: 
Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn505] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:57.575+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn496] Got notified of new snapshot: { ts: Timestamp(1567578930, 3), t: 1 }, 2019-09-04T06:35:30.440+0000 2019-09-04T06:35:30.448+0000 D3 REPL [conn496] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.432+0000 2019-09-04T06:35:30.451+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:35:30.451+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:30.451+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), appliedOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, appliedWallTime: new Date(1567578925776), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), appliedOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, appliedWallTime: new Date(1567578930440), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), appliedOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, appliedWallTime: new Date(1567578925776), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:30.451+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1608 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:00.451+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), appliedOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, appliedWallTime: new Date(1567578925776), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), appliedOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, appliedWallTime: new Date(1567578930440), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, durableWallTime: new Date(1567578925776), appliedOpTime: { ts: Timestamp(1567578925, 1), t: 1 }, appliedWallTime: new Date(1567578925776), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:30.451+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:00.445+0000 2019-09-04T06:35:30.451+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:30.451+0000 D2 ASIO [RS] Request 1608 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, 
syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } 2019-09-04T06:35:30.451+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:30.451+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:30.451+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:00.445+0000 2019-09-04T06:35:30.458+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36862 #507 (87 connections now open) 2019-09-04T06:35:30.458+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:30.458+0000 D2 COMMAND [conn507] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:30.458+0000 I NETWORK [conn507] received client metadata from 10.108.2.55:36862 conn507: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:30.458+0000 I COMMAND [conn507] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:30.458+0000 D2 COMMAND [conn507] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578921, 1), signature: { hash: BinData(0, F00E7B6B729CE76FBA9DE1FC1A42C69A8B06DDE7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:30.458+0000 D1 REPL [conn507] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current 
snapshot: { ts: Timestamp(1567578930, 3), t: 1 } 2019-09-04T06:35:30.458+0000 D3 REPL [conn507] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.468+0000 2019-09-04T06:35:30.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:30.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:30.544+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578930, 3) 2019-09-04T06:35:30.551+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:30.651+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:30.691+0000 D2 COMMAND [conn502] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 3DCD6584CE1F58263916AC072D4A54C85B98128C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:35:30.691+0000 D1 REPL [conn502] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578930, 3), t: 1 } 2019-09-04T06:35:30.691+0000 D3 REPL [conn502] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.701+0000 2019-09-04T06:35:30.693+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:30.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:30.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:30.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:30.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:30.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:30.751+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:30.753+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50312 #508 (88 connections now open) 2019-09-04T06:35:30.753+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:30.753+0000 D2 COMMAND [conn508] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:30.753+0000 I NETWORK [conn508] received client metadata from 10.108.2.50:50312 conn508: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:30.753+0000 I COMMAND [conn508] command admin.$cmd command: isMaster { isMaster: 1, client: { 
driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:30.753+0000 D2 COMMAND [conn508] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578929, 1), signature: { hash: BinData(0, 6F9E7E1D8182235552E89108F51DB316747DE523), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:30.753+0000 D1 REPL [conn508] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578930, 3), t: 1 } 2019-09-04T06:35:30.753+0000 D3 REPL [conn508] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.763+0000 2019-09-04T06:35:30.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:30.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:30.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:30.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1609) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:30.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1609 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:40.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:30.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:35:58.839+0000 2019-09-04T06:35:30.839+0000 D2 ASIO [Replication] Request 1609 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), opTime: { ts: Timestamp(1567578930, 3), t: 1 }, wallTime: new Date(1567578930440), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } 2019-09-04T06:35:30.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: 
Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), opTime: { ts: Timestamp(1567578930, 3), t: 1 }, wallTime: new Date(1567578930440), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:30.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:30.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1609) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), opTime: { ts: Timestamp(1567578930, 3), t: 1 }, wallTime: new Date(1567578930440), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } 2019-09-04T06:35:30.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:30.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:35:40.908+0000 2019-09-04T06:35:30.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:35:41.552+0000 2019-09-04T06:35:30.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:30.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:32.839Z 2019-09-04T06:35:30.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:30.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:00.839+0000 2019-09-04T06:35:30.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:00.839+0000 2019-09-04T06:35:30.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:30.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1610) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:30.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1610 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:40.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 
2019-09-04T06:35:30.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:00.839+0000 2019-09-04T06:35:30.840+0000 D2 ASIO [Replication] Request 1610 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), opTime: { ts: Timestamp(1567578930, 3), t: 1 }, wallTime: new Date(1567578930440), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } 2019-09-04T06:35:30.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), opTime: { ts: Timestamp(1567578930, 3), t: 1 }, wallTime: new Date(1567578930440), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:30.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:30.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1610) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), opTime: { ts: Timestamp(1567578930, 3), t: 1 }, wallTime: new Date(1567578930440), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } 2019-09-04T06:35:30.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:30.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:32.840Z 
2019-09-04T06:35:30.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:00.839+0000 2019-09-04T06:35:30.851+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:30.915+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52348 #509 (89 connections now open) 2019-09-04T06:35:30.915+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:30.915+0000 D2 COMMAND [conn509] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:30.915+0000 I NETWORK [conn509] received client metadata from 10.108.2.73:52348 conn509: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:30.915+0000 I COMMAND [conn509] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:30.915+0000 D2 COMMAND [conn509] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578922, 1), signature: { hash: BinData(0, 7D6FA38919A7B0C53E875A5965620108C7C24ABC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:35:30.915+0000 D1 REPL [conn509] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578930, 3), t: 1 } 2019-09-04T06:35:30.915+0000 D3 REPL [conn509] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.925+0000 2019-09-04T06:35:30.951+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:30.952+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52338 #510 (90 connections now open) 2019-09-04T06:35:30.952+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:30.952+0000 D2 COMMAND [conn510] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:30.952+0000 I NETWORK [conn510] received client metadata from 10.108.2.58:52338 conn510: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: 
"Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:30.952+0000 I COMMAND [conn510] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:30.952+0000 D2 COMMAND [conn510] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578923, 1), signature: { hash: BinData(0, 42F4C0BD7866DEA005B3B91FBF6D097EB91221F1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:30.952+0000 D1 REPL [conn510] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578930, 3), t: 1 } 2019-09-04T06:35:30.952+0000 D3 REPL [conn510] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.962+0000 2019-09-04T06:35:30.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:30.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:31.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:31.051+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:31.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 31D2D31F0D718D64DD7F3990191B1EDD31A97D8F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:31.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:31.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 31D2D31F0D718D64DD7F3990191B1EDD31A97D8F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:31.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 31D2D31F0D718D64DD7F3990191B1EDD31A97D8F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:31.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), opTime: { ts: 
Timestamp(1567578930, 3), t: 1 }, wallTime: new Date(1567578930440) } 2019-09-04T06:35:31.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 31D2D31F0D718D64DD7F3990191B1EDD31A97D8F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:31.152+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:31.193+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:31.193+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:31.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:31.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:31.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:31.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:31.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:31.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:31.252+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:31.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:31.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:31.352+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:31.444+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:31.444+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:31.444+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578930, 3) 2019-09-04T06:35:31.444+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23268 2019-09-04T06:35:31.444+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:31.444+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:31.444+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23268 2019-09-04T06:35:31.445+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23271 2019-09-04T06:35:31.445+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23271 2019-09-04T06:35:31.445+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578930, 3), t: 1 }({ ts: Timestamp(1567578930, 3), t: 1 }) 2019-09-04T06:35:31.452+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
2019-09-04T06:35:31.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:31.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:31.552+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:31.652+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:31.693+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:31.693+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:31.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:31.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:31.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:31.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:31.752+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:31.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:31.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:31.852+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:31.952+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:31.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:31.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:32.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:32.053+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:32.153+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:32.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:32.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:32.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 31D2D31F0D718D64DD7F3990191B1EDD31A97D8F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:32.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:32.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 31D2D31F0D718D64DD7F3990191B1EDD31A97D8F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:32.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 31D2D31F0D718D64DD7F3990191B1EDD31A97D8F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:32.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), opTime: { ts: Timestamp(1567578930, 3), t: 1 }, wallTime: new Date(1567578930440) } 2019-09-04T06:35:32.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 31D2D31F0D718D64DD7F3990191B1EDD31A97D8F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:32.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:32.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:32.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:32.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:32.253+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:32.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:32.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:32.353+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:32.444+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:32.444+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:32.444+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578930, 3) 2019-09-04T06:35:32.444+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23285 2019-09-04T06:35:32.444+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:32.444+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:32.444+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23285 2019-09-04T06:35:32.445+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23288 2019-09-04T06:35:32.445+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23288 2019-09-04T06:35:32.445+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578930, 3), t: 1 }({ ts: Timestamp(1567578930, 3), t: 1 }) 2019-09-04T06:35:32.453+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:32.460+0000 D2 COMMAND [conn13] run command 
admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:32.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:32.553+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:32.653+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:32.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:32.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:32.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:32.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:32.753+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:32.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:32.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:32.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:32.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1611) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:32.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1611 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:42.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:32.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:00.839+0000 2019-09-04T06:35:32.839+0000 D2 ASIO [Replication] Request 1611 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), opTime: { ts: Timestamp(1567578930, 3), t: 1 }, wallTime: new Date(1567578930440), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } 2019-09-04T06:35:32.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), opTime: { ts: Timestamp(1567578930, 3), t: 1 }, wallTime: new Date(1567578930440), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: 
Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:32.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:32.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1611) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), opTime: { ts: Timestamp(1567578930, 3), t: 1 }, wallTime: new Date(1567578930440), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } 2019-09-04T06:35:32.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:32.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:35:41.552+0000 2019-09-04T06:35:32.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:35:44.305+0000 2019-09-04T06:35:32.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:32.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:34.839Z 2019-09-04T06:35:32.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:32.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:02.839+0000 2019-09-04T06:35:32.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:02.839+0000 2019-09-04T06:35:32.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:32.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1612) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:32.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1612 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:42.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:32.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:02.839+0000 2019-09-04T06:35:32.840+0000 D2 ASIO [Replication] Request 1612 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", 
term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), opTime: { ts: Timestamp(1567578930, 3), t: 1 }, wallTime: new Date(1567578930440), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } 2019-09-04T06:35:32.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), opTime: { ts: Timestamp(1567578930, 3), t: 1 }, wallTime: new Date(1567578930440), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:32.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:32.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1612) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), opTime: { ts: Timestamp(1567578930, 3), t: 1 }, wallTime: new Date(1567578930440), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578930, 3) } 2019-09-04T06:35:32.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:32.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:34.840Z 2019-09-04T06:35:32.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:02.839+0000 2019-09-04T06:35:32.853+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:32.954+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:32.960+0000 D2 
COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:32.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:33.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:33.054+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:33.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 31D2D31F0D718D64DD7F3990191B1EDD31A97D8F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:33.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:33.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 31D2D31F0D718D64DD7F3990191B1EDD31A97D8F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:33.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 31D2D31F0D718D64DD7F3990191B1EDD31A97D8F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:33.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), opTime: { ts: Timestamp(1567578930, 3), t: 1 }, wallTime: new Date(1567578930440) } 2019-09-04T06:35:33.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578930, 3), signature: { hash: BinData(0, 31D2D31F0D718D64DD7F3990191B1EDD31A97D8F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:33.154+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:33.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:33.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:33.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:33.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:33.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:33.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:33.254+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:33.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:33.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:33.354+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:33.444+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:33.444+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:33.444+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578930, 3) 2019-09-04T06:35:33.444+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23301 2019-09-04T06:35:33.444+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:33.444+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:33.444+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23301 2019-09-04T06:35:33.445+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23304 2019-09-04T06:35:33.445+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23304 2019-09-04T06:35:33.445+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578930, 3), t: 1 }({ ts: Timestamp(1567578930, 3), t: 1 }) 2019-09-04T06:35:33.454+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:33.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:33.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:33.522+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:35:33.522+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:35:33.522+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:33.522+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:35:33.554+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:33.605+0000 D2 ASIO [RS] Request 1607 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578933, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578933603), o: { 
$v: 1, $set: { ping: new Date(1567578933598) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpApplied: { ts: Timestamp(1567578933, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) } 2019-09-04T06:35:33.605+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578933, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578933603), o: { $v: 1, $set: { ping: new Date(1567578933598) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpApplied: { ts: Timestamp(1567578933, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:33.605+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:33.605+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578933, 1) and ending at ts: Timestamp(1567578933, 1) 2019-09-04T06:35:33.605+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:44.305+0000 2019-09-04T06:35:33.605+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:43.860+0000 2019-09-04T06:35:33.605+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:33.605+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:02.839+0000 2019-09-04T06:35:33.605+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578933, 1), t: 1 } 2019-09-04T06:35:33.605+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:33.605+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:33.605+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578930, 3) 
2019-09-04T06:35:33.605+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23310
2019-09-04T06:35:33.605+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:33.605+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:33.605+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23310
2019-09-04T06:35:33.605+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:33.605+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:35:33.605+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:33.605+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578930, 3)
2019-09-04T06:35:33.605+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23313
2019-09-04T06:35:33.605+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578933, 1) }
2019-09-04T06:35:33.605+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:33.605+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:33.605+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23313
2019-09-04T06:35:33.605+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23305
2019-09-04T06:35:33.605+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23305
2019-09-04T06:35:33.605+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23316
2019-09-04T06:35:33.605+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23316
2019-09-04T06:35:33.605+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:33.605+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 23318
2019-09-04T06:35:33.605+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578933, 1)
2019-09-04T06:35:33.605+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578933, 1)
2019-09-04T06:35:33.605+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 23318
2019-09-04T06:35:33.605+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:33.605+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:35:33.605+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23317
2019-09-04T06:35:33.605+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23317
2019-09-04T06:35:33.605+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23320
2019-09-04T06:35:33.605+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23320
2019-09-04T06:35:33.605+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578933, 1), t: 1 }({ ts: Timestamp(1567578933, 1), t: 1 })
2019-09-04T06:35:33.606+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578933, 1)
2019-09-04T06:35:33.606+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23321
2019-09-04T06:35:33.606+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578933, 1) } } ] } sort: {} projection: {}
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578933, 1) Sort: {} Proj: {} =============================
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578933, 1) || First: notFirst: full path: ts
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578933, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578933, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578933, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578933, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:33.606+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23321
2019-09-04T06:35:33.606+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:33.606+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:33.606+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578933, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578933603), o: { $v: 1, $set: { ping: new Date(1567578933598) } } }, oplog application mode: Secondary
2019-09-04T06:35:33.606+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578933, 1)
2019-09-04T06:35:33.606+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 23323
2019-09-04T06:35:33.606+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }
2019-09-04T06:35:33.606+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:35:33.606+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 23323
2019-09-04T06:35:33.606+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:33.606+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578933, 1), t: 1 }({ ts: Timestamp(1567578933, 1), t: 1 })
2019-09-04T06:35:33.606+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578933, 1)
2019-09-04T06:35:33.606+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23322
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:33.606+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:33.606+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:33.606+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23322
2019-09-04T06:35:33.606+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578933, 1)
2019-09-04T06:35:33.606+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23326
2019-09-04T06:35:33.606+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23326
2019-09-04T06:35:33.606+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578933, 1), t: 1 }({ ts: Timestamp(1567578933, 1), t: 1 })
2019-09-04T06:35:33.606+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:33.606+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), appliedOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, appliedWallTime: new Date(1567578930440), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), appliedOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, appliedWallTime: new Date(1567578930440), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:33.606+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1613 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:03.606+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), appliedOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, appliedWallTime: new Date(1567578930440), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), appliedOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, appliedWallTime: new Date(1567578930440), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:33.606+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:03.606+0000
2019-09-04T06:35:33.606+0000 D2 ASIO [RS] Request 1613 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) }
2019-09-04T06:35:33.606+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:33.607+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:33.607+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:03.607+0000
2019-09-04T06:35:33.607+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578933, 1), t: 1 }
2019-09-04T06:35:33.607+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1614 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:43.607+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578930, 3), t: 1 } }
2019-09-04T06:35:33.607+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:03.607+0000
2019-09-04T06:35:33.610+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:35:33.610+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:33.611+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), appliedOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, appliedWallTime: new Date(1567578930440), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), appliedOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, appliedWallTime: new Date(1567578930440), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:33.611+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1615 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:03.611+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), appliedOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, appliedWallTime: new Date(1567578930440), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, durableWallTime: new Date(1567578930440), appliedOpTime: { ts: Timestamp(1567578930, 3), t: 1 }, appliedWallTime: new Date(1567578930440), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:33.611+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:03.607+0000
2019-09-04T06:35:33.611+0000 D2 ASIO [RS] Request 1615 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) }
2019-09-04T06:35:33.611+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578930, 3), t: 1 }, lastCommittedWall: new Date(1567578930440), lastOpVisible: { ts: Timestamp(1567578930, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578930, 3), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:33.611+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:33.611+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:03.607+0000
2019-09-04T06:35:33.611+0000 D2 ASIO [RS] Request 1614 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpApplied: { ts: Timestamp(1567578933, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) }
2019-09-04T06:35:33.611+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpApplied: { ts: Timestamp(1567578933, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:33.611+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:33.611+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:35:33.611+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.611+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.611+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578928, 1)
2019-09-04T06:35:33.611+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:43.860+0000
2019-09-04T06:35:33.611+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:44.619+0000
2019-09-04T06:35:33.611+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1616 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:43.611+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578933, 1), t: 1 } }
2019-09-04T06:35:33.611+0000 D3 REPL [conn501] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.611+0000 D3 REPL [conn501] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:52.054+0000
2019-09-04T06:35:33.611+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:03.607+0000
2019-09-04T06:35:33.611+0000 D3 REPL [conn487] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.611+0000 D3 REPL [conn487] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.846+0000
2019-09-04T06:35:33.611+0000 D3 REPL [conn490] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.611+0000 D3 REPL [conn490] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000
2019-09-04T06:35:33.611+0000 D3 REPL [conn508] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.611+0000 D3 REPL [conn508] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.763+0000
2019-09-04T06:35:33.611+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:33.611+0000 D3 REPL [conn495] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.611+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:02.839+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn495] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.128+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn484] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn484] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn480] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn480] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.838+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn471] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn471] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.683+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn505] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn505] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:57.575+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn507] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn507] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.468+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn502] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn502] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.701+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn509] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn509] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.925+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn510] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn510] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.962+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn493] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn493] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.825+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn503] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn503] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:55.060+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn498] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn498] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn465] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn465] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.846+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn464] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn464] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:59.740+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn458] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn458] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.734+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn482] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn482] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.644+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn506] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn506] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:58.760+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn488] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn488] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:44.309+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn486] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn486] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.679+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn496] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn496] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.432+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn481] Got notified of new snapshot: { ts: Timestamp(1567578933, 1), t: 1 }, 2019-09-04T06:35:33.603+0000
2019-09-04T06:35:33.612+0000 D3 REPL [conn481] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.823+0000
2019-09-04T06:35:33.654+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:33.705+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578933, 1)
2019-09-04T06:35:33.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:33.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:33.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:33.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:33.754+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:33.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:33.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:33.854+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:33.955+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:33.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:33.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:34.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:34.055+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:34.155+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:34.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:34.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:34.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 4AE2C36C4495616CED1B18F0405A9F99AD03E6BD), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:34.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:35:34.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 4AE2C36C4495616CED1B18F0405A9F99AD03E6BD), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:34.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 4AE2C36C4495616CED1B18F0405A9F99AD03E6BD), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:34.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), opTime: { ts: Timestamp(1567578933, 1), t: 1 }, wallTime: new Date(1567578933603) }
2019-09-04T06:35:34.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 4AE2C36C4495616CED1B18F0405A9F99AD03E6BD), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:34.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:35:34.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:35:34.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:34.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:34.255+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:34.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:34.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:34.355+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:34.455+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:34.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:34.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:34.555+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:34.605+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:34.605+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:34.605+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578933, 1)
2019-09-04T06:35:34.605+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23341
2019-09-04T06:35:34.605+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:34.605+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:34.605+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23341
2019-09-04T06:35:34.606+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23344
2019-09-04T06:35:34.606+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23344
2019-09-04T06:35:34.606+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578933, 1), t: 1 }({ ts: Timestamp(1567578933, 1), t: 1 })
2019-09-04T06:35:34.655+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:34.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:34.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:34.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:34.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:34.756+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:34.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:34.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:34.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:34.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1617) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:34.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1617 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:44.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:34.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:02.839+0000
2019-09-04T06:35:34.839+0000 D2 ASIO [Replication] Request 1617 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), opTime: { ts: Timestamp(1567578933, 1), t: 1 }, wallTime: new Date(1567578933603), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) }
2019-09-04T06:35:34.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), opTime: { ts: Timestamp(1567578933, 1), t: 1 }, wallTime: new Date(1567578933603), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:35:34.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:34.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1617) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), opTime: { ts: Timestamp(1567578933, 1), t: 1 }, wallTime: new Date(1567578933603), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) } 2019-09-04T06:35:34.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:34.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:35:44.619+0000 2019-09-04T06:35:34.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:35:45.468+0000 2019-09-04T06:35:34.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:34.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:36.839Z 2019-09-04T06:35:34.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:34.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:04.839+0000 2019-09-04T06:35:34.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:04.839+0000 2019-09-04T06:35:34.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:34.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1618) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:34.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1618 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:44.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:34.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:04.839+0000 2019-09-04T06:35:34.840+0000 D2 ASIO [Replication] Request 1618 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), opTime: { ts: Timestamp(1567578933, 1), t: 1 }, wallTime: new Date(1567578933603), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) } 2019-09-04T06:35:34.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), opTime: { ts: Timestamp(1567578933, 1), t: 1 }, wallTime: new Date(1567578933603), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:34.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:34.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1618) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), opTime: { ts: Timestamp(1567578933, 1), t: 1 }, wallTime: new Date(1567578933603), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) } 2019-09-04T06:35:34.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:34.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:36.840Z 2019-09-04T06:35:34.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:04.839+0000 2019-09-04T06:35:34.856+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:34.956+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:34.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:34.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:35.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:35.056+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:35.063+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:35.063+0000 D3 REPL [replexec-3] memberData lastupdate 
is: 2019-09-04T06:35:34.839+0000 2019-09-04T06:35:35.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:35:34.840+0000 2019-09-04T06:35:35.063+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:35:34.839+0000 2019-09-04T06:35:35.063+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:35:44.839+0000 2019-09-04T06:35:35.063+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:04.839+0000 2019-09-04T06:35:35.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 4AE2C36C4495616CED1B18F0405A9F99AD03E6BD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:35.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:35.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 4AE2C36C4495616CED1B18F0405A9F99AD03E6BD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:35.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 4AE2C36C4495616CED1B18F0405A9F99AD03E6BD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:35.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), opTime: { ts: Timestamp(1567578933, 1), t: 1 }, wallTime: new Date(1567578933603) } 2019-09-04T06:35:35.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 4AE2C36C4495616CED1B18F0405A9F99AD03E6BD), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:35.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:35.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:35.156+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:35.159+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:35.159+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:35.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:35.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:35.212+0000 D2 COMMAND [conn6] run command 
admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:35.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:35.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:35.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:35.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:35.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:35.256+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:35.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:35.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:35.356+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:35.456+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:35.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:35.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:35.557+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:35.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:35.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:35.606+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:35.606+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:35.606+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578933, 1) 2019-09-04T06:35:35.606+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23362 2019-09-04T06:35:35.606+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:35.606+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:35.606+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23362 2019-09-04T06:35:35.606+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23365 2019-09-04T06:35:35.607+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23365 2019-09-04T06:35:35.607+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578933, 1), t: 1 }({ ts: Timestamp(1567578933, 1), t: 1 }) 2019-09-04T06:35:35.657+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:35.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:35.659+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:35:35.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:35.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:35.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:35.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:35.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:35.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:35.757+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:35.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:35.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:35.857+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:35.957+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:35.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:35.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:36.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:36.057+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:36.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:36.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:36.157+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:36.159+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:36.159+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:36.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:36.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:36.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:36.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:36.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 4AE2C36C4495616CED1B18F0405A9F99AD03E6BD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:36.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:36.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 4AE2C36C4495616CED1B18F0405A9F99AD03E6BD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:36.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 4AE2C36C4495616CED1B18F0405A9F99AD03E6BD), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:36.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), opTime: { ts: Timestamp(1567578933, 1), t: 1 }, wallTime: new Date(1567578933603) } 2019-09-04T06:35:36.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 4AE2C36C4495616CED1B18F0405A9F99AD03E6BD), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:36.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:36.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:36.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:36.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:36.257+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:36.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:36.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:36.358+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:36.458+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:36.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:36.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:36.558+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:36.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:36.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:36.606+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:36.606+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:36.606+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578933, 1) 2019-09-04T06:35:36.606+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23384 2019-09-04T06:35:36.606+0000 D3 STORAGE [ReplBatcher] 
fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:36.606+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:36.606+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23384 2019-09-04T06:35:36.607+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23387 2019-09-04T06:35:36.607+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23387 2019-09-04T06:35:36.607+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578933, 1), t: 1 }({ ts: Timestamp(1567578933, 1), t: 1 }) 2019-09-04T06:35:36.658+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:36.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:36.659+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:36.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:36.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:36.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:36.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:36.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:36.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:36.758+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:36.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:36.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:36.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:36.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1619) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:36.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1619 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:46.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:36.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:04.839+0000 2019-09-04T06:35:36.839+0000 D2 ASIO [Replication] Request 1619 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), opTime: { ts: 
Timestamp(1567578933, 1), t: 1 }, wallTime: new Date(1567578933603), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) } 2019-09-04T06:35:36.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), opTime: { ts: Timestamp(1567578933, 1), t: 1 }, wallTime: new Date(1567578933603), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:36.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:36.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1619) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), opTime: { ts: Timestamp(1567578933, 1), t: 1 }, wallTime: new Date(1567578933603), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) } 2019-09-04T06:35:36.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:36.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:35:45.468+0000 2019-09-04T06:35:36.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:35:48.221+0000 2019-09-04T06:35:36.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:36.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:38.839Z 2019-09-04T06:35:36.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool 
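Note: the replSetHeartbeat exchange above (requestId 1619) is the liveness protocol between the "configrs" members. This node pings the primary cmodb802 about every two seconds, and each good reply from a primary postpones the local election timeout (rescheduled here from 06:35:45.468 to 06:35:48.221). Both intervals are replica-set settings; a quick way to inspect them from the shell (the values in the comments are the 4.2 defaults, assumed rather than taken from this log):

    rs.conf().settings.heartbeatIntervalMillis   // 2000 by default
    rs.conf().settings.electionTimeoutMillis     // 10000 by default; the server adds a random offset when scheduling
    rs.status().members.map(m => ({ name: m.name, state: m.stateStr }))   // who is PRIMARY/SECONDARY
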
2019-09-04T06:35:36.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:36.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:06.839+0000
2019-09-04T06:35:36.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:06.839+0000
2019-09-04T06:35:36.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:36.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1620) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:36.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1620 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:46.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:36.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:06.839+0000
2019-09-04T06:35:36.840+0000 D2 ASIO [Replication] Request 1620 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), opTime: { ts: Timestamp(1567578933, 1), t: 1 }, wallTime: new Date(1567578933603), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) }
2019-09-04T06:35:36.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), opTime: { ts: Timestamp(1567578933, 1), t: 1 }, wallTime: new Date(1567578933603), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:36.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:36.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1620) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), opTime: { ts: Timestamp(1567578933, 1), t: 1 }, wallTime: new Date(1567578933603), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578933, 1) }
2019-09-04T06:35:36.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:35:36.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:38.840Z
2019-09-04T06:35:36.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:06.839+0000
2019-09-04T06:35:36.858+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:36.958+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:36.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:36.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:37.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:37.058+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:37.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 4AE2C36C4495616CED1B18F0405A9F99AD03E6BD), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:37.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:35:37.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 4AE2C36C4495616CED1B18F0405A9F99AD03E6BD), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:37.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 4AE2C36C4495616CED1B18F0405A9F99AD03E6BD), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:37.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), opTime: { ts: Timestamp(1567578933, 1), t: 1 }, wallTime: new Date(1567578933603) }
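Note: nearly all of this capture is debug output (levels D1 through D5) produced by a high systemLog verbosity; the bracketed tags (COMMAND, REPL_HB, STORAGE, EXECUTOR, ...) are log components whose levels can be raised or lowered at runtime without restarting mongod. A minimal illustration from the shell:

    db.getLogComponents()                          // show current per-component verbosity
    db.setLogLevel(0)                              // drop global verbosity back to 0 (info only)
    db.setLogLevel(2, "replication.heartbeats")    // keep just heartbeat tracing at debug level 2
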
2019-09-04T06:35:37.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 4AE2C36C4495616CED1B18F0405A9F99AD03E6BD), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:37.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:37.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:37.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:37.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:37.158+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:37.158+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:37.159+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:37.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:37.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:37.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:37.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:37.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:35:37.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:35:37.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:37.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:37.259+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:37.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:37.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:37.359+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:37.459+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:37.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:37.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:37.559+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:37.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:37.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:37.606+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:37.606+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:37.606+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578933, 1)
2019-09-04T06:35:37.606+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23407
2019-09-04T06:35:37.606+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:37.606+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:37.606+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23407
2019-09-04T06:35:37.607+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23410
2019-09-04T06:35:37.607+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23410
2019-09-04T06:35:37.607+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578933, 1), t: 1 }({ ts: Timestamp(1567578933, 1), t: 1 })
2019-09-04T06:35:37.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:37.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:37.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:37.659+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:37.659+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:37.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:37.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:37.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:37.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:37.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:37.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:37.759+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:37.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:37.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:37.859+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:37.959+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:37.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:37.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:38.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:38.059+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:38.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:38.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:38.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:38.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:38.158+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:38.159+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:38.160+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:38.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:38.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:38.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:38.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:38.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578936, 1), signature: { hash: BinData(0, EB4FCD1C46190C17C0E854CB51837DEAD732B640), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:38.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:35:38.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578936, 1), signature: { hash: BinData(0, EB4FCD1C46190C17C0E854CB51837DEAD732B640), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:38.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578936, 1), signature: { hash: BinData(0, EB4FCD1C46190C17C0E854CB51837DEAD732B640), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:38.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), opTime: { ts: Timestamp(1567578933, 1), t: 1 }, wallTime: new Date(1567578933603) }
2019-09-04T06:35:38.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578936, 1), signature: { hash: BinData(0, EB4FCD1C46190C17C0E854CB51837DEAD732B640), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:38.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:35:38.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:35:38.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:38.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:38.260+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:38.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:38.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:38.360+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:38.411+0000 D2 ASIO [RS] Request 1616 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578938, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578938409), o: { $v: 1, $set: { ping: new Date(1567578938409) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpApplied: { ts: Timestamp(1567578938, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 1) }
2019-09-04T06:35:38.411+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578938, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578938409), o: { $v: 1, $set: { ping: new Date(1567578938409) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpApplied: { ts: Timestamp(1567578938, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:38.411+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:38.411+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578938, 1) and ending at ts: Timestamp(1567578938, 1)
2019-09-04T06:35:38.411+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:48.221+0000
2019-09-04T06:35:38.411+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:49.427+0000
2019-09-04T06:35:38.411+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:38.411+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:06.839+0000
2019-09-04T06:35:38.411+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578938, 1), t: 1 }
2019-09-04T06:35:38.411+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:38.411+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:38.411+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578933, 1)
2019-09-04T06:35:38.411+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23430
2019-09-04T06:35:38.411+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:38.411+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:38.411+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23430
2019-09-04T06:35:38.411+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:38.411+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:38.411+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578933, 1)
2019-09-04T06:35:38.411+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:35:38.411+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23433
2019-09-04T06:35:38.411+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:38.411+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578938, 1) }
2019-09-04T06:35:38.411+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:38.411+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23433
2019-09-04T06:35:38.411+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23411
2019-09-04T06:35:38.411+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23411
2019-09-04T06:35:38.411+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23436
2019-09-04T06:35:38.411+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23436
2019-09-04T06:35:38.411+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool
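Note: request 1616 above is the oplog fetcher's tailing cursor on the sync source returning a single operation, an update (op: "u") to config.lockpings, the collection in which distributed-lock holders record their liveness pings. The batch then moves through the apply pipeline traced around it: the entry is written to the local oplog, oplogTruncateAfterPoint is set so a half-applied batch can be trimmed after a crash, and a pool of 16 repl-writer threads applies the write. The same entry can be read back on any member; an illustrative shell query:

    db.getSiblingDB("local").oplog.rs
      .find({ ns: "config.lockpings", "o2._id": "ConfigServer" })
      .sort({ $natural: -1 })   // newest first
      .limit(1)
      .pretty()
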
2019-09-04T06:35:38.412+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 23438
2019-09-04T06:35:38.412+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578938, 1)
2019-09-04T06:35:38.412+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578938, 1)
2019-09-04T06:35:38.412+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 23438
2019-09-04T06:35:38.412+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:38.412+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:35:38.412+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23437
2019-09-04T06:35:38.412+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23437
2019-09-04T06:35:38.412+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23440
2019-09-04T06:35:38.412+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23440
2019-09-04T06:35:38.412+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578938, 1), t: 1 }({ ts: Timestamp(1567578938, 1), t: 1 })
2019-09-04T06:35:38.412+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578938, 1)
2019-09-04T06:35:38.412+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23441
2019-09-04T06:35:38.412+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578938, 1) } } ] } sort: {} projection: {}
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578938, 1) Sort: {} Proj: {} =============================
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578938, 1) || First: notFirst: full path: ts
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578938, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578938, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578938, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
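Note: the D5 QUERY block above is the planner handling the minvalid bookkeeping read as a rooted $or. The subplanner plans each $or branch independently, finds no usable index for either branch (local.replset.minvalid carries only the mandatory _id index, and neither t nor ts is indexed), and falls back to a collection scan per branch, which is the right plan for a one-document collection. The plan selection can be reproduced with explain; a sketch of an equivalent query, with the timestamp value taken from the log:

    db.getSiblingDB("local").replset.minvalid.find({
      $or: [
        { t: { $lt: 1 } },
        { t: 1, ts: { $lt: Timestamp(1567578938, 1) } }
      ]
    }).explain("queryPlanner")   // the winning plan should be a collection scan
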
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578938, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:38.412+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23441
2019-09-04T06:35:38.412+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:38.412+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:38.412+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578938, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578938409), o: { $v: 1, $set: { ping: new Date(1567578938409) } } }, oplog application mode: Secondary
2019-09-04T06:35:38.412+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578938, 1)
2019-09-04T06:35:38.412+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 23443
2019-09-04T06:35:38.412+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "ConfigServer" }
2019-09-04T06:35:38.412+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:35:38.412+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 23443
2019-09-04T06:35:38.412+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:38.412+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578938, 1), t: 1 }({ ts: Timestamp(1567578938, 1), t: 1 })
2019-09-04T06:35:38.412+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578938, 1)
2019-09-04T06:35:38.412+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23442
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:38.412+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:35:38.412+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:38.412+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23442
2019-09-04T06:35:38.412+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578938, 1)
2019-09-04T06:35:38.412+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23446
2019-09-04T06:35:38.412+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23446
2019-09-04T06:35:38.412+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578938, 1), t: 1 }({ ts: Timestamp(1567578938, 1), t: 1 })
2019-09-04T06:35:38.412+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:38.412+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578938, 1), t: 1 }, appliedWallTime: new Date(1567578938409), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:38.412+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1621 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:08.412+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578938, 1), t: 1 }, appliedWallTime: new Date(1567578938409), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:38.413+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.412+0000
2019-09-04T06:35:38.413+0000 D2 ASIO [RS] Request 1621 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 1) }
2019-09-04T06:35:38.413+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:38.413+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:38.413+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.413+0000
2019-09-04T06:35:38.413+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578938, 1), t: 1 }
2019-09-04T06:35:38.413+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1622 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:48.413+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578933, 1), t: 1 } }
2019-09-04T06:35:38.413+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:35:38.413+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.413+0000
2019-09-04T06:35:38.413+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:38.413+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 1), t: 1 }, durableWallTime: new Date(1567578938409), appliedOpTime: { ts: Timestamp(1567578938, 1), t: 1 }, appliedWallTime: new Date(1567578938409), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:38.413+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1623 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:08.413+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 1), t: 1 }, durableWallTime: new Date(1567578938409), appliedOpTime: { ts: Timestamp(1567578938, 1), t: 1 }, appliedWallTime: new Date(1567578938409), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:35:38.413+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.413+0000
2019-09-04T06:35:38.414+0000 D2 ASIO [RS] Request 1623 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 1) }
2019-09-04T06:35:38.414+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578933, 1), t: 1 }, lastCommittedWall: new Date(1567578933603), lastOpVisible: { ts: Timestamp(1567578933, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578933, 1), $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:38.414+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:35:38.414+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.413+0000
2019-09-04T06:35:38.414+0000 D2 ASIO [RS] Request 1622 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 1), t: 1 }, lastCommittedWall: new Date(1567578938409), lastOpVisible: { ts: Timestamp(1567578938, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578938, 1), t: 1 }, lastCommittedWall: new Date(1567578938409), lastOpApplied: { ts: Timestamp(1567578938, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 1), $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 1) }
2019-09-04T06:35:38.414+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 1), t: 1 }, lastCommittedWall: new Date(1567578938409), lastOpVisible: { ts: Timestamp(1567578938, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578938, 1), t: 1 }, lastCommittedWall: new Date(1567578938409), lastOpApplied: { ts: Timestamp(1567578938, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 1), $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:38.414+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:35:38.414+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:35:38.414+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000
2019-09-04T06:35:38.414+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000
2019-09-04T06:35:38.414+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578933, 1)
2019-09-04T06:35:38.414+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:49.427+0000
2019-09-04T06:35:38.414+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:49.677+0000
2019-09-04T06:35:38.414+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1624 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:48.414+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578938, 1), t: 1 } }
2019-09-04T06:35:38.414+0000 D3 REPL [conn487] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn487] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.846+0000
2019-09-04T06:35:38.414+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.413+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn508] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn508] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.763+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn496] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn496] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.432+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn505] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn505] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:57.575+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn488] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn488] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:44.309+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn510] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn510] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.962+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn490] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn490] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn465] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000
2019-09-04T06:35:38.414+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:38.414+0000 D3 REPL [conn465] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.846+0000
2019-09-04T06:35:38.414+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:06.839+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn458] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn458] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.734+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn506] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn506] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:58.760+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn486] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn486] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.679+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn481] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn481] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.823+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn501] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn501] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:52.054+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn495] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn495] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.128+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn480] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn480] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.838+0000
2019-09-04T06:35:38.414+0000 D3 REPL [conn482] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 },
2019-09-04T06:35:38.409+0000 2019-09-04T06:35:38.414+0000 D3 REPL [conn482] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.644+0000 2019-09-04T06:35:38.414+0000 D3 REPL [conn464] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000 2019-09-04T06:35:38.414+0000 D3 REPL [conn464] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:59.740+0000 2019-09-04T06:35:38.414+0000 D3 REPL [conn498] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000 2019-09-04T06:35:38.414+0000 D3 REPL [conn498] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:38.415+0000 D3 REPL [conn502] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000 2019-09-04T06:35:38.415+0000 D3 REPL [conn502] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.701+0000 2019-09-04T06:35:38.415+0000 D3 REPL [conn507] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000 2019-09-04T06:35:38.415+0000 D3 REPL [conn507] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.468+0000 2019-09-04T06:35:38.415+0000 D3 REPL [conn471] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000 2019-09-04T06:35:38.415+0000 D3 REPL [conn471] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.683+0000 2019-09-04T06:35:38.415+0000 D3 REPL [conn484] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000 2019-09-04T06:35:38.415+0000 D3 REPL [conn484] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:38.415+0000 D3 REPL [conn503] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000 2019-09-04T06:35:38.415+0000 D3 REPL [conn503] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:55.060+0000 2019-09-04T06:35:38.415+0000 D3 REPL [conn493] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000 2019-09-04T06:35:38.415+0000 D3 REPL [conn493] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.825+0000 2019-09-04T06:35:38.415+0000 D3 REPL [conn509] Got notified of new snapshot: { ts: Timestamp(1567578938, 1), t: 1 }, 2019-09-04T06:35:38.409+0000 2019-09-04T06:35:38.415+0000 D3 REPL [conn509] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.925+0000 2019-09-04T06:35:38.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:38.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:38.460+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:38.511+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578938, 1) 2019-09-04T06:35:38.560+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:38.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:38.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:38.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:38.612+0000 I 
COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:38.659+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:38.659+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:38.660+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:38.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:38.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:38.682+0000 D2 ASIO [RS] Request 1624 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578938, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578938650), o: { $v: 1, $set: { ping: new Date(1567578938650) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 1), t: 1 }, lastCommittedWall: new Date(1567578938409), lastOpVisible: { ts: Timestamp(1567578938, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578938, 1), t: 1 }, lastCommittedWall: new Date(1567578938409), lastOpApplied: { ts: Timestamp(1567578938, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 1), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:38.682+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578938, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578938650), o: { $v: 1, $set: { ping: new Date(1567578938650) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 1), t: 1 }, lastCommittedWall: new Date(1567578938409), lastOpVisible: { ts: Timestamp(1567578938, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578938, 1), t: 1 }, lastCommittedWall: new Date(1567578938409), lastOpApplied: { ts: Timestamp(1567578938, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 1), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:38.682+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:38.682+0000 D2 REPL [replication-1] oplog 
fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578938, 2) and ending at ts: Timestamp(1567578938, 2) 2019-09-04T06:35:38.682+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:49.677+0000 2019-09-04T06:35:38.682+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:49.142+0000 2019-09-04T06:35:38.682+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:38.682+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578938, 2), t: 1 } 2019-09-04T06:35:38.682+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:38.682+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:38.682+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578938, 1) 2019-09-04T06:35:38.682+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23455 2019-09-04T06:35:38.682+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:38.682+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:38.682+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23455 2019-09-04T06:35:38.682+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:38.682+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:35:38.682+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:38.682+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578938, 2) } 2019-09-04T06:35:38.682+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578938, 1) 2019-09-04T06:35:38.682+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23458 2019-09-04T06:35:38.682+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:38.682+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:38.682+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23458 2019-09-04T06:35:38.682+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23447 2019-09-04T06:35:38.682+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:06.839+0000 2019-09-04T06:35:38.682+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23447 2019-09-04T06:35:38.682+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23461 2019-09-04T06:35:38.682+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23461 2019-09-04T06:35:38.682+0000 
D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:38.682+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 23463 2019-09-04T06:35:38.682+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578938, 2) 2019-09-04T06:35:38.682+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578938, 2) 2019-09-04T06:35:38.682+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 23463 2019-09-04T06:35:38.682+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:38.682+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:35:38.682+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23462 2019-09-04T06:35:38.682+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23462 2019-09-04T06:35:38.682+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23465 2019-09-04T06:35:38.682+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23465 2019-09-04T06:35:38.682+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578938, 2), t: 1 }({ ts: Timestamp(1567578938, 2), t: 1 }) 2019-09-04T06:35:38.682+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578938, 2) 2019-09-04T06:35:38.682+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23466 2019-09-04T06:35:38.682+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578938, 2) } } ] } sort: {} projection: {} 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578938, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578938, 2) || First: notFirst: full path: ts 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578938, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578938, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:38.682+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:38.683+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578938, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:38.683+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
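The [rsSync-0] and [ReplBatcher] entries above trace a single-operation batch through the secondary's apply path: the fetched config.lockpings update is handed to a repl writer worker, the oplog truncate after point is raised to { : Timestamp(1567578938, 2) } around the write and later cleared to { : Timestamp(0, 0) }, and minvalid is advanced to { ts: Timestamp(1567578938, 2), t: 1 }. The D5 QUERY lines show the subplanner settling on a collection scan because local.replset.minvalid carries only the _id index, so neither branch of the $or predicate yields an indexed solution. A minimal pymongo sketch for inspecting that bookkeeping on this member; the hostname is taken from this log, everything else is illustrative rather than a reproduction of the server's internal reads:

from pymongo import MongoClient, DESCENDING

# Connect directly to this member (cmodb803) rather than the replica set,
# so the reads below reflect exactly this node's local state.
client = MongoClient("mongodb://cmodb803.togewa.com:27019",
                     directConnection=True)
local = client.local

# Newest oplog entry; should match the lastOpApplied optime reported in
# $oplogQueryData above, e.g. Timestamp(1567578938, 2).
last_op = local["oplog.rs"].find_one(sort=[("$natural", DESCENDING)])
print("last applied:", last_op["ts"], last_op["op"], last_op["ns"])

# The document that "setting minvalid to at least: ..." maintains. It has
# no secondary indexes, which is why the planner above can only COLLSCAN.
print("minvalid:", local["replset.minvalid"].find_one())

Since every entry in this trace starts with the same prefix (ISO-8601 timestamp, severity I or D1-D5, component, [thread] tag), run-together stretches of log text can be re-split mechanically; a small sketch, with the regex being illustrative:

import re

# A new entry begins at "<ts> <severity> <COMPONENT> [<thread>]".
# Timestamps that appear inside messages (deadlines, wall times) are not
# followed by a severity/component pair, so the lookahead skips them.
ENTRY_START = re.compile(
    r"(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}\+0000 "
    r"(?:I|W|E|F|D\d?) +[A-Z_-]+ +\[)")

def split_entries(blob):
    # One string in, one list item per log entry out.
    return [part.strip() for part in ENTRY_START.split(blob) if part.strip()]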
2019-09-04T06:35:38.683+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578938, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:38.683+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23466 2019-09-04T06:35:38.683+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:38.683+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:38.683+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578938, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578938650), o: { $v: 1, $set: { ping: new Date(1567578938650) } } }, oplog application mode: Secondary 2019-09-04T06:35:38.683+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578938, 2) 2019-09-04T06:35:38.683+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 23468 2019-09-04T06:35:38.683+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" } 2019-09-04T06:35:38.683+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:35:38.683+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 23468 2019-09-04T06:35:38.683+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:38.683+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578938, 2), t: 1 }({ ts: Timestamp(1567578938, 2), t: 1 }) 2019-09-04T06:35:38.683+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578938, 2) 2019-09-04T06:35:38.683+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23467 2019-09-04T06:35:38.683+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:35:38.683+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:38.683+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:35:38.683+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:38.683+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:38.683+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:38.683+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23467 2019-09-04T06:35:38.683+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578938, 2) 2019-09-04T06:35:38.683+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23472 2019-09-04T06:35:38.683+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23472 2019-09-04T06:35:38.683+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578938, 2), t: 1 }({ ts: Timestamp(1567578938, 2), t: 1 }) 2019-09-04T06:35:38.683+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:38.683+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 1), t: 1 }, durableWallTime: new Date(1567578938409), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 1), t: 1 }, lastCommittedWall: new Date(1567578938409), lastOpVisible: { ts: Timestamp(1567578938, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:38.683+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1625 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:08.683+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 1), t: 1 }, durableWallTime: new Date(1567578938409), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 1), t: 1 }, lastCommittedWall: new Date(1567578938409), lastOpVisible: { ts: Timestamp(1567578938, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:38.683+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.683+0000 2019-09-04T06:35:38.684+0000 D2 ASIO [RS] Request 1625 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:38.684+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:38.684+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:38.684+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.684+0000 2019-09-04T06:35:38.684+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578938, 2), t: 1 } 2019-09-04T06:35:38.684+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1626 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:48.684+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578938, 1), t: 1 } } 2019-09-04T06:35:38.684+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.684+0000 2019-09-04T06:35:38.684+0000 D2 ASIO [RS] Request 1626 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpApplied: { ts: Timestamp(1567578938, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:38.684+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new 
Date(1567578938650), lastOpApplied: { ts: Timestamp(1567578938, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:38.684+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:38.684+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:38.684+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.684+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.684+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:35:38.684+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578933, 2) 2019-09-04T06:35:38.684+0000 D3 REPL [conn495] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn495] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.128+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn505] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn505] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:57.575+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn484] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn484] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn465] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn465] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.846+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn482] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn482] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.644+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn486] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn486] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.679+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn501] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn501] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:52.054+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn480] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn480] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.838+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn464] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 
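Requests 1625 and 1628 above are this secondary's replSetUpdatePosition reports to its upstream updater cmodb804 (a fellow secondary, since chained replication is in play; the primary is cmodb802); each report carries the applied and durable optimes of all three members, and the responses carry the new commit point, which this node adopts before moving its stable optime and oldest_timestamp forward. The same per-member progress is observable from outside via replSetGetStatus; a short sketch, with the host placeholder taken from this log:

from pymongo import MongoClient

client = MongoClient("mongodb://cmodb803.togewa.com:27019",
                     directConnection=True)

status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    # name/stateStr/optimeDate mirror the appliedOpTime entries that this
    # secondary forwards upstream in replSetUpdatePosition.
    print(member["name"], member["stateStr"], member.get("optimeDate"))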
2019-09-04T06:35:38.684+0000 D3 REPL [conn464] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:59.740+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn502] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.684+0000 D3 REPL [conn502] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.701+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn471] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn471] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.683+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn503] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn503] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:55.060+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn508] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn508] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.763+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn481] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn481] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.823+0000 2019-09-04T06:35:38.685+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:38.685+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:49.142+0000 2019-09-04T06:35:38.685+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:48.893+0000 2019-09-04T06:35:38.685+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:38.685+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1627 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:48.685+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578938, 2), t: 1 } } 2019-09-04T06:35:38.685+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:06.839+0000 2019-09-04T06:35:38.685+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.685+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn487] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn487] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.846+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn506] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn506] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:58.760+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn490] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn490] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn488] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 
2019-09-04T06:35:38.685+0000 D3 REPL [conn488] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:44.309+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn510] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn510] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.962+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn507] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn507] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.468+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn496] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn496] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.432+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn498] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn498] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn458] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn458] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:42.734+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn493] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.685+0000 D3 REPL [conn493] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.825+0000 2019-09-04T06:35:38.685+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:38.685+0000 D3 REPL [conn509] Got notified of new snapshot: { ts: Timestamp(1567578938, 2), t: 1 }, 2019-09-04T06:35:38.650+0000 2019-09-04T06:35:38.685+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1628 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:08.685+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, 
durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, durableWallTime: new Date(1567578933603), appliedOpTime: { ts: Timestamp(1567578933, 1), t: 1 }, appliedWallTime: new Date(1567578933603), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:38.685+0000 D3 REPL [conn509] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.925+0000 2019-09-04T06:35:38.685+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.685+0000 2019-09-04T06:35:38.685+0000 D2 ASIO [RS] Request 1628 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:38.685+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:38.685+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:38.685+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.685+0000 2019-09-04T06:35:38.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:38.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:38.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:38.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:38.760+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:38.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:38.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:38.782+0000 D2 
STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578938, 2) 2019-09-04T06:35:38.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:38.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:38.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:38.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1629) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:38.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1629 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:48.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:38.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:06.839+0000 2019-09-04T06:35:38.839+0000 D2 ASIO [Replication] Request 1629 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:38.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:38.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:38.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1629) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: 
Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:38.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:38.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:35:48.893+0000 2019-09-04T06:35:38.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:35:50.066+0000 2019-09-04T06:35:38.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:38.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:40.839Z 2019-09-04T06:35:38.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:38.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.839+0000 2019-09-04T06:35:38.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.839+0000 2019-09-04T06:35:38.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:38.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1630) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:38.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1630 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:48.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:38.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.839+0000 2019-09-04T06:35:38.840+0000 D2 ASIO [Replication] Request 1630 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:38.840+0000 D3 EXECUTOR 
[Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:38.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:38.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1630) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:38.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:38.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:40.840Z 2019-09-04T06:35:38.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.839+0000 2019-09-04T06:35:38.860+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:38.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:38.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:38.960+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:39.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:39.061+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:39.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:39.063+0000 D2 COMMAND [conn34] command: 
replSetHeartbeat 2019-09-04T06:35:39.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:39.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:39.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650) } 2019-09-04T06:35:39.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:39.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:39.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:39.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:39.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:39.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:39.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:39.158+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:39.159+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:39.161+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:39.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:39.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:39.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:39.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:39.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:39.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:39.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:39.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:39.261+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:39.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:39.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:39.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:39.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:39.361+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:39.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:39.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:39.461+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:39.499+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:39.499+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:39.538+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" } 2019-09-04T06:35:39.538+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } } } 2019-09-04T06:35:39.538+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:35:39.538+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578938, 2) 2019-09-04T06:35:39.538+0000 D2 QUERY [conn61] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:35:39.538+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:35:39.538+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" } 2019-09-04T06:35:39.538+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } } } 2019-09-04T06:35:39.538+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:35:39.538+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578938, 2) 2019-09-04T06:35:39.538+0000 D2 QUERY [conn61] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:35:39.538+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:35:39.561+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:39.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:39.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:39.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:39.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:39.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:39.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:39.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:39.659+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:39.661+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:39.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:39.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:39.682+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:39.682+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:39.682+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578938, 2) 2019-09-04T06:35:39.682+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23502 2019-09-04T06:35:39.682+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:39.682+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:39.682+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for 
snapshot id 23502
2019-09-04T06:35:39.683+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23505
2019-09-04T06:35:39.683+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23505
2019-09-04T06:35:39.683+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578938, 2), t: 1 }({ ts: Timestamp(1567578938, 2), t: 1 })
2019-09-04T06:35:39.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:39.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:39.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:39.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:39.761+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:39.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:39.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:39.768+0000 D2 COMMAND [conn61] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" }
2019-09-04T06:35:39.768+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } } }
2019-09-04T06:35:39.768+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:35:39.768+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578938, 2)
2019-09-04T06:35:39.768+0000 D5 QUERY [conn61] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:39.768+0000 D5 QUERY [conn61] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:35:39.768+0000 D5 QUERY [conn61] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:35:39.768+0000 D5 QUERY [conn61] Rated tree: $and
2019-09-04T06:35:39.768+0000 D5 QUERY [conn61] Planner: outputted 0 indexed solutions.
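
The conn61 trace above shows how this config server serves a mongos metadata read: the find on config.shards arrives with readConcern { level: "majority", afterOpTime: ... }, the server blocks until its committed snapshot reaches that opTime, and only then plans the query (an empty filter, so the planner rates both indexes and falls through to a collection scan). Below is a minimal pymongo sketch of an equivalent client-side read; it is a hedged reconstruction, not the mongos internals, the seed host is taken from the peer names in this log, and the afterOpTime field has no public driver equivalent (drivers expose causally consistent sessions instead).

```python
# Hedged sketch: a majority-committed, nearest-preferred read of the sharding
# metadata, mirroring the conn61 find above (not the actual mongos code).
from pymongo import MongoClient, ReadPreference
from pymongo.read_concern import ReadConcern

# Seed host taken from this log's replica-set peers; adjust as needed.
client = MongoClient("mongodb://cmodb802.togewa.com:27019/?replicaSet=configrs")

config = client.get_database(
    "config",
    read_concern=ReadConcern("majority"),    # readConcern.level in the trace
    read_preference=ReadPreference.NEAREST,  # $readPreference: { mode: "nearest" }
)

shards = list(config.shards.find({}))        # empty filter -> the COLLSCAN planned above
print([s["_id"] for s in shards])
```
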
2019-09-04T06:35:39.768+0000 D5 QUERY [conn61] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:39.768+0000 D2 QUERY [conn61] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:39.768+0000 D3 STORAGE [conn61] WT begin_transaction for snapshot id 23510
2019-09-04T06:35:39.768+0000 D3 STORAGE [conn61] WT rollback_transaction for snapshot id 23510
2019-09-04T06:35:39.768+0000 I COMMAND [conn61] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:35:39.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:39.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:39.862+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:39.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:39.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:39.962+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:39.978+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:39.978+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:39.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:39.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:40.000+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:35:40.000+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:35:40.002+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 1ms
2019-09-04T06:35:40.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:35:40.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:35:40.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:35:40.010+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:35:40.010+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456
2019-09-04T06:35:40.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
2019-09-04T06:35:40.010+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:35:40.010+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:40.011+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:35:40.011+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:35:40.011+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:35:40.011+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:35:40.011+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:40.011+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:40.011+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:35:40.011+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:35:40.011+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:35:40.012+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:35:40.012+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:35:40.012+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:35:40.012+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
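
The conn90 session that begins at 06:35:40.000 looks like an external monitoring probe (an assumption; the log does not identify the client at 10.108.2.33): it authenticates as dba_root via SCRAM-SHA-1, then runs serverStatus, replSetGetStatus, and a count of jumbo chunks, whose planner trace continues below. A pymongo sketch of such a probe, under those assumptions:

```python
# Hedged sketch of a monitoring probe matching the conn90 session (the real
# client is unknown; credentials are placeholders, not taken from the log).
import os
from pymongo import MongoClient

client = MongoClient(
    "cmodb802.togewa.com",                # any configrs member as a seed
    27019,
    username="dba_root",                  # principal seen in the auth entries
    password=os.environ["MONGO_PASSWORD"],
    authSource="admin",
    authMechanism="SCRAM-SHA-1",          # mechanism negotiated above
    readPreference="secondaryPreferred",  # matches $readPreference in each command
)

server_status = client.admin.command("serverStatus")
rs_status = client.admin.command("replSetGetStatus")
jumbo = client.config.command("count", "chunks", query={"jumbo": True})
print(rs_status["myState"], server_status["uptime"], jumbo["n"])
```
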
2019-09-04T06:35:40.012+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:40.012+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:40.012+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578938, 2)
2019-09-04T06:35:40.012+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23522
2019-09-04T06:35:40.012+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23522
2019-09-04T06:35:40.012+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:40.012+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:35:40.012+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:35:40.012+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:35:40.012+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:40.012+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:35:40.012+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:35:40.012+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:35:40.012+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578938, 2)
2019-09-04T06:35:40.012+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23525
2019-09-04T06:35:40.012+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23525
2019-09-04T06:35:40.012+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:40.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:35:40.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:40.015+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:35:40.015+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:35:40.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:35:40.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578938, 2)
2019-09-04T06:35:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23527
2019-09-04T06:35:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23527
2019-09-04T06:35:40.015+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:40.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:35:40.015+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist.
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:35:40.015+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:35:40.015+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:40.015+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:35:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:35:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23530 2019-09-04T06:35:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:35:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:35:40.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23530 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23531 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23531 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23532 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23532 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23533 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23533 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23534 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23534 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23535 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
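
A few entries back (06:35:40.012-40.015, conn90) the same probe fetched the first and last documents of local.oplog.rs with forced $natural scans; that pair of reads is the standard way to measure the replication window. A sketch of that computation, assuming pymongo and that the probe's intent was indeed the oplog window:

```python
# Hedged sketch: the oplog-window computation implied by the two $natural
# sorted oplog reads earlier in this log (forward scan for the oldest entry,
# reverse scan for the newest).
from pymongo import MongoClient

client = MongoClient("cmodb802.togewa.com", 27019)
oplog = client.local["oplog.rs"]

first = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
last = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])

# ts is a bson.Timestamp; .time is seconds since the Unix epoch.
print("oplog window:", last["ts"].time - first["ts"].time, "seconds")
```
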
2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23535 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23536 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23536 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23537 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23537 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23538 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23538 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23539 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23539 
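
The long run of conn90 STORAGE entries here is the server side of the listDatabases command issued at 06:35:40.015: for each collection recorded in the durable catalog (_mdb_catalog), the server opens a short-lived WiredTiger snapshot, fetches the catalog entry (the "CCE metadata" dumps), and rolls the transaction back. The client-side call is a single line; a sketch, assuming pymongo:

```python
# Hedged sketch: the client call whose server-side catalog walk appears above.
# listDatabases makes the server visit every collection's catalog entry, which
# is what the per-namespace begin/rollback_transaction pairs show.
from pymongo import MongoClient

client = MongoClient("cmodb802.togewa.com", 27019)
for db_info in client.list_databases():
    print(db_info["name"], db_info.get("sizeOnDisk"))
```
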
2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23540 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
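
The config.chunks catalog entry just dumped carries the three unique sharding indexes (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1) alongside _id_; these are the same indexes the count planner rated at 06:35:40.011 before settling on a collection scan. A sketch that confirms them from a client, assuming pymongo:

```python
# Hedged sketch: listing the config.chunks indexes named in the catalog
# metadata above (expected: ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1, _id_).
from pymongo import MongoClient

client = MongoClient("cmodb802.togewa.com", 27019)
for name, spec in client.config.chunks.index_information().items():
    print(name, spec["key"], "unique" if spec.get("unique") else "")
```
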
2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23540 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23541 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23541 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23542 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23542 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23543 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23543 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23544 2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
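
config.shards and config.tags round out the sharding metadata: the shards collection held four documents when conn61 scanned it earlier (nreturned:4), and config.tags stores zone ranges keyed by { ns, min }. A sketch that reads both directly, a hedged convenience given that this node is itself a config server (on a mongos, sh.status() summarizes the same data):

```python
# Hedged sketch: reading the shard registry and zone (tag) ranges whose
# catalog entries appear above; field names follow the config-db schema.
from pymongo import MongoClient

client = MongoClient("cmodb802.togewa.com", 27019)
for shard in client.config.shards.find({}, sort=[("_id", 1)]):
    print("shard:", shard["_id"], shard["host"])
for tag in client.config.tags.find({}, sort=[("ns", 1), ("min", 1)]):
    print("zone:", tag["tag"], tag["ns"], tag["min"], "->", tag["max"])
```
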
2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:35:40.016+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23544
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23545
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23545
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23546
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23546
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23547
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23547
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23548
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23548
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23549
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23549
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23550
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23550
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23551
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:35:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23551
2019-09-04T06:35:40.017+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:35:40.018+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23553
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23553
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23554
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23554
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23555
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23555
2019-09-04T06:35:40.018+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:40.018+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23557
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23557
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23558
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23558
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23559
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23559
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23560
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23560
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23561
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23561
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23562
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23562
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23563
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23563
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23564
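Each begin_transaction/rollback_transaction pair above is a read-only WiredTiger snapshot being opened and discarded while listDatabases and the per-database dbStats commands are served; at systemLog verbosity 5 every such snapshot is traced. A minimal way to reproduce the same command stream from a shell (a sketch, assuming direct shell access to this node):

  db.adminCommand({ listDatabases: 1 });
  ["admin", "config", "local"].forEach(function (name) {
      printjson(db.getSiblingDB(name).runCommand({ dbStats: 1 }));
  });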
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23564
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23565
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23565
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23566
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23566
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23567
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23567
2019-09-04T06:35:40.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23568
2019-09-04T06:35:40.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23568
2019-09-04T06:35:40.019+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:40.019+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:35:40.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23570
2019-09-04T06:35:40.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23570
2019-09-04T06:35:40.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23571
2019-09-04T06:35:40.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23571
2019-09-04T06:35:40.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23572
2019-09-04T06:35:40.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23572
2019-09-04T06:35:40.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23573
2019-09-04T06:35:40.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23573
2019-09-04T06:35:40.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23574
2019-09-04T06:35:40.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23574
2019-09-04T06:35:40.019+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23575
2019-09-04T06:35:40.019+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23575
2019-09-04T06:35:40.019+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:40.038+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.038+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.055+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.056+0000 I COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.062+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:40.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.159+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.159+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.162+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:40.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:40.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:35:40.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:40.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:40.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650) }
2019-09-04T06:35:40.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:35:40.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:35:40.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.262+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:40.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.362+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:40.460+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.460+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.462+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:40.464+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578933, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578936, 1), signature: { hash: BinData(0, EB4FCD1C46190C17C0E854CB51837DEAD732B640), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578933, 1), t: 1 } }, $db: "config" }
2019-09-04T06:35:40.464+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578933, 1), t: 1 } } }
2019-09-04T06:35:40.464+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:35:40.464+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578933, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578936, 1), signature: { hash: BinData(0, EB4FCD1C46190C17C0E854CB51837DEAD732B640), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578933, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578938, 2)
2019-09-04T06:35:40.465+0000 D2 QUERY [conn71] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1
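The replSetHeartbeat from cmodb804 at 06:35:40.234 (and the next one at 06:35:42.234, visible near the end of this excerpt) reflects the default 2-second heartbeat interval of the configrs replica set. The member state those heartbeats maintain can be read back from any member (a sketch, assuming shell access to this node):

  rs.status().members.forEach(function (m) {
      print(m.name + " " + m.stateStr + " lastHeartbeat=" + m.lastHeartbeat);
  });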
2019-09-04T06:35:40.465+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578933, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578936, 1), signature: { hash: BinData(0, EB4FCD1C46190C17C0E854CB51837DEAD732B640), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578933, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:35:40.465+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" }
2019-09-04T06:35:40.465+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } } }
2019-09-04T06:35:40.465+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:35:40.465+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578938, 2)
2019-09-04T06:35:40.465+0000 D2 QUERY [conn71] Collection config.settings does not exist. Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1
2019-09-04T06:35:40.465+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:35:40.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.562+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:40.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.659+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.662+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:40.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.682+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:40.682+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:40.682+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578938, 2)
2019-09-04T06:35:40.682+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23600
2019-09-04T06:35:40.682+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
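Both reads above target config.settings, which does not yet exist on this cluster, so the balancer and autosplit lookups come back empty and the defaults apply (both enabled). The same documents can be polled by hand (a sketch, assuming shell access; sh.getBalancerState() is normally run against a mongos and returns true when no balancer document exists):

  db.getSiblingDB("config").settings.find({ _id: { $in: ["balancer", "autosplit"] } });
  sh.getBalancerState();  // true by default on this cluster, since the document is absent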
"local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:40.682+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:40.682+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23600 2019-09-04T06:35:40.683+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23603 2019-09-04T06:35:40.683+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23603 2019-09-04T06:35:40.683+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578938, 2), t: 1 }({ ts: Timestamp(1567578938, 2), t: 1 }) 2019-09-04T06:35:40.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:40.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:40.722+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" } 2019-09-04T06:35:40.722+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } } } 2019-09-04T06:35:40.722+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:35:40.722+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578938, 2) 2019-09-04T06:35:40.722+0000 D2 QUERY [conn72] Collection config.settings does not exist. 
2019-09-04T06:35:40.722+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms
2019-09-04T06:35:40.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.763+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:40.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:40.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1631) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:40.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1631 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:50.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:40.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.839+0000
2019-09-04T06:35:40.839+0000 D2 ASIO [Replication] Request 1631 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) }
2019-09-04T06:35:40.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:35:40.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:40.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1631) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) }
2019-09-04T06:35:40.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:35:40.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:35:50.066+0000
2019-09-04T06:35:40.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:35:52.133+0000
2019-09-04T06:35:40.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:35:40.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:42.839Z
2019-09-04T06:35:40.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:40.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:10.839+0000
2019-09-04T06:35:40.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:10.839+0000
2019-09-04T06:35:40.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:40.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1632) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
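Each healthy heartbeat from the primary pushes the election timeout forward: the callback scheduled for 06:35:50.066 is cancelled and rescheduled for 06:35:52.133, i.e. roughly now plus electionTimeoutMillis (10 seconds by default) plus the randomized offset MongoDB adds so that secondaries do not all stand for election at once. The relevant knobs live in the replica set configuration (a sketch, assuming shell access):

  var s = rs.conf().settings;
  print("electionTimeoutMillis=" + s.electionTimeoutMillis);     // default 10000
  print("heartbeatIntervalMillis=" + s.heartbeatIntervalMillis); // default 2000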
"cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:40.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1632 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:50.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:40.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:10.839+0000 2019-09-04T06:35:40.840+0000 D2 ASIO [Replication] Request 1632 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:40.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:40.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:40.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1632) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, 
2019-09-04T06:35:40.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:35:40.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:42.840Z
2019-09-04T06:35:40.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:10.839+0000
2019-09-04T06:35:40.863+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:40.944+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.945+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.960+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.960+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.963+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:40.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:40.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:40.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:41.040+0000 D2 COMMAND [conn72] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" }
2019-09-04T06:35:41.040+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } } }
2019-09-04T06:35:41.040+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:35:41.040+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578938, 2)
2019-09-04T06:35:41.040+0000 D5 QUERY [conn72] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:41.040+0000 D5 QUERY [conn72] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:35:41.040+0000 D5 QUERY [conn72] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:35:41.040+0000 D5 QUERY [conn72] Rated tree: $and
2019-09-04T06:35:41.040+0000 D5 QUERY [conn72] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:41.040+0000 D5 QUERY [conn72] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:41.040+0000 D2 QUERY [conn72] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:41.040+0000 D3 STORAGE [conn72] WT begin_transaction for snapshot id 23616
2019-09-04T06:35:41.040+0000 D3 STORAGE [conn72] WT rollback_transaction for snapshot id 23616
2019-09-04T06:35:41.040+0000 I COMMAND [conn72] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:35:41.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:41.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:35:41.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:41.063+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:41.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:41.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650) }
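The D5 QUERY trace above shows the planner handling a filter-less find on config.shards: neither the host_1 nor the _id_ index helps an empty predicate, so zero indexed solutions are produced and the only candidate plan is a collection scan (docsExamined:4, nreturned:4 — this cluster has four shards). The same conclusion can be confirmed without raising the log level (a sketch, assuming shell access):

  db.getSiblingDB("config").shards.find().explain("queryPlanner").queryPlanner.winningPlan;
  // expect { stage: "COLLSCAN", ... } for an unfiltered scan of a 4-document collection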
"configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650) } 2019-09-04T06:35:41.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 2), signature: { hash: BinData(0, D8BDFCCBF5E663F772ABD76EE38A7D006022E66B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:41.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:41.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:41.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:41.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:41.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:41.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:41.158+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:41.159+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:41.163+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:41.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:41.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:41.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:41.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:41.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:41.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:41.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:41.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
2019-09-04T06:35:41.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.263+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:41.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.273+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.301+0000 D2 COMMAND [conn113] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.301+0000 I COMMAND [conn113] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.363+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:41.464+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:41.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.564+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:41.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.659+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.664+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:41.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.683+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:41.683+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:41.683+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578938, 2)
2019-09-04T06:35:41.683+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23639
2019-09-04T06:35:41.683+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:41.683+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:41.683+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23639
2019-09-04T06:35:41.683+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23642
2019-09-04T06:35:41.684+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23642
2019-09-04T06:35:41.684+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578938, 2), t: 1 }({ ts: Timestamp(1567578938, 2), t: 1 })
2019-09-04T06:35:41.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.764+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:41.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.864+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:41.964+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:41.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:41.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:41.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:42.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:42.064+0000 D4 STORAGE [WTJournalFlusher] flushed journal
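The once-per-second ReplBatcher/rsSync-0 pattern above is this secondary idling: the batcher polls local.oplog.rs for new entries, and rsSync re-reads local.replset.minvalid, which still matches the last applied optime { ts: Timestamp(1567578938, 2), t: 1 }, meaning the node has applied everything it has fetched. Both are plain collections in the local database and can be inspected directly (a sketch, assuming shell access; treat local as read-only):

  var local = db.getSiblingDB("local");
  local.replset.minvalid.findOne();                       // { ts: Timestamp(...), t: NumberLong(1), ... }
  local.oplog.rs.find().sort({ $natural: -1 }).limit(1);  // newest oplog entry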
2019-09-04T06:35:42.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.158+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.158+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.164+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:42.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578939, 1), signature: { hash: BinData(0, 2C9122FD8D45957C737ADB0E9C00E9313E72A1BF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:42.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:42.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578939, 1), signature: { hash: BinData(0, 2C9122FD8D45957C737ADB0E9C00E9313E72A1BF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:42.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578939, 1), signature: { hash: BinData(0, 2C9122FD8D45957C737ADB0E9C00E9313E72A1BF), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:42.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650) } 2019-09-04T06:35:42.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, 
term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578939, 1), signature: { hash: BinData(0, 2C9122FD8D45957C737ADB0E9C00E9313E72A1BF), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:42.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:42.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.264+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:42.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.318+0000 D2 COMMAND [conn492] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578932, 1), signature: { hash: BinData(0, E41B373647AEEE84869015C1A3EA6E4D87DF51B7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:42.318+0000 D1 REPL [conn492] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578938, 2), t: 1 } 2019-09-04T06:35:42.318+0000 D3 REPL [conn492] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.328+0000 2019-09-04T06:35:42.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.365+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:42.428+0000 D2 COMMAND [conn143] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:35:42.428+0000 I COMMAND [conn143] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.438+0000 D2 COMMAND [conn206] run command config.$cmd { ping: 1.0, lsid: { id: UUID("2fef7d2a-ea06-44d7-a315-b0e911b7f5bf") }, $clusterTime: { clusterTime: Timestamp(1567578879, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:35:42.438+0000 I COMMAND [conn206] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("2fef7d2a-ea06-44d7-a315-b0e911b7f5bf") }, $clusterTime: { clusterTime: Timestamp(1567578879, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.465+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:42.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.477+0000 I COMMAND [conn29] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.565+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:42.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.659+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.665+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:42.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.683+0000 I COMMAND [conn486] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 71EB50998B0ED96CFCE295A2D5D129D8C4E11CBE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:42.683+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:42.683+0000 D1 - [conn486] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:42.683+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:42.683+0000 W - [conn486] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:42.683+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578938, 2) 2019-09-04T06:35:42.683+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23672 2019-09-04T06:35:42.683+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:42.683+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:42.683+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23672 2019-09-04T06:35:42.684+0000 I COMMAND [conn471] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 71EB50998B0ED96CFCE295A2D5D129D8C4E11CBE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:42.684+0000 D1 - [conn471] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:42.684+0000 W - [conn471] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:42.684+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23675 2019-09-04T06:35:42.684+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23675 2019-09-04T06:35:42.684+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578938, 2), t: 1 }({ ts: Timestamp(1567578938, 2), t: 1 }) 2019-09-04T06:35:42.711+0000 I - [conn486] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS
0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : 
"/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:42.711+0000 D1 COMMAND [conn486] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 71EB50998B0ED96CFCE295A2D5D129D8C4E11CBE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:42.711+0000 D1 - [conn486] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:42.711+0000 W - [conn486] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:42.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.718+0000 I - [conn471] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:42.719+0000 D1 COMMAND [conn471] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 71EB50998B0ED96CFCE295A2D5D129D8C4E11CBE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:42.719+0000 D1 - [conn471] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:42.719+0000 W - [conn471] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:42.738+0000 I COMMAND [conn458] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 71EB50998B0ED96CFCE295A2D5D129D8C4E11CBE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:42.738+0000 D1 - [conn458] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:42.738+0000 W - [conn458] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:42.739+0000 I - [conn486] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9Ownersh
ipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : 
"DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) 
[0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:42.739+0000 W COMMAND [conn486] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:42.739+0000 I COMMAND [conn486] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 71EB50998B0ED96CFCE295A2D5D129D8C4E11CBE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30042ms 2019-09-04T06:35:42.739+0000 D2 NETWORK [conn486] Session from 10.108.2.55:36848 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:42.739+0000 I NETWORK [conn486] end connection 10.108.2.55:36848 (89 connections now open) 2019-09-04T06:35:42.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.756+0000 I - [conn458] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:42.756+0000 D1 COMMAND [conn458] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 71EB50998B0ED96CFCE295A2D5D129D8C4E11CBE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:42.756+0000 D1 - [conn458] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:42.756+0000 W - [conn458] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:42.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.765+0000 D4 STORAGE [WTJournalFlusher] 
flushed journal 2019-09-04T06:35:42.795+0000 I - [conn458] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : 
"561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : 
"F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) 
[0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:42.795+0000 W COMMAND [conn458] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:42.795+0000 I COMMAND [conn458] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 71EB50998B0ED96CFCE295A2D5D129D8C4E11CBE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:35:42.795+0000 D2 NETWORK [conn458] Session from 10.108.2.61:38078 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:42.795+0000 I NETWORK [conn458] end connection 10.108.2.61:38078 (88 connections now open) 2019-09-04T06:35:42.796+0000 I - [conn471] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceSt
ateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : 
"/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:42.796+0000 W COMMAND [conn471] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:42.796+0000 I COMMAND [conn471] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 71EB50998B0ED96CFCE295A2D5D129D8C4E11CBE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30045ms 2019-09-04T06:35:42.796+0000 D2 NETWORK [conn471] Session from 10.108.2.56:35852 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:42.796+0000 I NETWORK [conn471] end connection 10.108.2.56:35852 (87 connections now open) 2019-09-04T06:35:42.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:42.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1633) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:42.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1633 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:52.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:42.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 
2019-09-04T06:36:10.839+0000 2019-09-04T06:35:42.839+0000 D2 ASIO [Replication] Request 1633 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578939, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:42.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578939, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:42.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:42.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1633) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578939, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:42.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:42.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:35:52.133+0000 2019-09-04T06:35:42.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 
2019-09-04T06:35:53.648+0000 2019-09-04T06:35:42.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:42.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:44.839Z 2019-09-04T06:35:42.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:42.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:12.839+0000 2019-09-04T06:35:42.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:12.839+0000 2019-09-04T06:35:42.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:42.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1634) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:42.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1634 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:52.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:42.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:12.839+0000 2019-09-04T06:35:42.840+0000 D2 ASIO [Replication] Request 1634 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578939, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:42.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578939, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:42.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 
2019-09-04T06:35:42.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1634) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578939, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:42.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:42.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:44.840Z 2019-09-04T06:35:42.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:12.839+0000 2019-09-04T06:35:42.862+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52356 #511 (88 connections now open) 2019-09-04T06:35:42.862+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:42.862+0000 D2 COMMAND [conn511] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:42.862+0000 I NETWORK [conn511] received client metadata from 10.108.2.73:52356 conn511: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:42.862+0000 I COMMAND [conn511] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:42.863+0000 D2 COMMAND [conn511] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, 
$db: "admin" } 2019-09-04T06:35:42.863+0000 D1 REPL [conn511] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578938, 2), t: 1 } 2019-09-04T06:35:42.863+0000 D3 REPL [conn511] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.873+0000 2019-09-04T06:35:42.865+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:42.865+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42308 #512 (89 connections now open) 2019-09-04T06:35:42.865+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:42.865+0000 D2 COMMAND [conn512] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:42.865+0000 I NETWORK [conn512] received client metadata from 10.108.2.48:42308 conn512: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:42.865+0000 I COMMAND [conn512] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:42.866+0000 D2 COMMAND [conn512] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578940, 1), signature: { hash: BinData(0, DC364624D5EB396408DE50F10AD903484AFAB43A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:42.866+0000 D1 REPL [conn512] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578938, 2), t: 1 } 2019-09-04T06:35:42.866+0000 D3 REPL [conn512] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.876+0000 2019-09-04T06:35:42.868+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51988 #513 (90 connections now open) 2019-09-04T06:35:42.868+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:42.868+0000 D2 COMMAND [conn513] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, 
saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:42.868+0000 I NETWORK [conn513] received client metadata from 10.108.2.74:51988 conn513: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:42.868+0000 I COMMAND [conn513] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:42.868+0000 D2 COMMAND [conn513] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578935, 1), signature: { hash: BinData(0, EC39415F0541090717F188464B6C52E034EB96D7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:35:42.868+0000 D1 REPL [conn513] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578938, 2), t: 1 } 2019-09-04T06:35:42.868+0000 D3 REPL [conn513] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.878+0000 2019-09-04T06:35:42.871+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.871+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.872+0000 D2 COMMAND [conn497] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:42.872+0000 D1 REPL [conn497] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578938, 2), t: 1 } 2019-09-04T06:35:42.872+0000 D3 REPL [conn497] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.882+0000 2019-09-04T06:35:42.878+0000 D2 COMMAND [conn485] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 31B0FBBE3F88E8227ED4748A3C95E935277E7687), keyId: 6690867815131381761 } }, 
$configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:42.878+0000 D1 REPL [conn485] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578938, 2), t: 1 } 2019-09-04T06:35:42.878+0000 D3 REPL [conn485] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.888+0000 2019-09-04T06:35:42.903+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.903+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.908+0000 D2 COMMAND [conn504] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 544A6412BEE4E32C365992D40037A87F96284AE8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:42.908+0000 D1 REPL [conn504] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578938, 2), t: 1 } 2019-09-04T06:35:42.908+0000 D3 REPL [conn504] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.918+0000 2019-09-04T06:35:42.929+0000 D2 COMMAND [conn483] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:42.929+0000 D1 REPL [conn483] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578938, 2), t: 1 } 2019-09-04T06:35:42.929+0000 D3 REPL [conn483] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.939+0000 2019-09-04T06:35:42.965+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:42.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:42.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:42.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:43.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578941, 1), signature: { hash: BinData(0, 
22D46C8C4495A1062F462F27A21600D32D2F8A9C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:43.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:43.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578941, 1), signature: { hash: BinData(0, 22D46C8C4495A1062F462F27A21600D32D2F8A9C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:43.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578941, 1), signature: { hash: BinData(0, 22D46C8C4495A1062F462F27A21600D32D2F8A9C), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:43.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650) } 2019-09-04T06:35:43.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578941, 1), signature: { hash: BinData(0, 22D46C8C4495A1062F462F27A21600D32D2F8A9C), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.065+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:43.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.158+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.158+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.165+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:43.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 
reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:43.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:43.244+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.244+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.266+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:43.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.366+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:43.370+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.371+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.401+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36738 #514 (91 connections now open) 2019-09-04T06:35:43.401+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:43.401+0000 D2 COMMAND [conn514] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:43.401+0000 I NETWORK [conn514] received client metadata from 10.108.2.45:36738 conn514: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:43.401+0000 I COMMAND [conn514] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:43.402+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.403+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.404+0000 D2 COMMAND [conn514] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { 
clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 31B0FBBE3F88E8227ED4748A3C95E935277E7687), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:43.405+0000 D1 REPL [conn514] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578938, 2), t: 1 } 2019-09-04T06:35:43.405+0000 D3 REPL [conn514] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:13.414+0000 2019-09-04T06:35:43.466+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:43.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.566+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:43.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.658+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.666+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:43.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.683+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:43.683+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:43.683+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578938, 2) 2019-09-04T06:35:43.683+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23719 2019-09-04T06:35:43.683+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:43.683+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: 
UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:43.683+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23719 2019-09-04T06:35:43.684+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23722 2019-09-04T06:35:43.684+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23722 2019-09-04T06:35:43.684+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578938, 2), t: 1 }({ ts: Timestamp(1567578938, 2), t: 1 }) 2019-09-04T06:35:43.684+0000 D2 ASIO [RS] Request 1627 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpApplied: { ts: Timestamp(1567578938, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:43.684+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpApplied: { ts: Timestamp(1567578938, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:43.684+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:43.684+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:43.684+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:53.648+0000 2019-09-04T06:35:43.684+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:54.798+0000 2019-09-04T06:35:43.684+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:43.684+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:12.839+0000 2019-09-04T06:35:43.684+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1635 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:53.684+0000 cmd:{ getMore: 2779728788818727477, collection: 
"oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578938, 2), t: 1 } } 2019-09-04T06:35:43.684+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.685+0000 2019-09-04T06:35:43.685+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:43.685+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:43.685+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1636 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:13.685+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:43.685+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.685+0000 2019-09-04T06:35:43.685+0000 D2 ASIO [RS] Request 1636 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 
2019-09-04T06:35:43.685+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:43.685+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:43.685+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:08.685+0000 2019-09-04T06:35:43.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.744+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.744+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.766+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:43.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.866+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:43.966+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:43.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:43.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:43.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:44.066+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:44.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
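
The isMaster commands that dominate this stretch arrive on a fixed cadence per connection (conn29 at 43.477, 43.977, 44.477, ..., i.e. every 500ms), which is consistent with mongos/driver server monitoring rather than application traffic. The probe itself is trivial to reproduce; a sketch, with host and port assumed from this log:

    # Sketch: the same topology probe these COMMAND entries record.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    reply = client.admin.command("isMaster")
    print(reply["ismaster"], reply.get("setName"), reply.get("primary"))
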
2019-09-04T06:35:44.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.158+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.158+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.166+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:44.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, D099775A140995DF44C910CA466C12FC105D6AEC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:44.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:44.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, D099775A140995DF44C910CA466C12FC105D6AEC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:44.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, D099775A140995DF44C910CA466C12FC105D6AEC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:44.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650) } 2019-09-04T06:35:44.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, D099775A140995DF44C910CA466C12FC105D6AEC), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:44.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:44.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.267+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:44.285+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.285+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.286+0000 D2 COMMAND [conn500] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578941, 1), signature: { hash: BinData(0, B607257475BAE42FA27A1C9FCFB0BFBA13A2A499), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:35:44.286+0000 D1 REPL [conn500] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578938, 2), t: 1 } 2019-09-04T06:35:44.286+0000 D3 REPL [conn500] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:14.296+0000 2019-09-04T06:35:44.313+0000 I COMMAND [conn488] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 1), signature: { hash: BinData(0, EF3ADB01B4FC3197228A5D8B8AD09E7DD93915A6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:44.313+0000 D1 - [conn488] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:44.313+0000 W - [conn488] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:44.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.329+0000 I - [conn488] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:44.329+0000 D1 COMMAND [conn488] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 1), signature: { hash: BinData(0, EF3ADB01B4FC3197228A5D8B8AD09E7DD93915A6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:44.329+0000 D1 - [conn488] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:44.329+0000 W - [conn488] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:44.349+0000 I - [conn488] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:44.349+0000 W COMMAND [conn488] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:44.349+0000 I COMMAND [conn488] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 1), signature: { hash: BinData(0, EF3ADB01B4FC3197228A5D8B8AD09E7DD93915A6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:35:44.350+0000 D2 NETWORK [conn488] Session from 10.108.2.54:49372 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:44.350+0000 I NETWORK [conn488] end connection 10.108.2.54:49372 (90 connections now open) 2019-09-04T06:35:44.367+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:44.467+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:44.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.567+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:44.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.659+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.659+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.667+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:44.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.683+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:44.683+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:44.683+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578938, 2) 2019-09-04T06:35:44.683+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23751 2019-09-04T06:35:44.683+0000 D3 STORAGE 
[ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:44.683+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:44.683+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23751 2019-09-04T06:35:44.684+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23754 2019-09-04T06:35:44.684+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23754 2019-09-04T06:35:44.684+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578938, 2), t: 1 }({ ts: Timestamp(1567578938, 2), t: 1 }) 2019-09-04T06:35:44.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.767+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:44.785+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.785+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:44.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:44.839+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:35:43.063+0000 2019-09-04T06:35:44.839+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:35:44.234+0000 2019-09-04T06:35:44.839+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:35:43.063+0000 2019-09-04T06:35:44.839+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:35:53.063+0000 2019-09-04T06:35:44.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:14.839+0000 2019-09-04T06:35:44.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1637) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:44.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1637 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:54.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:44.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:14.839+0000 
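
The conn488 failure above is the one substantive error in this window: a find on admin.system.keys at readConcern majority with afterOpTime { ts: Timestamp(1566459168, 1), t: 92 }, while this set's committed snapshot is { ts: Timestamp(1567578938, 2), t: 1 }. The requested optime carries term 92 against a set now in term 1, so the waitUntilOpTime never completes and the 30s maxTimeMS expires (MaxTimeMSExpired, 30030ms), which is what both backtraces record. A reconstruction of that command, assuming pymongo, with every value copied from the log entry:

    # Sketch: re-issue the exact find that timed out above. afterOpTime is
    # an internal field normally sent by mongos; values are from the log.
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from bson.timestamp import Timestamp

    client = MongoClient("cmodb803.togewa.com", 27019)
    cmd = {
        "find": "system.keys",
        "filter": {"purpose": "HMAC",
                   "expiresAt": {"$gt": Timestamp(1579858365, 0)}},
        "sort": {"expiresAt": 1},
        "readConcern": {
            "level": "majority",
            "afterOpTime": {"ts": Timestamp(1566459168, 1), "t": 92},
        },
        "maxTimeMS": 30000,
    }
    try:
        client.admin.command(cmd)
    except ExecutionTimeout as exc:  # MaxTimeMSExpired surfaces here
        print("timed out:", exc)

The term mismatch (92 vs 1) suggests the requester's notion of the config server optime predates a re-initialization of this replica set, so the wait can never be satisfied no matter how long the timeout.
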
2019-09-04T06:35:44.839+0000 D2 ASIO [Replication] Request 1637 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:44.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:44.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:44.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1637) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:44.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:44.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:35:54.798+0000 2019-09-04T06:35:44.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:35:55.045+0000 
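
The heartbeat exchange above (request 1637 to cmodb802, the primary, then 1638 to cmodb804) is mongod-internal, but the member state it maintains, including the optimes and sync-source indexes echoed in each response, can be read back with replSetGetStatus. A minimal sketch, with host and port assumed from this log:

    # Sketch: inspect the same member states the heartbeats above carry.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        print(m["name"], m["stateStr"], m.get("optime", {}).get("ts"))

Note the 2-second cadence visible in the log: each good response immediately schedules the next heartbeat (06:35:44.839 -> 06:35:46.839) and postpones the randomized election timeout (here to 06:35:55.045).
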
2019-09-04T06:35:44.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:44.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:46.839Z 2019-09-04T06:35:44.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:44.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:14.839+0000 2019-09-04T06:35:44.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:14.839+0000 2019-09-04T06:35:44.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:44.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1638) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:44.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1638 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:54.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:44.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:14.839+0000 2019-09-04T06:35:44.840+0000 D2 ASIO [Replication] Request 1638 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:44.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:44.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:44.840+0000 D2 REPL_HB 
[replexec-4] Received response to heartbeat (requestId: 1638) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:44.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:44.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:46.840Z 2019-09-04T06:35:44.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:14.839+0000 2019-09-04T06:35:44.867+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:44.967+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:44.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:44.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:44.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:45.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, D099775A140995DF44C910CA466C12FC105D6AEC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:45.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:45.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, D099775A140995DF44C910CA466C12FC105D6AEC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:45.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, D099775A140995DF44C910CA466C12FC105D6AEC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:45.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 
1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650) } 2019-09-04T06:35:45.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, D099775A140995DF44C910CA466C12FC105D6AEC), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.067+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:45.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.158+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.159+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.168+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:45.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.180+0000 D2 COMMAND [conn489] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:45.180+0000 D1 REPL [conn489] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578938, 2), t: 1 } 2019-09-04T06:35:45.180+0000 D3 REPL [conn489] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:15.190+0000 2019-09-04T06:35:45.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. 
Num: 0 2019-09-04T06:35:45.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:45.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.268+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:45.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.368+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:45.468+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:45.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.568+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:45.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.658+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.668+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:45.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.683+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:45.683+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:45.684+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578938, 2) 2019-09-04T06:35:45.684+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23781 2019-09-04T06:35:45.684+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: 
UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:45.684+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:45.684+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23781 2019-09-04T06:35:45.684+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23784 2019-09-04T06:35:45.684+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23784 2019-09-04T06:35:45.684+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578938, 2), t: 1 }({ ts: Timestamp(1567578938, 2), t: 1 }) 2019-09-04T06:35:45.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.768+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:45.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.868+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:45.968+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:45.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:45.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:45.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:46.069+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:46.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.114+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.114+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.152+0000 I COMMAND [conn23] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.158+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.158+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.169+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:46.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, D099775A140995DF44C910CA466C12FC105D6AEC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:46.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:46.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, D099775A140995DF44C910CA466C12FC105D6AEC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:46.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, D099775A140995DF44C910CA466C12FC105D6AEC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:46.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650) } 2019-09-04T06:35:46.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, D099775A140995DF44C910CA466C12FC105D6AEC), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:46.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:46.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.269+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:46.327+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.327+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.360+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.369+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:46.469+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:46.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.569+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:46.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.659+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.669+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:46.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.677+0000 I COMMAND [conn18] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.684+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:46.684+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:46.684+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578938, 2) 2019-09-04T06:35:46.684+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23813 2019-09-04T06:35:46.684+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:46.684+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:46.684+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23813 2019-09-04T06:35:46.684+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23816 2019-09-04T06:35:46.684+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23816 2019-09-04T06:35:46.684+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578938, 2), t: 1 }({ ts: Timestamp(1567578938, 2), t: 1 }) 2019-09-04T06:35:46.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.769+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:46.827+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.827+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:46.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1639) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:46.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1639 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:56.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:46.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:14.839+0000 2019-09-04T06:35:46.839+0000 D2 ASIO [Replication] Request 1639 finished with response: { ok: 
1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:46.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:46.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:46.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1639) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:46.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:46.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:35:55.045+0000 2019-09-04T06:35:46.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:35:57.857+0000 2019-09-04T06:35:46.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 
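
The heartbeat exchange above (request 1639 to cmodb802, which answers with state: 1, i.e. PRIMARY, and request 1640 to cmodb804, a SECONDARY) is the steady-state replSetHeartbeat traffic that keeps each member's view of the set current and repeatedly postpones this node's election timeout. The same per-member picture (state, optimes, sync source) can be read from outside with the replSetGetStatus admin command. A minimal pymongo sketch, assuming direct network access to this member and the authorization-disabled setup shown in the startup options; only the host and port are taken from the log:

    # Minimal sketch: inspect the replica-set state that the heartbeats above
    # maintain. Host/port come from the log; everything else is an assumption
    # (no auth, PyMongo >= 3.12 for the directConnection option).
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)  # talk to this member only
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # stateStr mirrors the numeric "state" field in the heartbeat
        # responses (1 = PRIMARY, 2 = SECONDARY); optimeDate tracks the
        # opTime/durableOpTime values exchanged above.
        print(member["name"], member["stateStr"], member.get("optimeDate"))

Run against this set, the output would show cmodb802.togewa.com:27019 as PRIMARY and the two other config servers as SECONDARY, matching the state fields in the responses logged above.
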
2019-09-04T06:35:46.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:48.839Z 2019-09-04T06:35:46.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:46.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:16.839+0000 2019-09-04T06:35:46.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:16.839+0000 2019-09-04T06:35:46.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:46.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1640) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:46.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1640 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:56.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:46.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:16.839+0000 2019-09-04T06:35:46.840+0000 D2 ASIO [Replication] Request 1640 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:46.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:46.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:46.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1640) from cmodb804.togewa.com:27019, { ok: 1.0, state: 
2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578938, 2) } 2019-09-04T06:35:46.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:46.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:48.840Z 2019-09-04T06:35:46.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:16.839+0000 2019-09-04T06:35:46.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.869+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:46.970+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:46.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:46.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:46.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:47.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, D099775A140995DF44C910CA466C12FC105D6AEC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:47.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:47.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, D099775A140995DF44C910CA466C12FC105D6AEC), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:47.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, D099775A140995DF44C910CA466C12FC105D6AEC), keyId: 
6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:47.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), opTime: { ts: Timestamp(1567578938, 2), t: 1 }, wallTime: new Date(1567578938650) } 2019-09-04T06:35:47.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, D099775A140995DF44C910CA466C12FC105D6AEC), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.070+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:47.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.158+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.158+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.170+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:47.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:47.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:47.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.270+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:47.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.370+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:47.470+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:47.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.570+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:47.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.658+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.670+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:47.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.684+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:47.684+0000 D3 STORAGE [ReplBatcher] looking up 
metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:47.684+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578938, 2) 2019-09-04T06:35:47.684+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23847 2019-09-04T06:35:47.684+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:47.684+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:47.684+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23847 2019-09-04T06:35:47.684+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23850 2019-09-04T06:35:47.684+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23850 2019-09-04T06:35:47.684+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578938, 2), t: 1 }({ ts: Timestamp(1567578938, 2), t: 1 }) 2019-09-04T06:35:47.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.713+0000 D2 ASIO [RS] Request 1635 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578947, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578947709), o: { $v: 1, $set: { ping: new Date(1567578947708) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpApplied: { ts: Timestamp(1567578947, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 1) } 2019-09-04T06:35:47.713+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578947, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578947709), o: { $v: 1, $set: { ping: new Date(1567578947708) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), 
primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpApplied: { ts: Timestamp(1567578947, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578938, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:47.713+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:47.713+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578947, 1) and ending at ts: Timestamp(1567578947, 1) 2019-09-04T06:35:47.713+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:57.857+0000 2019-09-04T06:35:47.713+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:58.423+0000 2019-09-04T06:35:47.713+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:47.713+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:16.839+0000 2019-09-04T06:35:47.713+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578947, 1), t: 1 } 2019-09-04T06:35:47.713+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:47.713+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:47.713+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578938, 2) 2019-09-04T06:35:47.713+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23854 2019-09-04T06:35:47.713+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:47.713+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:47.713+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23854 2019-09-04T06:35:47.714+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:47.714+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:35:47.714+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:47.714+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578938, 2) 2019-09-04T06:35:47.714+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578947, 1) } 2019-09-04T06:35:47.714+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23857 2019-09-04T06:35:47.714+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, 
indexes: [], prefix: -1 } } 2019-09-04T06:35:47.714+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:47.714+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23857 2019-09-04T06:35:47.714+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23851 2019-09-04T06:35:47.714+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23851 2019-09-04T06:35:47.714+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23860 2019-09-04T06:35:47.714+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23860 2019-09-04T06:35:47.714+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:47.714+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 23862 2019-09-04T06:35:47.714+0000 D4 STORAGE [repl-writer-worker-1] inserting record with timestamp Timestamp(1567578947, 1) 2019-09-04T06:35:47.714+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578947, 1) 2019-09-04T06:35:47.714+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 23862 2019-09-04T06:35:47.714+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:47.714+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:35:47.714+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23861 2019-09-04T06:35:47.714+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23861 2019-09-04T06:35:47.714+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23864 2019-09-04T06:35:47.714+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23864 2019-09-04T06:35:47.714+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578947, 1), t: 1 }({ ts: Timestamp(1567578947, 1), t: 1 }) 2019-09-04T06:35:47.714+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578947, 1) 2019-09-04T06:35:47.714+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23865 2019-09-04T06:35:47.714+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578947, 1) } } ] } sort: {} projection: {} 2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Beginning planning... 
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578947, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578947, 1) || First: notFirst: full path: ts
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578947, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578947, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578947, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
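
The D5 QUERY trace above is the secondary updating local.replset.minvalid while applying the batch: the subplanner rates each $or branch against the only available index (the implicit { _id: 1 } index), finds no indexed solution, and falls back to a collection scan; the chosen plan follows just below. That is harmless for this single-document internal collection, and the same plan selection can be reproduced for ordinary collections with explain. A small pymongo sketch; the deployment, namespace, and filter are illustrative placeholders, not taken from this log:

    # Illustrative sketch: surface the planner's chosen stage for a query,
    # the externally visible counterpart of the D5 QUERY trace above.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017/")  # placeholder deployment
    coll = client["test"]["example"]                    # placeholder namespace
    plan = coll.find({"t": {"$lt": 1}}).explain()
    # With no index on "t" the winning plan is a COLLSCAN, exactly as the
    # planner decides above; adding an index on "t" would yield an IXSCAN.
    print(plan["queryPlanner"]["winningPlan"]["stage"])
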
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578947, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:47.714+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23865
2019-09-04T06:35:47.714+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:47.714+0000 D3 STORAGE [repl-writer-worker-15] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:47.714+0000 D3 REPL [repl-writer-worker-15] applying op: { ts: Timestamp(1567578947, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578947709), o: { $v: 1, $set: { ping: new Date(1567578947708) } } }, oplog application mode: Secondary
2019-09-04T06:35:47.714+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578947, 1)
2019-09-04T06:35:47.714+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 23867
2019-09-04T06:35:47.714+0000 D2 QUERY [repl-writer-worker-15] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }
2019-09-04T06:35:47.714+0000 D4 WRITE [repl-writer-worker-15] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:35:47.714+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 23867
2019-09-04T06:35:47.714+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:47.714+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578947, 1), t: 1 }({ ts: Timestamp(1567578947, 1), t: 1 })
2019-09-04T06:35:47.714+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578947, 1)
2019-09-04T06:35:47.714+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23866
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:47.714+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:47.714+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:47.714+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23866 2019-09-04T06:35:47.714+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578947, 1) 2019-09-04T06:35:47.715+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:47.715+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23870 2019-09-04T06:35:47.715+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578947, 1), t: 1 }, appliedWallTime: new Date(1567578947709), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:47.715+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1641 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:17.715+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578947, 1), t: 1 }, appliedWallTime: new Date(1567578947709), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:47.715+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:17.714+0000 2019-09-04T06:35:47.715+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23870 2019-09-04T06:35:47.715+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578947, 1), t: 1 }({ ts: Timestamp(1567578947, 1), t: 1 }) 2019-09-04T06:35:47.715+0000 D2 ASIO [RS] Request 1641 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 1), t: 1 }, lastCommittedWall: new Date(1567578947709), lastOpVisible: { ts: Timestamp(1567578947, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 1), $clusterTime: { clusterTime: Timestamp(1567578947, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 1) } 2019-09-04T06:35:47.715+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 1), t: 1 }, lastCommittedWall: new Date(1567578947709), lastOpVisible: { ts: Timestamp(1567578947, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 1), $clusterTime: { clusterTime: Timestamp(1567578947, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:47.715+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:47.715+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:17.715+0000 2019-09-04T06:35:47.715+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:35:47.715+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:47.715+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578947, 1), t: 1 }, durableWallTime: new Date(1567578947709), appliedOpTime: { ts: Timestamp(1567578947, 1), t: 1 }, appliedWallTime: new Date(1567578947709), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:47.715+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1642 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:17.715+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578947, 1), t: 1 }, durableWallTime: new Date(1567578947709), appliedOpTime: { ts: Timestamp(1567578947, 1), t: 1 }, appliedWallTime: new Date(1567578947709), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 2, cfgver: 2 } ], $replData: 
{ term: 1, lastOpCommitted: { ts: Timestamp(1567578938, 2), t: 1 }, lastCommittedWall: new Date(1567578938650), lastOpVisible: { ts: Timestamp(1567578938, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:47.715+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:17.715+0000 2019-09-04T06:35:47.715+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578947, 1), t: 1 } 2019-09-04T06:35:47.715+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1643 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:57.715+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578938, 2), t: 1 } } 2019-09-04T06:35:47.715+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:17.715+0000 2019-09-04T06:35:47.716+0000 D2 ASIO [RS] Request 1642 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 1), t: 1 }, lastCommittedWall: new Date(1567578947709), lastOpVisible: { ts: Timestamp(1567578947, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 1), $clusterTime: { clusterTime: Timestamp(1567578947, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 1) } 2019-09-04T06:35:47.716+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 1), t: 1 }, lastCommittedWall: new Date(1567578947709), lastOpVisible: { ts: Timestamp(1567578947, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 1), $clusterTime: { clusterTime: Timestamp(1567578947, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:47.716+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:47.716+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:17.715+0000 2019-09-04T06:35:47.716+0000 D2 ASIO [RS] Request 1643 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 1), t: 1 }, lastCommittedWall: new Date(1567578947709), lastOpVisible: { ts: Timestamp(1567578947, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578947, 1), t: 1 }, lastCommittedWall: new Date(1567578947709), lastOpApplied: { ts: Timestamp(1567578947, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578947, 1), $clusterTime: { clusterTime: Timestamp(1567578947, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 1) } 2019-09-04T06:35:47.716+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 1), t: 1 }, lastCommittedWall: new Date(1567578947709), lastOpVisible: { ts: Timestamp(1567578947, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578947, 1), t: 1 }, lastCommittedWall: new Date(1567578947709), lastOpApplied: { ts: Timestamp(1567578947, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 1), $clusterTime: { clusterTime: Timestamp(1567578947, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:47.716+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:47.716+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:47.716+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.716+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.716+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578942, 1) 2019-09-04T06:35:47.716+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:58.423+0000 2019-09-04T06:35:47.716+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:59.151+0000 2019-09-04T06:35:47.716+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1644 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:57.716+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578947, 1), t: 1 } } 2019-09-04T06:35:47.716+0000 D3 REPL [conn484] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.716+0000 D3 REPL [conn484] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:47.716+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:17.715+0000 2019-09-04T06:35:47.716+0000 D3 REPL [conn505] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.716+0000 D3 REPL [conn505] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:57.575+0000 2019-09-04T06:35:47.716+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:47.716+0000 D3 REPL [conn501] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.716+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest 
retirement date is 2019-09-04T06:36:16.839+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn501] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:52.054+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn464] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn464] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:59.740+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn508] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn508] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.763+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn481] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn481] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.823+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn506] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn506] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:58.760+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn507] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn507] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.468+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn498] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn498] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn493] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn493] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.825+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn513] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn513] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.878+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn485] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn485] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.888+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn504] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn504] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.918+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn483] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn483] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.939+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn495] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn495] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.128+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn482] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn482] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.644+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn465] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn465] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.846+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn480] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn480] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.838+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn502] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn502] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.701+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn503] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn503] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:55.060+0000
2019-09-04T06:35:47.716+0000 D3 REPL [conn487] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn487] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:47.846+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn490] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn490] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn510] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn510] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.962+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn496] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn496] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.432+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn509] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn509] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.925+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn492] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn492] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.328+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn511] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn511] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.873+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn512] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn512] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.876+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn497] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn497] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.882+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn514] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn514] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:13.414+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn500] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn500] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:14.296+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn489] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000
2019-09-04T06:35:47.717+0000 D3 REPL [conn489] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:15.190+0000
2019-09-04T06:35:47.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:47.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:47.770+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:47.784+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:47.784+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:47.813+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578947, 1)
2019-09-04T06:35:47.815+0000 I NETWORK [listener] connection accepted from 10.108.2.46:41200 #515 (91 connections now open)
2019-09-04T06:35:47.815+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:35:47.815+0000 D2 COMMAND [conn515] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:35:47.815+0000 I NETWORK [conn515] received client metadata from 10.108.2.46:41200 conn515: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:35:47.815+0000 I COMMAND [conn515] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:35:47.823+0000 I COMMAND [conn481] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578909, 1), signature: { hash: BinData(0, 924C09A60358AABC6457CA62E6B62DFD7CFA8AC5), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:35:47.823+0000 D1 - [conn481] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:35:47.823+0000 W - [conn481] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:47.825+0000 I COMMAND [conn493] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578915, 1), signature: { hash: BinData(0, 3ABBA556EFC99C049EC08DFC000AC593457278AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:35:47.825+0000 D1 - [conn493] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:35:47.825+0000 W - [conn493] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:47.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:47.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:47.837+0000 I NETWORK [listener] connection accepted from 10.108.2.47:56734 #516 (92 connections now open)
2019-09-04T06:35:47.837+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:35:47.837+0000 D2 COMMAND [conn516] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:35:47.837+0000 I NETWORK [conn516] received client metadata from 10.108.2.47:56734 conn516: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:35:47.838+0000 I COMMAND [conn516] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0
reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:47.839+0000 I COMMAND [conn480] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578915, 1), signature: { hash: BinData(0, 3ABBA556EFC99C049EC08DFC000AC593457278AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:47.839+0000 D1 - [conn480] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:47.839+0000 W - [conn480] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:47.840+0000 I - [conn481] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"5
61748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" 
: "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:35:47.840+0000 D1 COMMAND [conn481] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578909, 1), signature: { hash: BinData(0, 924C09A60358AABC6457CA62E6B62DFD7CFA8AC5), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:47.840+0000 D1 - [conn481] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:35:47.840+0000 W - [conn481] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:47.846+0000 I COMMAND [conn487] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 1), signature: { hash: BinData(0, EF3ADB01B4FC3197228A5D8B8AD09E7DD93915A6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:35:47.846+0000 D1 - [conn487] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:35:47.846+0000 W - [conn487] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:47.846+0000 I COMMAND [conn465] Command on database admin timed out waiting for read concern to be satisfied.
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578915, 1), signature: { hash: BinData(0, 3ABBA556EFC99C049EC08DFC000AC593457278AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:47.846+0000 D1 - [conn465] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:47.846+0000 W - [conn465] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:47.857+0000 I - [conn480] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:47.857+0000 D1 COMMAND [conn480] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578915, 1), signature: { hash: BinData(0, 3ABBA556EFC99C049EC08DFC000AC593457278AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:47.857+0000 D1 - [conn480] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:47.857+0000 W - [conn480] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:47.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:47.871+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:47.877+0000 I - [conn481] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachi
ne15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : 
"/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:35:47.877+0000 W COMMAND [conn481] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:35:47.877+0000 I COMMAND [conn481] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578909, 1), signature: { hash: BinData(0, 924C09A60358AABC6457CA62E6B62DFD7CFA8AC5), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms
2019-09-04T06:35:47.877+0000 D2 NETWORK [conn481] Session from 10.108.2.50:50292 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:35:47.877+0000 I NETWORK [conn481] end connection 10.108.2.50:50292 (91 connections now open)
2019-09-04T06:35:47.894+0000 I - [conn465] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:47.894+0000 D1 COMMAND [conn465] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578915, 1), signature: { hash: BinData(0, 3ABBA556EFC99C049EC08DFC000AC593457278AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:47.894+0000 D1 - [conn465] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:47.894+0000 W - [conn465] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:47.911+0000 I - [conn493] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 
0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : 
"/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, 
"buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:47.911+0000 D1 COMMAND [conn493] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578915, 1), signature: { hash: BinData(0, 3ABBA556EFC99C049EC08DFC000AC593457278AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:47.911+0000 D1 - [conn493] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:47.911+0000 W - [conn493] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:47.931+0000 I - [conn465] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 
0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 
3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : 
"94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:47.931+0000 W COMMAND [conn465] Unable to gather storage statistics for a slow operation due to lock aquire timeout 
2019-09-04T06:35:47.931+0000 I COMMAND [conn465] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578915, 1), signature: { hash: BinData(0, 3ABBA556EFC99C049EC08DFC000AC593457278AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30058ms 2019-09-04T06:35:47.931+0000 D2 NETWORK [conn465] Session from 10.108.2.47:56700 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:47.931+0000 I NETWORK [conn465] end connection 10.108.2.47:56700 (90 connections now open) 2019-09-04T06:35:47.949+0000 I - [conn487] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{
"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : 
"/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] 
mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:47.949+0000 D1 COMMAND [conn487] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 1), signature: { hash: BinData(0, EF3ADB01B4FC3197228A5D8B8AD09E7DD93915A6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:47.949+0000 D1 - [conn487] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:47.949+0000 W - [conn487] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:47.971+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:47.973+0000 I - [conn487] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},
{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : 
"E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] 
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:35:47.973+0000 W COMMAND [conn487] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:35:47.973+0000 I COMMAND [conn487] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578908, 1), signature: { hash: BinData(0, EF3ADB01B4FC3197228A5D8B8AD09E7DD93915A6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30113ms
2019-09-04T06:35:47.973+0000 D2 NETWORK [conn487] Session from 10.108.2.45:36712 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:35:47.973+0000 I NETWORK [conn487] end connection 10.108.2.45:36712 (89 connections now open)
2019-09-04T06:35:47.975+0000 D2 ASIO [RS] Request 1644 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578947, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578947962), o: { $v: 1, $set: { ping: new Date(1567578947955) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 1), t: 1 }, lastCommittedWall: new Date(1567578947709), lastOpVisible: { ts: Timestamp(1567578947, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578947, 1), t: 1 }, lastCommittedWall: new Date(1567578947709), lastOpApplied: { ts: Timestamp(1567578947, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 1), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 2019-09-04T06:35:47.975+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578947, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578947962), o: { $v: 1, $set: { ping: new Date(1567578947955) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 1), t: 1 }, lastCommittedWall: new Date(1567578947709), lastOpVisible: { ts: Timestamp(1567578947, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578947, 1), t: 1 }, lastCommittedWall: new Date(1567578947709), lastOpApplied: { ts: Timestamp(1567578947, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 1), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:47.975+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:47.975+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578947, 2) and ending at ts: Timestamp(1567578947, 2) 2019-09-04T06:35:47.975+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:35:59.151+0000 2019-09-04T06:35:47.976+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:35:58.478+0000 2019-09-04T06:35:47.976+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578947, 2), t: 1 } 2019-09-04T06:35:47.976+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:47.976+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:47.976+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578947, 1) 2019-09-04T06:35:47.976+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23880 2019-09-04T06:35:47.976+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:47.976+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:47.976+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23880 2019-09-04T06:35:47.976+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:47.976+0000 D3 STORAGE 
[ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:47.976+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578947, 1) 2019-09-04T06:35:47.976+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23883 2019-09-04T06:35:47.976+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:47.976+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:47.976+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23883 2019-09-04T06:35:47.976+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:35:47.976+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:47.976+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:16.839+0000 2019-09-04T06:35:47.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:47.978+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578947, 2), t: 1 } 2019-09-04T06:35:47.978+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1645 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:57.978+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578947, 1), t: 1 } } 2019-09-04T06:35:47.978+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:17.715+0000 2019-09-04T06:35:47.979+0000 D2 ASIO [RS] Request 1645 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpApplied: { ts: Timestamp(1567578947, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 2019-09-04T06:35:47.979+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: 
Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpApplied: { ts: Timestamp(1567578947, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:47.979+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:47.980+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:47.980+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:47.980+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.980+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:35:58.478+0000 2019-09-04T06:35:47.980+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:35:59.349+0000 2019-09-04T06:35:47.980+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1646 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:35:57.980+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578947, 2), t: 1 } } 2019-09-04T06:35:47.980+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:17.715+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn505] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn505] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:57.575+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn464] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn464] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:59.740+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn507] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn507] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.468+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn513] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn513] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.878+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn485] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn485] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.888+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn483] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn483] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.939+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn482] Got notified of new snapshot: { ts: 
Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn482] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.644+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn503] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn503] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:55.060+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn490] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn490] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn496] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn496] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.432+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn492] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn492] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.328+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn512] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn512] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.876+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn514] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn514] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:13.414+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn489] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.980+0000 D3 REPL [conn489] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:15.190+0000 2019-09-04T06:35:47.980+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:47.980+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:16.839+0000 2019-09-04T06:35:47.981+0000 D3 REPL [conn500] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.982+0000 D3 REPL [conn500] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:14.296+0000 2019-09-04T06:35:47.982+0000 D3 REPL [conn497] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.982+0000 D3 REPL [conn497] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.882+0000 2019-09-04T06:35:47.982+0000 D3 REPL [conn511] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.982+0000 D3 REPL [conn511] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.873+0000 2019-09-04T06:35:47.982+0000 D3 REPL [conn509] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.982+0000 D3 REPL [conn509] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.925+0000 2019-09-04T06:35:47.982+0000 D3 REPL [conn510] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 
2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.982+0000 D3 REPL [conn510] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.962+0000 2019-09-04T06:35:47.982+0000 D3 REPL [conn502] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.982+0000 D3 REPL [conn502] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.701+0000 2019-09-04T06:35:47.983+0000 D3 REPL [conn495] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.984+0000 D3 REPL [conn495] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.128+0000 2019-09-04T06:35:47.984+0000 D3 REPL [conn504] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.984+0000 D3 REPL [conn504] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.918+0000 2019-09-04T06:35:47.984+0000 D3 REPL [conn498] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.984+0000 D3 REPL [conn498] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:47.984+0000 D3 REPL [conn484] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.984+0000 D3 REPL [conn484] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:47.985+0000 D3 REPL [conn501] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.985+0000 D3 REPL [conn501] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:52.054+0000 2019-09-04T06:35:47.985+0000 D3 REPL [conn508] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.985+0000 D3 REPL [conn508] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.763+0000 2019-09-04T06:35:47.985+0000 D3 REPL [conn506] Got notified of new snapshot: { ts: Timestamp(1567578947, 1), t: 1 }, 2019-09-04T06:35:47.709+0000 2019-09-04T06:35:47.985+0000 D3 REPL [conn506] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:58.760+0000 2019-09-04T06:35:47.992+0000 I - [conn493] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:47.992+0000 W COMMAND [conn493] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:47.992+0000 I COMMAND [conn493] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578915, 1), signature: { hash: BinData(0, 3ABBA556EFC99C049EC08DFC000AC593457278AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30095ms 2019-09-04T06:35:47.992+0000 D2 NETWORK [conn493] Session from 10.108.2.46:41184 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:47.992+0000 I NETWORK [conn493] end connection 10.108.2.46:41184 (88 connections now open) 2019-09-04T06:35:47.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:48.008+0000 I - [conn480] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"
_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" 
: "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:48.008+0000 W COMMAND [conn480] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:35:48.008+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578947, 2) } 2019-09-04T06:35:48.008+0000 I COMMAND [conn480] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578915, 1), signature: { hash: BinData(0, 3ABBA556EFC99C049EC08DFC000AC593457278AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:35:48.008+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23872 2019-09-04T06:35:48.008+0000 D2 NETWORK [conn480] Session from 10.108.2.64:46788 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:48.008+0000 I NETWORK [conn480] end connection 10.108.2.64:46788 (87 connections now open) 2019-09-04T06:35:48.008+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23872 2019-09-04T06:35:48.008+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23889 2019-09-04T06:35:48.008+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23889 2019-09-04T06:35:48.008+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:48.008+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 23891 2019-09-04T06:35:48.008+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578947, 2) 2019-09-04T06:35:48.008+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578947, 2) 2019-09-04T06:35:48.008+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 23891 2019-09-04T06:35:48.008+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:48.008+0000 D3 REPL [rsSync-0] setting 
oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:35:48.008+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23890 2019-09-04T06:35:48.008+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23890 2019-09-04T06:35:48.008+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23893 2019-09-04T06:35:48.008+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23893 2019-09-04T06:35:48.008+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578947, 2), t: 1 }({ ts: Timestamp(1567578947, 2), t: 1 }) 2019-09-04T06:35:48.008+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578947, 2) 2019-09-04T06:35:48.008+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23894 2019-09-04T06:35:48.008+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578947, 2) } } ] } sort: {} projection: {} 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578947, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578947, 2) || First: notFirst: full path: ts 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578947, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578947, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578947, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:48.008+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578947, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:48.008+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23894 2019-09-04T06:35:48.008+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:48.008+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:48.008+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578947, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578947962), o: { $v: 1, $set: { ping: new Date(1567578947955) } } }, oplog application mode: Secondary 2019-09-04T06:35:48.008+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578947, 2) 2019-09-04T06:35:48.008+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 23896 2019-09-04T06:35:48.008+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" } 2019-09-04T06:35:48.008+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:35:48.008+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 23896 2019-09-04T06:35:48.008+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:48.008+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578947, 2), t: 1 }({ ts: Timestamp(1567578947, 2), t: 1 }) 2019-09-04T06:35:48.009+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.009+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.009+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to 
Timestamp(1567578947, 2) 2019-09-04T06:35:48.009+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23895 2019-09-04T06:35:48.009+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:35:48.009+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:48.009+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:35:48.009+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:35:48.009+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:35:48.009+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:48.009+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 23895 2019-09-04T06:35:48.009+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578947, 2) 2019-09-04T06:35:48.009+0000 D2 REPL [rsSync-0] Setting replication's stable optime to { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D2 STORAGE [rsSync-0] oldest_timestamp set to Timestamp(1567578942, 2) 2019-09-04T06:35:48.009+0000 D3 REPL [conn495] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn495] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.128+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn504] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn504] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.918+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn513] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn513] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.878+0000 2019-09-04T06:35:48.009+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:48.009+0000 D3 REPL [conn482] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn482] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.644+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn505] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn505] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:57.575+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn509] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23899 2019-09-04T06:35:48.009+0000 D3 REPL [conn509] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.925+0000 2019-09-04T06:35:48.009+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23899 2019-09-04T06:35:48.009+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578947, 2), t: 1 
}({ ts: Timestamp(1567578947, 2), t: 1 }) 2019-09-04T06:35:48.009+0000 D3 REPL [conn483] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn483] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.939+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn503] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn503] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:55.060+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn490] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn490] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn496] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn496] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.432+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn512] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn512] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.876+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn514] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn514] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:13.414+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn500] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn500] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:14.296+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn497] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn497] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.882+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn511] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn511] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.873+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn510] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn510] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn502] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn502] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.701+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn498] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn498] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn464] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream 
updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578947, 1), t: 1 }, durableWallTime: new Date(1567578947709), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:48.009+0000 D3 REPL [conn464] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:59.740+0000 2019-09-04T06:35:48.009+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1647 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:18.009+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578947, 1), t: 1 }, durableWallTime: new Date(1567578947709), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:48.009+0000 D3 REPL [conn492] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn492] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.328+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn489] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn489] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:15.190+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn507] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn507] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.468+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn485] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn485] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.888+0000 2019-09-04T06:35:48.009+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 
2019-09-04T06:36:17.715+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn484] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn484] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:51.661+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn501] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn501] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:52.054+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn508] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn508] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.763+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn506] Got notified of new snapshot: { ts: Timestamp(1567578947, 2), t: 1 }, 2019-09-04T06:35:47.962+0000 2019-09-04T06:35:48.009+0000 D3 REPL [conn506] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:58.760+0000 2019-09-04T06:35:48.010+0000 D2 ASIO [RS] Request 1647 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 2019-09-04T06:35:48.010+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:48.010+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:48.010+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:17.715+0000 2019-09-04T06:35:48.013+0000 D2 COMMAND [conn491] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578939, 1), signature: { hash: BinData(0, 0C747D665C0A018C40BCB6AF44F3387EECD2F396), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:48.013+0000 D1 REPL [conn491] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a 
snapshot -- current snapshot: { ts: Timestamp(1567578947, 2), t: 1 } 2019-09-04T06:35:48.013+0000 D3 REPL [conn491] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.023+0000 2019-09-04T06:35:48.027+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:35:48.027+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:48.027+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:48.027+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1648 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:18.027+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, durableWallTime: new Date(1567578938650), appliedOpTime: { ts: Timestamp(1567578938, 2), t: 1 }, appliedWallTime: new Date(1567578938650), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:48.027+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:17.715+0000 2019-09-04T06:35:48.028+0000 D2 ASIO [RS] Request 1648 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 
2019-09-04T06:35:48.028+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:48.028+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:48.028+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:17.715+0000 2019-09-04T06:35:48.032+0000 I NETWORK [listener] connection accepted from 10.108.2.49:53562 #517 (88 connections now open) 2019-09-04T06:35:48.032+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:48.032+0000 D2 COMMAND [conn517] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:48.032+0000 I NETWORK [conn517] received client metadata from 10.108.2.49:53562 conn517: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:48.032+0000 I COMMAND [conn517] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:48.035+0000 D2 COMMAND [conn517] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578940, 1), signature: { hash: BinData(0, DC364624D5EB396408DE50F10AD903484AFAB43A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:48.035+0000 D1 REPL [conn517] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578947, 2), t: 1 } 2019-09-04T06:35:48.035+0000 D3 REPL [conn517] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.045+0000 2019-09-04T06:35:48.035+0000 D2 COMMAND [conn22] run command admin.$cmd 
{ isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.035+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.046+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36742 #518 (89 connections now open) 2019-09-04T06:35:48.046+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:48.046+0000 D2 COMMAND [conn518] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:48.046+0000 I NETWORK [conn518] received client metadata from 10.108.2.45:36742 conn518: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:48.046+0000 I COMMAND [conn518] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:48.051+0000 D2 COMMAND [conn518] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 31B0FBBE3F88E8227ED4748A3C95E935277E7687), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:48.051+0000 D1 REPL [conn518] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578947, 2), t: 1 } 2019-09-04T06:35:48.051+0000 D3 REPL [conn518] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.061+0000 2019-09-04T06:35:48.071+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:48.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.107+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578947, 2) 2019-09-04T06:35:48.109+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38884 #519 (90 connections now open) 2019-09-04T06:35:48.109+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:48.109+0000 D2 COMMAND [conn519] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: 
"Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:48.109+0000 I NETWORK [conn519] received client metadata from 10.108.2.44:38884 conn519: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:48.109+0000 I COMMAND [conn519] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:48.109+0000 D2 COMMAND [conn519] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 31B0FBBE3F88E8227ED4748A3C95E935277E7687), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:48.109+0000 D1 REPL [conn519] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578947, 2), t: 1 } 2019-09-04T06:35:48.109+0000 D3 REPL [conn519] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.119+0000 2019-09-04T06:35:48.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.120+0000 I NETWORK [listener] connection accepted from 10.108.2.53:50906 #520 (91 connections now open) 2019-09-04T06:35:48.120+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:48.121+0000 D2 COMMAND [conn520] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:48.121+0000 I NETWORK [conn520] received client metadata from 10.108.2.53:50906 conn520: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 
7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:48.121+0000 I COMMAND [conn520] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:48.124+0000 D2 COMMAND [conn520] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:35:48.124+0000 D1 REPL [conn520] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578947, 2), t: 1 } 2019-09-04T06:35:48.124+0000 D3 REPL [conn520] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.134+0000 2019-09-04T06:35:48.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.158+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.159+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.171+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:48.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:48.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:48.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 
6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:48.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:48.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962) } 2019-09-04T06:35:48.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:48.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:48.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.271+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:48.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.371+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:48.471+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:48.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.522+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:35:48.522+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } 
numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:35:48.522+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:48.522+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:35:48.535+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.535+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.571+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:48.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.658+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.671+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:48.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.771+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:48.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:48.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1649) to cmodb802.togewa.com:27019, { 
replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:48.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1649 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:35:58.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:48.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:16.839+0000 2019-09-04T06:35:48.839+0000 D2 ASIO [Replication] Request 1649 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 2019-09-04T06:35:48.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:48.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:48.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1649) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578947, 2), 
$clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 2019-09-04T06:35:48.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:48.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:35:59.349+0000 2019-09-04T06:35:48.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:35:58.912+0000 2019-09-04T06:35:48.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:48.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:50.839Z 2019-09-04T06:35:48.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:48.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:18.839+0000 2019-09-04T06:35:48.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:18.839+0000 2019-09-04T06:35:48.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:48.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1650) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:48.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1650 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:35:58.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:48.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:18.839+0000 2019-09-04T06:35:48.840+0000 D2 ASIO [Replication] Request 1650 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 2019-09-04T06:35:48.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:48.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:48.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1650) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 2019-09-04T06:35:48.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:48.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:50.840Z 2019-09-04T06:35:48.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:18.839+0000 2019-09-04T06:35:48.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.872+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:48.972+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:48.976+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:48.976+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:48.976+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578947, 2) 2019-09-04T06:35:48.976+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 23940 2019-09-04T06:35:48.976+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:48.976+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:48.976+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23940 2019-09-04T06:35:48.977+0000 D2 COMMAND [conn29] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:48.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:48.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:49.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:49.009+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23946 2019-09-04T06:35:49.009+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23946 2019-09-04T06:35:49.009+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578947, 2), t: 1 }({ ts: Timestamp(1567578947, 2), t: 1 }) 2019-09-04T06:35:49.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:49.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:49.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:49.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:49.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962) } 2019-09-04T06:35:49.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:49.072+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:49.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:49.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:49.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:49.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { 
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:49.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:49.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:49.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:49.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:49.158+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:49.158+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:49.172+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:49.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:49.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:49.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:49.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:49.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:49.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:49.272+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:49.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:49.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:49.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:49.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:49.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions. 
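Note: the replSetHeartbeat traffic above (outbound requests 1649 to cmodb802 and 1650 to cmodb804, plus the inbound heartbeats on conn28/conn34) is what keeps the election timer parked: each response from the primary postpones the election timeout (rescheduled here for 06:35:58.912) and the next heartbeat is scheduled two seconds out. Both cadences come from the replica set configuration; a minimal pymongo sketch for reading them (hostname from this log, the rest illustrative):

from pymongo import MongoClient

# Same config-server member as above; auth/TLS options omitted.
client = MongoClient("cmodb803.togewa.com", 27019)

# The ~2 s heartbeat cadence and the ~10 s election timeout seen in the
# log correspond to these settings (MongoDB defaults: 2000 ms heartbeat
# interval, 10000 ms election timeout).
conf = client.admin.command("replSetGetConfig")["config"]
settings = conf.get("settings", {})
print("heartbeatIntervalMillis:", settings.get("heartbeatIntervalMillis", 2000))
print("electionTimeoutMillis:", settings.get("electionTimeoutMillis", 10000))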
2019-09-04T06:35:49.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:49.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:49.370+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0003
2019-09-04T06:35:49.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry
2019-09-04T06:35:49.370+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1651 -- target:[cmodb812.togewa.com:27018] db:admin expDate:2019-09-04T06:35:54.370+0000 cmd:{ isMaster: 1 }
2019-09-04T06:35:49.370+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1652 -- target:[cmodb813.togewa.com:27018] db:admin expDate:2019-09-04T06:35:54.370+0000 cmd:{ isMaster: 1 }
2019-09-04T06:35:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:35:49.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" }
2019-09-04T06:35:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:49.370+0000 D5 QUERY [shard-registry-reload] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:49.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:35:49.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:35:49.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and
2019-09-04T06:35:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:49.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:49.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578947, 2) 2019-09-04T06:35:49.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 23962 2019-09-04T06:35:49.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 23962 2019-09-04T06:35:49.370+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1651 finished with response: { hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb812.togewa.com:27018", me: "cmodb812.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578946, 1), t: 1 }, lastWriteDate: new Date(1567578946000), majorityOpTime: { ts: Timestamp(1567578946, 1), t: 1 }, majorityWriteDate: new Date(1567578946000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578949370), logicalSessionTimeoutMinutes: 30, connectionId: 21713, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578946, 1), $configServerState: { opTime: { ts: Timestamp(1567578947, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578946, 1) } 2019-09-04T06:35:49.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:756 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:35:49.370+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb812.togewa.com:27018", me: "cmodb812.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578946, 1), t: 1 }, lastWriteDate: new Date(1567578946000), majorityOpTime: { ts: Timestamp(1567578946, 1), t: 1 }, majorityWriteDate: new Date(1567578946000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578949370), logicalSessionTimeoutMinutes: 30, connectionId: 21713, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578946, 1), $configServerState: { opTime: { ts: Timestamp(1567578947, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578946, 1) } target: 
cmodb812.togewa.com:27018 2019-09-04T06:35:49.370+0000 D1 SHARDING [shard-registry-reload] found 4 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578947, 2), t: 1 } 2019-09-04T06:35:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:35:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:35:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:35:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:35:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:35:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:35:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018 2019-09-04T06:35:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0003, with CS shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018 2019-09-04T06:35:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS 2019-09-04T06:35:49.372+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:49.374+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1652 finished with response: { hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb812.togewa.com:27018", me: "cmodb813.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578946, 1), t: 1 }, lastWriteDate: new Date(1567578946000), majorityOpTime: { ts: Timestamp(1567578946, 1), t: 1 }, majorityWriteDate: new Date(1567578946000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578949368), logicalSessionTimeoutMinutes: 30, connectionId: 13308, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578946, 1), $configServerState: { opTime: { ts: Timestamp(1567578947, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578946, 1) } 2019-09-04T06:35:49.374+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb812.togewa.com:27018", me: "cmodb813.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578946, 1), t: 1 }, lastWriteDate: new Date(1567578946000), majorityOpTime: { ts: Timestamp(1567578946, 1), t: 1 }, majorityWriteDate: new Date(1567578946000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578949368), 
logicalSessionTimeoutMinutes: 30, connectionId: 13308, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578946, 1), $configServerState: { opTime: { ts: Timestamp(1567578947, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578946, 1) } target: cmodb813.togewa.com:27018 2019-09-04T06:35:49.374+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0003 took 4ms 2019-09-04T06:35:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001 2019-09-04T06:35:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1653 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:35:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:35:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1654 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:35:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:35:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000 2019-09-04T06:35:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1655 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:35:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:35:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1656 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:35:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:35:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002 2019-09-04T06:35:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1657 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:35:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:35:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1658 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:35:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:35:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1655 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578943, 1), t: 1 }, lastWriteDate: new Date(1567578943000), majorityOpTime: { ts: Timestamp(1567578943, 1), t: 1 }, majorityWriteDate: new Date(1567578943000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578949385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578943, 1), $configServerState: { opTime: { ts: 
Timestamp(1567578938, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578946, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578943, 1) } 2019-09-04T06:35:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578943, 1), t: 1 }, lastWriteDate: new Date(1567578943000), majorityOpTime: { ts: Timestamp(1567578943, 1), t: 1 }, majorityWriteDate: new Date(1567578943000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578949385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578943, 1), $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578946, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578943, 1) } target: cmodb806.togewa.com:27018 2019-09-04T06:35:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1653 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578947, 1), t: 1 }, lastWriteDate: new Date(1567578947000), majorityOpTime: { ts: Timestamp(1567578947, 1), t: 1 }, majorityWriteDate: new Date(1567578947000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578949386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578947, 1), $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578947, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 1) } 2019-09-04T06:35:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578947, 1), t: 1 }, lastWriteDate: new Date(1567578947000), majorityOpTime: { ts: Timestamp(1567578947, 1), t: 1 }, majorityWriteDate: new Date(1567578947000) }, maxBsonObjectSize: 16777216, 
maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578949386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578947, 1), $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578947, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 1) } target: cmodb808.togewa.com:27018
2019-09-04T06:35:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1657 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578939, 1), t: 1 }, lastWriteDate: new Date(1567578939000), majorityOpTime: { ts: Timestamp(1567578939, 1), t: 1 }, majorityWriteDate: new Date(1567578939000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578949386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578939, 1), $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578946, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578939, 1) }
2019-09-04T06:35:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578939, 1), t: 1 }, lastWriteDate: new Date(1567578939000), majorityOpTime: { ts: Timestamp(1567578939, 1), t: 1 }, majorityWriteDate: new Date(1567578939000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578949386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578939, 1), $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578946, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578939, 1) } target: cmodb810.togewa.com:27018
2019-09-04T06:35:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1654 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578947, 1), t: 1 }, lastWriteDate: new Date(1567578947000), majorityOpTime: { ts: Timestamp(1567578947, 1), t: 1 }, majorityWriteDate: new Date(1567578947000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578949386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 1), $configServerState: { opTime: { ts: Timestamp(1567578930, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578947, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 1) }
2019-09-04T06:35:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578947, 1), t: 1 }, lastWriteDate: new Date(1567578947000), majorityOpTime: { ts: Timestamp(1567578947, 1), t: 1 }, majorityWriteDate: new Date(1567578947000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578949386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 1), $configServerState: { opTime: { ts: Timestamp(1567578930, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578947, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 1) } target: cmodb809.togewa.com:27018
2019-09-04T06:35:49.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms
2019-09-04T06:35:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1656 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578943, 1), t: 1 }, lastWriteDate: new Date(1567578943000), majorityOpTime: { ts: Timestamp(1567578943, 1), t: 1 }, majorityWriteDate: new Date(1567578943000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578949385), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578943, 1), $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578946, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578943, 1) }
2019-09-04T06:35:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578943, 1), t: 1 }, lastWriteDate: new Date(1567578943000), majorityOpTime: { ts: Timestamp(1567578943, 1), t: 1 }, majorityWriteDate: new Date(1567578943000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578949385), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578943, 1), $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578946, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578943, 1) } target: cmodb807.togewa.com:27018
2019-09-04T06:35:49.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms
2019-09-04T06:35:49.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1658 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578939, 1), t: 1 }, lastWriteDate: new Date(1567578939000), majorityOpTime: { ts: Timestamp(1567578939, 1), t: 1 }, majorityWriteDate: new Date(1567578939000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578949386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578939, 1), $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578946, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578939, 1) }
2019-09-04T06:35:49.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578939, 1), t: 1 }, lastWriteDate: new Date(1567578939000), majorityOpTime: { ts: Timestamp(1567578939, 1), t: 1 }, majorityWriteDate: new Date(1567578939000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578949386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578939, 1), $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578946, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578939, 1) } target: cmodb811.togewa.com:27018
2019-09-04T06:35:49.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms
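
The ReplicaSetMonitor entries above are the periodic isMaster probes this config server sends to every member of each shard replica set; each "Refreshing replica set ... took Nms" line closes out one polling round. A minimal mongo-shell sketch of the same probe against one host named in the log (the field selection is illustrative, not the monitor's internal code path):

  // Run the same isMaster command the monitor sends and pull out the
  // topology fields echoed in the responses above. Assumes direct
  // connectivity to the shard member; for illustration only.
  var conn = new Mongo("cmodb810.togewa.com:27018");
  var hello = conn.getDB("admin").runCommand({ isMaster: 1 });
  printjson({
      setName:  hello.setName,   // "shard0002"
      ismaster: hello.ismaster,  // true only on the primary
      primary:  hello.primary,   // the primary as seen by this member
      hosts:    hello.hosts,     // data-bearing members
      arbiters: hello.arbiters   // arbiters are monitored but hold no data
  });
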
2019-09-04T06:35:49.472+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:49.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:49.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:49.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:49.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:49.572+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:49.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:49.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:49.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:49.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:49.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:49.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:49.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:49.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:49.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:49.659+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:49.672+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:49.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:49.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:49.682+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578949682) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }
2019-09-04T06:35:49.682+0000 D4 - [replSetDistLockPinger] Taking ticket. Available: 1000000000
2019-09-04T06:35:49.682+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178
2019-09-04T06:35:49.682+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:35:49.701+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(+0xADB043) [0x561749a63043]
 mongod(+0x13B2606) [0x56174a33a606]
 mongod(+0x13B3A55) [0x56174a33ba55]
 mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894]
 mongod(+0x10FA899) [0x56174a082899]
 mongod(+0x10FBF53) [0x56174a083f53]
 mongod(+0x10FCE0E) [0x56174a084e0e]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(+0x1FBD2EE) [0x56174af452ee]
 mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa]
 mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2]
 mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b]
 mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e]
 mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc]
 mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1]
 mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a]
 mongod(+0x28A5BBF) [0x56174b82dbbf]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
-----  END BACKTRACE  -----
2019-09-04T06:35:49.701+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. OpTime: { ts: Timestamp(1567578947, 2), t: 1 }, write concern: { w: "majority", wtimeout: 15000 }
2019-09-04T06:35:49.701+0000 D4 STORAGE [replSetDistLockPinger] flushed journal
2019-09-04T06:35:49.701+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578949682) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:35:49.701+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578949682) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 19ms
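
The NotMaster (code 10107) assertion and backtrace above are benign on a config-server secondary: the replSetDistLockPinger thread tries to upsert its ping document into config.lockpings, and only the primary may write, so the findAndModify is rejected and the pinger waits for its next round. A hedged shell sketch of the same write wrapped in an explicit retry-on-NotMaster loop (the helper name and retry policy are illustrative, not mongod's internal logic):

  // Re-issue the lockpings upsert until a primary accepts it.
  // 10107 is the NotMaster error code reported in the log above.
  function pingLockpings(attempts) {
      for (var i = 0; i < attempts; i++) {
          var res = db.getSiblingDB("config").runCommand({
              findAndModify: "lockpings",
              query: { _id: "ConfigServer" },
              update: { $set: { ping: new Date() } },
              upsert: true,
              writeConcern: { w: "majority", wtimeout: 15000 }
          });
          if (res.ok) { return res; }
          if (res.code !== 10107) { throw Error(tojson(res)); } // retry NotMaster only
          sleep(1000); // give an election time to settle
      }
      throw Error("no primary after " + attempts + " attempts");
  }
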
"local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:49.976+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:49.976+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 23978 2019-09-04T06:35:49.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:49.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:49.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:49.999+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:50.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:50.000+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:35:50.000+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:35:50.000+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:35:50.009+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 23985 2019-09-04T06:35:50.009+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 23985 2019-09-04T06:35:50.009+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578947, 2), t: 1 }({ ts: Timestamp(1567578947, 2), t: 1 }) 2019-09-04T06:35:50.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:35:50.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:35:50.022+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:35:50.022+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:35:50.022+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:35:50.022+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:35:50.027+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:50.027+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 
2019-09-04T06:35:50.041+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:35:50.041+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:35:50.041+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:35:50.042+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:35:50.042+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:50.042+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunksTree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:50.042+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:35:50.042+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:35:50.042+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:35:50.042+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:35:50.042+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:35:50.042+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:35:50.042+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:50.042+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:50.042+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:35:50.042+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578947, 2)
2019-09-04T06:35:50.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23991
2019-09-04T06:35:50.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23991
2019-09-04T06:35:50.042+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
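
After replSetGetStatus, the monitoring session counts jumbo chunks. Because config.chunks has no index with jumbo as a leading field (the planner rated ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1 and _id_, and output 0 indexed solutions), the count runs as a COLLSCAN, cheap here at docsExamined:1. The same check from the shell, with an illustrative follow-up that lists any offenders:

  // Count chunks flagged jumbo; a collection scan, exactly as planned above.
  var cfg = db.getSiblingDB("config");
  cfg.chunks.count({ jumbo: true });
  // Illustrative variant: show which namespaces and ranges are jumbo.
  cfg.chunks.find({ jumbo: true }, { ns: 1, min: 1, max: 1 }).forEach(printjson);
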
2019-09-04T06:35:50.042+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:35:50.042+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:35:50.042+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:35:50.042+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:50.042+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:35:50.042+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:35:50.042+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:35:50.042+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578947, 2)
2019-09-04T06:35:50.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23994
2019-09-04T06:35:50.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23994
2019-09-04T06:35:50.042+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:50.043+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:50.043+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:35:50.043+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:35:50.043+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578947, 2)
2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 23996
2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 23996
2019-09-04T06:35:50.043+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:570 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:35:50.043+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:35:50.043+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:35:50.043+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
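
The two single-document finds with opposite $natural sorts bracket the oplog: oldest entry first, then newest. The difference between their ts values is the replication window a monitoring tool typically reports. (The follow-up probe of local.oplog.$main gets an EOF plan because that namespace only exists under the legacy master/slave replication, not replica sets.) A sketch of the same computation:

  // First and last oplog entries; ts.t is the seconds component of the
  // Timestamp, so the difference approximates the oplog window.
  var oplog = db.getSiblingDB("local").getCollection("oplog.rs");
  var first = oplog.find({ ts: { $exists: true } }).sort({ $natural: 1 }).limit(1).next();
  var last = oplog.find({ ts: { $exists: true } }).sort({ $natural: -1 }).limit(1).next();
  print("oplog window: " + (last.ts.t - first.ts.t) + "s");
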
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24000 2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24001 2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24001 2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24002 2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24002 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24003 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24003 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24004 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
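
The listDatabases command above is what triggers this long run of "looking up metadata" / "fetched CCE metadata" lines: to size each database, the server walks every collection's catalog entry, and each md document maps a namespace to its WiredTiger ident and index idents (for example, config.lockpings is backed by config/collection/28--6194257481163143499). On the client side it is a single command; a sketch:

  // One command on the wire; the per-collection sizing happens server-side.
  var res = db.adminCommand({ listDatabases: 1 });
  res.databases.forEach(function (d) {
      print(d.name + ": " + d.sizeOnDisk + " bytes" + (d.empty ? " (empty)" : ""));
  });
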
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24004
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24005
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24005
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24006
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24006
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24007
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24007
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24008
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24008
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24009 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24009
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24010
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24010
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24011
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24011
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24012
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24012
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24013
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
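Each fetched CCE metadata document also carries the collection's UUID (for example cc5f25a3-25cf-4a45-b674-6595d24d7e9a for config.shards). The same UUIDs are visible through the listCollections command; a hedged sketch, assuming the same shell session:

    // getCollectionInfos wraps listCollections; info[0].info.uuid should
    // match the UUID in the CCE metadata logged above.
    db.getSiblingDB("config").getCollectionInfos({ name: "shards" })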
2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24013 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24014 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24014 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24015 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24015 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24016 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24016 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24017 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 24017 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24018 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24018 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24019 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24019 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24020 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24020 2019-09-04T06:35:50.045+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:35:50.045+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24022 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24022 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24023 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24023 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24024 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24024 2019-09-04T06:35:50.045+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:50.045+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24026 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24026 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24027 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24027 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24028 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24028 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24029 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24029 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24030 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24030 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24031 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24031 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24032 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24032 2019-09-04T06:35:50.045+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 24033 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24033 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24034 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24034 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24035 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24035 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24036 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24036 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24037 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24037 2019-09-04T06:35:50.046+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:50.046+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24039 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24039 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24040 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24040 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24041 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24041 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24042 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24042 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24043 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24043 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24044 2019-09-04T06:35:50.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24044 2019-09-04T06:35:50.046+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:35:50.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:50.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:50.073+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:50.073+0000 D2 COMMAND [conn17] run 
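The listDatabases and per-database dbStats commands traced above arrive with a secondaryPreferred read preference, which is why this SECONDARY services them; the repeated begin/rollback transaction pairs are the read-only snapshots used to gather the statistics. The equivalent shell calls, as a sketch assuming a direct connection to this node:

    // Equivalent of the logged admin.$cmd listDatabases invocation.
    db.adminCommand({ listDatabases: 1 })
    // Equivalent of the logged dbStats run against each database in turn.
    db.getSiblingDB("admin").runCommand({ dbStats: 1 })
    db.getSiblingDB("config").runCommand({ dbStats: 1 })
    db.getSiblingDB("local").runCommand({ dbStats: 1 })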
2019-09-04T06:35:50.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.158+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.159+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.173+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:50.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.212+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.212+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:50.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:35:50.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:50.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:50.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962) }
2019-09-04T06:35:50.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:35:50.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999999 Now: 1000000000
2019-09-04T06:35:50.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.273+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:50.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.373+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:50.473+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:50.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.574+0000 D4 STORAGE [WTJournalFlusher] flushed journal
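The steady stream of isMaster calls above is routine topology polling from connected clients and the cluster's internal NetworkInterfaceTL connections, arriving roughly every 500 ms per connection. The same command can be issued by hand; a sketch (isMaster is the 4.2-era spelling of what later became hello):

    // Returns this node's view of the replica set: ismaster/secondary
    // flags, the host list, setName, and wire-version bounds.
    db.adminCommand({ isMaster: 1 })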
2019-09-04T06:35:50.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.659+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.674+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:50.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.774+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:50.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:50.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1659) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:50.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1659 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:00.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:50.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:18.839+0000
2019-09-04T06:35:50.839+0000 D2 ASIO [Replication] Request 1659 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) }
2019-09-04T06:35:50.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:35:50.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:50.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1659) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) }
2019-09-04T06:35:50.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary
2019-09-04T06:35:50.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:35:58.912+0000
2019-09-04T06:35:50.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:36:00.895+0000
2019-09-04T06:35:50.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0)
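The heartbeat exchange above (request 1659 to cmodb802, which answers with state: 1, PRIMARY, and request 1660 below to cmodb804, which answers with state: 2, SECONDARY) is what keeps postponing this node's election timeout. A supported way to observe the same member states and optimes, assuming a shell on any configrs member:

    // replSetGetStatus reports each member's state, optimes and last
    // heartbeat results, matching the responses exchanged in the log.
    db.adminCommand({ replSetGetStatus: 1 })
    // rs.status() is the shell helper for the same command.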
2019-09-04T06:35:50.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:52.839Z
2019-09-04T06:35:50.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:50.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:20.839+0000
2019-09-04T06:35:50.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:20.839+0000
2019-09-04T06:35:50.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:35:50.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1660) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:50.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1660 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:00.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:35:50.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:20.839+0000
2019-09-04T06:35:50.840+0000 D2 ASIO [Replication] Request 1660 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) }
2019-09-04T06:35:50.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:35:50.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:35:50.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1660) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) }
2019-09-04T06:35:50.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:35:50.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:52.840Z
2019-09-04T06:35:50.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:20.839+0000
2019-09-04T06:35:50.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.874+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:50.974+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:50.976+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:50.976+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:50.976+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578947, 2)
2019-09-04T06:35:50.976+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24076
2019-09-04T06:35:50.977+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:50.977+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:50.977+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24076
2019-09-04T06:35:50.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:50.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:50.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:51.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:51.010+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24082
2019-09-04T06:35:51.010+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24082
2019-09-04T06:35:51.010+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578947, 2), t: 1 }({ ts: Timestamp(1567578947, 2), t: 1 })
2019-09-04T06:35:51.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:51.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:35:51.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:51.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:35:51.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962) }
2019-09-04T06:35:51.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 2), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:51.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:51.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:51.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:51.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:51.074+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:51.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:51.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:51.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:51.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:51.116+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35890 #521 (92 connections now open)
2019-09-04T06:35:51.116+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:35:51.116+0000 D2 COMMAND [conn521] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:35:51.116+0000 I NETWORK [conn521] received client metadata from 10.108.2.56:35890 conn521: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:35:51.116+0000 I COMMAND [conn521] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:35:51.131+0000 I COMMAND [conn495] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 71EB50998B0ED96CFCE295A2D5D129D8C4E11CBE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:35:51.131+0000 D1 - [conn495] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:35:51.131+0000 W - [conn495] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:51.149+0000 I - [conn495] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
 mongod(+0x10FBF24) [0x56174a083f24]
 mongod(+0x10FCE0E) [0x56174a084e0e]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:35:51.149+0000 D1 COMMAND [conn495] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 71EB50998B0ED96CFCE295A2D5D129D8C4E11CBE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:51.149+0000 D1 - [conn495] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:35:51.149+0000 W - [conn495] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.170+0000 I - [conn495] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", 
"version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", 
"path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) 
[0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:51.170+0000 W COMMAND [conn495] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:35:51.170+0000 I COMMAND [conn495] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578911, 1), signature: { hash: BinData(0, 71EB50998B0ED96CFCE295A2D5D129D8C4E11CBE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:35:51.170+0000 D2 NETWORK [conn495] Session from 10.108.2.56:35870 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:51.170+0000 I NETWORK [conn495] end connection 10.108.2.56:35870 (91 connections now open) 2019-09-04T06:35:51.174+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:51.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:51.235+0000 D4 - [FlowControlRefresher] Refreshing tickets.
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:51.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.274+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:51.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.350+0000 D2 COMMAND [conn470] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.350+0000 I COMMAND [conn470] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.374+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:51.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.475+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:51.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.575+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:51.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:35:51.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.648+0000 I COMMAND [conn482] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578918, 1), signature: { hash: BinData(0, 1AA7EB05C40215A8E8D27C963B2C5C048A4F65D3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:51.648+0000 D1 - [conn482] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:51.648+0000 W - [conn482] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42316 #522 (92 connections now open) 2019-09-04T06:35:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:51.650+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45954 #523 (93 connections now open) 2019-09-04T06:35:51.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:51.650+0000 D2 COMMAND [conn522] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:51.650+0000 I NETWORK [conn522] received client metadata from 10.108.2.48:42316 conn522: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:51.650+0000 I COMMAND [conn522] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:51.650+0000 D2 COMMAND [conn523] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:51.650+0000 I NETWORK [conn523] received client metadata from 10.108.2.72:45954 conn523: { driver: { name: 
"NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:51.650+0000 I COMMAND [conn523] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:51.650+0000 D2 COMMAND [conn522] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, FD25E1CF7A4A1D067ADEB8019F5D3BD9C83758AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:51.650+0000 D1 REPL [conn522] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578947, 2), t: 1 } 2019-09-04T06:35:51.650+0000 D3 REPL [conn522] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000 2019-09-04T06:35:51.650+0000 D2 COMMAND [conn523] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:51.650+0000 D1 REPL [conn523] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578947, 2), t: 1 } 2019-09-04T06:35:51.650+0000 D3 REPL [conn523] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000 2019-09-04T06:35:51.651+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49402 #524 (94 connections now open) 2019-09-04T06:35:51.651+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:51.651+0000 D2 COMMAND [conn524] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:51.651+0000 I NETWORK [conn524] received client metadata from 10.108.2.54:49402 conn524: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:51.651+0000 I COMMAND [conn524] command admin.$cmd command: 
isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:51.651+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.658+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.664+0000 I COMMAND [conn498] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578918, 1), signature: { hash: BinData(0, 1AA7EB05C40215A8E8D27C963B2C5C048A4F65D3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:51.665+0000 D1 - [conn498] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:51.665+0000 W - [conn498] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:51.665+0000 I COMMAND [conn484] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 6CEB6570F5B99291340499417CC6D1FF61799D1C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:35:51.665+0000 D1 - [conn484] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:51.665+0000 W - [conn484] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:51.665+0000 I - [conn482] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:51.665+0000 D1 COMMAND [conn482] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578918, 1), signature: { hash: BinData(0, 1AA7EB05C40215A8E8D27C963B2C5C048A4F65D3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:51.665+0000 D1 - [conn482] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:51.665+0000 W - [conn482] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:51.665+0000 I COMMAND [conn490] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578913, 1), signature: { hash: BinData(0, 20276E4D7DE2C0C2B6B8CAF955AF3D33107689B9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:51.665+0000 D1 - [conn490] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:51.665+0000 W - [conn490] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:51.675+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:51.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.681+0000 I - [conn484] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:51.681+0000 D1 COMMAND [conn484] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 6CEB6570F5B99291340499417CC6D1FF61799D1C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:51.681+0000 D1 - [conn484] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:51.681+0000 W - [conn484] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:51.698+0000 I - [conn498] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 
0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : 
"EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : 
"9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:51.698+0000 D1 COMMAND [conn498] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578918, 1), signature: { hash: BinData(0, 1AA7EB05C40215A8E8D27C963B2C5C048A4F65D3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:51.698+0000 D1 - [conn498] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:51.698+0000 W - [conn498] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:51.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.718+0000 I - [conn484] 
0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : 
"E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : 
"/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) 
[0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:35:51.718+0000 W COMMAND [conn484] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:35:51.718+0000 I COMMAND [conn484] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 6CEB6570F5B99291340499417CC6D1FF61799D1C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms
2019-09-04T06:35:51.718+0000 D2 NETWORK [conn484] Session from 10.108.2.73:52334 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:35:51.718+0000 I NETWORK [conn484] end connection 10.108.2.73:52334 (93 connections now open)
2019-09-04T06:35:51.738+0000 I - [conn498] [backtrace omitted: frame addresses, symbolized frames, processInfo, and somap identical to the conn484 backtrace above]
2019-09-04T06:35:51.738+0000 W COMMAND [conn498] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:35:51.739+0000 I COMMAND [conn498] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578918, 1), signature: { hash: BinData(0, 1AA7EB05C40215A8E8D27C963B2C5C048A4F65D3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30046ms
2019-09-04T06:35:51.739+0000 D2 NETWORK [conn498] Session from 10.108.2.54:49378 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:35:51.739+0000 I NETWORK [conn498] end connection 10.108.2.54:49378 (92 connections now open)
2019-09-04T06:35:51.741+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:51.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:51.742+0000 D2 COMMAND [conn499] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:35:51.742+0000 D1 REPL [conn499] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578947, 2), t: 1 }
2019-09-04T06:35:51.742+0000 D3 REPL [conn499] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.752+0000
2019-09-04T06:35:51.756+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48564
#525 (93 connections now open)
2019-09-04T06:35:51.756+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:35:51.756+0000 D2 COMMAND [conn525] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:35:51.756+0000 I NETWORK [conn525] received client metadata from 10.108.2.59:48564 conn525: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:35:51.756+0000 I COMMAND [conn525] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:35:51.757+0000 D2 COMMAND [conn525] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578941, 1), signature: { hash: BinData(0, B607257475BAE42FA27A1C9FCFB0BFBA13A2A499), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:35:51.757+0000 D1 REPL [conn525] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578947, 2), t: 1 }
2019-09-04T06:35:51.757+0000 D3 REPL [conn525] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.767+0000
2019-09-04T06:35:51.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:51.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:51.767+0000 I - [conn482] [backtrace omitted: frame addresses, symbolized frames, processInfo, and somap identical to the conn484 backtrace above]
2019-09-04T06:35:51.767+0000 W COMMAND [conn482] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:35:51.767+0000 I COMMAND [conn482] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000,
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578918, 1), signature: { hash: BinData(0, 1AA7EB05C40215A8E8D27C963B2C5C048A4F65D3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms
2019-09-04T06:35:51.767+0000 D2 NETWORK [conn482] Session from 10.108.2.44:38858 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:35:51.767+0000 I NETWORK [conn482] end connection 10.108.2.44:38858 (92 connections now open)
2019-09-04T06:35:51.775+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:51.776+0000 I - [conn490] [backtrace omitted: frame addresses, symbolized frames, processInfo, and somap identical to the earlier waitForReadConcern backtrace above]
2019-09-04T06:35:51.776+0000 D1 COMMAND [conn490] assertion while executing command 'find' on database
'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578913, 1), signature: { hash: BinData(0, 20276E4D7DE2C0C2B6B8CAF955AF3D33107689B9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:51.776+0000 D1 - [conn490] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:51.776+0000 W - [conn490] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:51.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.796+0000 I - [conn490] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEE
NS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : 
"084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:51.796+0000 W COMMAND [conn490] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:35:51.796+0000 I COMMAND [conn490] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578913, 1), signature: { hash: BinData(0, 20276E4D7DE2C0C2B6B8CAF955AF3D33107689B9), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30124ms 2019-09-04T06:35:51.796+0000 D2 NETWORK [conn490] Session from 10.108.2.58:52328 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:51.796+0000 I NETWORK [conn490] end connection 10.108.2.58:52328 (91 connections now open) 2019-09-04T06:35:51.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.875+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:51.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.975+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:51.977+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:51.977+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:51.977+0000 D3 STORAGE [ReplBatcher] 
begin_transaction on local snapshot Timestamp(1567578947, 2) 2019-09-04T06:35:51.977+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24127 2019-09-04T06:35:51.977+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:51.977+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:51.977+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24127 2019-09-04T06:35:51.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:51.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:51.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:52.010+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24133 2019-09-04T06:35:52.010+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24133 2019-09-04T06:35:52.010+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578947, 2), t: 1 }({ ts: Timestamp(1567578947, 2), t: 1 }) 2019-09-04T06:35:52.057+0000 I COMMAND [conn501] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 5F52DEB026CAAD2DBA516440B34A022BEC413848), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:52.057+0000 D1 - [conn501] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:52.057+0000 W - [conn501] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:52.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.074+0000 I - [conn501] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19S
erviceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:52.074+0000 D1 COMMAND [conn501] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 5F52DEB026CAAD2DBA516440B34A022BEC413848), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:52.074+0000 D1 - [conn501] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:52.074+0000 W - [conn501] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:52.075+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:52.094+0000 I - [conn501] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:52.094+0000 W COMMAND [conn501] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:52.094+0000 I COMMAND [conn501] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 5F52DEB026CAAD2DBA516440B34A022BEC413848), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:35:52.094+0000 D2 NETWORK [conn501] Session from 10.108.2.50:50308 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:52.094+0000 I NETWORK [conn501] end connection 10.108.2.50:50308 (90 connections now open) 2019-09-04T06:35:52.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.158+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.158+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.175+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:52.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578949, 1), signature: { hash: BinData(0, AF6625297FE14AA0FC532C7C37E4ABCC11B1517B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:52.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:52.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578949, 1), signature: { hash: BinData(0, AF6625297FE14AA0FC532C7C37E4ABCC11B1517B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:52.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578949, 
1), signature: { hash: BinData(0, AF6625297FE14AA0FC532C7C37E4ABCC11B1517B), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:52.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962) } 2019-09-04T06:35:52.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578949, 1), signature: { hash: BinData(0, AF6625297FE14AA0FC532C7C37E4ABCC11B1517B), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:52.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:52.241+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.276+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:52.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.376+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:52.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.476+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:52.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:35:52.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.576+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:52.584+0000 I NETWORK [listener] connection accepted from 10.108.2.74:51996 #526 (91 connections now open) 2019-09-04T06:35:52.584+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:52.585+0000 D2 COMMAND [conn526] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:52.585+0000 I NETWORK [conn526] received client metadata from 10.108.2.74:51996 conn526: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:52.585+0000 I COMMAND [conn526] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:52.585+0000 D2 COMMAND [conn526] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578945, 1), signature: { hash: BinData(0, F316497B8BBC8A053887E7C33BE8F047E760F03A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:35:52.585+0000 D1 REPL [conn526] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578947, 2), t: 1 } 2019-09-04T06:35:52.585+0000 D3 REPL [conn526] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:22.595+0000 2019-09-04T06:35:52.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:35:52.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.658+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.676+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:52.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.776+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:52.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:52.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1661) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:52.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1661 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:02.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:52.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:20.839+0000 2019-09-04T06:35:52.839+0000 D2 ASIO [Replication] Request 1661 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 
}, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 2019-09-04T06:35:52.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:52.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:52.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1661) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 2019-09-04T06:35:52.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:52.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:36:00.895+0000 2019-09-04T06:35:52.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:36:02.883+0000 2019-09-04T06:35:52.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:52.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:54.839Z 2019-09-04T06:35:52.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:52.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:22.839+0000 
2019-09-04T06:35:52.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:22.839+0000 2019-09-04T06:35:52.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:52.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1662) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:52.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1662 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:02.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:52.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:22.839+0000 2019-09-04T06:35:52.840+0000 D2 ASIO [Replication] Request 1662 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 2019-09-04T06:35:52.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:52.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:52.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1662) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, 
lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 2019-09-04T06:35:52.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:52.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:54.840Z 2019-09-04T06:35:52.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:22.839+0000 2019-09-04T06:35:52.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.876+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:52.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.976+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:52.977+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:52.977+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:52.977+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578947, 2) 2019-09-04T06:35:52.977+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24169 2019-09-04T06:35:52.977+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:52.977+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:52.977+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24169 2019-09-04T06:35:52.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:52.980+0000 D2 ASIO [RS] Request 1646 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), 
lastOpApplied: { ts: Timestamp(1567578947, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 2019-09-04T06:35:52.980+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpApplied: { ts: Timestamp(1567578947, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:52.980+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:52.980+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:52.980+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:36:02.883+0000 2019-09-04T06:35:52.980+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:36:03.386+0000 2019-09-04T06:35:52.980+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:52.980+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:22.839+0000 2019-09-04T06:35:52.980+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1663 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:02.980+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578947, 2), t: 1 } } 2019-09-04T06:35:52.980+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:17.715+0000 2019-09-04T06:35:52.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:52.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:53.010+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24175 2019-09-04T06:35:53.010+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24175 2019-09-04T06:35:53.010+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578947, 2), t: 1 }({ ts: Timestamp(1567578947, 2), t: 1 }) 2019-09-04T06:35:53.028+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:53.028+0000 D2 REPL [replication-0] Reporter 
sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:53.028+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1664 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:23.028+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:53.028+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:17.715+0000 2019-09-04T06:35:53.028+0000 D2 ASIO [RS] Request 1664 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 2019-09-04T06:35:53.028+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), 
electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:53.028+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:53.028+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:17.715+0000 2019-09-04T06:35:53.063+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:53.063+0000 D3 REPL [replexec-4] memberData lastupdate is: 2019-09-04T06:35:52.839+0000 2019-09-04T06:35:53.063+0000 D3 REPL [replexec-4] memberData lastupdate is: 2019-09-04T06:35:52.840+0000 2019-09-04T06:35:53.063+0000 D3 REPL [replexec-4] stalest member MemberId(0) date: 2019-09-04T06:35:52.839+0000 2019-09-04T06:35:53.063+0000 D3 REPL [replexec-4] scheduling next check at 2019-09-04T06:36:02.839+0000 2019-09-04T06:35:53.063+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:22.839+0000 2019-09-04T06:35:53.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, B2F73AED656828D5242D8C781DE16EF4BFA5BCA9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:53.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:53.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, B2F73AED656828D5242D8C781DE16EF4BFA5BCA9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:53.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, B2F73AED656828D5242D8C781DE16EF4BFA5BCA9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:53.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962) } 2019-09-04T06:35:53.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, B2F73AED656828D5242D8C781DE16EF4BFA5BCA9), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.076+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:53.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.158+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.158+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.177+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:53.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:53.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:53.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.277+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:53.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.329+0000 D2 COMMAND [conn101] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.329+0000 I COMMAND [conn101] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.334+0000 D2 COMMAND [conn157] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:35:53.334+0000 I COMMAND [conn157] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.345+0000 D2 COMMAND [conn218] run command admin.$cmd { ping: 1.0, lsid: { id: UUID("ac8e303f-4e60-4a79-b9a4-f7cba7354076") }, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } 2019-09-04T06:35:53.345+0000 I COMMAND [conn218] command admin.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("ac8e303f-4e60-4a79-b9a4-f7cba7354076") }, $clusterTime: { clusterTime: Timestamp(1567578890, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.377+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:53.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.477+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:53.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.498+0000 I COMMAND [conn15] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.577+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:53.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.658+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.677+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:53.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.777+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:53.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
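
NOTE: The steady drip of isMaster commands above (one per connection, roughly every 500 ms, reslen:907) is not application traffic: it is the server-discovery-and-monitoring heartbeat that drivers and mongos routers send on their monitoring connections, and each reply carries this member's current view of the configrs topology. A minimal sketch of the same probe from a client, using pymongo with the host name taken from this log (illustrative only):

    # Sketch: send the same isMaster probe a driver's monitor thread sends.
    # Host/port are copied from this log; adjust for your own deployment.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    reply = client.admin.command("isMaster")  # the command seen on conn14..conn52
    print(reply.get("setName"), reply.get("ismaster"), reply.get("secondary"))
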
2019-09-04T06:35:53.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.877+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:53.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.977+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:53.977+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:53.977+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578947, 2) 2019-09-04T06:35:53.977+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24212 2019-09-04T06:35:53.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.977+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:53.977+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:53.977+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24212 2019-09-04T06:35:53.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:53.977+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:53.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:53.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:54.010+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24218 2019-09-04T06:35:54.010+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24218 2019-09-04T06:35:54.010+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578947, 2), t: 1 }({ ts: Timestamp(1567578947, 2), t: 1 }) 2019-09-04T06:35:54.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.077+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:54.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.112+0000 I COMMAND [conn14] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.141+0000 D2 COMMAND [conn515] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578945, 1), signature: { hash: BinData(0, F316497B8BBC8A053887E7C33BE8F047E760F03A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:35:54.141+0000 D1 REPL [conn515] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578947, 2), t: 1 } 2019-09-04T06:35:54.141+0000 D3 REPL [conn515] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:24.151+0000 2019-09-04T06:35:54.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.158+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.158+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.178+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:54.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, B2F73AED656828D5242D8C781DE16EF4BFA5BCA9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:54.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:54.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, B2F73AED656828D5242D8C781DE16EF4BFA5BCA9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:54.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578950, 1), 
signature: { hash: BinData(0, B2F73AED656828D5242D8C781DE16EF4BFA5BCA9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:54.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962) } 2019-09-04T06:35:54.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, B2F73AED656828D5242D8C781DE16EF4BFA5BCA9), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:54.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:54.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.278+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:54.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.378+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:54.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.478+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:54.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
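
NOTE: The conn515 entry at 06:35:54.141 is the first sign of trouble in this stretch: a router reads config.settings with readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, while this config server's newest majority-committed optime is { ts: Timestamp(1567578947, 2), t: 1 }. Because the requested optime belongs to term 92 and the set is only in term 1, waitUntilOpTime cannot be satisfied while the set stays in term 1; the wait simply runs until maxTimeMS (30000 ms) expires, which is exactly the MaxTimeMSExpired assertion logged for the identical conn503 find further down. The afterOpTime pinning itself is mongos-internal ($configServerState), so a driver cannot reproduce it exactly, but the majority read with the same server-side time limit looks like this in pymongo (a sketch, names from this log):

    # Sketch: majority read of the balancer document with the same 30 s cap
    # as the logged find; the server-side MaxTimeMSExpired surfaces in
    # pymongo as ExecutionTimeout.
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    settings = client.config.get_collection(
        "settings", read_concern=ReadConcern("majority"))
    try:
        print(settings.find_one({"_id": "balancer"}, max_time_ms=30000))
    except ExecutionTimeout:
        # The majority snapshot never reached the optime the caller was
        # pinned to, so the read concern wait exceeded maxTimeMS.
        print("operation exceeded time limit")
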
2019-09-04T06:35:54.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.578+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:54.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.658+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.678+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:54.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.778+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:54.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 
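
NOTE: The replexec activity that follows (requestIds 1665 and 1666) is this member's own heartbeat round on the default 2-second cadence: cmodb802 answers with state: 1 (primary), which postpones the local election timeout, and cmodb804 answers with state: 2 (secondary, syncingTo the primary); both responses are marked good and the next round is scheduled for 06:35:56.839/840. The member states, optimes, and sync-source topology those heartbeats maintain can be inspected directly with replSetGetStatus; a short pymongo sketch (host name from this log, sync-source field name as reported by 4.2):

    # Sketch: read the replica-set view that the replSetHeartbeat traffic
    # below keeps up to date.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        # 4.2 reports the sync source as "syncingTo"; newer servers
        # use "syncSourceHost".
        src = m.get("syncSourceHost") or m.get("syncingTo", "")
        print(m["name"], m["stateStr"], src)
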
2019-09-04T06:35:54.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1665) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:54.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1665 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:04.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:54.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:22.839+0000 2019-09-04T06:35:54.839+0000 D2 ASIO [Replication] Request 1665 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 2019-09-04T06:35:54.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:54.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:54.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1665) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: 
Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 2019-09-04T06:35:54.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:54.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:36:03.386+0000 2019-09-04T06:35:54.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:36:05.824+0000 2019-09-04T06:35:54.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:54.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:56.839Z 2019-09-04T06:35:54.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:54.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:24.839+0000 2019-09-04T06:35:54.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:24.839+0000 2019-09-04T06:35:54.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:54.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1666) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:54.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1666 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:04.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:54.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:24.839+0000 2019-09-04T06:35:54.840+0000 D2 ASIO [Replication] Request 1666 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 2019-09-04T06:35:54.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), 
lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:54.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:54.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1666) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578947, 2) } 2019-09-04T06:35:54.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:54.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:56.840Z 2019-09-04T06:35:54.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:24.839+0000 2019-09-04T06:35:54.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.878+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:54.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:54.977+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:54.977+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:54.977+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578947, 2) 2019-09-04T06:35:54.978+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24255 2019-09-04T06:35:54.978+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: 
UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:54.978+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:54.978+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24255 2019-09-04T06:35:54.978+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:54.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:54.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:55.010+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24260 2019-09-04T06:35:55.010+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24260 2019-09-04T06:35:55.010+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578947, 2), t: 1 }({ ts: Timestamp(1567578947, 2), t: 1 }) 2019-09-04T06:35:55.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, B2F73AED656828D5242D8C781DE16EF4BFA5BCA9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:55.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:55.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, B2F73AED656828D5242D8C781DE16EF4BFA5BCA9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:55.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, B2F73AED656828D5242D8C781DE16EF4BFA5BCA9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:55.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), opTime: { ts: Timestamp(1567578947, 2), t: 1 }, wallTime: new Date(1567578947962) } 2019-09-04T06:35:55.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, B2F73AED656828D5242D8C781DE16EF4BFA5BCA9), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.067+0000 I COMMAND [conn503] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578921, 1), signature: { hash: BinData(0, F00E7B6B729CE76FBA9DE1FC1A42C69A8B06DDE7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:55.067+0000 D1 - [conn503] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:55.067+0000 W - [conn503] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:55.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.073+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.073+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.078+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:55.084+0000 I - [conn503] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:55.084+0000 D1 COMMAND [conn503] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578921, 1), signature: { hash: BinData(0, F00E7B6B729CE76FBA9DE1FC1A42C69A8B06DDE7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:55.084+0000 D1 - [conn503] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:55.084+0000 W - [conn503] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:55.104+0000 I - [conn503] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:55.104+0000 W COMMAND [conn503] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:55.104+0000 I COMMAND [conn503] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578921, 1), signature: { hash: BinData(0, F00E7B6B729CE76FBA9DE1FC1A42C69A8B06DDE7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30034ms 2019-09-04T06:35:55.104+0000 D2 NETWORK [conn503] Session from 10.108.2.55:36858 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:55.104+0000 I NETWORK [conn503] end connection 10.108.2.55:36858 (90 connections now open) 2019-09-04T06:35:55.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.158+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.158+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.177+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.177+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.179+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:55.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:55.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:55.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.279+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:55.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.379+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:55.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.479+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:55.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.573+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.573+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.579+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:55.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:35:55.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.658+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.658+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.677+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.677+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.679+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:55.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.779+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:55.781+0000 D2 ASIO [RS] Request 1663 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578955, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578955779), o: { $v: 1, $set: { ping: new Date(1567578955779) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpApplied: { ts: Timestamp(1567578955, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } 2019-09-04T06:35:55.781+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578955, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578955779), o: { $v: 1, $set: { ping: new Date(1567578955779) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpApplied: { ts: Timestamp(1567578955, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:55.781+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:55.781+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578955, 1) and ending at ts: Timestamp(1567578955, 1) 2019-09-04T06:35:55.781+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:36:05.824+0000 2019-09-04T06:35:55.781+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:36:06.311+0000 2019-09-04T06:35:55.781+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:55.781+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:24.839+0000 2019-09-04T06:35:55.781+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578955, 1), t: 1 } 2019-09-04T06:35:55.781+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:55.781+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:55.781+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578947, 2) 2019-09-04T06:35:55.781+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24292 2019-09-04T06:35:55.781+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:55.781+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:55.781+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24292 2019-09-04T06:35:55.781+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:55.781+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:35:55.781+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:55.781+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578947, 2) 2019-09-04T06:35:55.781+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24295 2019-09-04T06:35:55.781+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578955, 1) } 2019-09-04T06:35:55.781+0000 D3 STORAGE 
[ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:55.781+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:55.781+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24295 2019-09-04T06:35:55.781+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24261 2019-09-04T06:35:55.781+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24261 2019-09-04T06:35:55.781+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24298 2019-09-04T06:35:55.781+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24298 2019-09-04T06:35:55.782+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:35:55.782+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 24300 2019-09-04T06:35:55.782+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578955, 1) 2019-09-04T06:35:55.782+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578955, 1) 2019-09-04T06:35:55.782+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 24300 2019-09-04T06:35:55.782+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:35:55.782+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:35:55.782+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24299 2019-09-04T06:35:55.782+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24299 2019-09-04T06:35:55.782+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24302 2019-09-04T06:35:55.782+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24302 2019-09-04T06:35:55.782+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578955, 1), t: 1 }({ ts: Timestamp(1567578955, 1), t: 1 }) 2019-09-04T06:35:55.782+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578955, 1) 2019-09-04T06:35:55.782+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24303 2019-09-04T06:35:55.782+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578955, 1) } } ] } sort: {} projection: {} 2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Beginning planning... 
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
    t $eq 1
    ts $lt Timestamp(1567578955, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Rated tree:
$and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578955, 1) || First: notFirst: full path: ts
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578955, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Rated tree:
t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578955, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Rated tree:
$or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578955, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578955, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:55.782+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24303
2019-09-04T06:35:55.782+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:35:55.782+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:35:55.782+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578955, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578955779), o: { $v: 1, $set: { ping: new Date(1567578955779) } } }, oplog application mode: Secondary
2019-09-04T06:35:55.782+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578955, 1)
2019-09-04T06:35:55.782+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 24305
2019-09-04T06:35:55.782+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }
2019-09-04T06:35:55.782+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:35:55.782+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 24305
2019-09-04T06:35:55.782+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:35:55.782+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578955, 1), t: 1 }({ ts: Timestamp(1567578955, 1), t: 1 })
2019-09-04T06:35:55.782+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578955, 1)
2019-09-04T06:35:55.782+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24304
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalidTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Rated tree:
$and
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:35:55.782+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:35:55.782+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:35:55.782+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24304 2019-09-04T06:35:55.782+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578955, 1) 2019-09-04T06:35:55.782+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24308 2019-09-04T06:35:55.782+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24308 2019-09-04T06:35:55.782+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578955, 1), t: 1 }({ ts: Timestamp(1567578955, 1), t: 1 }) 2019-09-04T06:35:55.782+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:55.782+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, appliedWallTime: new Date(1567578955779), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:55.782+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1667 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:25.782+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, appliedWallTime: new Date(1567578955779), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:55.782+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:25.782+0000 2019-09-04T06:35:55.783+0000 D2 ASIO [RS] Request 1667 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } 2019-09-04T06:35:55.783+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578947, 2), t: 1 }, lastCommittedWall: new Date(1567578947962), lastOpVisible: { ts: Timestamp(1567578947, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578947, 2), $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:55.783+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:55.783+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:25.783+0000 2019-09-04T06:35:55.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.783+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578955, 1), t: 1 } 2019-09-04T06:35:55.783+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1668 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:05.783+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578947, 2), t: 1 } } 2019-09-04T06:35:55.783+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:25.783+0000 2019-09-04T06:35:55.785+0000 D2 ASIO [RS] Request 1668 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpApplied: { ts: Timestamp(1567578955, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } 2019-09-04T06:35:55.785+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), 
lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpApplied: { ts: Timestamp(1567578955, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:55.785+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:55.785+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:35:55.785+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.785+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578950, 1) 2019-09-04T06:35:55.786+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:36:06.311+0000 2019-09-04T06:35:55.786+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:36:06.037+0000 2019-09-04T06:35:55.786+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1669 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:05.786+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578955, 1), t: 1 } } 2019-09-04T06:35:55.786+0000 D3 REPL [conn505] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn505] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:57.575+0000 2019-09-04T06:35:55.786+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:25.783+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn483] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn483] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.939+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn509] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn509] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.925+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn496] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn496] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.432+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn511] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn511] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.873+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn492] Got 
notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn492] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.328+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn485] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn485] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.888+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn517] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn517] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.045+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn491] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn491] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.023+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn526] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn526] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:22.595+0000 2019-09-04T06:35:55.786+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:55.786+0000 D3 REPL [conn519] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn519] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.119+0000 2019-09-04T06:35:55.786+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:24.839+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn522] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn522] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn523] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn523] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn525] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn525] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.767+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn515] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn515] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:24.151+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn504] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn504] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.918+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn513] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn513] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.878+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn512] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), 
t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn512] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.876+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn500] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn500] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:14.296+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn514] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn514] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:13.414+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn497] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn497] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.882+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn510] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn510] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.962+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn464] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn464] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:59.740+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn502] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn502] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.701+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn489] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn489] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:15.190+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn507] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn507] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.468+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn508] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn508] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.763+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn506] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn506] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:35:58.760+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn520] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn520] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.134+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn499] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn499] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.752+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn518] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 
2019-09-04T06:35:55.779+0000 2019-09-04T06:35:55.786+0000 D3 REPL [conn518] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.061+0000 2019-09-04T06:35:55.790+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:35:55.790+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:35:55.790+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), appliedOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, appliedWallTime: new Date(1567578955779), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:55.790+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1670 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:25.790+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), appliedOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, appliedWallTime: new Date(1567578955779), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, durableWallTime: new Date(1567578947962), appliedOpTime: { ts: Timestamp(1567578947, 2), t: 1 }, appliedWallTime: new Date(1567578947962), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:35:55.790+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:25.783+0000 2019-09-04T06:35:55.790+0000 D2 ASIO [RS] Request 1670 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } 2019-09-04T06:35:55.790+0000 D3 EXECUTOR 
[RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:55.790+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:35:55.790+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:25.783+0000 2019-09-04T06:35:55.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.879+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:55.881+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578955, 1) 2019-09-04T06:35:55.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:55.979+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:55.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:55.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:56.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.079+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:56.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:35:56.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.180+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:56.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 9E54D5E8681975ACC85A339C52CFDDB623088E16), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:56.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:56.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 9E54D5E8681975ACC85A339C52CFDDB623088E16), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:56.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 9E54D5E8681975ACC85A339C52CFDDB623088E16), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:56.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), opTime: { ts: Timestamp(1567578955, 1), t: 1 }, wallTime: new Date(1567578955779) } 2019-09-04T06:35:56.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 9E54D5E8681975ACC85A339C52CFDDB623088E16), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:56.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
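
The exchange above is routine replica-set upkeep: this node (member 1, cmodb803) answers a replSetHeartbeat from cmodb804 while its Reporter pushes oplog progress upstream with replSetUpdatePosition. Note the skew it reports: member 1 is durable through Timestamp(1567578955, 1) while members 0 and 2 still show Timestamp(1567578947, 2). A hedged shell sketch for inspecting the same per-member state on a live set (run against any member of configrs):

    // Per-member applied/durable optimes and heartbeat health -- the same
    // data the REPL_HB and replSetUpdatePosition entries above carry.
    // (lastHeartbeat is absent for the member you are connected to.)
    var s = db.adminCommand({ replSetGetStatus: 1 });
    s.members.forEach(function (m) {
        print(m.name, m.stateStr,
              "applied:", tojson(m.optime),
              "durable:", tojson(m.optimeDurable),
              "lastHeartbeat:", m.lastHeartbeat);
    });

The FlowControlRefresher entry that closes this excerpt is idle bookkeeping: 1000000000 tickets is the unthrottled maximum, so flow control is not limiting writers here; its before/after counters continue directly below.
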
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:56.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.280+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:56.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.380+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:56.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.480+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:56.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.553+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.553+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.580+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:56.601+0000 D2 COMMAND [conn49] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 1), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: 
"config" } 2019-09-04T06:35:56.601+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } } } 2019-09-04T06:35:56.601+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:35:56.601+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 1), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578955, 1) 2019-09-04T06:35:56.601+0000 D2 QUERY [conn49] Collection config.settings does not exist. Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:35:56.601+0000 I COMMAND [conn49] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578938, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578947, 1), signature: { hash: BinData(0, 18600E894AFAF231E6D95239D4DD4C977475DFFB), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578938, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:35:56.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.680+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:56.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.765+0000 I COMMAND [conn25] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.780+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:56.781+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:56.781+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:56.781+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578955, 1) 2019-09-04T06:35:56.781+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24342 2019-09-04T06:35:56.782+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:56.782+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:56.782+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24342 2019-09-04T06:35:56.782+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24345 2019-09-04T06:35:56.783+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24345 2019-09-04T06:35:56.783+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578955, 1), t: 1 }({ ts: Timestamp(1567578955, 1), t: 1 }) 2019-09-04T06:35:56.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:56.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1671) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:56.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1671 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:06.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:56.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:24.839+0000 2019-09-04T06:35:56.839+0000 D2 ASIO [Replication] Request 1671 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), opTime: { ts: Timestamp(1567578955, 1), t: 1 }, wallTime: new Date(1567578955779), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), 
primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } 2019-09-04T06:35:56.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), opTime: { ts: Timestamp(1567578955, 1), t: 1 }, wallTime: new Date(1567578955779), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:56.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:56.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1671) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), opTime: { ts: Timestamp(1567578955, 1), t: 1 }, wallTime: new Date(1567578955779), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } 2019-09-04T06:35:56.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:56.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:36:06.037+0000 2019-09-04T06:35:56.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:36:07.565+0000 2019-09-04T06:35:56.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:56.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:35:58.839Z 2019-09-04T06:35:56.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:56.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:26.839+0000 2019-09-04T06:35:56.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:26.839+0000 2019-09-04T06:35:56.840+0000 D3 
EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:56.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1672) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:56.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1672 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:06.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:56.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:26.839+0000 2019-09-04T06:35:56.840+0000 D2 ASIO [Replication] Request 1672 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), opTime: { ts: Timestamp(1567578955, 1), t: 1 }, wallTime: new Date(1567578955779), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } 2019-09-04T06:35:56.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), opTime: { ts: Timestamp(1567578955, 1), t: 1 }, wallTime: new Date(1567578955779), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:56.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:56.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1672) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), opTime: { ts: Timestamp(1567578955, 1), t: 1 }, wallTime: new Date(1567578955779), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 
0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } 2019-09-04T06:35:56.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:56.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:35:58.840Z 2019-09-04T06:35:56.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:26.839+0000 2019-09-04T06:35:56.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.880+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:56.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:56.980+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:56.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:56.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:57.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 9E54D5E8681975ACC85A339C52CFDDB623088E16), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:57.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:57.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 9E54D5E8681975ACC85A339C52CFDDB623088E16), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:57.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 9E54D5E8681975ACC85A339C52CFDDB623088E16), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:57.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), opTime: { ts: Timestamp(1567578955, 1), t: 1 }, wallTime: new Date(1567578955779) } 2019-09-04T06:35:57.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 9E54D5E8681975ACC85A339C52CFDDB623088E16), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.080+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:57.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.181+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:57.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:57.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
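
Between heartbeats the log is dominated by isMaster probes: each monitoring connection (conn42, conn46, conn22, and so on) re-asks roughly every 500 ms and receives the same 907-byte topology document, a cadence consistent with mongos/driver server monitors watching this config server. A minimal sketch of the same probe (4.2 still spells the command isMaster; later servers alias it as hello):

    // The topology probe the monitors above issue about twice a second.
    var im = db.adminCommand({ isMaster: 1 });
    print("setName:", im.setName,
          "ismaster:", im.ismaster,
          "secondary:", im.secondary,
          "primary:", im.primary);

On this node the answer is ismaster: false, secondary: true, matching state: 2 in the heartbeat responses. The flow-control ticket counters continue directly below.
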
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:57.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.281+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:57.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.295+0000 D2 COMMAND [conn160] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:35:57.295+0000 I COMMAND [conn160] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.305+0000 D2 COMMAND [conn220] run command admin.$cmd { ping: 1.0, lsid: { id: UUID("23af97f8-66f0-4a27-b5f1-59167651ca5f") }, $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } 2019-09-04T06:35:57.305+0000 I COMMAND [conn220] command admin.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("23af97f8-66f0-4a27-b5f1-59167651ca5f") }, $clusterTime: { clusterTime: Timestamp(1567578895, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.381+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:57.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.481+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:57.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.552+0000 I COMMAND [conn58] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.560+0000 I NETWORK [listener] connection accepted from 10.108.2.61:38128 #527 (91 connections now open) 2019-09-04T06:35:57.560+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:57.560+0000 D2 COMMAND [conn527] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:57.560+0000 I NETWORK [conn527] received client metadata from 10.108.2.61:38128 conn527: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:57.560+0000 I COMMAND [conn527] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:57.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.576+0000 I COMMAND [conn505] Command on database config timed out waiting for read concern to be satisfied. 
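
This is the first real failure in the section: conn505's config.shards read has been waiting for a majority snapshot at afterOpTime { ts: Timestamp(1566459168, 1), t: 92 }. That optime is from term 92 and nearly two weeks old, and the request is signed with keyId 6690867815131381761, whereas every healthy request above carries term 1 and keyId 6727891476899954718. This pattern is consistent with a client still holding cached state from an earlier incarnation of this config server replica set; an optime from a defunct term can never become majority-committed here, so the wait simply runs out the 30-second maxTimeMS. A hedged sketch of the same read pattern using the public afterClusterTime option (afterOpTime itself is an internal parameter):

    // Causally-bounded majority read of config.shards. "prev" stands in
    // for whatever earlier command supplied the client's operationTime;
    // if the awaited time can never majority-commit (as with the stale
    // term-92 optime above), the find fails at 30s with MaxTimeMSExpired.
    var prev = db.adminCommand({ ping: 1 });
    db.getSiblingDB("config").runCommand({
        find: "shards",
        readConcern: { level: "majority",
                       afterClusterTime: prev.operationTime },
        maxTimeMS: 30000
    });

The command that timed out is dumped next:
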
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578921, 1), signature: { hash: BinData(0, F00E7B6B729CE76FBA9DE1FC1A42C69A8B06DDE7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:57.576+0000 D1 - [conn505] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:57.576+0000 W - [conn505] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:57.581+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:57.593+0000 I - [conn505] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", 
"gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { 
"b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:57.593+0000 D1 COMMAND [conn505] assertion while executing command 'find' on database 
'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578921, 1), signature: { hash: BinData(0, F00E7B6B729CE76FBA9DE1FC1A42C69A8B06DDE7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:57.593+0000 D1 - [conn505] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:57.593+0000 W - [conn505] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:57.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.614+0000 I - [conn505] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:57.614+0000 W COMMAND [conn505] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:57.614+0000 I COMMAND [conn505] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578921, 1), signature: { hash: BinData(0, F00E7B6B729CE76FBA9DE1FC1A42C69A8B06DDE7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:35:57.614+0000 D2 NETWORK [conn505] Session from 10.108.2.61:38106 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:57.614+0000 I NETWORK [conn505] end connection 10.108.2.61:38106 (90 connections now open) 2019-09-04T06:35:57.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.681+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:57.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.781+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:57.782+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:57.782+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:57.782+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578955, 1) 2019-09-04T06:35:57.782+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24383 2019-09-04T06:35:57.782+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:57.782+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:57.782+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24383 2019-09-04T06:35:57.783+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24386 2019-09-04T06:35:57.783+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24386 2019-09-04T06:35:57.783+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578955, 1), t: 1 }({ ts: Timestamp(1567578955, 1), t: 1 }) 2019-09-04T06:35:57.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.881+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:57.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:57.981+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:57.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:57.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:58.023+0000 D2 COMMAND [conn50] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.023+0000 I COMMAND [conn50] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.063+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:35:58.063+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:35:58.063+0000 D2 COMMAND [conn90] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:35:58.063+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:35:58.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.081+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:58.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 
2019-09-04T06:35:58.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.182+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:58.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 6936411DCFF1509D7D50279F554396151374C50E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:58.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:35:58.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 6936411DCFF1509D7D50279F554396151374C50E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:58.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 6936411DCFF1509D7D50279F554396151374C50E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:58.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), opTime: { ts: Timestamp(1567578955, 1), t: 1 }, wallTime: new Date(1567578955779) } 2019-09-04T06:35:58.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 6936411DCFF1509D7D50279F554396151374C50E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:58.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:35:58.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.282+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:58.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.382+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:58.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.482+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:58.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.582+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:58.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:35:58.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.682+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:58.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.745+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46822 #528 (91 connections now open) 2019-09-04T06:35:58.745+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:35:58.746+0000 D2 COMMAND [conn528] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:35:58.746+0000 I NETWORK [conn528] received client metadata from 10.108.2.64:46822 conn528: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:35:58.746+0000 I COMMAND [conn528] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:35:58.762+0000 I COMMAND [conn506] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 3DCD6584CE1F58263916AC072D4A54C85B98128C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:35:58.762+0000 D1 - [conn506] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:35:58.762+0000 W - [conn506] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:58.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.779+0000 I - [conn506] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"5617
48F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:58.779+0000 D1 COMMAND [conn506] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 3DCD6584CE1F58263916AC072D4A54C85B98128C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:58.779+0000 D1 - [conn506] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:35:58.779+0000 W - [conn506] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:35:58.782+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:35:58.782+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:35:58.782+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578955, 1) 2019-09-04T06:35:58.782+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24423 2019-09-04T06:35:58.782+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:35:58.782+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:35:58.782+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24423 2019-09-04T06:35:58.782+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:58.783+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24426 2019-09-04T06:35:58.783+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24426 2019-09-04T06:35:58.783+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578955, 1), t: 1 }({ ts: Timestamp(1567578955, 1), t: 1 }) 2019-09-04T06:35:58.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.799+0000 I - [conn506] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:35:58.799+0000 W COMMAND [conn506] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:35:58.799+0000 I COMMAND [conn506] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 3DCD6584CE1F58263916AC072D4A54C85B98128C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:35:58.800+0000 D2 NETWORK [conn506] Session from 10.108.2.64:46806 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:35:58.800+0000 I NETWORK [conn506] end connection 10.108.2.64:46806 (90 connections now open) 2019-09-04T06:35:58.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:58.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1673) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:58.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1673 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:08.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:58.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:26.839+0000 2019-09-04T06:35:58.839+0000 D2 ASIO [Replication] Request 1673 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), opTime: { ts: Timestamp(1567578955, 1), t: 1 }, wallTime: new Date(1567578955779), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } 2019-09-04T06:35:58.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), opTime: { ts: Timestamp(1567578955, 1), t: 1 }, wallTime: new Date(1567578955779), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { 
clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:35:58.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:58.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1673) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), opTime: { ts: Timestamp(1567578955, 1), t: 1 }, wallTime: new Date(1567578955779), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } 2019-09-04T06:35:58.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:35:58.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:36:07.565+0000 2019-09-04T06:35:58.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:36:10.050+0000 2019-09-04T06:35:58.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:35:58.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:00.839Z 2019-09-04T06:35:58.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:58.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:28.839+0000 2019-09-04T06:35:58.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:28.839+0000 2019-09-04T06:35:58.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:35:58.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1674) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:58.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1674 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:08.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:35:58.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:28.839+0000 2019-09-04T06:35:58.840+0000 D2 ASIO [Replication] Request 1674 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), opTime: { ts: Timestamp(1567578955, 1), t: 1 }, wallTime: new Date(1567578955779), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: 
new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } 2019-09-04T06:35:58.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), opTime: { ts: Timestamp(1567578955, 1), t: 1 }, wallTime: new Date(1567578955779), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:35:58.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:35:58.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1674) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), opTime: { ts: Timestamp(1567578955, 1), t: 1 }, wallTime: new Date(1567578955779), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578955, 1) } 2019-09-04T06:35:58.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:35:58.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:00.840Z 2019-09-04T06:35:58.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:28.839+0000 2019-09-04T06:35:58.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.882+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:58.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:58.982+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:58.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:58.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:59.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:35:59.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:59.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:59.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 6936411DCFF1509D7D50279F554396151374C50E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:59.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:35:59.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 6936411DCFF1509D7D50279F554396151374C50E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:59.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 6936411DCFF1509D7D50279F554396151374C50E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:35:59.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), opTime: { ts: Timestamp(1567578955, 1), t: 1 }, wallTime: new Date(1567578955779) } 2019-09-04T06:35:59.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 6936411DCFF1509D7D50279F554396151374C50E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:59.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:59.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:59.082+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:35:59.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:59.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:59.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:59.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:59.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:59.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:59.183+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:59.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:59.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:59.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:35:59.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:35:59.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:59.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:59.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:59.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:59.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:59.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:59.283+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:59.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:59.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:59.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:59.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:59.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:59.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:59.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:59.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:35:59.383+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:35:59.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:35:59.477+0000 I 
2019-09-04T06:35:59.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:59.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:59.483+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:59.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:59.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:59.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:59.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:59.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:59.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:59.583+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:59.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:59.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:59.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:59.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:59.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:59.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:59.683+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:59.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:59.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:59.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:59.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:59.744+0000 I COMMAND [conn464] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 21870DB0FD441442746D1E915D108B530DCF31E8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:35:59.744+0000 D1 - [conn464] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:35:59.744+0000 W - [conn464] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:59.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:59.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:59.761+0000 I - [conn464] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
 mongod(+0x10FBF24) [0x56174a083f24]
 mongod(+0x10FCE0E) [0x56174a084e0e]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
-----  END BACKTRACE  -----
2019-09-04T06:35:59.761+0000 D1 COMMAND [conn464] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 21870DB0FD441442746D1E915D108B530DCF31E8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:59.761+0000 D1 - [conn464] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:35:59.761+0000 W - [conn464] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:35:59.781+0000 I - [conn464] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
 mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
 mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
 mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
 mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
 mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
 mongod(+0xC90D34) [0x561749c18d34]
 mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452]
 mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11]
 mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def]
 mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22]
 mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
 mongod(+0x10F5E9C) [0x56174a07de9c]
 mongod(+0x1ED7BAB) [0x56174ae5fbab]
 mongod(+0x2511C94) [0x56174b499c94]
 libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
 libc.so.6(clone+0x6D) [0x7f0ed833102d]
-----  END BACKTRACE  -----
2019-09-04T06:35:59.781+0000 W COMMAND [conn464] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:35:59.781+0000 I COMMAND [conn464] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578920, 1), signature: { hash: BinData(0, 21870DB0FD441442746D1E915D108B530DCF31E8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms
2019-09-04T06:35:59.781+0000 D2 NETWORK [conn464] Session from 10.108.2.49:53526 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:35:59.781+0000 I NETWORK [conn464] end connection 10.108.2.49:53526 (89 connections now open)
2019-09-04T06:35:59.782+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:35:59.782+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:35:59.782+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578955, 1)
2019-09-04T06:35:59.782+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24462
2019-09-04T06:35:59.782+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:35:59.782+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:35:59.782+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24462
2019-09-04T06:35:59.783+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24465
2019-09-04T06:35:59.783+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24465
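What failed above: a mongos (conn464) asked this config server for config.shards with readConcern { level: "majority", afterOpTime: ... } and maxTimeMS: 30000; the node could not satisfy the requested read concern within the deadline, so waitForReadConcern raised MaxTimeMSExpired (errCode:50) after 30031ms, and this server's verbose settings (traceAllExceptions) turned each assertion into a full backtrace (the _ZN... frames are Itanium-mangled C++ symbols and can be demangled with c++filt). A driver sees the same failure as an execution timeout; a minimal sketch of the query shape in pymongo, using a placeholder client and omitting afterOpTime/$replData, which are internal fields only mongos sets:

    # Minimal sketch: a majority-read find with a 30 s server-side limit,
    # mirroring the query shape that timed out above. Connection details
    # are placeholders; afterOpTime is mongos-internal and not set here.
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    client = MongoClient("localhost", 27019)
    shards = client.config.get_collection("shards",
                                          read_concern=ReadConcern("majority"))
    try:
        docs = list(shards.find({}).max_time_ms(30000))
    except ExecutionTimeout:
        # The server replied MaxTimeMSExpired (code 50), as logged above.
        print("operation exceeded time limit")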
2019-09-04T06:35:59.783+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578955, 1), t: 1 }({ ts: Timestamp(1567578955, 1), t: 1 })
2019-09-04T06:35:59.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:59.783+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:59.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:59.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:59.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:59.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:59.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:59.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:59.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:59.883+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:59.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:59.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:35:59.983+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:35:59.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:35:59.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:00.000+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:36:00.000+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:36:00.000+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms
2019-09-04T06:36:00.009+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:36:00.009+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms
2019-09-04T06:36:00.011+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" }
2019-09-04T06:36:00.011+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache
2019-09-04T06:36:00.011+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456
2019-09-04T06:36:00.011+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms
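conn90 is a monitoring client: it authenticates as dba_root through the three-message SCRAM-SHA-1 conversation (one saslStart, two saslContinue rounds; the payloads are redacted as "xxx") and then starts issuing the status commands that follow. Drivers run this handshake automatically when credentials are supplied; a minimal sketch with placeholder host and password (only the username appears in the log):

    # Minimal sketch of conn90's session: SCRAM-SHA-1 authentication,
    # then the monitoring commands seen next in the log. Host and
    # password are placeholders.
    from pymongo import MongoClient

    client = MongoClient("localhost", 27019,
                         username="dba_root", password="<secret>",
                         authSource="admin", authMechanism="SCRAM-SHA-1")
    status = client.admin.command("serverStatus")
    rs = client.admin.command("replSetGetStatus")
    print(status["uptime"], rs["myState"])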
2019-09-04T06:36:00.011+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:36:00.012+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:00.023+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:36:00.023+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:36:00.023+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:36:00.045+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:36:00.046+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:00.046+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:00.046+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:36:00.046+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:36:00.046+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:36:00.046+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:36:00.046+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:36:00.046+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:36:00.046+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:00.046+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
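The count of jumbo chunks being planned here is a routine sharding health check, and the D5 planner output shows why it costs a collection scan: none of the four config.chunks indexes covers the jumbo field, so the planner rates the predicate, produces zero indexed solutions, and falls back to COLLSCAN, which the count entry below confirms. A minimal sketch of the same check (note that pymongo's count_documents sends an aggregate rather than the legacy count command logged here; host and port are placeholders):

    # Minimal sketch: count jumbo chunks with a secondaryPreferred read,
    # as the monitoring session above does.
    from pymongo import MongoClient
    from pymongo.read_preferences import ReadPreference

    client = MongoClient("localhost", 27019)
    config = client.get_database(
        "config", read_preference=ReadPreference.SECONDARY_PREFERRED)
    jumbo = config.chunks.count_documents({"jumbo": True})  # COLLSCAN: no index on 'jumbo'
    print(f"{jumbo} jumbo chunk(s)")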
2019-09-04T06:36:00.046+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:36:00.046+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578955, 1)
2019-09-04T06:36:00.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24479
2019-09-04T06:36:00.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24479
2019-09-04T06:36:00.046+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:00.047+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:36:00.047+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:36:00.047+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:36:00.047+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:00.047+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:36:00.047+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:36:00.047+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:36:00.047+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578955, 1)
2019-09-04T06:36:00.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24482
2019-09-04T06:36:00.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24482
2019-09-04T06:36:00.047+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:00.047+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:00.048+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:36:00.048+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:36:00.048+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578955, 1)
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24484
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24484
2019-09-04T06:36:00.048+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:00.048+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
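The pair of single-document finds on local.oplog.rs (forward and reverse $natural scans, each "Forcing a table scan due to hinted $natural") fetches the oldest and newest oplog entries; the gap between their ts fields is the node's replication window. The follow-up probe of local.oplog.$main checks for the obsolete master/slave oplog layout and, as the EOF plan on the next lines shows, finds nothing. A minimal sketch of the window calculation, with placeholder connection details:

    # Minimal sketch: oldest/newest oplog entries via $natural scans, as
    # in the two finds above; their ts difference is the oplog window.
    from pymongo import MongoClient

    client = MongoClient("localhost", 27019)
    oplog = client.local["oplog.rs"]
    first = oplog.find_one(sort=[("$natural", 1)])
    last = oplog.find_one(sort=[("$natural", -1)])
    print(f"oplog window: {last['ts'].time - first['ts'].time} s")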
2019-09-04T06:36:00.048+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:36:00.048+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2019-09-04T06:36:00.048+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:36:00.048+0000 D2 COMMAND [conn90] command: listDatabases
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24487
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24487
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24488
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24488 2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24489 2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24489 2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24490 2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24490
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24491
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" }
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24491
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24492
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
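The listDatabases command is what drives this catalog walk: to size each database the server iterates the _mdb_catalog, fetching each collection's CCE metadata (namespace, UUID, index specs) and its WiredTiger idents, the *--6194257481163143499 files laid out per the directoryPerDB and directoryForIndexes settings. From a client it is a single command; a minimal sketch with a placeholder host:

    # Minimal sketch: the listDatabases command whose server-side catalog
    # walk is logged above.
    from pymongo import MongoClient

    client = MongoClient("localhost", 27019)
    for d in client.admin.command("listDatabases")["databases"]:
        print(d["name"], d["sizeOnDisk"])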
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24492
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24493
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24493
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:36:00.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24494
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24494
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24495
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24495
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24496
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24496
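Each "fetched CCE metadata" entry binds a namespace to its collection UUID, options, and index definitions; the same information, minus the storage idents, is what listCollections reports to clients. A minimal sketch reading the config database's collection UUIDs, with a placeholder host:

    # Minimal sketch: list the config database's collections and UUIDs,
    # the client-visible view of the catalog metadata above.
    from pymongo import MongoClient

    client = MongoClient("localhost", 27019)
    for info in client.config.list_collections():
        print(info["name"], info["info"]["uuid"])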
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24497 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24497 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24498 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24498 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24499 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24499 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24500 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24500 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24501 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24501 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24502 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24502 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24503 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24503 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24504 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24504 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24505 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 24505 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24506 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24506 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24507 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24507 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24508 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:36:00.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24508 2019-09-04T06:36:00.050+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:36:00.050+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24510 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24510 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24511 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24511 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24512 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24512 2019-09-04T06:36:00.050+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:00.050+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24514 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24514 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24515 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24515 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24516 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24516 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24517 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24517 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24518 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24518 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24519 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24519 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24520 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24520 2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT 
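[annotation] The listDatabases/dbStats traffic above is driven by a monitoring client (conn90) reading with secondaryPreferred; each command fans out into the short-lived WT snapshots and durable-catalog lookups logged before it. A minimal PyMongo sketch of that client side, under assumptions: the URI/host is a placeholder, and no credentials are passed because this server runs with authorization disabled.

    # Sketch only: reproduce the listDatabases + per-db dbStats sequence above.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",   # hypothetical URI
                         readPreference="secondaryPreferred")

    dbs = client.admin.command("listDatabases")          # admin.$cmd listDatabases
    for name in (d["name"] for d in dbs["databases"]):
        stats = client[name].command("dbStats")          # dbStats per database
        print(name, stats["collections"], stats["dataSize"])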
2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24521
2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24521
2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24522
2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24522
2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24523
2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24523
2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24524
2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24524
2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24525
2019-09-04T06:36:00.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24525
2019-09-04T06:36:00.050+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:00.051+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:36:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24527
2019-09-04T06:36:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24527
2019-09-04T06:36:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24528
2019-09-04T06:36:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24528
2019-09-04T06:36:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24529
2019-09-04T06:36:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24529
2019-09-04T06:36:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24530
2019-09-04T06:36:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24530
2019-09-04T06:36:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24531
2019-09-04T06:36:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24531
2019-09-04T06:36:00.051+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24532
2019-09-04T06:36:00.051+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24532
2019-09-04T06:36:00.051+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:00.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:00.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:00.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.083+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:00.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:00.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:00.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.184+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:00.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:00.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 6936411DCFF1509D7D50279F554396151374C50E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:00.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:36:00.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 6936411DCFF1509D7D50279F554396151374C50E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:00.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 6936411DCFF1509D7D50279F554396151374C50E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:00.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), opTime: { ts: Timestamp(1567578955, 1), t: 1 }, wallTime: new Date(1567578955779) }
2019-09-04T06:36:00.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578957, 1), signature: { hash: BinData(0, 6936411DCFF1509D7D50279F554396151374C50E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
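[annotation] The steady isMaster round-trips (conn58, conn52, conn45, conn59, conn31, ...) are topology monitors polling member state, while replSetHeartbeat on conn28 is internal traffic between configrs members. A hedged sketch of the two client-visible views, reusing the hypothetical client from the previous sketch; replSetHeartbeat itself is server-to-server, so replSetGetStatus stands in for observing the state those heartbeats maintain.

    hello = client.admin.command("isMaster")   # what the pollers on conn5x run
    print(hello["ismaster"], hello.get("secondary"))

    # replSetGetStatus exposes the member state kept fresh by heartbeats.
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        print(m["name"], m["stateStr"])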
2019-09-04T06:36:00.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:36:00.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:00.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:00.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:00.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.284+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:00.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:00.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:00.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:00.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.384+0000 D4 STORAGE [WTJournalFlusher] flushed journal
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578930, 1), signature: { hash: BinData(0, C7B41E71C8B299289F009C76596DCCE73A882CDE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:00.436+0000 D1 - [conn496] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:00.436+0000 W - [conn496] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:00.452+0000 I - [conn496] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:00.452+0000 D1 COMMAND [conn496] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578930, 1), signature: { hash: BinData(0, C7B41E71C8B299289F009C76596DCCE73A882CDE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:00.452+0000 D1 - [conn496] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:00.452+0000 W - [conn496] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:00.454+0000 D2 ASIO [RS] Request 1669 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578960, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578960449), o: { $v: 1, $set: { ping: new Date(1567578960448) } } }, { ts: Timestamp(1567578960, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578960449), o: { $v: 1, $set: { ping: new Date(1567578960449) } } }, { ts: Timestamp(1567578960, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578960449), o: { $v: 1, $set: { ping: new Date(1567578960447) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpApplied: { ts: Timestamp(1567578960, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } 2019-09-04T06:36:00.454+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578960, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578960449), o: { $v: 1, $set: { ping: new Date(1567578960448) } } }, { ts: Timestamp(1567578960, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578960449), o: { $v: 1, $set: { ping: new Date(1567578960449) } } }, { ts: Timestamp(1567578960, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578960449), o: { $v: 1, $set: { 
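[annotation] The failing operation on conn496 is a config.shards find with readConcern majority pinned to afterOpTime { ts: Timestamp(1566459168, 1), t: 92 }, apparently from an earlier incarnation of the config server replica set (this node is in term 1), so waitForReadConcern can never be satisfied and the 30000ms maxTimeMS fires. afterOpTime has no public driver knob, but the client-visible shape of the request and failure can be sketched like this (hypothetical client as above; PyMongo surfaces MaxTimeMSExpired as ExecutionTimeout):

    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    shards = client.config.get_collection("shards",
                                          read_concern=ReadConcern("majority"))
    try:
        # maxTimeMS: 30000 as in the logged command; only the public part of
        # the read-concern wait is reproduced here.
        docs = list(shards.find({}).max_time_ms(30000))
    except ExecutionTimeout:
        print("operation exceeded time limit")  # server-side MaxTimeMSExpired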
2019-09-04T06:36:00.454+0000 D2 ASIO [RS] Request 1669 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578960, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578960449), o: { $v: 1, $set: { ping: new Date(1567578960448) } } }, { ts: Timestamp(1567578960, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578960449), o: { $v: 1, $set: { ping: new Date(1567578960449) } } }, { ts: Timestamp(1567578960, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578960449), o: { $v: 1, $set: { ping: new Date(1567578960447) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpApplied: { ts: Timestamp(1567578960, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) }
2019-09-04T06:36:00.454+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578960, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578960449), o: { $v: 1, $set: { ping: new Date(1567578960448) } } }, { ts: Timestamp(1567578960, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578960449), o: { $v: 1, $set: { ping: new Date(1567578960449) } } }, { ts: Timestamp(1567578960, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578960449), o: { $v: 1, $set: { ping: new Date(1567578960447) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpVisible: { ts: Timestamp(1567578955, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578955, 1), t: 1 }, lastCommittedWall: new Date(1567578955779), lastOpApplied: { ts: Timestamp(1567578960, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578955, 1), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:00.454+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:00.454+0000 D2 REPL [replication-0] oplog fetcher read 3 operations from remote oplog starting at ts: Timestamp(1567578960, 1) and ending at ts: Timestamp(1567578960, 3)
2019-09-04T06:36:00.454+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:36:10.050+0000
2019-09-04T06:36:00.454+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:36:11.179+0000
2019-09-04T06:36:00.454+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:00.454+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:28.839+0000
2019-09-04T06:36:00.454+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:00.455+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:00.455+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578955, 1)
2019-09-04T06:36:00.455+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24548
2019-09-04T06:36:00.455+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:00.455+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:00.455+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24548
2019-09-04T06:36:00.455+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:00.455+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:00.455+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578955, 1)
2019-09-04T06:36:00.455+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24551
2019-09-04T06:36:00.455+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:00.455+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:00.455+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24551
2019-09-04T06:36:00.455+0000 D2 REPL [rsSync-0] replication batch size is 3
2019-09-04T06:36:00.454+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578960, 3), t: 1 }
2019-09-04T06:36:00.457+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578960, 3), t: 1 }
2019-09-04T06:36:00.457+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1675 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:10.457+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578955, 1), t: 1 } }
2019-09-04T06:36:00.457+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:25.783+0000
2019-09-04T06:36:00.458+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36878 #529 (90 connections now open)
2019-09-04T06:36:00.458+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:36:00.458+0000 D2 COMMAND [conn529] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:36:00.458+0000 I NETWORK [conn529] received client metadata from 10.108.2.55:36878 conn529: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:36:00.461+0000 D2 ASIO [RS] Request 1675 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpApplied: { ts: Timestamp(1567578960, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) }
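[annotation] Requests 1675/1676 are the oplog fetcher's awaitable getMore loop against the sync source's local.oplog.rs, resuming from the last fetched optime with maxTimeMS: 5000. A rough client-side analogue of that access pattern is a tailable await cursor on the oplog (hypothetical client as above; for illustration only, not the fetcher's actual protocol):

    from bson.timestamp import Timestamp
    from pymongo import CursorType

    oplog = client.local["oplog.rs"]
    last_ts = Timestamp(1567578960, 3)  # last fetched optime, from the log above

    # Tailable await cursor: the server parks each getMore while waiting for
    # new entries, much like the fetcher's 5000ms getMores.
    cursor = oplog.find({"ts": {"$gt": last_ts}},
                        cursor_type=CursorType.TAILABLE_AWAIT)
    for entry in cursor:
        print(entry["ts"], entry["op"], entry.get("ns"))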
2019-09-04T06:36:00.461+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpApplied: { ts: Timestamp(1567578960, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:00.461+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:00.461+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:36:00.461+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000
2019-09-04T06:36:00.461+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.461+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:36:11.179+0000
2019-09-04T06:36:00.461+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:36:11.133+0000
2019-09-04T06:36:00.461+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1676 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:10.461+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578960, 3), t: 1 } }
2019-09-04T06:36:00.461+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:25.783+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn483] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn483] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.939+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn511] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn511] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.873+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn485] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn485] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.888+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn491] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn491] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.023+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn519] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn519] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.119+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn522] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn522] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn525] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn525] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.767+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn504] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn504] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.918+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn512] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn512] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.876+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn514] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn514] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:13.414+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn510] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn510] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.962+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn502] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.461+0000 D3 REPL [conn502] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.701+0000
2019-09-04T06:36:00.462+0000 D3 REPL [conn507] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.462+0000 D3 REPL [conn507] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.468+0000
2019-09-04T06:36:00.462+0000 D3 REPL [conn499] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.462+0000 D3 REPL [conn499] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.752+0000
2019-09-04T06:36:00.462+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:00.462+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:28.839+0000
2019-09-04T06:36:00.462+0000 D3 REPL [conn518] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.462+0000 D3 REPL [conn518] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.061+0000
2019-09-04T06:36:00.462+0000 D3 REPL [conn520] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.462+0000 D3 REPL [conn520] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.134+0000
2019-09-04T06:36:00.462+0000 D3 REPL [conn508] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.462+0000 D3 REPL [conn508] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.763+0000
2019-09-04T06:36:00.462+0000 D3 REPL [conn489] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000
2019-09-04T06:36:00.462+0000
D3 REPL [conn489] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:15.190+0000 2019-09-04T06:36:00.462+0000 D3 REPL [conn497] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:36:00.462+0000 D3 REPL [conn497] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.882+0000 2019-09-04T06:36:00.462+0000 D3 REPL [conn500] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:36:00.462+0000 D3 REPL [conn500] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:14.296+0000 2019-09-04T06:36:00.462+0000 D3 REPL [conn513] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:36:00.462+0000 D3 REPL [conn513] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.878+0000 2019-09-04T06:36:00.462+0000 D3 REPL [conn515] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:36:00.462+0000 D3 REPL [conn515] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:24.151+0000 2019-09-04T06:36:00.462+0000 D3 REPL [conn523] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:36:00.462+0000 D3 REPL [conn523] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000 2019-09-04T06:36:00.462+0000 D3 REPL [conn526] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:36:00.462+0000 D3 REPL [conn526] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:22.595+0000 2019-09-04T06:36:00.462+0000 D3 REPL [conn517] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:36:00.462+0000 D3 REPL [conn517] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.045+0000 2019-09-04T06:36:00.462+0000 D3 REPL [conn509] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:36:00.462+0000 D3 REPL [conn509] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.925+0000 2019-09-04T06:36:00.462+0000 D3 REPL [conn492] Got notified of new snapshot: { ts: Timestamp(1567578955, 1), t: 1 }, 2019-09-04T06:35:55.779+0000 2019-09-04T06:36:00.463+0000 D3 REPL [conn492] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.328+0000 2019-09-04T06:36:00.468+0000 I COMMAND [conn507] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578921, 1), signature: { hash: BinData(0, F00E7B6B729CE76FBA9DE1FC1A42C69A8B06DDE7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:00.468+0000 D1 - [conn507] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:00.468+0000 W - [conn507] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:00.472+0000 I - [conn496] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6Status
E"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" 
: "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:00.473+0000 W COMMAND [conn496] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:00.473+0000 I COMMAND [conn496] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578930, 1), signature: { hash: BinData(0, C7B41E71C8B299289F009C76596DCCE73A882CDE), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:36:00.473+0000 D2 NETWORK [conn496] Session from 10.108.2.48:42294 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:00.473+0000 I NETWORK [conn496] end connection 10.108.2.48:42294 (89 connections now open) 2019-09-04T06:36:00.473+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578960, 1) } 2019-09-04T06:36:00.473+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24466 2019-09-04T06:36:00.473+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24466 2019-09-04T06:36:00.473+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24555 2019-09-04T06:36:00.473+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24555 2019-09-04T06:36:00.473+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:36:00.473+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 24557 2019-09-04T06:36:00.473+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578960, 1) 2019-09-04T06:36:00.473+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578960, 1) 2019-09-04T06:36:00.473+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578960, 2) 2019-09-04T06:36:00.473+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578960, 2) 2019-09-04T06:36:00.473+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578960, 3) 2019-09-04T06:36:00.473+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578960, 3) 2019-09-04T06:36:00.473+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 24557 
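The slow-op entry above records the request behind these stack traces: a find on config.shards with readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } } that failed for conn496 with errName:MaxTimeMSExpired after 30029ms, and conn507 timed out on the identical command at 06:36:00.468, exactly its waitUntilOpTime deadline from the entries before it. The afterOpTime names term 92 while every optime in this log is at term 1, which suggests the majority wait can never be satisfied on this replica set and any such read is bound to run out its maxTimeMS. A minimal sketch of re-issuing that command with PyMongo (the command document is copied from the log; the host, port, and read preference are illustrative assumptions):

    from pymongo import MongoClient, ReadPreference
    from bson.timestamp import Timestamp

    # Direct connection to the config server member that logged the timeout
    # (hostname/port taken from the log; adjust as needed).
    client = MongoClient("cmodb803.togewa.com", 27019)

    # The command document as logged. afterOpTime carries term 92, ahead of
    # the set's current term (1), so the read-concern wait cannot complete
    # before maxTimeMS expires.
    cmd = {
        "find": "shards",
        "readConcern": {
            "level": "majority",
            "afterOpTime": {"ts": Timestamp(1566459168, 1), "t": 92},
        },
        "maxTimeMS": 30000,
    }
    client["config"].command(cmd, read_preference=ReadPreference.SECONDARY_PREFERRED)

Against a node in this state the call should block for the full 30 s and then raise pymongo.errors.OperationFailure with code 50 (MaxTimeMSExpired), matching the errCode:50 in the slow-op line above.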
2019-09-04T06:36:00.473+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:36:00.473+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:36:00.473+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24556 2019-09-04T06:36:00.473+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24556 2019-09-04T06:36:00.473+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24559 2019-09-04T06:36:00.473+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24559 2019-09-04T06:36:00.473+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578960, 3), t: 1 }({ ts: Timestamp(1567578960, 3), t: 1 }) 2019-09-04T06:36:00.473+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578960, 3) 2019-09-04T06:36:00.473+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24560 2019-09-04T06:36:00.473+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578960, 3) } } ] } sort: {} projection: {} 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578960, 3) Sort: {} Proj: {} ============================= 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578960, 3) || First: notFirst: full path: ts 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578960, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578960, 3) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578960, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:00.473+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578960, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:00.474+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24560 2019-09-04T06:36:00.474+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:36:00.474+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:00.474+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578960, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578960449), o: { $v: 1, $set: { ping: new Date(1567578960449) } } }, oplog application mode: Secondary 2019-09-04T06:36:00.474+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578960, 2) 2019-09-04T06:36:00.474+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 24562 2019-09-04T06:36:00.474+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" } 2019-09-04T06:36:00.474+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:36:00.474+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 24562 2019-09-04T06:36:00.474+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:36:00.474+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:00.474+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578960, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578960449), o: { $v: 1, $set: { ping: new Date(1567578960447) } } }, oplog application mode: Secondary 2019-09-04T06:36:00.474+0000 D3 STORAGE [repl-writer-worker-14] WT set 
timestamp of future write operations to Timestamp(1567578960, 3) 2019-09-04T06:36:00.474+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 24564 2019-09-04T06:36:00.474+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" } 2019-09-04T06:36:00.474+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:36:00.474+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 24564 2019-09-04T06:36:00.474+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:36:00.474+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:00.474+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578960, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578960449), o: { $v: 1, $set: { ping: new Date(1567578960448) } } }, oplog application mode: Secondary 2019-09-04T06:36:00.474+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578960, 1) 2019-09-04T06:36:00.474+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 24566 2019-09-04T06:36:00.474+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" } 2019-09-04T06:36:00.474+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:36:00.474+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 24566 2019-09-04T06:36:00.474+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:36:00.474+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578960, 3), t: 1 }({ ts: Timestamp(1567578960, 3), t: 1 }) 2019-09-04T06:36:00.474+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578960, 3) 2019-09-04T06:36:00.474+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24561 2019-09-04T06:36:00.474+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:36:00.474+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:00.474+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:36:00.474+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:00.474+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:00.474+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:36:00.474+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24561 2019-09-04T06:36:00.474+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578960, 3) 2019-09-04T06:36:00.474+0000 D2 REPL [rsSync-0] Setting replication's stable optime to { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.474+0000 D2 STORAGE [rsSync-0] oldest_timestamp set to Timestamp(1567578955, 3) 2019-09-04T06:36:00.474+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24569 2019-09-04T06:36:00.474+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24569 2019-09-04T06:36:00.474+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578960, 3), t: 1 }({ ts: Timestamp(1567578960, 3), t: 1 }) 2019-09-04T06:36:00.474+0000 D3 REPL [conn489] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.474+0000 D3 REPL [conn489] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:15.190+0000 2019-09-04T06:36:00.474+0000 D3 REPL [conn513] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.474+0000 D3 REPL [conn513] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.878+0000 2019-09-04T06:36:00.474+0000 D3 REPL [conn499] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.474+0000 D3 REPL [conn499] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.752+0000 2019-09-04T06:36:00.474+0000 D3 REPL [conn510] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.474+0000 D3 REPL [conn510] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.962+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn517] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn517] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.045+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn523] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn523] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn512] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn512] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.876+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn485] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn485] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.888+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn525] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn525] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.767+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn520] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn520] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:36:18.134+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn522] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn522] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn500] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn500] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:14.296+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn492] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn492] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.328+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn483] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn483] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.939+0000 2019-09-04T06:36:00.475+0000 I COMMAND [conn529] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:00.475+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:00.475+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), appliedOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, appliedWallTime: new Date(1567578955779), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), appliedOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, appliedWallTime: new Date(1567578960449), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), appliedOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, appliedWallTime: new Date(1567578955779), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:00.475+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1677 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:30.475+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), appliedOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, appliedWallTime: new Date(1567578955779), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), appliedOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, appliedWallTime: new Date(1567578960449), 
memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), appliedOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, appliedWallTime: new Date(1567578955779), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:00.475+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:25.783+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn497] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn497] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.882+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn508] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn508] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.763+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn515] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn515] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:24.151+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn526] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn526] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:22.595+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn509] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn509] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.925+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn502] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn502] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:00.701+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn518] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn518] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.061+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn514] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn514] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:13.414+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn519] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn519] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.119+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn504] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn504] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.918+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn491] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 
2019-09-04T06:36:00.475+0000 D3 REPL [conn491] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.023+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn511] Got notified of new snapshot: { ts: Timestamp(1567578960, 3), t: 1 }, 2019-09-04T06:36:00.449+0000 2019-09-04T06:36:00.475+0000 D3 REPL [conn511] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.873+0000 2019-09-04T06:36:00.475+0000 D2 ASIO [RS] Request 1677 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } 2019-09-04T06:36:00.475+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:00.475+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:36:00.475+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:25.783+0000 2019-09-04T06:36:00.475+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:36:00.475+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:36:00.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:00.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:00.485+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:36:00.485+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:00.485+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:00.485+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), appliedOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, appliedWallTime: new Date(1567578955779), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), appliedOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, appliedWallTime: new Date(1567578960449), memberId: 1, cfgver: 2 }, { 
durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), appliedOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, appliedWallTime: new Date(1567578955779), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:00.485+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1678 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:30.485+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), appliedOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, appliedWallTime: new Date(1567578955779), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), appliedOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, appliedWallTime: new Date(1567578960449), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, durableWallTime: new Date(1567578955779), appliedOpTime: { ts: Timestamp(1567578955, 1), t: 1 }, appliedWallTime: new Date(1567578955779), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:00.485+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:25.783+0000 2019-09-04T06:36:00.485+0000 D2 ASIO [RS] Request 1678 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } 2019-09-04T06:36:00.485+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:00.485+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:36:00.485+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:25.783+0000 
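The Reporter entries above show this member (memberId 1) durable and applied through { ts: Timestamp(1567578960, 3), t: 1 } while members 0 and 2 still report { ts: Timestamp(1567578955, 1), t: 1 }, progress it pushes upstream to cmodb804.togewa.com:27019 via replSetUpdatePosition. A minimal sketch that reads the same per-member optimes out of replSetGetStatus (PyMongo; the host is an illustrative assumption):

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # "optime" is the member's last applied op -- the value the Reporter
        # above forwards upstream as appliedOpTime in replSetUpdatePosition.
        print(member["name"], member["stateStr"],
              member["optime"]["ts"], "term", member["optime"]["t"])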
2019-09-04T06:36:00.489+0000 I - [conn507] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : 
"3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" 
}, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:00.489+0000 D1 COMMAND [conn507] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578921, 1), signature: { hash: BinData(0, F00E7B6B729CE76FBA9DE1FC1A42C69A8B06DDE7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:00.489+0000 D1 - [conn507] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:00.489+0000 W - [conn507] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 
2019-09-04T06:36:00.509+0000 I - [conn507] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", 
"elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : 
"7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] 
libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:36:00.509+0000 W COMMAND [conn507] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:36:00.509+0000 I COMMAND [conn507] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578921, 1), signature: { hash: BinData(0, F00E7B6B729CE76FBA9DE1FC1A42C69A8B06DDE7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms
2019-09-04T06:36:00.509+0000 D2 NETWORK [conn507] Session from 10.108.2.55:36862 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:36:00.509+0000 I NETWORK [conn507] end connection 10.108.2.55:36862 (88 connections now open)
2019-09-04T06:36:00.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:00.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:00.570+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.573+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578960, 3)
2019-09-04T06:36:00.585+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:00.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:00.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:00.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.685+0000 D4 STORAGE [WTJournalFlusher] flushed journal
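The slow-operation entry above records one request end to end: a find on config.shards at read concern "majority" with an afterOpTime carrying term 92, a 30000 ms server-side budget (maxTimeMS), and failure after 30030 ms with errName:MaxTimeMSExpired errCode:50. The user-visible part of that read can be reproduced from a driver; below is a minimal PyMongo sketch (host and port are taken from this deployment; afterOpTime, $replData, and the other $-prefixed fields are internal, injected by mongos, and cannot be set from a driver).

    # Reproduce the user-visible shape of the failing read: a majority read of
    # config.shards under a 30 s server-side budget.
    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    shards = client["config"].get_collection("shards", read_concern=ReadConcern("majority"))
    # max_time_ms maps to the maxTimeMS: 30000 seen in the logged find command.
    for doc in shards.find({}).max_time_ms(30000):
        print(doc)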
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 3DCD6584CE1F58263916AC072D4A54C85B98128C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:36:00.701+0000 D1 - [conn502] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:00.701+0000 W - [conn502] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:00.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:00.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:00.718+0000 I - [conn502] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"5617
48F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
----- END BACKTRACE -----
2019-09-04T06:36:00.718+0000 D1 COMMAND [conn502] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 3DCD6584CE1F58263916AC072D4A54C85B98128C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:00.718+0000 D1 - [conn502] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:36:00.718+0000 W - [conn502] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:00.738+0000 I - [conn502] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- [ ... backtrace JSON, module map, and frame list identical to the conn507 lock-acquisition trace above; omitted ... ] ----- END BACKTRACE -----
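In the BEGIN/END BACKTRACE blocks above, each JSON frame carries "b" (the load base of the module, as listed in the somap), "o" (the frame's offset inside that module), and "s" (the mangled symbol, when resolution succeeded); the bracketed address in the plain-text frame list is simply b + o. A minimal sketch (Python 3; assumes binutils' c++filt is on PATH) that checks the arithmetic for the first frame and demangles it:

    # Resolve one frame of the JSON backtrace above.
    import subprocess

    frame = {"b": "561748F88000", "o": "277FC81", "s": "_ZN5mongo15printStackTraceERSo"}
    addr = int(frame["b"], 16) + int(frame["o"], 16)
    print(hex(addr))  # 0x56174b707c81 -- matches the bracketed address in the frame list
    demangled = subprocess.run(["c++filt", frame["s"]], capture_output=True, text=True).stdout.strip()
    print(demangled)  # mongo::printStackTrace(std::ostream&)

The same b/o pairs are what a debugger or addr2line needs, against a binary with debug info, to map the unnamed frames such as mongod(+0x10F5E9C) to source lines.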
2019-09-04T06:36:00.738+0000 W COMMAND [conn502] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:36:00.738+0000 I COMMAND [conn502] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578925, 1), signature: { hash: BinData(0, 3DCD6584CE1F58263916AC072D4A54C85B98128C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms
2019-09-04T06:36:00.738+0000 D2 NETWORK [conn502] Session from 10.108.2.74:51974 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:36:00.739+0000 I NETWORK [conn502] end connection 10.108.2.74:51974 (87 connections now open)
2019-09-04T06:36:00.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:00.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:00.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:00.753+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50336 #530 (88 connections now open)
2019-09-04T06:36:00.753+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:36:00.753+0000 D2 COMMAND [conn530] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:36:00.753+0000 I NETWORK [conn530] received client metadata from 10.108.2.50:50336 conn530: { driver: { name: "NetworkInterfaceTL",
version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:00.753+0000 I COMMAND [conn530] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:00.763+0000 I COMMAND [conn508] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578929, 1), signature: { hash: BinData(0, 6F9E7E1D8182235552E89108F51DB316747DE523), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:00.763+0000 D1 - [conn508] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:00.763+0000 W - [conn508] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:00.780+0000 I - [conn508] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:00.780+0000 D1 COMMAND [conn508] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578929, 1), signature: { hash: BinData(0, 6F9E7E1D8182235552E89108F51DB316747DE523), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:00.780+0000 D1 - [conn508] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:00.780+0000 W - [conn508] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:00.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:00.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:00.785+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:00.800+0000 I - [conn508] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 
2019-09-04T06:36:00.800+0000 I - [conn508] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- [ ... backtrace JSON, module map, and frame list identical to the conn507 lock-acquisition trace above; omitted ... ] ----- END BACKTRACE -----
2019-09-04T06:36:00.800+0000 W COMMAND [conn508] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:00.800+0000 I COMMAND [conn508] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578929, 1), signature: { hash: BinData(0, 6F9E7E1D8182235552E89108F51DB316747DE523), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30026ms 2019-09-04T06:36:00.800+0000 D2 NETWORK [conn508] Session from 10.108.2.50:50312 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:00.800+0000 I NETWORK [conn508] end connection 10.108.2.50:50312 (87 connections now open) 2019-09-04T06:36:00.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:00.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:00.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:00.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1679) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:00.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1679 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:10.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:00.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:28.839+0000 2019-09-04T06:36:00.839+0000 D2 ASIO [Replication] Request 1679 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), opTime: { ts: Timestamp(1567578960, 3), t: 1 }, wallTime: new Date(1567578960449), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } 2019-09-04T06:36:00.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), opTime: { ts: Timestamp(1567578960, 3), t: 1 }, wallTime: new Date(1567578960449), $replData: { term: 1, lastOpCommitted: { ts: 
Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:00.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:00.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1679) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), opTime: { ts: Timestamp(1567578960, 3), t: 1 }, wallTime: new Date(1567578960449), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } 2019-09-04T06:36:00.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:00.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:36:11.133+0000 2019-09-04T06:36:00.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:36:11.131+0000 2019-09-04T06:36:00.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:00.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:02.839Z 2019-09-04T06:36:00.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:00.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:30.839+0000 2019-09-04T06:36:00.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:30.839+0000 2019-09-04T06:36:00.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:00.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1680) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:00.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1680 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:10.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:00.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:30.839+0000 2019-09-04T06:36:00.840+0000 D2 ASIO [Replication] Request 1680 finished 
with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), opTime: { ts: Timestamp(1567578960, 3), t: 1 }, wallTime: new Date(1567578960449), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } 2019-09-04T06:36:00.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), opTime: { ts: Timestamp(1567578960, 3), t: 1 }, wallTime: new Date(1567578960449), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:00.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:00.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1680) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), opTime: { ts: Timestamp(1567578960, 3), t: 1 }, wallTime: new Date(1567578960449), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } 2019-09-04T06:36:00.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:00.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:02.840Z 2019-09-04T06:36:00.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:30.839+0000 2019-09-04T06:36:00.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, 
$db: "admin" } 2019-09-04T06:36:00.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:00.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:00.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:00.885+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:00.925+0000 I COMMAND [conn509] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578922, 1), signature: { hash: BinData(0, 7D6FA38919A7B0C53E875A5965620108C7C24ABC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:36:00.926+0000 D1 - [conn509] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:00.926+0000 W - [conn509] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:00.943+0000 I - [conn509] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:00.943+0000 D1 COMMAND [conn509] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578922, 1), signature: { hash: BinData(0, 7D6FA38919A7B0C53E875A5965620108C7C24ABC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:00.943+0000 D1 - [conn509] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:00.943+0000 W - [conn509] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:00.952+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52362 #531 (88 connections now open) 2019-09-04T06:36:00.952+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:00.952+0000 D2 COMMAND [conn531] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 
7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:00.952+0000 I NETWORK [conn531] received client metadata from 10.108.2.58:52362 conn531: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:00.952+0000 I COMMAND [conn531] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:00.963+0000 I COMMAND [conn510] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578923, 1), signature: { hash: BinData(0, 42F4C0BD7866DEA005B3B91FBF6D097EB91221F1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:00.963+0000 D1 - [conn510] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:00.963+0000 W - [conn510] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:00.963+0000 I - [conn509] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:00.963+0000 W COMMAND [conn509] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:00.963+0000 I COMMAND [conn509] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578922, 1), signature: { hash: BinData(0, 7D6FA38919A7B0C53E875A5965620108C7C24ABC), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:36:00.963+0000 D2 NETWORK [conn509] Session from 10.108.2.73:52348 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:00.963+0000 I NETWORK [conn509] end connection 10.108.2.73:52348 (87 connections now open) 2019-09-04T06:36:00.980+0000 I - [conn510] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : 
"3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, 
{ "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:00.980+0000 D1 COMMAND [conn510] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, 
maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578923, 1), signature: { hash: BinData(0, 42F4C0BD7866DEA005B3B91FBF6D097EB91221F1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:00.980+0000 D1 - [conn510] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:00.980+0000 W - [conn510] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:00.986+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:01.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:01.000+0000 I - [conn510] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_
9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : 
"DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) 
[0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:01.000+0000 W COMMAND [conn510] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:01.000+0000 I COMMAND [conn510] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578923, 1), signature: { hash: BinData(0, 42F4C0BD7866DEA005B3B91FBF6D097EB91221F1), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:36:01.000+0000 D2 NETWORK [conn510] Session from 10.108.2.58:52338 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:01.000+0000 I NETWORK [conn510] end connection 10.108.2.58:52338 (86 connections now open) 2019-09-04T06:36:01.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, FAB4A9D884E5FDD31AC46372F92928E037A3406D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:01.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:01.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, FAB4A9D884E5FDD31AC46372F92928E037A3406D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:01.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, FAB4A9D884E5FDD31AC46372F92928E037A3406D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:01.063+0000 D2 
REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), opTime: { ts: Timestamp(1567578960, 3), t: 1 }, wallTime: new Date(1567578960449) } 2019-09-04T06:36:01.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, FAB4A9D884E5FDD31AC46372F92928E037A3406D), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.086+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:01.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.186+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:01.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:01.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:01.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.286+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:01.303+0000 I NETWORK [listener] connection accepted from 10.108.2.62:53636 #532 (87 connections now open) 2019-09-04T06:36:01.303+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:01.303+0000 D2 COMMAND [conn532] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:01.303+0000 I NETWORK [conn532] received client metadata from 10.108.2.62:53636 conn532: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:01.303+0000 I COMMAND [conn532] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:01.306+0000 D2 COMMAND [conn532] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578952, 1), signature: { hash: BinData(0, 1FAED7B25EC658F7E231F7EAA64E65CC737D0C58), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:01.306+0000 D1 REPL [conn532] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578960, 3), t: 1 } 2019-09-04T06:36:01.306+0000 D3 REPL [conn532] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:31.316+0000 2019-09-04T06:36:01.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:36:01.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.386+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:01.455+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:01.455+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:01.455+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578960, 3) 2019-09-04T06:36:01.455+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24603 2019-09-04T06:36:01.455+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:01.455+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:01.455+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24603 2019-09-04T06:36:01.474+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24606 2019-09-04T06:36:01.474+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24606 2019-09-04T06:36:01.474+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578960, 3), t: 1 }({ ts: Timestamp(1567578960, 3), t: 1 }) 2019-09-04T06:36:01.486+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:01.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.586+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:01.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.686+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:01.696+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.697+0000 I COMMAND [conn75] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.786+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:01.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:01.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:01.886+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:01.987+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:02.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:02.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.087+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:02.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.187+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
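
[Annotation] The MaxTimeMSExpired failure traced above is a config.shards find carrying readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }. The waitUntilOpTime entries for conn532 show why it stalls: this member's committed snapshot is { ts: Timestamp(1567578960, 3), t: 1 }, and a wait for a term-92 opTime cannot complete while the set is still in term 1, so each such read runs all the way to its maxTimeMS (30000 ms requested, 30028 ms logged including overhead). The frames between BEGIN/END BACKTRACE are ordinary Itanium-mangled C++ symbols and demangle with c++filt. Below is a minimal client-side sketch of the same read pattern, assuming a hypothetical direct pymongo connection; afterOpTime has no driver helper (mongos attaches it internally), so only the majority read concern and the time limit are reproduced:

    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    # Hypothetical direct connection to the config server whose log this is.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    shards = client.get_database("config",
                                 read_concern=ReadConcern("majority"))["shards"]

    try:
        # Mirrors the maxTimeMS: 30000 seen in the logged command above.
        docs = list(shards.find().max_time_ms(30000))
    except ExecutionTimeout:
        # pymongo's mapping of the server's MaxTimeMSExpired (errCode 50).
        print("config.shards read exceeded its 30s server-side time limit")
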
2019-09-04T06:36:02.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, FAB4A9D884E5FDD31AC46372F92928E037A3406D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:02.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:02.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, FAB4A9D884E5FDD31AC46372F92928E037A3406D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:02.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, FAB4A9D884E5FDD31AC46372F92928E037A3406D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:02.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), opTime: { ts: Timestamp(1567578960, 3), t: 1 }, wallTime: new Date(1567578960449) } 2019-09-04T06:36:02.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, FAB4A9D884E5FDD31AC46372F92928E037A3406D), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:02.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:02.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.287+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:02.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.387+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:02.455+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:02.455+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:02.455+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578960, 3) 2019-09-04T06:36:02.455+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24635 2019-09-04T06:36:02.455+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:02.455+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:02.455+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24635 2019-09-04T06:36:02.475+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24638 2019-09-04T06:36:02.475+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24638 2019-09-04T06:36:02.475+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578960, 3), t: 1 }({ ts: Timestamp(1567578960, 3), t: 1 }) 2019-09-04T06:36:02.487+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:02.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 
reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.587+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:02.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.687+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:02.697+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.788+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:02.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:02.839+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:36:01.063+0000 2019-09-04T06:36:02.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:02.839+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:36:02.234+0000 2019-09-04T06:36:02.839+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:36:01.063+0000 2019-09-04T06:36:02.839+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:36:11.063+0000 2019-09-04T06:36:02.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:32.839+0000 2019-09-04T06:36:02.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1681) to 
cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:02.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1681 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:12.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:02.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:32.839+0000 2019-09-04T06:36:02.839+0000 D2 ASIO [Replication] Request 1681 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), opTime: { ts: Timestamp(1567578960, 3), t: 1 }, wallTime: new Date(1567578960449), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } 2019-09-04T06:36:02.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), opTime: { ts: Timestamp(1567578960, 3), t: 1 }, wallTime: new Date(1567578960449), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:02.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:02.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1681) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), opTime: { ts: Timestamp(1567578960, 3), t: 1 }, wallTime: new Date(1567578960449), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } 2019-09-04T06:36:02.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:02.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:36:11.131+0000 2019-09-04T06:36:02.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:36:13.044+0000 2019-09-04T06:36:02.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:02.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:04.839Z 2019-09-04T06:36:02.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:02.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:32.839+0000 2019-09-04T06:36:02.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:32.839+0000 2019-09-04T06:36:02.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:02.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1682) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:02.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1682 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:12.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:02.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:32.839+0000 2019-09-04T06:36:02.840+0000 D2 ASIO [Replication] Request 1682 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), opTime: { ts: Timestamp(1567578960, 3), t: 1 }, wallTime: new Date(1567578960449), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } 2019-09-04T06:36:02.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), opTime: { ts: Timestamp(1567578960, 3), t: 1 }, wallTime: new Date(1567578960449), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:02.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:02.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1682) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), opTime: { ts: Timestamp(1567578960, 3), t: 1 }, wallTime: new Date(1567578960449), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578960, 3) } 2019-09-04T06:36:02.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:02.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:04.840Z 2019-09-04T06:36:02.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:32.839+0000 2019-09-04T06:36:02.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:02.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:02.888+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:02.988+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:03.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:03.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:03.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:03.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, FAB4A9D884E5FDD31AC46372F92928E037A3406D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:03.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:03.063+0000 D2 REPL_HB [conn34] Received heartbeat request from 
cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, FAB4A9D884E5FDD31AC46372F92928E037A3406D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:03.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, FAB4A9D884E5FDD31AC46372F92928E037A3406D), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:03.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), opTime: { ts: Timestamp(1567578960, 3), t: 1 }, wallTime: new Date(1567578960449) } 2019-09-04T06:36:03.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578960, 3), signature: { hash: BinData(0, FAB4A9D884E5FDD31AC46372F92928E037A3406D), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:03.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:03.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:03.088+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:03.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:03.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:03.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:03.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:03.188+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:03.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:03.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:03.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:03.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:03.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:03.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:03.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:03.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:03.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:03.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:03.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:03.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:03.288+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:03.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:03.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:03.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:03.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:03.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:03.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:03.388+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:03.455+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:03.455+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:03.455+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578960, 3) 2019-09-04T06:36:03.455+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24668 2019-09-04T06:36:03.455+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:03.456+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:03.456+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24668 2019-09-04T06:36:03.475+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24671 2019-09-04T06:36:03.475+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24671 2019-09-04T06:36:03.475+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578960, 3), t: 1 }({ ts: Timestamp(1567578960, 3), t: 1 }) 2019-09-04T06:36:03.488+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:03.523+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:36:03.523+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 
locks:{} protocol:op_query 0ms 2019-09-04T06:36:03.523+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:03.523+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:36:03.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:03.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:03.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:03.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:03.589+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:03.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:03.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:03.621+0000 D2 ASIO [RS] Request 1676 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578963, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578963617), o: { $v: 1, $set: { ping: new Date(1567578963612) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpApplied: { ts: Timestamp(1567578963, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) } 2019-09-04T06:36:03.621+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578963, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578963617), o: { $v: 1, $set: { ping: new Date(1567578963612) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpApplied: { ts: Timestamp(1567578963, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, 
$gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:03.621+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:03.621+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578963, 1) and ending at ts: Timestamp(1567578963, 1) 2019-09-04T06:36:03.621+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:36:13.044+0000 2019-09-04T06:36:03.621+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:36:14.298+0000 2019-09-04T06:36:03.621+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:03.621+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:32.839+0000 2019-09-04T06:36:03.621+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:03.621+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:03.621+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578960, 3) 2019-09-04T06:36:03.621+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24679 2019-09-04T06:36:03.621+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:03.621+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:03.621+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24679 2019-09-04T06:36:03.621+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:03.621+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:36:03.621+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:03.622+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578960, 3) 2019-09-04T06:36:03.622+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24682 2019-09-04T06:36:03.622+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578963, 1) } 2019-09-04T06:36:03.622+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:03.622+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:03.622+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24682 
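
[Annotation] The stretch above is one ordinary replication batch on a secondary: request 1676 returns a getMore batch from the sync source (cmodb804) containing a single oplog entry, an op: "u" update of config.lockpings identified by _id in o2 with a { $set: { ping: ... } } document in o; the ReplBatcher stages it, the oplog truncate-after point is set, and a repl-writer worker inserts it into the local oplog at Timestamp(1567578963, 1). The fetcher's read is essentially a tailable-await cursor on the capped local.oplog.rs; a rough external equivalent, with a hypothetical host:

    from pymongo import MongoClient, CursorType

    # Hypothetical direct connection to the sync source seen above.
    client = MongoClient("mongodb://cmodb804.togewa.com:27019")
    oplog = client.local["oplog.rs"]

    # Resume after the newest entry, then block awaiting new writes; on a
    # capped collection a TAILABLE_AWAIT cursor behaves like `tail -f`.
    newest = oplog.find().sort("$natural", -1).limit(1).next()
    tail = oplog.find({"ts": {"$gt": newest["ts"]}},
                      cursor_type=CursorType.TAILABLE_AWAIT)
    for entry in tail:
        print(entry["ts"], entry["op"], entry["ns"])
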
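[Annotation] Interleaved with all of this is the replication heartbeat loop: requests 1681 and 1682 go out to the other two members, the primary's response postpones the election timeout (the callback at 06:36:11.131 is cancelled and rescheduled for 06:36:13.044), and the next round is scheduled two seconds later; the inbound conn34/conn28 replSetHeartbeat commands are the same protocol seen from the peers' side. A client can observe the member state these heartbeats maintain via replSetGetStatus; a short sketch, again with a hypothetical host:

    from pymongo import MongoClient

    # Hypothetical connection to any member of the configrs set.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # name/stateStr/optimeDate largely reflect heartbeat exchanges
        # like the ones logged above.
        print(member["name"], member["stateStr"], member["optimeDate"])
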
2019-09-04T06:36:03.621+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578963, 1), t: 1 }
2019-09-04T06:36:03.622+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24672
2019-09-04T06:36:03.622+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24672
2019-09-04T06:36:03.622+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24685
2019-09-04T06:36:03.622+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24685
2019-09-04T06:36:03.622+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:36:03.622+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 24687
2019-09-04T06:36:03.622+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578963, 1)
2019-09-04T06:36:03.622+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578963, 1)
2019-09-04T06:36:03.622+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 24687
2019-09-04T06:36:03.622+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:36:03.622+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:36:03.622+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24686
2019-09-04T06:36:03.622+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24686
2019-09-04T06:36:03.622+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24689
2019-09-04T06:36:03.622+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24689
2019-09-04T06:36:03.622+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578963, 1), t: 1 }({ ts: Timestamp(1567578963, 1), t: 1 })
2019-09-04T06:36:03.622+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578963, 1)
2019-09-04T06:36:03.622+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24690
2019-09-04T06:36:03.622+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578963, 1) } } ] } sort: {} projection: {}
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578963, 1) Sort: {} Proj: {} =============================
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578963, 1) || First: notFirst: full path: ts
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578963, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578963, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578963, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578963, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:36:03.622+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24690
2019-09-04T06:36:03.622+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:36:03.622+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:03.622+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578963, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578963617), o: { $v: 1, $set: { ping: new Date(1567578963612) } } }, oplog application mode: Secondary
2019-09-04T06:36:03.622+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578963, 1)
2019-09-04T06:36:03.622+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 24692
2019-09-04T06:36:03.622+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }
2019-09-04T06:36:03.622+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:36:03.622+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 24692
2019-09-04T06:36:03.622+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:36:03.622+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578963, 1), t: 1 }({ ts: Timestamp(1567578963, 1), t: 1 })
2019-09-04T06:36:03.622+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578963, 1)
2019-09-04T06:36:03.622+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24691
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:03.622+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:36:03.622+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
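
[Annotation] The D5 QUERY trace above is the subplanner handling the rooted $or that rsSync-0 runs against local.replset.minvalid: each branch is planned separately, the only available index is _id, neither t nor ts is indexed, so every branch rates zero indexed solutions and falls back to COLLSCAN. That is harmless here because minvalid holds a single document. A sketch that reproduces the same fallback on a throwaway collection; the localhost standalone and the planner_demo collection name are assumptions for illustration:

    from bson.timestamp import Timestamp
    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017").test
    db.planner_demo.drop()
    db.planner_demo.insert_one({"_id": 1, "t": 1, "ts": Timestamp(1567578960, 3)})

    # Same shape as the rsSync-0 query against local.replset.minvalid.
    cursor = db.planner_demo.find({
        "$or": [
            {"t": {"$lt": 1}},
            {"t": 1, "ts": {"$lt": Timestamp(1567578963, 1)}},
        ]
    })
    # With only the _id index present, the winning plan is a collection scan,
    # matching the "Planner: outputting a collscan" lines above.
    print(cursor.explain()["queryPlanner"]["winningPlan"])
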
2019-09-04T06:36:03.622+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24691
2019-09-04T06:36:03.622+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578963, 1)
2019-09-04T06:36:03.623+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:03.623+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24695
2019-09-04T06:36:03.623+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), appliedOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, appliedWallTime: new Date(1567578960449), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), appliedOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, appliedWallTime: new Date(1567578960449), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:03.623+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24695
2019-09-04T06:36:03.623+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578963, 1), t: 1 }({ ts: Timestamp(1567578963, 1), t: 1 })
2019-09-04T06:36:03.623+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1683 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:33.623+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), appliedOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, appliedWallTime: new Date(1567578960449), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), appliedOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, appliedWallTime: new Date(1567578960449), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:03.623+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:33.622+0000
2019-09-04T06:36:03.623+0000 D2 ASIO [RS] Request 1683 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) }
2019-09-04T06:36:03.623+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:03.623+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:03.623+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:33.623+0000
2019-09-04T06:36:03.624+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578963, 1), t: 1 }
2019-09-04T06:36:03.624+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1684 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:13.624+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578960, 3), t: 1 } }
2019-09-04T06:36:03.624+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:33.623+0000
2019-09-04T06:36:03.627+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:36:03.627+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:03.627+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), appliedOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, appliedWallTime: new Date(1567578960449), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), appliedOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, appliedWallTime: new Date(1567578960449), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:03.627+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1685 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:33.627+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), appliedOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, appliedWallTime: new Date(1567578960449), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, durableWallTime: new Date(1567578960449), appliedOpTime: { ts: Timestamp(1567578960, 3), t: 1 }, appliedWallTime: new Date(1567578960449), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:03.627+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:33.623+0000
2019-09-04T06:36:03.627+0000 D2 ASIO [RS] Request 1685 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) }
2019-09-04T06:36:03.627+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578960, 3), t: 1 }, lastCommittedWall: new Date(1567578960449), lastOpVisible: { ts: Timestamp(1567578960, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578960, 3), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:03.627+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:03.627+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:33.623+0000
2019-09-04T06:36:03.628+0000 D2 ASIO [RS] Request 1684 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpApplied: { ts: Timestamp(1567578963, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) }
2019-09-04T06:36:03.628+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpApplied: { ts: Timestamp(1567578963, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:03.628+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:03.628+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:36:03.628+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578958, 1)
2019-09-04T06:36:03.628+0000 D3 REPL [conn519] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn519] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.119+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn520] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn520] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.134+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn517] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn517] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.045+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn532] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn532] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:31.316+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn485] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn485] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.888+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn525] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn525] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.767+0000
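
[Annotation] Everything from the Reporter messages down to the conn* wakeups is the position-propagation cycle: after applying the batch, the node pushes replSetUpdatePosition to its upstream, learns the new commit point from the response metadata, advances the stable optime to { ts: Timestamp(1567578963, 1), t: 1 }, lets WiredTiger's oldest_timestamp trail it (Timestamp(1567578958, 1) here), and then wakes every connection parked in waitUntilOpTime. The same per-member optimes are visible through a plain admin command; a minimal sketch, again assuming direct unauthenticated access to this host:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    status = client.admin.command("replSetGetStatus")

    # Commit point and each member's applied position, the values the
    # replSetUpdatePosition traffic above keeps in sync.
    print("commit point:", status["optimes"]["lastCommittedOpTime"])
    for member in status["members"]:
        print(member["name"], member["stateStr"], member["optime"])
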
2019-09-04T06:36:03.628+0000 D3 REPL [conn522] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn522] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn500] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn500] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:14.296+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn492] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn492] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.328+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn497] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn497] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.882+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn515] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn515] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:24.151+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn526] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn526] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:22.595+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn514] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn514] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:13.414+0000
2019-09-04T06:36:03.628+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:36:14.298+0000
2019-09-04T06:36:03.628+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:36:13.948+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn523] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1686 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:13.628+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578963, 1), t: 1 } }
2019-09-04T06:36:03.628+0000 D3 REPL [conn523] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn504] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:03.628+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:32.839+0000
2019-09-04T06:36:03.628+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:33.623+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn504] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.918+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn511] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn511] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.873+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn513] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn513] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.878+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn489] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn489] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:15.190+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn483] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn483] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.939+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn518] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn518] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.061+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn491] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn491] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.023+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn512] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn512] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.876+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn499] Got notified of new snapshot: { ts: Timestamp(1567578963, 1), t: 1 }, 2019-09-04T06:36:03.617+0000
2019-09-04T06:36:03.628+0000 D3 REPL [conn499] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.752+0000
2019-09-04T06:36:03.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:03.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:03.689+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:03.697+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:03.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:03.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:03.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:03.721+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578963, 1)
2019-09-04T06:36:03.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:03.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:03.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:03.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:03.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:03.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:03.789+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:03.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:03.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:03.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:03.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:03.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:03.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:03.889+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:03.989+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:04.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:04.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.089+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:04.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.189+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:04.196+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 857478F508F9A72C7D8E83816CC8E2CC32039236), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:04.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:36:04.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 857478F508F9A72C7D8E83816CC8E2CC32039236), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:04.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 857478F508F9A72C7D8E83816CC8E2CC32039236), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:04.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), opTime: { ts: Timestamp(1567578963, 1), t: 1 }, wallTime: new Date(1567578963617) }
2019-09-04T06:36:04.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 857478F508F9A72C7D8E83816CC8E2CC32039236), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:36:04.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
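
[Annotation] The conn28 block above is the inbound half of heartbeating: the other members' replSetHeartbeat commands arrive roughly every two seconds, and the generated response advertises this node's state (2 = SECONDARY), its durable and applied optimes, and its sync source. Both intervals involved are ordinary replica-set settings; a minimal sketch that reads them from the live config (direct unauthenticated access assumed, defaults are 2000 and 10000 ms):

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    settings = client.admin.command("replSetGetConfig")["config"]["settings"]

    print("heartbeatIntervalMillis:", settings["heartbeatIntervalMillis"])  # 2000 by default
    print("electionTimeoutMillis:", settings["electionTimeoutMillis"])      # 10000 by default
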
2019-09-04T06:36:04.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.289+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:04.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.390+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:04.490+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:04.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.590+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:04.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.622+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:04.622+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:04.622+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578963, 1)
2019-09-04T06:36:04.622+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24726
2019-09-04T06:36:04.622+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:04.622+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:04.622+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24726
2019-09-04T06:36:04.623+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24729
2019-09-04T06:36:04.623+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24729
2019-09-04T06:36:04.623+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578963, 1), t: 1 }({ ts: Timestamp(1567578963, 1), t: 1 })
2019-09-04T06:36:04.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.661+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.690+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:04.696+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.790+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:04.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:04.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1687) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:04.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1687 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:14.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:04.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:32.839+0000
2019-09-04T06:36:04.839+0000 D2 ASIO [Replication] Request 1687 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), opTime: { ts: Timestamp(1567578963, 1), t: 1 }, wallTime: new Date(1567578963617), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) }
2019-09-04T06:36:04.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), opTime: { ts: Timestamp(1567578963, 1), t: 1 }, wallTime: new Date(1567578963617), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:36:04.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:04.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1687) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), opTime: { ts: Timestamp(1567578963, 1), t: 1 }, wallTime: new Date(1567578963617), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) }
2019-09-04T06:36:04.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary
2019-09-04T06:36:04.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:36:13.948+0000
2019-09-04T06:36:04.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:36:15.962+0000
2019-09-04T06:36:04.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0)
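
[Annotation] The Postponing/Canceling/Scheduling trio above is the failover timer at work: every good heartbeat from the primary cancels the pending election timeout callback and schedules a new one at roughly now plus electionTimeoutMillis plus a small randomized offset, which is why the deadlines drift (13.044, 14.298, 13.948, 15.962) instead of stepping uniformly. A minimal sketch of the arithmetic, using the two timestamps quoted in the entries above and the default 10 s timeout:

    from datetime import datetime

    # Timestamps quoted from the log lines above.
    heartbeat = datetime.fromisoformat("2019-09-04T06:36:04.839")
    deadline = datetime.fromisoformat("2019-09-04T06:36:15.962")

    base = 10.0  # electionTimeoutMillis default, in seconds
    offset = (deadline - heartbeat).total_seconds() - base
    print(f"randomized offset beyond the {base:.0f}s base: {offset:.3f}s")  # ~1.123s
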
2019-09-04T06:36:04.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:06.839Z
2019-09-04T06:36:04.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:04.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:34.839+0000
2019-09-04T06:36:04.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:34.839+0000
2019-09-04T06:36:04.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:04.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1688) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:04.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1688 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:14.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:04.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:34.839+0000
2019-09-04T06:36:04.840+0000 D2 ASIO [Replication] Request 1688 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), opTime: { ts: Timestamp(1567578963, 1), t: 1 }, wallTime: new Date(1567578963617), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) }
2019-09-04T06:36:04.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), opTime: { ts: Timestamp(1567578963, 1), t: 1 }, wallTime: new Date(1567578963617), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:04.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:04.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1688) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), opTime: { ts: Timestamp(1567578963, 1), t: 1 }, wallTime: new Date(1567578963617), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) }
2019-09-04T06:36:04.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:36:04.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:06.840Z
2019-09-04T06:36:04.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:34.839+0000
2019-09-04T06:36:04.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:04.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:04.890+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:04.990+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:05.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:05.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 857478F508F9A72C7D8E83816CC8E2CC32039236), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:05.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:36:05.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 857478F508F9A72C7D8E83816CC8E2CC32039236), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:05.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 857478F508F9A72C7D8E83816CC8E2CC32039236), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:05.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), opTime: { ts: Timestamp(1567578963, 1), t: 1 }, wallTime: new Date(1567578963617) }
2019-09-04T06:36:05.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 857478F508F9A72C7D8E83816CC8E2CC32039236), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.090+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:05.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.191+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:05.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:36:05.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
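
[Annotation] The FlowControlRefresher entries fire once per second; "Before: 1000000000 Now: 1000000000" means the flow-control ticket pool is pegged at its ceiling, i.e. the set has no majority-commit lag and replication writes are not being throttled. The current flow-control view is exposed in serverStatus; a minimal sketch (direct unauthenticated access assumed, the exact field set is whatever this 4.2 server returns):

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?directConnection=true")
    flow = client.admin.command("serverStatus")["flowControl"]

    # On a healthy set the refresher leaves the ticket pool at its maximum,
    # matching the 1000000000 -> 1000000000 lines above.
    print(flow)
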
2019-09-04T06:36:05.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.291+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:05.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.391+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:05.491+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:05.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.591+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:05.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.622+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:05.622+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:05.622+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578963, 1)
2019-09-04T06:36:05.622+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24758
2019-09-04T06:36:05.622+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:05.622+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:05.622+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24758
2019-09-04T06:36:05.623+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24761
2019-09-04T06:36:05.623+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24761
2019-09-04T06:36:05.623+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578963, 1), t: 1 }({ ts: Timestamp(1567578963, 1), t: 1 })
2019-09-04T06:36:05.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.691+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:05.696+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.791+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:05.827+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.827+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:05.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:05.891+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:05.992+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:06.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:06.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:06.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:06.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:06.070+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:06.092+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:06.113+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:06.113+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:06.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:06.161+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:06.192+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:06.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:06.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:06.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:06.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:06.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 857478F508F9A72C7D8E83816CC8E2CC32039236), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:06.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:36:06.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 857478F508F9A72C7D8E83816CC8E2CC32039236), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:06.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 857478F508F9A72C7D8E83816CC8E2CC32039236), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:06.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), opTime: { ts: Timestamp(1567578963, 1), t: 1 }, wallTime: new Date(1567578963617) }
2019-09-04T06:36:06.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2,
term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 857478F508F9A72C7D8E83816CC8E2CC32039236), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:06.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:06.241+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.292+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:06.327+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.327+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.392+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:06.492+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:06.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.592+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:06.613+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.613+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.622+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:06.622+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:06.622+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578963, 1) 2019-09-04T06:36:06.622+0000 D3 STORAGE 
[ReplBatcher] WT begin_transaction for snapshot id 24790 2019-09-04T06:36:06.622+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:06.622+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:06.622+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24790 2019-09-04T06:36:06.623+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24793 2019-09-04T06:36:06.623+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24793 2019-09-04T06:36:06.623+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578963, 1), t: 1 }({ ts: Timestamp(1567578963, 1), t: 1 }) 2019-09-04T06:36:06.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.692+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:06.697+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.793+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:06.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:06.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1689) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:06.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1689 -- 
target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:16.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:06.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:34.839+0000 2019-09-04T06:36:06.839+0000 D2 ASIO [Replication] Request 1689 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), opTime: { ts: Timestamp(1567578963, 1), t: 1 }, wallTime: new Date(1567578963617), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) } 2019-09-04T06:36:06.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), opTime: { ts: Timestamp(1567578963, 1), t: 1 }, wallTime: new Date(1567578963617), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:06.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:06.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1689) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), opTime: { ts: Timestamp(1567578963, 1), t: 1 }, wallTime: new Date(1567578963617), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) } 2019-09-04T06:36:06.839+0000 D4 ELECTION 
[replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:06.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:36:15.962+0000 2019-09-04T06:36:06.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:36:18.188+0000 2019-09-04T06:36:06.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:06.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:08.839Z 2019-09-04T06:36:06.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:06.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:36.839+0000 2019-09-04T06:36:06.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:36.839+0000 2019-09-04T06:36:06.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:06.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1690) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:06.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1690 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:16.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:06.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:36.839+0000 2019-09-04T06:36:06.840+0000 D2 ASIO [Replication] Request 1690 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), opTime: { ts: Timestamp(1567578963, 1), t: 1 }, wallTime: new Date(1567578963617), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) } 2019-09-04T06:36:06.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), opTime: { ts: Timestamp(1567578963, 1), t: 1 }, wallTime: new Date(1567578963617), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: 
Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:06.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:06.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1690) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), opTime: { ts: Timestamp(1567578963, 1), t: 1 }, wallTime: new Date(1567578963617), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578963, 1) } 2019-09-04T06:36:06.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:06.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:08.840Z 2019-09-04T06:36:06.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:36.839+0000 2019-09-04T06:36:06.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:06.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:06.893+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:06.993+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:07.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:07.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 857478F508F9A72C7D8E83816CC8E2CC32039236), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:07.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:07.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 
857478F508F9A72C7D8E83816CC8E2CC32039236), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:07.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 857478F508F9A72C7D8E83816CC8E2CC32039236), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:07.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), opTime: { ts: Timestamp(1567578963, 1), t: 1 }, wallTime: new Date(1567578963617) } 2019-09-04T06:36:07.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578963, 1), signature: { hash: BinData(0, 857478F508F9A72C7D8E83816CC8E2CC32039236), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.093+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:07.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.193+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:07.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:07.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:07.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.293+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:07.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.393+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:07.493+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:07.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.594+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:07.622+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:07.623+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:07.623+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578963, 1) 2019-09-04T06:36:07.623+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24819 2019-09-04T06:36:07.623+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:07.623+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:07.623+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24819 2019-09-04T06:36:07.623+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24822 2019-09-04T06:36:07.623+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24822 2019-09-04T06:36:07.623+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578963, 1), t: 1 }({ ts: Timestamp(1567578963, 1), t: 1 }) 2019-09-04T06:36:07.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.662+0000 I COMMAND [conn59] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.694+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:07.697+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.794+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:07.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:07.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:07.894+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:07.994+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:08.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:08.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:08.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:08.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:08.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:08.094+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:08.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:08.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:08.195+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:08.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:08.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:08.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:08.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { 
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:08.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578966, 1), signature: { hash: BinData(0, EEE4B78A57649CB9954A58D102C2003C3321D7A7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:08.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:08.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578966, 1), signature: { hash: BinData(0, EEE4B78A57649CB9954A58D102C2003C3321D7A7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:08.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578966, 1), signature: { hash: BinData(0, EEE4B78A57649CB9954A58D102C2003C3321D7A7), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:08.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), opTime: { ts: Timestamp(1567578963, 1), t: 1 }, wallTime: new Date(1567578963617) } 2019-09-04T06:36:08.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578966, 1), signature: { hash: BinData(0, EEE4B78A57649CB9954A58D102C2003C3321D7A7), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:08.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:08.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:08.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:08.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:08.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:08.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:08.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:08.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:08.295+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:08.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:08.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:08.395+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:08.417+0000 D2 ASIO [RS] Request 1686 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578968, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578968415), o: { $v: 1, $set: { ping: new Date(1567578968415) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpApplied: { ts: Timestamp(1567578968, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578968, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 1) } 2019-09-04T06:36:08.417+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578968, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578968415), o: { $v: 1, $set: { ping: new Date(1567578968415) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpApplied: { ts: Timestamp(1567578968, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578968, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:08.417+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:08.417+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578968, 1) and ending at ts: Timestamp(1567578968, 1) 2019-09-04T06:36:08.417+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:36:18.188+0000 2019-09-04T06:36:08.417+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:36:19.313+0000 2019-09-04T06:36:08.417+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:08.417+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:36.839+0000 2019-09-04T06:36:08.417+0000 D2 REPL [replication-0] oplog buffer has 0 bytes 2019-09-04T06:36:08.417+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578968, 1), t: 1 } 2019-09-04T06:36:08.417+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:08.417+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:08.417+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578963, 1) 2019-09-04T06:36:08.417+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24844 2019-09-04T06:36:08.417+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:08.417+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:08.417+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24844 2019-09-04T06:36:08.417+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:08.417+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:36:08.417+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:08.417+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578963, 1) 2019-09-04T06:36:08.417+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24847 2019-09-04T06:36:08.417+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578968, 1) } 2019-09-04T06:36:08.417+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:08.417+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 
2019-09-04T06:36:08.417+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24847
2019-09-04T06:36:08.417+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24823
2019-09-04T06:36:08.417+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24823
2019-09-04T06:36:08.417+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24850
2019-09-04T06:36:08.417+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24850
2019-09-04T06:36:08.417+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:36:08.417+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 24852
2019-09-04T06:36:08.417+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578968, 1)
2019-09-04T06:36:08.417+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578968, 1)
2019-09-04T06:36:08.418+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 24852
2019-09-04T06:36:08.418+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:36:08.418+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:36:08.418+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24851
2019-09-04T06:36:08.418+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24851
2019-09-04T06:36:08.418+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24854
2019-09-04T06:36:08.418+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24854
2019-09-04T06:36:08.418+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578968, 1), t: 1 }({ ts: Timestamp(1567578968, 1), t: 1 })
2019-09-04T06:36:08.418+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578968, 1)
2019-09-04T06:36:08.418+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24855
2019-09-04T06:36:08.418+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578968, 1) } } ] } sort: {} projection: {}
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578968, 1) Sort: {} Proj: {} =============================
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578968, 1) || First: notFirst: full path: ts
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578968, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578968, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578968, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578968, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:08.418+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24855 2019-09-04T06:36:08.418+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:36:08.418+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:08.418+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578968, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578968415), o: { $v: 1, $set: { ping: new Date(1567578968415) } } }, oplog application mode: Secondary 2019-09-04T06:36:08.418+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578968, 1) 2019-09-04T06:36:08.418+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 24857 2019-09-04T06:36:08.418+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "ConfigServer" } 2019-09-04T06:36:08.418+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:36:08.418+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 24857 2019-09-04T06:36:08.418+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:36:08.418+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578968, 1), t: 1 }({ ts: Timestamp(1567578968, 1), t: 1 }) 2019-09-04T06:36:08.418+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578968, 1) 2019-09-04T06:36:08.418+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24856 2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:08.418+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:08.418+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:36:08.418+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24856 2019-09-04T06:36:08.418+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578968, 1) 2019-09-04T06:36:08.418+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24860 2019-09-04T06:36:08.418+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24860 2019-09-04T06:36:08.418+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578968, 1), t: 1 }({ ts: Timestamp(1567578968, 1), t: 1 }) 2019-09-04T06:36:08.418+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:36:08.418+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578968, 1), t: 1 }, appliedWallTime: new Date(1567578968415), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:08.418+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1691 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:38.418+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578968, 1), t: 1 }, appliedWallTime: new Date(1567578968415), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:08.419+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.418+0000 2019-09-04T06:36:08.419+0000 D2 ASIO [RS] Request 1691 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578968, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 1) } 2019-09-04T06:36:08.419+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578968, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:08.419+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:36:08.419+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.419+0000 2019-09-04T06:36:08.419+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578968, 1), t: 1 } 2019-09-04T06:36:08.419+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1692 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:18.419+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578963, 1), t: 1 } } 2019-09-04T06:36:08.419+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.419+0000 2019-09-04T06:36:08.437+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:36:08.437+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:36:08.437+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 1), t: 1 }, durableWallTime: new Date(1567578968415), appliedOpTime: { ts: Timestamp(1567578968, 1), t: 1 }, appliedWallTime: new Date(1567578968415), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:08.437+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1693 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:38.437+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { 
ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 1), t: 1 }, durableWallTime: new Date(1567578968415), appliedOpTime: { ts: Timestamp(1567578968, 1), t: 1 }, appliedWallTime: new Date(1567578968415), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:08.437+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.419+0000 2019-09-04T06:36:08.437+0000 D2 ASIO [RS] Request 1693 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578968, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 1) } 2019-09-04T06:36:08.437+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578963, 1), t: 1 }, lastCommittedWall: new Date(1567578963617), lastOpVisible: { ts: Timestamp(1567578963, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578963, 1), $clusterTime: { clusterTime: Timestamp(1567578968, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:08.437+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:08.437+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.419+0000 2019-09-04T06:36:08.438+0000 D2 ASIO [RS] Request 1692 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 1), t: 1 }, lastCommittedWall: new Date(1567578968415), lastOpVisible: { ts: Timestamp(1567578968, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578968, 1), t: 1 }, lastCommittedWall: new Date(1567578968415), lastOpApplied: { ts: Timestamp(1567578968, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
2019-09-04T06:36:08.438+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 1), t: 1 }, lastCommittedWall: new Date(1567578968415), lastOpVisible: { ts: Timestamp(1567578968, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578968, 1), t: 1 }, lastCommittedWall: new Date(1567578968415), lastOpApplied: { ts: Timestamp(1567578968, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 1), $clusterTime: { clusterTime: Timestamp(1567578968, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:08.438+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:08.438+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:36:08.438+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.438+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.438+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578963, 1)
2019-09-04T06:36:08.438+0000 D3 REPL [conn512] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn512] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.876+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn525] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn525] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.767+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn526] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn526] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:22.595+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn517] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn517] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.045+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn522] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn522] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn485] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn485] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.888+0000
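Annotation: once the commit point reaches { ts: Timestamp(1567578968, 1), t: 1 }, the node pins WiredTiger's stable timestamp there and drags oldest_timestamp along behind it, bounding how much snapshot history the engine keeps. The externally visible counterparts of these internal optimes live in replSetGetStatus (field names per 4.2; output trimmed):

    var s = db.adminCommand({ replSetGetStatus: 1 });
    printjson(s.optimes)  // lastCommittedOpTime, readConcernMajorityOpTime, appliedOpTime, durableOpTime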
2019-09-04T06:36:08.438+0000 D3 REPL [conn519] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn519] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.119+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn523] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn523] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn504] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn504] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.918+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn513] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn513] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.878+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn483] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.438+0000 D3 REPL [conn483] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.939+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn491] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn491] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.023+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn499] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn499] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.752+0000
2019-09-04T06:36:08.439+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:36:19.313+0000
2019-09-04T06:36:08.439+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:36:19.320+0000
2019-09-04T06:36:08.439+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:08.439+0000 D3 REPL [conn532] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.439+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:36.839+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn532] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:31.316+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn518] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn518] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.061+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn489] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn489] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:15.190+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn511] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn511] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.873+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn500] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
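Annotation: every heartbeat and applied batch re-arms the election timeout (here cancelled at 19.313 and rescheduled to 19.320), so a healthy stream of data keeps this secondary from calling an election. The timeout base is configurable; the roughly 10-11s offsets seen in this log are consistent with the 10s default plus randomized jitter (my reading, not stated in the log itself):

    rs.conf().settings.electionTimeoutMillis  // default 10000 in 4.x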
2019-09-04T06:36:08.439+0000 D3 REPL [conn500] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:14.296+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn497] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn497] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.882+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn492] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn492] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.328+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn515] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn515] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:24.151+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn514] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn514] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:13.414+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn520] Got notified of new snapshot: { ts: Timestamp(1567578968, 1), t: 1 }, 2019-09-04T06:36:08.415+0000
2019-09-04T06:36:08.439+0000 D3 REPL [conn520] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.134+0000
2019-09-04T06:36:08.439+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1694 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:18.439+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578968, 1), t: 1 } }
2019-09-04T06:36:08.439+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.419+0000
2019-09-04T06:36:08.495+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:08.517+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578968, 1)
2019-09-04T06:36:08.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:08.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:08.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:08.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:08.595+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:08.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:08.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:08.695+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:08.697+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:08.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
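Annotation: the steady drumbeat of isMaster calls on conn52/conn58/conn59/conn75 is ordinary driver and mongos topology monitoring; each reply (reslen:907) carries the host list and current primary so clients can route operations. The same command from the shell:

    db.adminCommand({ isMaster: 1 })  // expect ismaster: false, secondary: true on this node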
2019-09-04T06:36:08.701+0000 D2 ASIO [RS] Request 1694 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578968, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578968684), o: { $v: 1, $set: { ping: new Date(1567578968684) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 1), t: 1 }, lastCommittedWall: new Date(1567578968415), lastOpVisible: { ts: Timestamp(1567578968, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578968, 1), t: 1 }, lastCommittedWall: new Date(1567578968415), lastOpApplied: { ts: Timestamp(1567578968, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 1), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) }
2019-09-04T06:36:08.701+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578968, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578968684), o: { $v: 1, $set: { ping: new Date(1567578968684) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 1), t: 1 }, lastCommittedWall: new Date(1567578968415), lastOpVisible: { ts: Timestamp(1567578968, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578968, 1), t: 1 }, lastCommittedWall: new Date(1567578968415), lastOpApplied: { ts: Timestamp(1567578968, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 1), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:08.701+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:08.701+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578968, 2) and ending at ts: Timestamp(1567578968, 2)
2019-09-04T06:36:08.701+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:36:19.320+0000
2019-09-04T06:36:08.701+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:36:18.740+0000
2019-09-04T06:36:08.701+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:08.701+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578968, 2), t: 1 }
2019-09-04T06:36:08.701+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:08.701+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
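Annotation: the single fetched document is a config.lockpings update in oplog form: op: "u" with o2 holding the _id of the target document and o holding the $set modifier ($v: 1 marks the update-document format version). Entries like it can be found by namespace on any member:

    db.getSiblingDB("local").oplog.rs
        .find({ op: "u", ns: "config.lockpings" })
        .sort({ $natural: -1 }).limit(1).pretty()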
2019-09-04T06:36:08.701+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578968, 1)
2019-09-04T06:36:08.701+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24868
2019-09-04T06:36:08.701+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:08.701+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:08.701+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24868
2019-09-04T06:36:08.701+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:08.701+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:08.701+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578968, 1)
2019-09-04T06:36:08.701+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:36:08.701+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24871
2019-09-04T06:36:08.701+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578968, 2) }
2019-09-04T06:36:08.701+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:08.701+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:08.701+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24871
2019-09-04T06:36:08.701+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:36.839+0000
2019-09-04T06:36:08.701+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24862
2019-09-04T06:36:08.701+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24862
2019-09-04T06:36:08.701+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24874
2019-09-04T06:36:08.701+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24874
2019-09-04T06:36:08.701+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:36:08.701+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 24876
2019-09-04T06:36:08.701+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578968, 2)
2019-09-04T06:36:08.701+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578968, 2)
2019-09-04T06:36:08.701+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 24876
2019-09-04T06:36:08.701+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:36:08.701+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
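Annotation: before writing the batch, rsSync-0 records the batch's last timestamp as the oplog truncate-after point and clears it (back to Timestamp(0, 0)) once the write is durable; after a crash mid-batch, startup recovery truncates any oplog tail past that marker. The marker is an ordinary document in an internal collection (name and layout per 4.x; may differ across versions):

    db.getSiblingDB("local").getCollection("replset.oplogTruncateAfterPoint").findOne()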
2019-09-04T06:36:08.701+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24875
2019-09-04T06:36:08.701+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24875
2019-09-04T06:36:08.701+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24878
2019-09-04T06:36:08.701+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24878
2019-09-04T06:36:08.701+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578968, 2), t: 1 }({ ts: Timestamp(1567578968, 2), t: 1 })
2019-09-04T06:36:08.702+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578968, 2)
2019-09-04T06:36:08.702+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24879
2019-09-04T06:36:08.702+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578968, 2) } } ] } sort: {} projection: {}
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567578968, 2)
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Rated tree:
$and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567578968, 2) || First: notFirst: full path: ts
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567578968, 2)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Rated tree:
t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578968, 2)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Rated tree:
$or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567578968, 2) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567578968, 2)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:08.702+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24879
2019-09-04T06:36:08.702+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:36:08.702+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:08.702+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578968, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578968684), o: { $v: 1, $set: { ping: new Date(1567578968684) } } }, oplog application mode: Secondary
2019-09-04T06:36:08.702+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578968, 2)
2019-09-04T06:36:08.702+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 24881
2019-09-04T06:36:08.702+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }
2019-09-04T06:36:08.702+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:36:08.702+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 24881
2019-09-04T06:36:08.702+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:36:08.702+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578968, 2), t: 1 }({ ts: Timestamp(1567578968, 2), t: 1 })
2019-09-04T06:36:08.702+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578968, 2)
2019-09-04T06:36:08.702+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24880
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Rated tree:
$and
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:08.702+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:08.702+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:36:08.702+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 24880
2019-09-04T06:36:08.702+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578968, 2)
2019-09-04T06:36:08.702+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24884
2019-09-04T06:36:08.702+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24884
2019-09-04T06:36:08.702+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578968, 2), t: 1 }({ ts: Timestamp(1567578968, 2), t: 1 })
2019-09-04T06:36:08.702+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:08.702+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 1), t: 1 }, durableWallTime: new Date(1567578968415), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 1), t: 1 }, lastCommittedWall: new Date(1567578968415), lastOpVisible: { ts: Timestamp(1567578968, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:08.702+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1695 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:38.702+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 1), t: 1 }, durableWallTime: new Date(1567578968415), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 1), t: 1 }, lastCommittedWall: new Date(1567578968415), lastOpVisible: { ts: Timestamp(1567578968, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
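Annotation: the verbose D5 QUERY traces above are just bookkeeping reads of local.replset.minvalid (only the _id index exists, so every subquery falls back to a COLLSCAN over a one-document collection), while the actual oplog op is applied through the idhack fast path: an exact _id lookup that bypasses the planner. Both are easy to poke at from the shell:

    db.getSiblingDB("local").getCollection("replset.minvalid").findOne()
    // _id equality predicates shortcut to IDHACK instead of full planning
    db.getSiblingDB("config").lockpings
        .find({ _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" })
        .explain().queryPlanner.winningPlan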
2019-09-04T06:36:08.702+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.702+0000
2019-09-04T06:36:08.703+0000 D2 ASIO [RS] Request 1695 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 1), t: 1 }, lastCommittedWall: new Date(1567578968415), lastOpVisible: { ts: Timestamp(1567578968, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 1), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) }
2019-09-04T06:36:08.703+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 1), t: 1 }, lastCommittedWall: new Date(1567578968415), lastOpVisible: { ts: Timestamp(1567578968, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 1), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:08.703+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:08.703+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.703+0000
2019-09-04T06:36:08.703+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578968, 2), t: 1 }
2019-09-04T06:36:08.703+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1696 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:18.703+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578968, 1), t: 1 } }
2019-09-04T06:36:08.703+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.703+0000
2019-09-04T06:36:08.707+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:36:08.707+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:08.707+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 1), t: 1 }, lastCommittedWall: new Date(1567578968415), lastOpVisible: { ts: Timestamp(1567578968, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:08.707+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1697 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:38.707+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, durableWallTime: new Date(1567578963617), appliedOpTime: { ts: Timestamp(1567578963, 1), t: 1 }, appliedWallTime: new Date(1567578963617), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 1), t: 1 }, lastCommittedWall: new Date(1567578968415), lastOpVisible: { ts: Timestamp(1567578968, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:08.707+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.703+0000
2019-09-04T06:36:08.708+0000 D2 ASIO [RS] Request 1697 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 1), t: 1 }, lastCommittedWall: new Date(1567578968415), lastOpVisible: { ts: Timestamp(1567578968, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 1), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) }
2019-09-04T06:36:08.708+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 1), t: 1 }, lastCommittedWall: new Date(1567578968415), lastOpVisible: { ts: Timestamp(1567578968, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 1), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:08.708+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:08.708+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.703+0000
2019-09-04T06:36:08.708+0000 D2 ASIO [RS] Request 1696 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpApplied: { ts: Timestamp(1567578968, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) }
2019-09-04T06:36:08.708+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpApplied: { ts: Timestamp(1567578968, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:08.708+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:08.708+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:36:08.708+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578963, 2)
2019-09-04T06:36:08.708+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:36:18.740+0000
2019-09-04T06:36:08.708+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:36:18.918+0000
2019-09-04T06:36:08.708+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:08.708+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1698 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:18.708+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578968, 2), t: 1 } }
2019-09-04T06:36:08.708+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:36.839+0000
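Annotation: one full round trip is now visible: fetch op (1567578968, 2), apply it, report positions upstream (requests 1695/1697), learn the new commit point from the empty getMore 1696, advance the stable and oldest timestamps, and immediately re-issue getMore 1698. The oplog capacity backing this churn can be checked with:

    rs.printReplicationInfo()  // configured oplog size, log length start to end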
2019-09-04T06:36:08.708+0000 D3 REPL [conn525] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn525] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.767+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn517] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn517] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.045+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn522] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn522] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000
2019-09-04T06:36:08.708+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.703+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn485] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn485] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.888+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn519] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn519] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.119+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn504] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn504] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.918+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn497] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn497] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.882+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn532] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn532] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:31.316+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn520] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn520] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.134+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn512] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn512] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.876+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn526] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn526] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:22.595+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn523] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn523] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn513] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn513] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.878+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn491] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn491] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.023+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn518] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn518] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.061+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn511] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn511] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.873+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn489] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.708+0000 D3 REPL [conn489] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:15.190+0000
2019-09-04T06:36:08.709+0000 D3 REPL [conn500] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.709+0000 D3 REPL [conn500] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:14.296+0000
2019-09-04T06:36:08.709+0000 D3 REPL [conn514] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.709+0000 D3 REPL [conn514] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:13.414+0000
2019-09-04T06:36:08.709+0000 D3 REPL [conn483] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.709+0000 D3 REPL [conn483] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.939+0000
2019-09-04T06:36:08.709+0000 D3 REPL [conn492] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.709+0000 D3 REPL [conn492] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:12.328+0000
2019-09-04T06:36:08.709+0000 D3 REPL [conn499] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.709+0000 D3 REPL [conn499] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.752+0000
2019-09-04T06:36:08.709+0000 D3 REPL [conn515] Got notified of new snapshot: { ts: Timestamp(1567578968, 2), t: 1 }, 2019-09-04T06:36:08.684+0000
2019-09-04T06:36:08.709+0000 D3 REPL [conn515] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:24.151+0000
2019-09-04T06:36:08.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:08.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:08.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:08.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:08.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:08.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:08.795+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:08.801+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578968, 2)
2019-09-04T06:36:08.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:08.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1699) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:08.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1699 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:18.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:08.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:36.839+0000
2019-09-04T06:36:08.839+0000 D2 ASIO [Replication] Request 1699 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) }
2019-09-04T06:36:08.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:36:08.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:08.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1699) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) }
2019-09-04T06:36:08.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary
2019-09-04T06:36:08.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:36:18.918+0000
2019-09-04T06:36:08.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:36:20.163+0000
2019-09-04T06:36:08.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:08.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:36:08.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.839+0000
2019-09-04T06:36:08.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:10.839Z
2019-09-04T06:36:08.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.839+0000
2019-09-04T06:36:08.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:08.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1700) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:08.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1700 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:18.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:08.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.839+0000
2019-09-04T06:36:08.840+0000 D2 ASIO [Replication] Request 1700 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) }
2019-09-04T06:36:08.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:08.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:08.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1700) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) }
2019-09-04T06:36:08.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:36:08.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:10.840Z
2019-09-04T06:36:08.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.839+0000
2019-09-04T06:36:08.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:08.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:08.896+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:08.996+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:09.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:09.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:09.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:09.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $db: "admin" }
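Annotation: heartbeats 1699/1700 give the replica-set-level picture: cmodb802 answers as state 1 (PRIMARY, carrying an electionTime and a non-zero electionId), cmodb804 as state 2 (SECONDARY, syncingTo cmodb802), and each good response schedules the next heartbeat two seconds out. A compact shell view of the same topology (field names as in the 4.2 rs.status() output):

    rs.status().members.map(function (m) {
        return { name: m.name, state: m.stateStr, syncingTo: m.syncingTo };
    })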
2019-09-04T06:36:09.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:36:09.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:09.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:09.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684) }
2019-09-04T06:36:09.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:09.070+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:09.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:09.096+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:09.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:09.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:09.196+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:09.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:09.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:09.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:09.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:09.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:36:09.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
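Annotation: FlowControlRefresher is the flow-control machinery introduced in 4.2; with no majority-commit lag it hands out the maximum 1000000000 tickets each refresh, i.e. it is not throttling writers here. Its externally visible state (section present on 4.2+ primaries and secondaries):

    db.serverStatus().flowControl  // enabled, isLagged, targetRateLimit, ...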
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:09.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:09.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:09.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:09.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:09.296+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:09.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:09.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:09.397+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:09.497+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:09.538+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $db: "config" } 2019-09-04T06:36:09.538+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } } } 2019-09-04T06:36:09.538+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:36:09.538+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578968, 2) 2019-09-04T06:36:09.538+0000 D2 QUERY [conn61] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:36:09.538+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:36:09.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:09.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:09.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:09.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:09.597+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:09.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:09.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:09.697+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:09.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:09.697+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:09.701+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:09.701+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:09.701+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578968, 2) 2019-09-04T06:36:09.701+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24908 2019-09-04T06:36:09.701+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:09.701+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:09.701+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24908 2019-09-04T06:36:09.702+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24911 2019-09-04T06:36:09.702+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 24911 2019-09-04T06:36:09.702+0000 D3 REPL [rsSync-0] returning 
minvalid: { ts: Timestamp(1567578968, 2), t: 1 }({ ts: Timestamp(1567578968, 2), t: 1 }) 2019-09-04T06:36:09.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:09.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:09.742+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:09.742+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:09.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:09.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:09.797+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:09.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:09.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:09.897+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:09.997+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:10.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:10.000+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:36:10.000+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:36:10.000+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:36:10.012+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:36:10.012+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:36:10.012+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:36:10.012+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:36:10.012+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:36:10.012+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:36:10.012+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:10.013+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 
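
The config.settings read above carries readConcern { level: "majority", afterOpTime: ... }, and the server logs "Waiting for 'committed' snapshot" until the majority-committed snapshot catches up to that optime. The afterOpTime field is internal plumbing sent to config servers; the closest operator-visible equivalent from a driver is a majority read inside a causally consistent session, which attaches afterClusterTime instead. A minimal pymongo sketch, with the host list assumed from the configrs members seen in this log:

    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    # Seed list assumed from the hosts appearing in this log (replica set configrs).
    client = MongoClient(
        "mongodb://cmodb802.togewa.com:27019,cmodb803.togewa.com:27019,"
        "cmodb804.togewa.com:27019/?replicaSet=configrs")

    settings = client.config.get_collection(
        "settings", read_concern=ReadConcern("majority"))

    # A causally consistent session makes the driver attach afterClusterTime,
    # so the server waits for a committed snapshot at least that fresh; that is
    # the same wait the log shows before "Using 'committed' snapshot".
    with client.start_session(causal_consistency=True) as session:
        doc = settings.find_one({"_id": "chunksize"}, session=session)

    # doc is None here: the log's "Collection config.settings does not exist"
    # answer is an EOF plan with nreturned:0, so the balancer uses its default
    # chunk size.
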
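The saslStart/saslContinue pair on conn90 is the driver side of the SCRAM-SHA-1 handshake (the server redacts the payloads as "xxx"); the same connection then runs serverStatus and, just below, replSetGetStatus with a secondaryPreferred read preference, the signature of a monitoring agent. The replSetHeartbeat traffic earlier in the log is internal to the replica set; the operator-visible view of that state is replSetGetStatus. A sketch of such a client in pymongo, with the password as a placeholder rather than anything taken from the log:

    from pymongo import MongoClient, ReadPreference

    # dba_root@admin is the principal the log shows authenticating.
    client = MongoClient(
        "cmodb803.togewa.com", 27019,
        username="dba_root", password="<secret>",   # placeholder credential
        authSource="admin", authMechanism="SCRAM-SHA-1")

    admin = client.get_database(
        "admin", read_preference=ReadPreference.SECONDARY_PREFERRED)

    status = admin.command("serverStatus")   # the ~35 KB reply seen as reslen:35151
    rs = admin.command("replSetGetStatus")   # member states, optimes, sync sources
    print(rs["myState"], [m["stateStr"] for m in rs["members"]])
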
2019-09-04T06:36:10.014+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:10.014+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:36:10.014+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:36:10.014+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:36:10.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:10.014+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:36:10.014+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:36:10.014+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:36:10.014+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:36:10.014+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:36:10.014+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:36:10.014+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:36:10.014+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:10.014+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:10.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:36:10.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578968, 2) 2019-09-04T06:36:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24923 2019-09-04T06:36:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24923 2019-09-04T06:36:10.014+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:10.014+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:10.014+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:36:10.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:36:10.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:10.015+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:36:10.015+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:36:10.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:36:10.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578968, 2) 2019-09-04T06:36:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24926 2019-09-04T06:36:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24926 2019-09-04T06:36:10.015+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:10.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:36:10.015+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:10.015+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:36:10.015+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:36:10.015+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:36:10.015+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578968, 2) 2019-09-04T06:36:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24928 2019-09-04T06:36:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24928 2019-09-04T06:36:10.015+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:10.015+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:36:10.015+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. 
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:36:10.015+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:36:10.015+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:10.015+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:36:10.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:36:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24931 2019-09-04T06:36:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:36:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:36:10.015+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:36:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24931 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24932 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24932 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24933 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24933 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24934 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24934 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24935 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24935 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24936 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
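
For the jumbo-chunk count above, the planner rates all four config.chunks indexes, finds none whose key pattern leads with jumbo, and falls back to a collection scan ("Planner: outputted 0 indexed solutions"). The same verdict can be reproduced from a client with explain; a sketch, reusing the authenticated client from the earlier example:

    from pymongo import ReadPreference

    config = client.get_database(
        "config", read_preference=ReadPreference.SECONDARY_PREFERRED)

    n_jumbo = config.chunks.count_documents({"jumbo": True})

    # explain() surfaces the decision the log prints at verbosity D5: no index
    # has 'jumbo' as a prefix, so the winning plan is a full collection scan.
    plan = config.chunks.find({"jumbo": True}).explain()
    print(n_jumbo, plan["queryPlanner"]["winningPlan"]["stage"])  # e.g. 0 COLLSCAN
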
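The two oplog reads on conn90 bracket the replication window: a forward and a reverse $natural sort, each with limit 1, return the oldest and newest entries, and the planner notes "Forcing a table scan due to hinted $natural" because $natural means record order rather than an index. The extra probe of local.oplog.$main (the old master-slave oplog name) coming back with an EOF plan is how tooling detects which oplog flavor a node has. The same probe in pymongo:

    import pymongo

    oplog = client.local["oplog.rs"]
    first = oplog.find({"ts": {"$exists": True}}).sort(
        "$natural", pymongo.ASCENDING).limit(1).next()
    last = oplog.find({"ts": {"$exists": True}}).sort(
        "$natural", pymongo.DESCENDING).limit(1).next()

    # first["ts"] .. last["ts"] is the window a lagging secondary can still
    # sync from before it needs a full initial sync.
    print(first["ts"], last["ts"])
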
2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24936 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24937 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24937 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24938 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24938 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24939 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24939 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24940 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24940 
2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24941 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
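
The catalog metadata fetched here for config.chunks lists the same four indexes the planner rated earlier: ns_1_min_1, ns_1_shard_1_min_1 and ns_1_lastmod_1 (all unique), plus _id_. The client-side view of this catalog entry is list_indexes; a short sketch:

    for spec in client.config.chunks.list_indexes():
        print(spec["name"], dict(spec["key"]), spec.get("unique", False))
    # Expected, matching the CCE metadata in the log:
    #   ns_1_min_1         {'ns': 1, 'min': 1}             True
    #   ns_1_shard_1_min_1 {'ns': 1, 'shard': 1, 'min': 1} True
    #   ns_1_lastmod_1     {'ns': 1, 'lastmod': 1}         True
    #   _id_               {'_id': 1}                      False
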
2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24941 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24942 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24942 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24943 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24943 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24944 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:36:10.016+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24944 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24945 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
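
This whole metadata walk, which continues below through the local database's collections, is the server-side cost of the single listDatabases command conn90 issued: each collection's catalog entry is read under a short WiredTiger snapshot (the begin_transaction/rollback_transaction pairs) so the databases can be sized. The per-database dbStats calls that follow the walk complete the picture. From the client, the whole exchange is just:

    # One listDatabases, then one dbStats per database, mirroring conn90's pass.
    for name in client.list_database_names():   # admin, config, local here
        stats = client[name].command("dbStats")
        print(name, stats["collections"], stats["objects"], stats["dataSize"])
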
2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24945 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24946 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24946 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24947 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24947 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24948 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24948 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24949 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 24949 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24950 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24950 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24951 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24951 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24952 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24952 2019-09-04T06:36:10.017+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:36:10.017+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24954 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24954 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24955 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24955 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24956 2019-09-04T06:36:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24956 2019-09-04T06:36:10.017+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:10.018+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24958 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24958 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24959 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24959 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24960 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24960 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24961 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24961 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24962 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24962 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24963 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24963 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24964 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24964 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 24965 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24965 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24966 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24966 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24967 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24967 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24968 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24968 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24969 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24969 2019-09-04T06:36:10.018+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:10.018+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24971 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24971 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24972 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24972 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24973 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24973 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24974 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24974 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24975 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24975 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 24976 2019-09-04T06:36:10.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 24976 2019-09-04T06:36:10.018+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:10.038+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:10.038+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:10.052+0000 I 
COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.056+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:10.056+0000 I COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:10.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.098+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:10.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:10.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:10.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.198+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:10.211+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:10.211+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:10.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:10.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:10.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:10.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684) } 2019-09-04T06:36:10.234+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: 
Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:10.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:10.242+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:10.242+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:10.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.298+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:10.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:10.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.398+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:10.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:10.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.465+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $db: "config" } 2019-09-04T06:36:10.465+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } } } 2019-09-04T06:36:10.465+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:36:10.465+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578968, 2) 2019-09-04T06:36:10.466+0000 D2 QUERY [conn71] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:36:10.466+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:36:10.498+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:10.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:10.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.570+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:10.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.598+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:10.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:10.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.697+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:10.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.699+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:10.702+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:10.702+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:10.702+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578968, 2) 2019-09-04T06:36:10.702+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 24996 2019-09-04T06:36:10.702+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:10.702+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:10.702+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 24996 2019-09-04T06:36:10.703+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 24999 2019-09-04T06:36:10.703+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot 
id 24999 2019-09-04T06:36:10.703+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578968, 2), t: 1 }({ ts: Timestamp(1567578968, 2), t: 1 }) 2019-09-04T06:36:10.711+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:10.711+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.721+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578963, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578966, 1), signature: { hash: BinData(0, EEE4B78A57649CB9954A58D102C2003C3321D7A7), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578963, 1), t: 1 } }, $db: "config" } 2019-09-04T06:36:10.721+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578963, 1), t: 1 } } } 2019-09-04T06:36:10.721+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:36:10.721+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578963, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578966, 1), signature: { hash: BinData(0, EEE4B78A57649CB9954A58D102C2003C3321D7A7), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578963, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578968, 2) 2019-09-04T06:36:10.721+0000 D2 QUERY [conn72] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:36:10.721+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578963, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578966, 1), signature: { hash: BinData(0, EEE4B78A57649CB9954A58D102C2003C3321D7A7), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578963, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:36:10.722+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $db: "config" } 2019-09-04T06:36:10.722+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } } } 2019-09-04T06:36:10.722+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:36:10.722+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578968, 2) 2019-09-04T06:36:10.722+0000 D2 QUERY [conn72] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:36:10.722+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:36:10.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:10.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:10.799+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:10.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:10.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1701) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:10.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1701 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:20.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:10.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.839+0000 2019-09-04T06:36:10.839+0000 D2 ASIO [Replication] Request 1701 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } 2019-09-04T06:36:10.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 
}, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:10.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:10.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1701) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } 2019-09-04T06:36:10.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:10.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:36:20.163+0000 2019-09-04T06:36:10.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:36:21.769+0000 2019-09-04T06:36:10.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:10.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:10.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:40.839+0000 2019-09-04T06:36:10.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:12.839Z 2019-09-04T06:36:10.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:40.839+0000 2019-09-04T06:36:10.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:10.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1702) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:10.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1702 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:20.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:10.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:40.839+0000 2019-09-04T06:36:10.840+0000 D2 ASIO [Replication] Request 1702 finished with response: { ok: 1.0, 
state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } 2019-09-04T06:36:10.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:10.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:10.840+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1702) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } 2019-09-04T06:36:10.840+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:10.840+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:12.840Z 2019-09-04T06:36:10.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:40.839+0000 2019-09-04T06:36:10.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 
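
The isMaster round-trips above are routine connection polling (conn22, for example, re-runs it roughly every 500 ms), and the replSetHeartbeat exchanges carry each configrs member's state and optime to its peers. For illustration only, a minimal pymongo sketch that surfaces the same information from the outside; the connection details are assumptions (direct, unauthenticated access to the member that wrote this log), not taken from the log itself:

    from pymongo import MongoClient

    # Assumed direct connection to one configrs member.
    client = MongoClient("cmodb803.togewa.com", 27019)

    # The same isMaster handshake that the pooled connections above repeat.
    hello = client.admin.command("ismaster")
    print(hello.get("setName"), "primary:", hello.get("primary"))

    # replSetGetStatus reports the member states and optimes that the
    # replSetHeartbeat request/response pairs in this log carry.
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        print(member["name"], member["stateStr"], member.get("optime"))
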
2019-09-04T06:36:10.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:10.899+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:10.944+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:10.945+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:10.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:10.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:10.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:10.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:10.982+0000 D2 COMMAND [conn71] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $db: "config" }
2019-09-04T06:36:10.983+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } } }
2019-09-04T06:36:10.983+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:36:10.983+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578968, 2)
2019-09-04T06:36:10.983+0000 D5 QUERY [conn71] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shardsTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:10.983+0000 D5 QUERY [conn71] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:36:10.983+0000 D5 QUERY [conn71] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:36:10.983+0000 D5 QUERY [conn71] Rated tree: $and
2019-09-04T06:36:10.983+0000 D5 QUERY [conn71] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:10.983+0000 D5 QUERY [conn71] Planner: outputting a collscan: COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:10.983+0000 D2 QUERY [conn71] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:36:10.983+0000 D3 STORAGE [conn71] WT begin_transaction for snapshot id 25009
2019-09-04T06:36:10.983+0000 D3 STORAGE [conn71] WT rollback_transaction for snapshot id 25009
2019-09-04T06:36:10.983+0000 I COMMAND [conn71] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:36:10.999+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:11.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:11.041+0000 D2 COMMAND [conn72] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $db: "config" }
2019-09-04T06:36:11.041+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } } }
2019-09-04T06:36:11.041+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:36:11.041+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578968, 2)
2019-09-04T06:36:11.041+0000 D5 QUERY [conn72] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shardsTree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:11.041+0000 D5 QUERY [conn72] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:36:11.041+0000 D5 QUERY [conn72] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:36:11.041+0000 D5 QUERY [conn72] Rated tree: $and
2019-09-04T06:36:11.041+0000 D5 QUERY [conn72] Planner: outputted 0 indexed solutions.
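
The conn71 and conn72 finds around this point read config.shards with readConcern { level: "majority", afterOpTime: ... }, a "nearest" read preference, and maxTimeMS: 30000; with an empty filter, neither the host_1 nor the _id_ index applies, so the planner falls back to the COLLSCAN plans logged here. A hedged pymongo sketch of the same read semantics (the seed list is assumed from hostnames in this log; the afterOpTime handshake is internal to sharding and is not reproducible from a driver):

    from pymongo import MongoClient, ReadPreference
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern

    # Assumed seed list for the configrs replica set in this log.
    client = MongoClient(
        "mongodb://cmodb802.togewa.com:27019,cmodb803.togewa.com:27019,"
        "cmodb804.togewa.com:27019/?replicaSet=configrs"
    )

    # Majority read concern + 'nearest' read preference, as in the logged find.
    shards = client.get_database(
        "config",
        read_concern=ReadConcern("majority"),
        read_preference=ReadPreference.NEAREST,
    ).get_collection("shards")

    try:
        # max_time_ms mirrors maxTimeMS: 30000 on the logged command.
        for doc in shards.find({}, max_time_ms=30000):
            print(doc)
    except ExecutionTimeout:
        # pymongo's mapping of MaxTimeMSExpired: raised when the wait for a
        # suitable majority snapshot (or the query itself) outlives maxTimeMS,
        # which is what happens to conn492 further down in this log.
        print("operation exceeded time limit")
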
2019-09-04T06:36:11.041+0000 D5 QUERY [conn72] Planner: outputting a collscan: COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:11.041+0000 D2 QUERY [conn72] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:36:11.041+0000 D3 STORAGE [conn72] WT begin_transaction for snapshot id 25012
2019-09-04T06:36:11.042+0000 D3 STORAGE [conn72] WT rollback_transaction for snapshot id 25012
2019-09-04T06:36:11.042+0000 I COMMAND [conn72] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578968, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:36:11.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:11.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:11.063+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:11.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:36:10.839+0000
2019-09-04T06:36:11.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:36:10.840+0000
2019-09-04T06:36:11.063+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:36:10.839+0000
2019-09-04T06:36:11.063+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:36:20.839+0000
2019-09-04T06:36:11.063+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:40.839+0000
2019-09-04T06:36:11.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:11.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:36:11.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:11.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E),
keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:11.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684) } 2019-09-04T06:36:11.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 2), signature: { hash: BinData(0, 260B423CCB2EA7291AF6D740013A29A37483764E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:11.099+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:11.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:11.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:11.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:11.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:11.199+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:11.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:11.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:11.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:11.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:11.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:11.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:11.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:11.273+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:11.299+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:11.378+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:11.378+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:11.400+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:11.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:11.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:11.500+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:11.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:11.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:11.600+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:11.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:11.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:11.697+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:11.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:11.700+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:11.702+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:11.702+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:11.702+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578968, 2) 2019-09-04T06:36:11.702+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25028 2019-09-04T06:36:11.702+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:11.702+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:11.702+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25028 2019-09-04T06:36:11.703+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25031 2019-09-04T06:36:11.703+0000 D3 STORAGE [rsSync-0] WT 
rollback_transaction for snapshot id 25031 2019-09-04T06:36:11.703+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578968, 2), t: 1 }({ ts: Timestamp(1567578968, 2), t: 1 }) 2019-09-04T06:36:11.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:11.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:11.797+0000 D2 COMMAND [conn81] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:11.797+0000 I COMMAND [conn81] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:11.800+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:11.878+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:11.878+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:11.900+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:11.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:11.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:12.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:12.001+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:12.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:12.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:12.101+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:12.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:12.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:12.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:12.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:12.201+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:12.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, 6715B027C7B6EC0B267D10C0838BA0DEFE501D0F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:12.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:12.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, 6715B027C7B6EC0B267D10C0838BA0DEFE501D0F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:12.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", 
configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, 6715B027C7B6EC0B267D10C0838BA0DEFE501D0F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:12.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684) } 2019-09-04T06:36:12.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, 6715B027C7B6EC0B267D10C0838BA0DEFE501D0F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:12.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:12.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:12.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:12.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:12.301+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:12.331+0000 I COMMAND [conn492] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578932, 1), signature: { hash: BinData(0, E41B373647AEEE84869015C1A3EA6E4D87DF51B7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:12.331+0000 D1 - [conn492] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:12.331+0000 W - [conn492] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:12.348+0000 I - [conn492] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:12.348+0000 D1 COMMAND [conn492] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578932, 1), signature: { hash: BinData(0, E41B373647AEEE84869015C1A3EA6E4D87DF51B7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:12.348+0000 D1 - [conn492] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:12.348+0000 W - [conn492] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:12.369+0000 I - [conn492] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"5617
48F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : 
"BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:12.369+0000 W COMMAND [conn492] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:12.369+0000 I COMMAND [conn492] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578932, 1), signature: { hash: BinData(0, E41B373647AEEE84869015C1A3EA6E4D87DF51B7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:36:12.369+0000 D2 NETWORK [conn492] Session from 10.108.2.53:50874 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:12.369+0000 I NETWORK [conn492] end connection 10.108.2.53:50874 (86 connections now open) 2019-09-04T06:36:12.401+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:12.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:12.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:12.501+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:12.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:12.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:12.601+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:12.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:12.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:12.697+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:12.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:12.702+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:12.702+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:12.702+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 
2019-09-04T06:36:12.702+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578968, 2) 2019-09-04T06:36:12.702+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25048 2019-09-04T06:36:12.702+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:12.702+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:12.702+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25048 2019-09-04T06:36:12.703+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25051 2019-09-04T06:36:12.703+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25051 2019-09-04T06:36:12.703+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578968, 2), t: 1 }({ ts: Timestamp(1567578968, 2), t: 1 }) 2019-09-04T06:36:12.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:12.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:12.802+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:12.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:12.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1703) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:12.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1703 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:22.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:12.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:40.839+0000 2019-09-04T06:36:12.839+0000 D2 ASIO [Replication] Request 1703 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } 2019-09-04T06:36:12.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 
0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:12.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:12.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1703) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } 2019-09-04T06:36:12.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:12.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:36:21.769+0000 2019-09-04T06:36:12.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:36:24.270+0000 2019-09-04T06:36:12.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:12.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:12.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:42.839+0000 2019-09-04T06:36:12.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:14.839Z 2019-09-04T06:36:12.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:42.839+0000 2019-09-04T06:36:12.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:12.840+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1704) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:12.840+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1704 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:22.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", 
fromId: 1, term: 1 } 2019-09-04T06:36:12.840+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:42.839+0000 2019-09-04T06:36:12.840+0000 D2 ASIO [Replication] Request 1704 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } 2019-09-04T06:36:12.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:12.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:12.840+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1704) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } 2019-09-04T06:36:12.840+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:12.840+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 
2019-09-04T06:36:14.840Z 2019-09-04T06:36:12.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:42.839+0000 2019-09-04T06:36:12.862+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52378 #533 (87 connections now open) 2019-09-04T06:36:12.862+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:12.862+0000 D2 COMMAND [conn533] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:12.862+0000 I NETWORK [conn533] received client metadata from 10.108.2.73:52378 conn533: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:12.862+0000 I COMMAND [conn533] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:12.878+0000 I COMMAND [conn511] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:36:12.878+0000 D1 - [conn511] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:12.878+0000 W - [conn511] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:12.881+0000 I COMMAND [conn512] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578940, 1), signature: { hash: BinData(0, DC364624D5EB396408DE50F10AD903484AFAB43A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:12.881+0000 D1 - [conn512] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:12.881+0000 W - [conn512] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:12.882+0000 I COMMAND [conn513] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578935, 1), signature: { hash: BinData(0, EC39415F0541090717F188464B6C52E034EB96D7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:36:12.882+0000 D1 - [conn513] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:12.882+0000 W - [conn513] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:12.882+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34458 #534 (88 connections now open) 2019-09-04T06:36:12.882+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:12.882+0000 D2 COMMAND [conn534] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:12.882+0000 I NETWORK [conn534] received client metadata from 10.108.2.57:34458 conn534: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:12.882+0000 I COMMAND [conn534] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:12.886+0000 I COMMAND [conn497] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:12.886+0000 D1 - [conn497] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:12.886+0000 W - [conn497] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:12.891+0000 I COMMAND [conn485] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 31B0FBBE3F88E8227ED4748A3C95E935277E7687), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:12.891+0000 D1 - [conn485] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:12.892+0000 W - [conn485] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:12.902+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:12.913+0000 I NETWORK [listener] connection accepted from 10.108.2.60:45066 #535 (89 connections now open) 2019-09-04T06:36:12.913+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:12.913+0000 D2 COMMAND [conn535] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:12.913+0000 I NETWORK [conn535] received client metadata from 10.108.2.60:45066 conn535: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:12.913+0000 I COMMAND [conn535] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:12.916+0000 I - [conn497] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 
0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ [processInfo and somap omitted; byte-identical to the first backtrace above] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:12.916+0000 D1 COMMAND [conn497] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:12.916+0000 D1 - [conn497] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:12.916+0000 W - [conn497] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:12.922+0000 I COMMAND [conn504] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 544A6412BEE4E32C365992D40037A87F96284AE8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:12.922+0000 D1 - [conn504] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:12.922+0000 W - [conn504] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:12.933+0000 I - [conn512] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- [backtrace JSON, processInfo/somap, and mongod frames omitted; byte-identical to the conn497 backtrace above] libpthread.so.0(+0x7DD5)
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:36:12.933+0000 D1 COMMAND [conn512] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578940, 1), signature: { hash: BinData(0, DC364624D5EB396408DE50F10AD903484AFAB43A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:12.933+0000 D1 - [conn512] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:36:12.933+0000 W - [conn512] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:12.934+0000 I NETWORK [listener] connection accepted from 10.108.2.63:36504 #536 (90 connections now open)
2019-09-04T06:36:12.934+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:36:12.934+0000 D2 COMMAND [conn536] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:36:12.934+0000 I NETWORK [conn536] received client metadata from 10.108.2.63:36504 conn536: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:36:12.934+0000 I COMMAND [conn536] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:36:12.943+0000 I COMMAND [conn483] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:36:12.943+0000 D1 - [conn483] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:36:12.943+0000 W - [conn483] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:12.952+0000 I - [conn513] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:12.952+0000 D1 COMMAND [conn513] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578935, 1), signature: { hash: BinData(0, EC39415F0541090717F188464B6C52E034EB96D7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:12.952+0000 D1 - [conn513] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:12.952+0000 W - [conn513] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:12.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:12.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:12.969+0000 I - [conn483] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN
5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" 
}, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:36:12.969+0000 D1 COMMAND [conn483] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:12.969+0000 D1 - [conn483] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:36:12.969+0000 W - [conn483] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:13.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:13.002+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:13.007+0000 I - [conn483] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:13.007+0000 W COMMAND [conn483] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:13.008+0000 I COMMAND [conn483] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30040ms 2019-09-04T06:36:13.008+0000 D2 NETWORK [conn483] Session from 10.108.2.63:36458 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:13.008+0000 I NETWORK [conn483] end connection 10.108.2.63:36458 (89 connections now open) 2019-09-04T06:36:13.038+0000 I - [conn497] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_Z
N5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : 
"/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:36:13.039+0000 W COMMAND [conn497] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:36:13.039+0000 I COMMAND [conn497] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30044ms
2019-09-04T06:36:13.039+0000 D2 NETWORK [conn497] Session from 10.108.2.72:45934 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:36:13.039+0000 I NETWORK [conn497] end connection 10.108.2.72:45934 (88 connections now open)
2019-09-04T06:36:13.047+0000 I - [conn513] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:13.047+0000 W COMMAND [conn513] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:13.047+0000 I COMMAND [conn513] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578935, 1), signature: { hash: BinData(0, EC39415F0541090717F188464B6C52E034EB96D7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30083ms 2019-09-04T06:36:13.047+0000 D2 NETWORK [conn513] Session from 10.108.2.74:51988 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:13.047+0000 I NETWORK [conn513] end connection 10.108.2.74:51988 (87 connections now open) 2019-09-04T06:36:13.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:13.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:13.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, 6715B027C7B6EC0B267D10C0838BA0DEFE501D0F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:13.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:13.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, 6715B027C7B6EC0B267D10C0838BA0DEFE501D0F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:13.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, 6715B027C7B6EC0B267D10C0838BA0DEFE501D0F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:13.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684) } 2019-09-04T06:36:13.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, 6715B027C7B6EC0B267D10C0838BA0DEFE501D0F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:13.068+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:13.068+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:13.069+0000 D2 COMMAND [conn529] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578971, 1), signature: { hash: BinData(0, BF9ABB3B036DCA9BFE2E0631E7AB0668EE141209), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:13.069+0000 D1 REPL [conn529] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578968, 2), t: 1 } 2019-09-04T06:36:13.069+0000 D3 REPL [conn529] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.079+0000 2019-09-04T06:36:13.071+0000 I - [conn512] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":
"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" 
}, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) 
[0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:13.071+0000 W COMMAND [conn512] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:13.071+0000 I COMMAND [conn512] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578940, 1), signature: { hash: BinData(0, DC364624D5EB396408DE50F10AD903484AFAB43A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30067ms 2019-09-04T06:36:13.071+0000 D2 NETWORK [conn512] Session from 10.108.2.48:42308 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:13.071+0000 I NETWORK [conn512] end connection 10.108.2.48:42308 (86 connections now open) 2019-09-04T06:36:13.072+0000 D2 COMMAND [conn521] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578971, 1), signature: { hash: BinData(0, BF9ABB3B036DCA9BFE2E0631E7AB0668EE141209), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:13.072+0000 D1 REPL [conn521] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578968, 2), t: 1 } 2019-09-04T06:36:13.072+0000 D3 REPL [conn521] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.082+0000 2019-09-04T06:36:13.071+0000 I - [conn485] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:13.073+0000 I - [conn511] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:13.073+0000 D1 COMMAND [conn485] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 31B0FBBE3F88E8227ED4748A3C95E935277E7687), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:13.073+0000 D1 COMMAND [conn511] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: 
Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:13.073+0000 D1 - [conn485] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:13.073+0000 D1 - [conn511] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:13.073+0000 W - [conn511] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:13.073+0000 W - [conn485] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:13.094+0000 I - [conn511] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- [duplicate of conn512 backtrace above] ----- END BACKTRACE ----- 2019-09-04T06:36:13.094+0000 W COMMAND [conn511] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:13.094+0000 I COMMAND [conn511] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30210ms 2019-09-04T06:36:13.094+0000 D2 NETWORK [conn511] Session from 10.108.2.73:52356 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:13.094+0000 I NETWORK [conn511] end connection 10.108.2.73:52356 (85 connections now open) 2019-09-04T06:36:13.102+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:13.109+0000 I - [conn504] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:13.109+0000 D1 COMMAND [conn504] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 544A6412BEE4E32C365992D40037A87F96284AE8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:13.109+0000 D1 - [conn504] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:13.109+0000 W - [conn504] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:13.124+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:13.124+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:13.129+0000 D2 COMMAND [conn527] run command 
admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578971, 1), signature: { hash: BinData(0, BF9ABB3B036DCA9BFE2E0631E7AB0668EE141209), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:13.129+0000 D1 REPL [conn527] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578968, 2), t: 1 } 2019-09-04T06:36:13.129+0000 D3 REPL [conn527] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.139+0000 2019-09-04T06:36:13.143+0000 I - [conn504] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13Schedu
leFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", 
"elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:36:13.143+0000 W COMMAND [conn504] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:36:13.143+0000 I COMMAND [conn504] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578933, 1), signature: { hash: BinData(0, 544A6412BEE4E32C365992D40037A87F96284AE8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30200ms
2019-09-04T06:36:13.143+0000 D2 NETWORK [conn504] Session from 10.108.2.60:45038 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:36:13.143+0000 I NETWORK [conn504] end connection 10.108.2.60:45038 (84 connections now open)
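The 30-second failures above share one shape: waitForReadConcern blocks on afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 }, while the waitUntilOpTime lines show this node's newest majority snapshot at term 1, so an optime from term 92 can never be satisfied and every such read runs into maxTimeMS. A minimal pymongo sketch that mimics the logged command, assuming direct access to this node on localhost:27019 with authorization disabled; readConcern.afterOpTime is an internal field normally attached by mongos, reproduced here only for illustration:

from bson.timestamp import Timestamp
from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout

client = MongoClient("localhost", 27019)  # host assumed; port taken from this log
try:
    # Same shape as the logged command: a majority read gated on an
    # optime from term 92, capped at 30 s by maxTimeMS.
    client.admin.command({
        "find": "system.keys",
        "filter": {"purpose": "HMAC", "expiresAt": {"$gt": Timestamp(1579858365, 0)}},
        "sort": {"expiresAt": 1},
        "readConcern": {"level": "majority",
                        "afterOpTime": {"ts": Timestamp(1566459168, 1), "t": 92}},
        "maxTimeMS": 30000,
    })
except ExecutionTimeout as exc:
    # pymongo surfaces MaxTimeMSExpired (errCode 50) as ExecutionTimeout
    print("timed out as in the log:", exc)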
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:13.149+0000 W COMMAND [conn485] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:13.149+0000 I COMMAND [conn485] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 31B0FBBE3F88E8227ED4748A3C95E935277E7687), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30194ms 2019-09-04T06:36:13.149+0000 D2 NETWORK [conn485] Session from 10.108.2.57:34424 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:13.149+0000 I NETWORK [conn485] end connection 10.108.2.57:34424 (83 connections now open) 2019-09-04T06:36:13.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:13.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:13.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:13.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:13.202+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:13.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:13.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:13.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:13.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:13.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:13.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:13.302+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:13.403+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:13.418+0000 I COMMAND [conn514] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 31B0FBBE3F88E8227ED4748A3C95E935277E7687), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:13.418+0000 D1 - [conn514] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:13.418+0000 W - [conn514] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:13.435+0000 I - [conn514] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:13.435+0000 D1 COMMAND [conn514] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", 
2019-09-04T06:36:13.435+0000 D1 COMMAND [conn514] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 31B0FBBE3F88E8227ED4748A3C95E935277E7687), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:13.435+0000 D1 - [conn514] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:36:13.435+0000 W - [conn514] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:13.455+0000 W COMMAND [conn514] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:36:13.455+0000 I COMMAND [conn514] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 31B0FBBE3F88E8227ED4748A3C95E935277E7687), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms
2019-09-04T06:36:13.455+0000 D2 NETWORK [conn514] Session from 10.108.2.45:36738 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:36:13.455+0000 I NETWORK [conn514] end connection 10.108.2.45:36738 (82 connections now open)
2019-09-04T06:36:13.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:13.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:13.503+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:13.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:13.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:13.568+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:13.568+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:13.603+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:13.623+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:13.623+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:13.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:13.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:13.697+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
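Each failed operation ends in a plain-text completion line carrying errName, errCode and the elapsed time (30200ms, 30194ms, 30030ms above), so the overall impact can be tallied straight from the file. A rough parsing sketch; the regex is keyed to the 4.2 plain-text line layout seen above (4.2 does not yet log JSON), and the log path is an assumption:

import re

# Matches completion lines like:
#   ... I COMMAND [conn514] command config.$cmd ... errName:MaxTimeMSExpired ... 30030ms
pat = re.compile(r"I\s+COMMAND\s+\[(conn\d+)\].*?errName:(\w+).*?\s(\d+)ms\s*$")

with open("/var/log/mongodb/mongod.log") as log:  # path assumed
    for line in log:
        m = pat.search(line)
        if m:
            conn, err, ms = m.groups()
            print(f"{conn}: {err} after {ms}ms")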
2019-09-04T06:36:13.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:13.702+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:13.702+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:13.702+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578968, 2)
2019-09-04T06:36:13.702+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25078
2019-09-04T06:36:13.702+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:13.702+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:13.702+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25078
2019-09-04T06:36:13.703+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:13.703+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25081
2019-09-04T06:36:13.703+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25081
2019-09-04T06:36:13.703+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578968, 2), t: 1 }({ ts: Timestamp(1567578968, 2), t: 1 })
2019-09-04T06:36:13.708+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:13.708+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:13.708+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1705 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:43.708+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:13.708+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.703+0000
2019-09-04T06:36:13.708+0000 D2 ASIO [RS] Request 1705 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) }
2019-09-04T06:36:13.708+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:13.708+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:13.708+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.703+0000
2019-09-04T06:36:13.708+0000 D2 ASIO [RS] Request 1698 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpApplied: { ts: Timestamp(1567578968, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) }
2019-09-04T06:36:13.708+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpApplied: { ts: Timestamp(1567578968, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:13.708+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:13.708+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:36:13.708+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:36:24.270+0000
2019-09-04T06:36:13.708+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:36:24.635+0000
2019-09-04T06:36:13.708+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:13.708+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1706 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:23.708+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578968, 2), t: 1 } }
2019-09-04T06:36:13.708+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:42.839+0000
2019-09-04T06:36:13.708+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:38.703+0000
2019-09-04T06:36:13.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:13.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:13.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:13.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:13.803+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:13.903+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:13.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:13.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:14.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:14.003+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:14.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:14.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:14.104+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:14.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:14.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:14.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:14.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:14.204+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:14.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 555C8C01FEBC39E43EF18EDB0DD0B2CEC6209F15), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:14.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:36:14.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 555C8C01FEBC39E43EF18EDB0DD0B2CEC6209F15), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:14.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 555C8C01FEBC39E43EF18EDB0DD0B2CEC6209F15), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:14.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684) }
2019-09-04T06:36:14.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:36:14.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:36:14.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 555C8C01FEBC39E43EF18EDB0DD0B2CEC6209F15), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:14.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:14.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:14.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:14.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:14.301+0000 I COMMAND [conn500] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578941, 1), signature: { hash: BinData(0, B607257475BAE42FA27A1C9FCFB0BFBA13A2A499), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:36:14.301+0000 D1 - [conn500] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:36:14.301+0000 W - [conn500] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:14.304+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:14.319+0000 I - [conn500] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:14.319+0000 D1 COMMAND [conn500] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578941, 1), signature: { hash: BinData(0, B607257475BAE42FA27A1C9FCFB0BFBA13A2A499), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:14.319+0000 D1 - [conn500] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:14.319+0000 W - [conn500] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:14.339+0000 I - [conn500] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 
0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" 
: "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, 
"buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:14.339+0000 W COMMAND [conn500] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:14.339+0000 I COMMAND [conn500] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", 
expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578941, 1), signature: { hash: BinData(0, B607257475BAE42FA27A1C9FCFB0BFBA13A2A499), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30033ms 2019-09-04T06:36:14.340+0000 D2 NETWORK [conn500] Session from 10.108.2.59:48542 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:14.340+0000 I NETWORK [conn500] end connection 10.108.2.59:48542 (81 connections now open) 2019-09-04T06:36:14.404+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:14.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:14.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:14.504+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:14.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:14.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:14.604+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:14.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:14.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:14.680+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47410 #537 (82 connections now open) 2019-09-04T06:36:14.680+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:14.681+0000 D2 COMMAND [conn537] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:14.681+0000 I NETWORK [conn537] received client metadata from 10.108.2.52:47410 conn537: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:14.681+0000 I COMMAND [conn537] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:14.681+0000 D2 COMMAND [conn537] run command 
admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578972, 1), signature: { hash: BinData(0, 748CA4A915B339EE77A5A9AF1E7BC1F0A591620C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:14.681+0000 D1 REPL [conn537] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578968, 2), t: 1 } 2019-09-04T06:36:14.681+0000 D3 REPL [conn537] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:44.691+0000 2019-09-04T06:36:14.697+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:14.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:14.698+0000 D2 COMMAND [conn524] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 1), signature: { hash: BinData(0, FBB800930BE44BC3F2A298082833F31CC117AACA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:14.698+0000 D1 REPL [conn524] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578968, 2), t: 1 } 2019-09-04T06:36:14.698+0000 D3 REPL [conn524] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:44.708+0000 2019-09-04T06:36:14.703+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:14.703+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:14.703+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578968, 2) 2019-09-04T06:36:14.703+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25102 2019-09-04T06:36:14.703+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:14.703+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:14.703+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25102 2019-09-04T06:36:14.703+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25105 2019-09-04T06:36:14.703+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25105 2019-09-04T06:36:14.703+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578968, 2), t: 1 }({ ts: Timestamp(1567578968, 2), t: 1 }) 2019-09-04T06:36:14.705+0000 D4 
STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:14.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:14.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:14.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:14.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:14.805+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:14.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:14.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1707) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:14.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1707 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:24.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:14.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:42.839+0000 2019-09-04T06:36:14.839+0000 D2 ASIO [Replication] Request 1707 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } 2019-09-04T06:36:14.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:14.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of 
pool replexec 2019-09-04T06:36:14.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1707) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } 2019-09-04T06:36:14.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:14.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:36:24.635+0000 2019-09-04T06:36:14.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:36:25.672+0000 2019-09-04T06:36:14.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:14.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:14.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:44.839+0000 2019-09-04T06:36:14.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:16.839Z 2019-09-04T06:36:14.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:44.839+0000 2019-09-04T06:36:14.840+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:14.840+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1708) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:14.840+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1708 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:24.840+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:14.840+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:44.839+0000 2019-09-04T06:36:14.840+0000 D2 ASIO [Replication] Request 1708 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } 2019-09-04T06:36:14.840+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:14.840+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:14.841+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1708) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } 2019-09-04T06:36:14.841+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:14.841+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:16.841Z 2019-09-04T06:36:14.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:44.839+0000 2019-09-04T06:36:14.905+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:14.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:14.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:15.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:15.005+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:15.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:15.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 
locks:{} protocol:op_msg 0ms 2019-09-04T06:36:15.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 555C8C01FEBC39E43EF18EDB0DD0B2CEC6209F15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:15.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:15.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 555C8C01FEBC39E43EF18EDB0DD0B2CEC6209F15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:15.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 555C8C01FEBC39E43EF18EDB0DD0B2CEC6209F15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:15.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684) } 2019-09-04T06:36:15.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 555C8C01FEBC39E43EF18EDB0DD0B2CEC6209F15), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:15.105+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:15.161+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:15.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:15.196+0000 I COMMAND [conn489] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:15.196+0000 D1 - [conn489] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:15.196+0000 W - [conn489] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:15.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:15.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:15.205+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:15.213+0000 I - [conn489] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMess
ageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : 
"/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:15.213+0000 D1 COMMAND [conn489] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:15.213+0000 D1 - [conn489] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:15.213+0000 W - [conn489] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:15.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:15.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:15.238+0000 I - [conn489] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:15.238+0000 W COMMAND [conn489] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:15.239+0000 I COMMAND [conn489] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30033ms 2019-09-04T06:36:15.239+0000 D2 NETWORK [conn489] Session from 10.108.2.62:53610 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:15.239+0000 I NETWORK [conn489] end connection 10.108.2.62:53610 (81 connections now open) 2019-09-04T06:36:15.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:15.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:15.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:15.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:15.306+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:15.406+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:15.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:15.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:15.506+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:15.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:15.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:15.606+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:15.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:15.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:15.697+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:15.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:15.703+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:15.703+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:15.703+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578968, 2) 2019-09-04T06:36:15.703+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25123 2019-09-04T06:36:15.703+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:15.703+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", 
options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:15.703+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25123 2019-09-04T06:36:15.704+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25126 2019-09-04T06:36:15.704+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25126 2019-09-04T06:36:15.704+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578968, 2), t: 1 }({ ts: Timestamp(1567578968, 2), t: 1 }) 2019-09-04T06:36:15.706+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:15.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:15.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:15.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:15.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:15.806+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:15.907+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:15.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:15.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:16.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:16.007+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:16.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:16.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:16.107+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:16.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:16.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:16.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:16.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:16.207+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:16.234+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 555C8C01FEBC39E43EF18EDB0DD0B2CEC6209F15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:16.234+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:16.234+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 
555C8C01FEBC39E43EF18EDB0DD0B2CEC6209F15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:16.234+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 555C8C01FEBC39E43EF18EDB0DD0B2CEC6209F15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:16.234+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684) } 2019-09-04T06:36:16.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 555C8C01FEBC39E43EF18EDB0DD0B2CEC6209F15), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:16.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:16.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:16.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:16.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:16.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:16.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:16.307+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:16.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:16.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:16.406+0000 D2 COMMAND [conn125] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:36:16.406+0000 I COMMAND [conn125] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:16.407+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:16.416+0000 D2 COMMAND [conn178] run command config.$cmd { ping: 1.0, lsid: { id: UUID("4ca3bc30-0f16-4335-a15f-3e7d48b5566e") }, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:36:16.416+0000 I COMMAND [conn178] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("4ca3bc30-0f16-4335-a15f-3e7d48b5566e") }, $clusterTime: { clusterTime: Timestamp(1567578912, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:36:16.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:16.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:16.507+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:16.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:16.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:16.608+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:16.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:16.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:16.697+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:16.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:16.703+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:16.703+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:16.703+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578968, 2) 2019-09-04T06:36:16.703+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25146 2019-09-04T06:36:16.703+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:16.703+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:16.703+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25146 2019-09-04T06:36:16.704+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25149 2019-09-04T06:36:16.704+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25149 2019-09-04T06:36:16.704+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578968, 2), t: 1 }({ ts: Timestamp(1567578968, 2), t: 1 }) 2019-09-04T06:36:16.708+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:16.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:16.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:16.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:16.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:16.808+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:16.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:16.832+0000 I COMMAND [conn26] command admin.$cmd command: 
isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:16.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:16.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1709) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:16.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1709 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:26.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:16.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:44.839+0000 2019-09-04T06:36:16.839+0000 D2 ASIO [Replication] Request 1709 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } 2019-09-04T06:36:16.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:16.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:16.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1709) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), 
lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } 2019-09-04T06:36:16.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:16.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:36:25.672+0000 2019-09-04T06:36:16.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:36:27.738+0000 2019-09-04T06:36:16.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:16.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:18.839Z 2019-09-04T06:36:16.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:16.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:46.839+0000 2019-09-04T06:36:16.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:46.839+0000 2019-09-04T06:36:16.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:16.841+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1710) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:16.841+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1710 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:26.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:16.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:46.839+0000 2019-09-04T06:36:16.841+0000 D2 ASIO [Replication] Request 1710 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } 2019-09-04T06:36:16.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: 
Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:16.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:16.841+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1710) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578968, 2) } 2019-09-04T06:36:16.841+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:16.841+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:18.841Z 2019-09-04T06:36:16.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:46.839+0000 2019-09-04T06:36:16.908+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:16.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:16.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:17.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:17.008+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:17.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:17.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:17.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 555C8C01FEBC39E43EF18EDB0DD0B2CEC6209F15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:17.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:17.063+0000 D2 REPL_HB [conn34] Received heartbeat request from 
cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 555C8C01FEBC39E43EF18EDB0DD0B2CEC6209F15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:17.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 555C8C01FEBC39E43EF18EDB0DD0B2CEC6209F15), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:17.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), opTime: { ts: Timestamp(1567578968, 2), t: 1 }, wallTime: new Date(1567578968684) } 2019-09-04T06:36:17.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 555C8C01FEBC39E43EF18EDB0DD0B2CEC6209F15), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:17.108+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:17.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:17.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:17.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:17.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:17.208+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:17.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:17.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:17.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:17.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:17.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:17.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:17.309+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:17.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:17.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:17.409+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:17.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:17.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:17.509+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:17.609+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:17.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:17.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:17.697+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:17.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:17.703+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:17.703+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:17.703+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578968, 2) 2019-09-04T06:36:17.703+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25167 2019-09-04T06:36:17.703+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:17.703+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:17.703+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25167 2019-09-04T06:36:17.704+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25170 2019-09-04T06:36:17.704+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25170 2019-09-04T06:36:17.704+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578968, 2), t: 1 }({ ts: Timestamp(1567578968, 2), t: 1 }) 2019-09-04T06:36:17.709+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:17.713+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, 
$db: "admin" } 2019-09-04T06:36:17.713+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:17.725+0000 D2 ASIO [RS] Request 1706 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578977, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578977716), o: { $v: 1, $set: { ping: new Date(1567578977715) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpApplied: { ts: Timestamp(1567578977, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578977, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 1) } 2019-09-04T06:36:17.725+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578977, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578977716), o: { $v: 1, $set: { ping: new Date(1567578977715) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpApplied: { ts: Timestamp(1567578977, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578977, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:17.725+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:36:17.725+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578977, 1) and ending at ts: Timestamp(1567578977, 1) 2019-09-04T06:36:17.725+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:36:27.738+0000 2019-09-04T06:36:17.725+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:36:29.179+0000 2019-09-04T06:36:17.725+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:17.725+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement 
date is 2019-09-04T06:36:46.839+0000 2019-09-04T06:36:17.725+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:17.725+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:17.725+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578968, 2) 2019-09-04T06:36:17.725+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25174 2019-09-04T06:36:17.725+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:17.725+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:17.725+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25174 2019-09-04T06:36:17.725+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:17.725+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:36:17.725+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:17.725+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578968, 2) 2019-09-04T06:36:17.725+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25177 2019-09-04T06:36:17.725+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578977, 1) } 2019-09-04T06:36:17.725+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:17.725+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:17.725+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25177 2019-09-04T06:36:17.725+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578977, 1), t: 1 } 2019-09-04T06:36:17.725+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25171 2019-09-04T06:36:17.725+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25171 2019-09-04T06:36:17.725+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25180 2019-09-04T06:36:17.725+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25180 2019-09-04T06:36:17.725+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:36:17.725+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 25182 2019-09-04T06:36:17.725+0000 D4 STORAGE [repl-writer-worker-1] inserting record with timestamp Timestamp(1567578977, 1) 2019-09-04T06:36:17.725+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567578977, 1) 2019-09-04T06:36:17.725+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 25182 
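The trace above is one complete replication step: the oplog fetcher reads the config.lockpings update from the sync source's local.oplog.rs, the ReplBatcher hands it to a repl-writer worker, and the worker applies it at Timestamp(1567578977, 1). The same stream can be observed from a driver by tailing the oplog. A minimal PyMongo sketch, assuming a reachable node at a placeholder address; the namespace and resume timestamp are taken from the log, everything else is illustrative:

```python
# Minimal sketch (PyMongo): tail local.oplog.rs with a tailable-await
# cursor, roughly what the oplog fetcher above does against its sync
# source. The host below is a placeholder, not taken from this log.
from pymongo import MongoClient, CursorType
from bson.timestamp import Timestamp

client = MongoClient("mongodb://localhost:27019")
oplog = client.local["oplog.rs"]

# Resume after the last optime applied in the excerpt (an assumption
# for the demo; a real consumer would persist its own resume point).
last_seen = Timestamp(1567578977, 1)
cursor = oplog.find({"ts": {"$gt": last_seen}},
                    cursor_type=CursorType.TAILABLE_AWAIT)
for entry in cursor:
    # e.g. op: "u", ns: "config.lockpings", o: { $set: { ping: ... } }
    print(entry["ts"], entry["op"], entry["ns"])
```

On a live deployment a change stream is the supported interface for this; raw oplog tailing is sketched here only because it mirrors the fetcher behavior visible in the trace.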
2019-09-04T06:36:17.725+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:36:17.725+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:36:17.725+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25181 2019-09-04T06:36:17.725+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25181 2019-09-04T06:36:17.725+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25184 2019-09-04T06:36:17.725+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25184 2019-09-04T06:36:17.725+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578977, 1), t: 1 }({ ts: Timestamp(1567578977, 1), t: 1 }) 2019-09-04T06:36:17.725+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578977, 1) 2019-09-04T06:36:17.725+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25185 2019-09-04T06:36:17.725+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578977, 1) } } ] } sort: {} projection: {} 2019-09-04T06:36:17.725+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:17.725+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:36:17.725+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578977, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:36:17.725+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:17.725+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:36:17.725+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:36:17.725+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578977, 1) || First: notFirst: full path: ts 2019-09-04T06:36:17.725+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:17.725+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578977, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:17.725+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578977, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578977, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578977, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:17.726+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25185 2019-09-04T06:36:17.726+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:36:17.726+0000 D3 STORAGE [repl-writer-worker-15] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:17.726+0000 D3 REPL [repl-writer-worker-15] applying op: { ts: Timestamp(1567578977, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567578977716), o: { $v: 1, $set: { ping: new Date(1567578977715) } } }, oplog application mode: Secondary 2019-09-04T06:36:17.726+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567578977, 1) 2019-09-04T06:36:17.726+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 25187 2019-09-04T06:36:17.726+0000 D2 QUERY [repl-writer-worker-15] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" } 2019-09-04T06:36:17.726+0000 D4 WRITE [repl-writer-worker-15] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:36:17.726+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 25187 2019-09-04T06:36:17.726+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:36:17.726+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578977, 1), t: 1 }({ ts: Timestamp(1567578977, 1), t: 1 }) 2019-09-04T06:36:17.726+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578977, 1) 2019-09-04T06:36:17.726+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25186 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:17.726+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:17.726+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:36:17.726+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25186 2019-09-04T06:36:17.726+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578977, 1) 2019-09-04T06:36:17.726+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:17.726+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25190 2019-09-04T06:36:17.726+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25190 2019-09-04T06:36:17.726+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578977, 1), t: 1 }, appliedWallTime: new Date(1567578977716), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:17.726+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578977, 1), t: 1 }({ ts: Timestamp(1567578977, 1), t: 1 }) 2019-09-04T06:36:17.726+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1711 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:47.726+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578977, 1), t: 1 }, appliedWallTime: new Date(1567578977716), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 2, cfgver: 2 } 
], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:17.726+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:47.726+0000 2019-09-04T06:36:17.726+0000 D2 ASIO [RS] Request 1711 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578977, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 1) } 2019-09-04T06:36:17.726+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578968, 2), t: 1 }, lastCommittedWall: new Date(1567578968684), lastOpVisible: { ts: Timestamp(1567578968, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578968, 2), $clusterTime: { clusterTime: Timestamp(1567578977, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:17.726+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:17.726+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:47.726+0000 2019-09-04T06:36:17.727+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578977, 1), t: 1 } 2019-09-04T06:36:17.727+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1712 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:27.727+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578968, 2), t: 1 } } 2019-09-04T06:36:17.727+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:47.726+0000 2019-09-04T06:36:17.730+0000 D2 ASIO [RS] Request 1712 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 1), t: 1 }, lastCommittedWall: new Date(1567578977716), lastOpVisible: { ts: Timestamp(1567578977, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578977, 1), t: 1 }, lastCommittedWall: new Date(1567578977716), lastOpApplied: { ts: Timestamp(1567578977, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') 
2019-09-04T06:36:17.730+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 1), t: 1 }, lastCommittedWall: new Date(1567578977716), lastOpVisible: { ts: Timestamp(1567578977, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578977, 1), t: 1 }, lastCommittedWall: new Date(1567578977716), lastOpApplied: { ts: Timestamp(1567578977, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 1), $clusterTime: { clusterTime: Timestamp(1567578977, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:17.730+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:17.730+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:36:17.730+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.730+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.730+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578972, 1)
2019-09-04T06:36:17.730+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:36:29.179+0000
2019-09-04T06:36:17.730+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:36:28.574+0000
2019-09-04T06:36:17.730+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:17.730+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:46.839+0000
2019-09-04T06:36:17.730+0000 D3 REPL [conn517] Got notified of new snapshot: { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.730+0000 D3 REPL [conn517] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.045+0000
2019-09-04T06:36:17.730+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1713 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:27.730+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578977, 1), t: 1 } }
2019-09-04T06:36:17.730+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:47.726+0000
2019-09-04T06:36:17.730+0000 D3 REPL [conn518] Got notified of new snapshot: { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn518] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.061+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn515] Got notified of new snapshot: { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn515] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:24.151+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn521] Got notified of new snapshot: { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn521] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.082+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn527] Got notified of new snapshot: { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn527] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.139+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn537] Got notified of new snapshot: { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn537] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:44.691+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn524] Got notified of new snapshot: { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn524] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:44.708+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn529] Got notified of new snapshot: { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn529] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.079+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn499] Got notified of new snapshot: { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn499] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.752+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn491] Got notified of new snapshot: { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn491] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.023+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn523] Got notified of new snapshot: { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn523] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn525] Got notified of new snapshot: { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn525] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.767+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn520] Got notified of new snapshot: { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn520] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.134+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn522] Got notified of new snapshot: { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn522] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn519] Got notified of new snapshot: { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn519] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.119+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn532] Got notified of new snapshot: { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn532] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:31.316+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn526] Got notified of new snapshot: { ts: Timestamp(1567578977, 1), t: 1 }, 2019-09-04T06:36:17.716+0000
2019-09-04T06:36:17.731+0000 D3 REPL [conn526] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:22.595+0000
2019-09-04T06:36:17.732+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:36:17.732+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:17.732+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578977, 1), t: 1 }, durableWallTime: new Date(1567578977716), appliedOpTime: { ts: Timestamp(1567578977, 1), t: 1 }, appliedWallTime: new Date(1567578977716), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 1), t: 1 }, lastCommittedWall: new Date(1567578977716), lastOpVisible: { ts: Timestamp(1567578977, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:17.732+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1714 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:47.732+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578977, 1), t: 1 }, durableWallTime: new Date(1567578977716), appliedOpTime: { ts: Timestamp(1567578977, 1), t: 1 }, appliedWallTime: new Date(1567578977716), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 1), t: 1 }, lastCommittedWall: new Date(1567578977716), lastOpVisible: { ts: Timestamp(1567578977, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:17.732+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:47.726+0000
2019-09-04T06:36:17.732+0000 D2 ASIO [RS] Request 1714 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 1), t: 1 }, lastCommittedWall: new Date(1567578977716), lastOpVisible: { ts: Timestamp(1567578977, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 1), $clusterTime: { clusterTime: Timestamp(1567578977, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 1) }
2019-09-04T06:36:17.732+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 1), t: 1 }, lastCommittedWall: new Date(1567578977716), lastOpVisible: { ts: Timestamp(1567578977, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 1), $clusterTime: { clusterTime: Timestamp(1567578977, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:17.732+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:17.732+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:47.726+0000
2019-09-04T06:36:17.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:17.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:17.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:17.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:17.809+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:17.825+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578977, 1)
2019-09-04T06:36:17.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:17.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:17.910+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:17.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:17.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:18.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:18.002+0000 D2 ASIO [RS] Request 1713 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578977, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578977984), o: { $v: 1, $set: { ping: new Date(1567578977977) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 1), t: 1 }, lastCommittedWall: new Date(1567578977716), lastOpVisible: { ts: Timestamp(1567578977, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578977, 1), t: 1 }, lastCommittedWall: new Date(1567578977716), lastOpApplied: { ts: Timestamp(1567578977, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 1), $clusterTime: { clusterTime: Timestamp(1567578977, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) }
2019-09-04T06:36:18.002+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578977, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578977984), o: { $v: 1, $set: { ping: new Date(1567578977977) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 1), t: 1 }, lastCommittedWall: new Date(1567578977716), lastOpVisible: { ts: Timestamp(1567578977, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578977, 1), t: 1 }, lastCommittedWall: new Date(1567578977716), lastOpApplied: { ts: Timestamp(1567578977, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 1), $clusterTime: { clusterTime: Timestamp(1567578977, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:18.002+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:18.002+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578977, 2) and ending at ts: Timestamp(1567578977, 2)
2019-09-04T06:36:18.002+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:36:28.574+0000
2019-09-04T06:36:18.002+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:36:28.869+0000
2019-09-04T06:36:18.002+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578977, 2), t: 1 }
2019-09-04T06:36:18.002+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:18.002+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:18.002+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578977, 1)
2019-09-04T06:36:18.002+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25199
2019-09-04T06:36:18.002+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:18.002+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:18.002+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25199
2019-09-04T06:36:18.002+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:18.002+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:18.002+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578977, 1)
2019-09-04T06:36:18.002+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25202
2019-09-04T06:36:18.002+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:18.002+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:18.002+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25202
2019-09-04T06:36:18.002+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:36:18.002+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578977, 2) }
2019-09-04T06:36:18.003+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25191
2019-09-04T06:36:18.003+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25191
2019-09-04T06:36:18.003+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25205
2019-09-04T06:36:18.003+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25205
2019-09-04T06:36:18.003+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:36:18.003+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 25207
2019-09-04T06:36:18.003+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567578977, 2)
2019-09-04T06:36:18.003+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567578977, 2)
2019-09-04T06:36:18.003+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 25207
2019-09-04T06:36:18.003+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:36:18.003+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:36:18.003+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25206
2019-09-04T06:36:18.003+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25206
2019-09-04T06:36:18.003+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25209
2019-09-04T06:36:18.003+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25209
2019-09-04T06:36:18.003+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578977, 2), t: 1 }({ ts: Timestamp(1567578977, 2), t: 1 })
2019-09-04T06:36:18.003+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578977, 2)
2019-09-04T06:36:18.003+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25210
2019-09-04T06:36:18.003+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578977, 2) } } ] } sort: {} projection: {}
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578977, 2) Sort: {} Proj: {} =============================
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578977, 2) || First: notFirst: full path: ts
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578977, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578977, 2) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578977, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578977, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:18.003+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25210 2019-09-04T06:36:18.003+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:36:18.003+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:18.003+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567578977, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567578977984), o: { $v: 1, $set: { ping: new Date(1567578977977) } } }, oplog application mode: Secondary 2019-09-04T06:36:18.003+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567578977, 2) 2019-09-04T06:36:18.003+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 25212 2019-09-04T06:36:18.003+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" } 2019-09-04T06:36:18.003+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:36:18.003+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 25212 2019-09-04T06:36:18.003+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:36:18.003+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578977, 2), t: 1 }({ ts: Timestamp(1567578977, 2), t: 1 }) 2019-09-04T06:36:18.003+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578977, 2) 2019-09-04T06:36:18.003+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25211 2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:18.003+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:18.003+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:36:18.003+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25211 2019-09-04T06:36:18.003+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578977, 2) 2019-09-04T06:36:18.003+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25215 2019-09-04T06:36:18.003+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25215 2019-09-04T06:36:18.003+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578977, 2), t: 1 }({ ts: Timestamp(1567578977, 2), t: 1 }) 2019-09-04T06:36:18.004+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:18.004+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578977, 1), t: 1 }, durableWallTime: new Date(1567578977716), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 1), t: 1 }, lastCommittedWall: new Date(1567578977716), lastOpVisible: { ts: Timestamp(1567578977, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:18.004+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1715 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:48.004+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578977, 1), t: 1 }, durableWallTime: new Date(1567578977716), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 1), t: 1 }, lastCommittedWall: new Date(1567578977716), lastOpVisible: { ts: Timestamp(1567578977, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:18.004+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:48.003+0000 2019-09-04T06:36:18.004+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:18.004+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:46.839+0000 2019-09-04T06:36:18.004+0000 D2 ASIO [RS] Request 1715 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: 
Timestamp(1567578977, 1), t: 1 }, lastCommittedWall: new Date(1567578977716), lastOpVisible: { ts: Timestamp(1567578977, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 1), $clusterTime: { clusterTime: Timestamp(1567578977, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } 2019-09-04T06:36:18.004+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 1), t: 1 }, lastCommittedWall: new Date(1567578977716), lastOpVisible: { ts: Timestamp(1567578977, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 1), $clusterTime: { clusterTime: Timestamp(1567578977, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:18.004+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:18.004+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:48.004+0000 2019-09-04T06:36:18.004+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578977, 2), t: 1 } 2019-09-04T06:36:18.004+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1716 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:28.004+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578977, 1), t: 1 } } 2019-09-04T06:36:18.004+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:48.004+0000 2019-09-04T06:36:18.018+0000 I NETWORK [listener] connection accepted from 10.108.2.51:59364 #538 (82 connections now open) 2019-09-04T06:36:18.018+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:18.018+0000 D2 COMMAND [conn538] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:18.018+0000 I NETWORK [conn538] received client metadata from 10.108.2.51:59364 conn538: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:18.018+0000 I COMMAND [conn538] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, 
compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:18.023+0000 I COMMAND [conn491] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578939, 1), signature: { hash: BinData(0, 0C747D665C0A018C40BCB6AF44F3387EECD2F396), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:18.023+0000 D1 - [conn491] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:18.023+0000 W - [conn491] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:18.027+0000 D2 ASIO [RS] Request 1716 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpApplied: { ts: Timestamp(1567578977, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578977, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } 2019-09-04T06:36:18.027+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpApplied: { ts: Timestamp(1567578977, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578977, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:18.027+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:18.027+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:36:18.027+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: 
Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.027+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.027+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578972, 2) 2019-09-04T06:36:18.027+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:36:28.869+0000 2019-09-04T06:36:18.027+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:36:28.862+0000 2019-09-04T06:36:18.027+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1717 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:28.027+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578977, 2), t: 1 } } 2019-09-04T06:36:18.027+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:48.004+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn523] Got notified of new snapshot: { ts: Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn523] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn526] Got notified of new snapshot: { ts: Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn526] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:22.595+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn517] Got notified of new snapshot: { ts: Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn517] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.045+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn499] Got notified of new snapshot: { ts: Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn499] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.752+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn527] Got notified of new snapshot: { ts: Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn527] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.139+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn515] Got notified of new snapshot: { ts: Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn515] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:24.151+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn524] Got notified of new snapshot: { ts: Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn524] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:44.708+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn519] Got notified of new snapshot: { ts: Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn519] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.119+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn520] Got notified of new snapshot: { ts: Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn520] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.134+0000 
2019-09-04T06:36:18.027+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:18.027+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:46.839+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn532] Got notified of new snapshot: { ts: Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn532] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:31.316+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn522] Got notified of new snapshot: { ts: Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn522] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.660+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn525] Got notified of new snapshot: { ts: Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn525] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:21.767+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn529] Got notified of new snapshot: { ts: Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn529] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.079+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn537] Got notified of new snapshot: { ts: Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn537] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:44.691+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn521] Got notified of new snapshot: { ts: Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.027+0000 D3 REPL [conn521] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.082+0000 2019-09-04T06:36:18.028+0000 D3 REPL [conn518] Got notified of new snapshot: { ts: Timestamp(1567578977, 2), t: 1 }, 2019-09-04T06:36:17.984+0000 2019-09-04T06:36:18.028+0000 D3 REPL [conn518] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:18.061+0000 2019-09-04T06:36:18.032+0000 I NETWORK [listener] connection accepted from 10.108.2.49:53584 #539 (83 connections now open) 2019-09-04T06:36:18.032+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:18.032+0000 D2 COMMAND [conn539] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:18.032+0000 I NETWORK [conn539] received client metadata from 10.108.2.49:53584 conn539: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:18.032+0000 I COMMAND [conn539] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 
}, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:36:18.040+0000 I - [conn491] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" :
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:18.040+0000 D1 COMMAND [conn491] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578939, 1), signature: { hash: BinData(0, 0C747D665C0A018C40BCB6AF44F3387EECD2F396), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:18.040+0000 D1 - [conn491] User 
Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:18.040+0000 W - [conn491] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:18.041+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:36:18.041+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:36:18.041+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:18.041+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1718 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:48.041+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, durableWallTime: new Date(1567578968684), appliedOpTime: { ts: Timestamp(1567578968, 2), t: 1 }, appliedWallTime: new Date(1567578968684), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:18.041+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:48.004+0000 2019-09-04T06:36:18.041+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:18.041+0000 D2 ASIO [RS] Request 1718 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578977, 2), signature: { hash: 
BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } 2019-09-04T06:36:18.041+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578977, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:18.041+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:18.041+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:48.004+0000 2019-09-04T06:36:18.043+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36756 #540 (84 connections now open) 2019-09-04T06:36:18.043+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:18.043+0000 D2 COMMAND [conn540] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:18.043+0000 I NETWORK [conn540] received client metadata from 10.108.2.45:36756 conn540: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:18.043+0000 I COMMAND [conn540] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:18.045+0000 I COMMAND [conn517] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578940, 1), signature: { hash: BinData(0, DC364624D5EB396408DE50F10AD903484AFAB43A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:18.045+0000 D1 - [conn517] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:18.045+0000 W - [conn517] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:18.061+0000 I COMMAND [conn518] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 31B0FBBE3F88E8227ED4748A3C95E935277E7687), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:18.061+0000 D1 - [conn518] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:18.061+0000 W - [conn518] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:18.066+0000 I - [conn517] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:18.066+0000 D1 COMMAND [conn517] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578940, 1), signature: { hash: BinData(0, DC364624D5EB396408DE50F10AD903484AFAB43A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:18.066+0000 D1 - [conn517] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:18.066+0000 W - [conn517] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:18.083+0000 I - [conn518] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 
0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : 
"/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, 
"buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:18.083+0000 D1 COMMAND [conn518] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 31B0FBBE3F88E8227ED4748A3C95E935277E7687), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:18.083+0000 D1 - [conn518] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:18.083+0000 W - [conn518] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:18.101+0000 I - [conn491] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 
0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 
3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : 
"94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:18.101+0000 W COMMAND [conn491] Unable to gather storage statistics for a slow operation due to lock aquire timeout 
2019-09-04T06:36:18.101+0000 I COMMAND [conn491] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578939, 1), signature: { hash: BinData(0, 0C747D665C0A018C40BCB6AF44F3387EECD2F396), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30026ms
2019-09-04T06:36:18.101+0000 D2 NETWORK [conn491] Session from 10.108.2.51:59324 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:36:18.101+0000 I NETWORK [conn491] end connection 10.108.2.51:59324 (83 connections now open)
2019-09-04T06:36:18.102+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578977, 2)
2019-09-04T06:36:18.108+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38900 #541 (84 connections now open)
2019-09-04T06:36:18.108+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:36:18.109+0000 D2 COMMAND [conn541] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:36:18.109+0000 I NETWORK [conn541] received client metadata from 10.108.2.44:38900 conn541: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:36:18.109+0000 I COMMAND [conn541] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:36:18.115+0000 W COMMAND [conn517] Unable to gather storage statistics for a slow operation due to lock aquire timeout
2019-09-04T06:36:18.115+0000 I COMMAND [conn517] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts:
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578940, 1), signature: { hash: BinData(0, DC364624D5EB396408DE50F10AD903484AFAB43A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:36:18.115+0000 D2 NETWORK [conn517] Session from 10.108.2.49:53562 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:18.115+0000 I NETWORK [conn517] end connection 10.108.2.49:53562 (83 connections now open) 2019-09-04T06:36:18.119+0000 I COMMAND [conn519] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 31B0FBBE3F88E8227ED4748A3C95E935277E7687), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:18.119+0000 D1 - [conn519] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:18.119+0000 W - [conn519] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:18.121+0000 I NETWORK [listener] connection accepted from 10.108.2.53:50920 #542 (84 connections now open) 2019-09-04T06:36:18.121+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:18.121+0000 D2 COMMAND [conn542] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:18.121+0000 I NETWORK [conn542] received client metadata from 10.108.2.53:50920 conn542: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:18.121+0000 I COMMAND [conn542] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:18.134+0000 I - [conn518] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 
0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : 
"9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" 
}, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:18.134+0000 W COMMAND [conn518] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:18.134+0000 I COMMAND [conn518] 
command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 31B0FBBE3F88E8227ED4748A3C95E935277E7687), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:36:18.134+0000 D2 NETWORK [conn518] Session from 10.108.2.45:36742 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:18.134+0000 I NETWORK [conn518] end connection 10.108.2.45:36742 (83 connections now open) 2019-09-04T06:36:18.134+0000 I COMMAND [conn520] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:18.134+0000 D1 - [conn520] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:18.134+0000 W - [conn520] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:18.141+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:18.158+0000 I - [conn520] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:18.158+0000 D1 COMMAND [conn520] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:18.158+0000 D1 - [conn520] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:18.158+0000 W - [conn520] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:18.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.179+0000 I - [conn520] 0x56174b707c81 
0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : 
"E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : 
"/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) 
[0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:18.179+0000 W COMMAND [conn520] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:36:18.179+0000 I COMMAND [conn520] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30034ms 2019-09-04T06:36:18.179+0000 D2 NETWORK [conn520] Session from 10.108.2.53:50906 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:18.179+0000 I NETWORK [conn520] end connection 10.108.2.53:50906 (82 connections now open) 2019-09-04T06:36:18.187+0000 I - [conn519] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},
{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA"
}, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:18.187+0000 D1 COMMAND [conn519] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 31B0FBBE3F88E8227ED4748A3C95E935277E7687), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:18.187+0000 D1 - [conn519] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:18.187+0000 W - [conn519] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:18.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.207+0000 I - [conn519] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:18.207+0000 W COMMAND [conn519] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:18.207+0000 I COMMAND [conn519] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578938, 1), signature: { hash: BinData(0, 31B0FBBE3F88E8227ED4748A3C95E935277E7687), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30078ms 2019-09-04T06:36:18.207+0000 D2 NETWORK [conn519] Session from 10.108.2.44:38884 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:18.207+0000 I NETWORK [conn519] end connection 10.108.2.44:38884 (81 connections now open) 2019-09-04T06:36:18.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.213+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.213+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.213+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.213+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.213+0000 D2 COMMAND [conn530] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, F33D70B8F63242DD78E8976C6A303C81C5BC1B74), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:18.213+0000 D1 REPL [conn530] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578977, 2), t: 1 } 2019-09-04T06:36:18.213+0000 D3 REPL [conn530] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.223+0000 2019-09-04T06:36:18.216+0000 I NETWORK [listener] connection accepted from 10.108.2.46:41220 #543 (82 connections now open) 2019-09-04T06:36:18.216+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:18.216+0000 D2 COMMAND [conn543] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:18.216+0000 I NETWORK [conn543] received client metadata from 10.108.2.46:41220 conn543: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:18.216+0000 I 
COMMAND [conn543] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:18.216+0000 D2 COMMAND [conn543] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578975, 1), signature: { hash: BinData(0, ABF7625DF5CCC6209B46F27CBA9E5DCE8DF7AB16), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:36:18.216+0000 D1 REPL [conn543] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578977, 2), t: 1 } 2019-09-04T06:36:18.216+0000 D3 REPL [conn543] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.226+0000 2019-09-04T06:36:18.227+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.227+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:18.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:18.235+0000 D2 COMMAND [conn516] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578975, 1), signature: { hash: BinData(0, ABF7625DF5CCC6209B46F27CBA9E5DCE8DF7AB16), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:18.235+0000 D1 REPL [conn516] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578977, 2), t: 1 } 2019-09-04T06:36:18.235+0000 D3 REPL [conn516] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.245+0000 2019-09-04T06:36:18.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578977, 2), signature: { hash: BinData(0, 627D9223B6E19EBBF06E0CE0E0E56FDBA7530BF9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:18.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:18.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578977, 2), signature: { hash: BinData(0, 627D9223B6E19EBBF06E0CE0E0E56FDBA7530BF9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:18.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578977, 2), signature: { hash: BinData(0, 627D9223B6E19EBBF06E0CE0E0E56FDBA7530BF9), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:18.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984) } 2019-09-04T06:36:18.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578977, 2), signature: { hash: BinData(0, 627D9223B6E19EBBF06E0CE0E0E56FDBA7530BF9), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.241+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:18.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.249+0000 
I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.309+0000 D2 COMMAND [conn541] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, BCF1DE88D4E6F24645E4AC5DC68814FC1B8C75B8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:18.309+0000 D1 REPL [conn541] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578977, 2), t: 1 } 2019-09-04T06:36:18.309+0000 D3 REPL [conn541] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.319+0000 2019-09-04T06:36:18.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.341+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:18.441+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:18.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.523+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:36:18.523+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:36:18.523+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:18.523+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:36:18.542+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:18.642+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:18.661+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.697+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.712+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.712+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.713+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.713+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.726+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.727+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.742+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:18.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:18.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:18.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1719) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:18.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1719 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:28.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:18.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:46.839+0000 2019-09-04T06:36:18.839+0000 D2 ASIO [Replication] Request 1719 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } 2019-09-04T06:36:18.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new 
Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:18.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:18.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1719) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } 2019-09-04T06:36:18.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:18.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:36:28.862+0000 2019-09-04T06:36:18.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:36:29.789+0000 2019-09-04T06:36:18.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:18.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:18.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:48.839+0000 2019-09-04T06:36:18.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:20.839Z 2019-09-04T06:36:18.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:48.839+0000 2019-09-04T06:36:18.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:18.841+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1720) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:18.841+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1720 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:28.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:18.841+0000 D3 EXECUTOR [replexec-4] Not 
reaping because the earliest retirement date is 2019-09-04T06:36:48.839+0000 2019-09-04T06:36:18.841+0000 D2 ASIO [Replication] Request 1720 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } 2019-09-04T06:36:18.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:18.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:18.841+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1720) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } 2019-09-04T06:36:18.841+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:18.841+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:20.841Z 2019-09-04T06:36:18.841+0000 D3 EXECUTOR [replexec-3] Not reaping because 
the earliest retirement date is 2019-09-04T06:36:48.839+0000 2019-09-04T06:36:18.842+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:18.942+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:18.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:18.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:19.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:19.003+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:19.003+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:19.003+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578977, 2) 2019-09-04T06:36:19.003+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25253 2019-09-04T06:36:19.003+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:19.003+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:19.003+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25253 2019-09-04T06:36:19.003+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:config/collection/28--6194257481163143499 -> { numRecords: 24, dataSize: 2000 } 2019-09-04T06:36:19.003+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer::flush table:local/collection/16--6194257481163143499 -> { numRecords: 1616, dataSize: 364508 } 2019-09-04T06:36:19.003+0000 D2 STORAGE [ReplBatcher] WiredTigerSizeStorer flush took 53 µs 2019-09-04T06:36:19.004+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25256 2019-09-04T06:36:19.004+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25256 2019-09-04T06:36:19.004+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578977, 2), t: 1 }({ ts: Timestamp(1567578977, 2), t: 1 }) 2019-09-04T06:36:19.045+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:19.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, 70DA8DE8BC4F45A85717B5161992B81519461076), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:19.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:19.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, 70DA8DE8BC4F45A85717B5161992B81519461076), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:19.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 
2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, 70DA8DE8BC4F45A85717B5161992B81519461076), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:19.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984) } 2019-09-04T06:36:19.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, 70DA8DE8BC4F45A85717B5161992B81519461076), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:19.145+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:19.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:19.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:19.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:19.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:19.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:19.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:19.245+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:19.248+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:19.248+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:19.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:19.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:19.317+0000 D3 STORAGE [FreeMonProcessor] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:19.321+0000 D3 INDEX [TTLMonitor] thread awake 2019-09-04T06:36:19.322+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms 2019-09-04T06:36:19.322+0000 D3 COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms 2019-09-04T06:36:19.323+0000 D2 - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager 2019-09-04T06:36:19.323+0000 D3 COMMAND [PeriodicTaskRunner] task: UnusedLockCleaner took: 0ms 2019-09-04T06:36:19.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions. 
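The repeated find on admin.system.keys above asks for readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, yet waitUntilOpTime reports the current committed snapshot as { ts: Timestamp(1567578977, 2), t: 1 }. The requested term (92) is ahead of the set's current term (1), so the awaited optime appears unsatisfiable and each attempt blocks until its maxTimeMS: 30000 budget lapses with MaxTimeMSExpired, which matches the 30078ms slow-operation entry for conn519 above. A minimal sketch of the same lookup, assuming pymongo and direct access to this config server (hostname, filter, and optime values are copied from the log, not prescriptive):

    # Sketch only: mirrors the shape of the logged system.keys find.
    # Assumes pymongo >= 3.12 (for directConnection) and network reach
    # to cmodb803.togewa.com:27019.
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/",
                         directConnection=True)
    try:
        reply = client.admin.command(
            "find", "system.keys",
            filter={"purpose": "HMAC",
                    "expiresAt": {"$gt": Timestamp(1579858365, 0)}},
            sort={"expiresAt": 1},
            # Same afterOpTime as in the log; term 92 never becomes
            # visible on this term-1 set, so the wait runs out the
            # 30 s budget.
            readConcern={"level": "majority",
                         "afterOpTime": {"ts": Timestamp(1566459168, 1),
                                         "t": 92}},
            maxTimeMS=30000,
        )
        print(reply["cursor"]["firstBatch"])
    except ExecutionTimeout:
        # Surfaces in the mongod log as MaxTimeMSExpired (operation
        # exceeded time limit), as seen above.
        print("MaxTimeMSExpired")
    finally:
        client.close()
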
2019-09-04T06:36:19.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:19.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:19.345+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:19.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry 2019-09-04T06:36:19.370+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0003 2019-09-04T06:36:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:36:19.370+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1722 -- target:[cmodb812.togewa.com:27018] db:admin expDate:2019-09-04T06:36:24.370+0000 cmd:{ isMaster: 1 } 2019-09-04T06:36:19.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } 2019-09-04T06:36:19.370+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1723 -- target:[cmodb813.togewa.com:27018] db:admin expDate:2019-09-04T06:36:24.370+0000 cmd:{ isMaster: 1 } 2019-09-04T06:36:19.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:19.370+0000 D5 QUERY [shard-registry-reload] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:19.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:36:19.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:36:19.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and 2019-09-04T06:36:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:19.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:19.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:36:19.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567578977, 2) 2019-09-04T06:36:19.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 25270 2019-09-04T06:36:19.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 25270 2019-09-04T06:36:19.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:756 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:36:19.370+0000 D1 SHARDING [shard-registry-reload] found 4 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567578977, 2), t: 1 } 2019-09-04T06:36:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:36:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018 2019-09-04T06:36:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:36:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:36:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:36:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:36:19.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018 2019-09-04T06:36:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0003, with CS shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018 2019-09-04T06:36:19.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS 2019-09-04T06:36:19.370+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1722 finished with response: { hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb812.togewa.com:27018", me: "cmodb812.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578976, 1), t: 1 }, lastWriteDate: new Date(1567578976000), majorityOpTime: { ts: Timestamp(1567578976, 1), t: 1 }, majorityWriteDate: new Date(1567578976000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578979371), logicalSessionTimeoutMinutes: 30, connectionId: 21713, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578976, 1), $configServerState: { opTime: { ts: Timestamp(1567578977, 1), t: 1 } }, $clusterTime: { 
clusterTime: Timestamp(1567578977, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578976, 1) } 2019-09-04T06:36:19.370+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb812.togewa.com:27018", me: "cmodb812.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578976, 1), t: 1 }, lastWriteDate: new Date(1567578976000), majorityOpTime: { ts: Timestamp(1567578976, 1), t: 1 }, majorityWriteDate: new Date(1567578976000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578979371), logicalSessionTimeoutMinutes: 30, connectionId: 21713, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578976, 1), $configServerState: { opTime: { ts: Timestamp(1567578977, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578977, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578976, 1) } target: cmodb812.togewa.com:27018 2019-09-04T06:36:19.374+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1723 finished with response: { hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb812.togewa.com:27018", me: "cmodb813.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578976, 1), t: 1 }, lastWriteDate: new Date(1567578976000), majorityOpTime: { ts: Timestamp(1567578976, 1), t: 1 }, majorityWriteDate: new Date(1567578976000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578979369), logicalSessionTimeoutMinutes: 30, connectionId: 13308, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578976, 1), $configServerState: { opTime: { ts: Timestamp(1567578977, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578977, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578976, 1) } 2019-09-04T06:36:19.374+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb812.togewa.com:27018", me: "cmodb813.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578976, 1), t: 1 }, lastWriteDate: new Date(1567578976000), majorityOpTime: { ts: Timestamp(1567578976, 1), t: 1 }, majorityWriteDate: new Date(1567578976000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578979369), logicalSessionTimeoutMinutes: 30, connectionId: 13308, minWireVersion: 0, 
maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578976, 1), $configServerState: { opTime: { ts: Timestamp(1567578977, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578977, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578976, 1) } target: cmodb813.togewa.com:27018
2019-09-04T06:36:19.374+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0003 took 4ms
2019-09-04T06:36:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000
2019-09-04T06:36:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1725 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:36:24.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:36:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1726 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:36:24.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:36:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002
2019-09-04T06:36:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1727 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:36:24.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:36:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1728 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:36:24.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:36:19.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001
2019-09-04T06:36:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1729 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:36:24.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:36:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1730 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:36:24.385+0000 cmd:{ isMaster: 1 }
2019-09-04T06:36:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1730 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578977, 1), t: 1 }, lastWriteDate: new Date(1567578977000), majorityOpTime: { ts: Timestamp(1567578977, 1), t: 1 }, majorityWriteDate: new Date(1567578977000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578979386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 1), $configServerState: { opTime: { ts: Timestamp(1567578960, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578977, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 1) }
2019-09-04T06:36:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578977, 1), t: 1 }, lastWriteDate: new Date(1567578977000), majorityOpTime: { ts: Timestamp(1567578977, 1), t: 1 }, majorityWriteDate: new Date(1567578977000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578979386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 1), $configServerState: { opTime: { ts: Timestamp(1567578960, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578977, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 1) } target: cmodb809.togewa.com:27018
2019-09-04T06:36:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1726 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578973, 1), t: 1 }, lastWriteDate: new Date(1567578973000), majorityOpTime: { ts: Timestamp(1567578973, 1), t: 1 }, majorityWriteDate: new Date(1567578973000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578979385), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578973, 1), $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578976, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578973, 1) }
2019-09-04T06:36:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578973, 1), t: 1 }, lastWriteDate: new Date(1567578973000), majorityOpTime: { ts: Timestamp(1567578973, 1), t: 1 }, majorityWriteDate: new Date(1567578973000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578979385), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578973, 1), $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578976, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578973, 1) } target: cmodb807.togewa.com:27018
2019-09-04T06:36:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1725 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578973, 1), t: 1 }, lastWriteDate: new Date(1567578973000), majorityOpTime: { ts: Timestamp(1567578973, 1), t: 1 }, majorityWriteDate: new Date(1567578973000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578979385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578973, 1), $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578976, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578973, 1) }
2019-09-04T06:36:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578973, 1), t: 1 }, lastWriteDate: new Date(1567578973000), majorityOpTime: { ts: Timestamp(1567578973, 1), t: 1 }, majorityWriteDate: new Date(1567578973000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578979385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578973, 1), $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578976, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578973, 1) } target: cmodb806.togewa.com:27018
2019-09-04T06:36:19.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms
2019-09-04T06:36:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1729 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578977, 1), t: 1 }, lastWriteDate: new Date(1567578977000), majorityOpTime: { ts: Timestamp(1567578977, 1), t: 1 }, majorityWriteDate: new Date(1567578977000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578979386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578977, 1), $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578977, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 1) }
2019-09-04T06:36:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578977, 1), t: 1 }, lastWriteDate: new Date(1567578977000), majorityOpTime: { ts: Timestamp(1567578977, 1), t: 1 }, majorityWriteDate: new Date(1567578977000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578979386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578977, 1), $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578977, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 1) } target: cmodb808.togewa.com:27018
2019-09-04T06:36:19.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms
2019-09-04T06:36:19.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1727 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578969, 1), t: 1 }, lastWriteDate: new Date(1567578969000), majorityOpTime: { ts: Timestamp(1567578969, 1), t: 1 }, majorityWriteDate: new Date(1567578969000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578979386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578969, 1), $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578976, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578969, 1) }
2019-09-04T06:36:19.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578969, 1), t: 1 }, lastWriteDate: new Date(1567578969000), majorityOpTime: { ts: Timestamp(1567578969, 1), t: 1 }, majorityWriteDate: new Date(1567578969000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578979386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578969, 1), $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578976, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578969, 1) } target: cmodb810.togewa.com:27018
2019-09-04T06:36:19.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1728 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578969, 1), t: 1 }, lastWriteDate: new Date(1567578969000), majorityOpTime: { ts: Timestamp(1567578969, 1), t: 1 }, majorityWriteDate: new Date(1567578969000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578979386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578969, 1), $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578976, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578969, 1) }
2019-09-04T06:36:19.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578969, 1), t: 1 }, lastWriteDate: new Date(1567578969000), majorityOpTime: { ts: Timestamp(1567578969, 1), t: 1 }, majorityWriteDate: new Date(1567578969000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567578979386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578969, 1), $configServerState: { opTime: { ts: Timestamp(1567578968, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567578976, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578969, 1) } target: cmodb811.togewa.com:27018
2019-09-04T06:36:19.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms
2019-09-04T06:36:19.445+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:19.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:19.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:19.546+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:19.646+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:19.697+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:19.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:19.702+0000 D2 COMMAND [replSetDistLockPinger] run command config.$cmd { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578979702) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }
2019-09-04T06:36:19.702+0000 D4 - [replSetDistLockPinger] Taking ticket. Available: 1000000000
2019-09-04T06:36:19.702+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178
2019-09-04T06:36:19.702+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:36:19.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:19.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:19.720+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : 
"EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : 
"9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xADB043) [0x561749a63043] mongod(+0x13B2606) [0x56174a33a606] mongod(+0x13B3A55) [0x56174a33ba55] mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894] mongod(+0x10FA899) [0x56174a082899] mongod(+0x10FBF53) [0x56174a083f53] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(+0x1FBD2EE) [0x56174af452ee] mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa] mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2] mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b] mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e] mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc] mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1] mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:19.721+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. 
OpTime: { ts: Timestamp(1567578977, 2), t: 1 }, write concern: { w: "majority", wtimeout: 15000 } 2019-09-04T06:36:19.721+0000 D4 STORAGE [replSetDistLockPinger] flushed journal 2019-09-04T06:36:19.721+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578979702) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:36:19.721+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567578979702) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 19ms 2019-09-04T06:36:19.746+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:19.748+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:19.748+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:19.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:19.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:19.831+0000 D2 COMMAND [conn538] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, EB65E230B52E35DFBC6E0EB448D0124ABC83B686), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:19.831+0000 D1 REPL [conn538] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578977, 2), t: 1 } 2019-09-04T06:36:19.831+0000 D3 REPL [conn538] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:49.841+0000 2019-09-04T06:36:19.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:19.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:19.846+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:19.938+0000 D3 STORAGE [WTCheckpointThread] setting timestamp read source: 6, provided timestamp: Timestamp(1567578977, 2) 2019-09-04T06:36:19.938+0000 D3 STORAGE [WTCheckpointThread] WT begin_transaction for snapshot id 25280 2019-09-04T06:36:19.938+0000 D3 STORAGE [WTCheckpointThread] WT rollback_transaction for snapshot id 25280 2019-09-04T06:36:19.938+0000 
D2 RECOVERY [WTCheckpointThread] Performing stable checkpoint. StableTimestamp: Timestamp(1567578977, 2), OplogNeededForRollback: Timestamp(1567578977, 2) 2019-09-04T06:36:19.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:19.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:19.967+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:20.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:20.002+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:36:20.002+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:36:20.002+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:36:20.003+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:20.003+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:20.003+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578977, 2) 2019-09-04T06:36:20.003+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25285 2019-09-04T06:36:20.003+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:20.003+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:20.003+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25285 2019-09-04T06:36:20.004+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25288 2019-09-04T06:36:20.004+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25288 2019-09-04T06:36:20.004+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578977, 2), t: 1 }({ ts: Timestamp(1567578977, 2), t: 1 }) 2019-09-04T06:36:20.017+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:36:20.017+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:36:20.041+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:36:20.041+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:36:20.041+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:36:20.041+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 
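[Annotation, not part of the captured log.] The [replSetDistLockPinger] failure above is the distributed-lock manager's liveness ping: each config server periodically upserts its ping document into config.lockpings with findAndModify and a majority write concern. findAndModify is a primary-only command, and this node is currently a secondary of configrs, so the write raises NotMaster (errCode 10107); the DBException13traceIfNeeded frames show the stack is printed by the exception-tracing path, and the pinger simply fails this round and retries later. A minimal shell sketch of the same write, copied from the command document in the log (the Date value is freshened; it must be sent to the primary config server):

    // Re-issue the pinger's upsert by hand. Run against the PRIMARY config
    // server; on a secondary it fails with NotMaster, exactly as logged above.
    db.getSiblingDB("config").runCommand({
        findAndModify: "lockpings",
        query: { _id: "ConfigServer" },
        update: { $set: { ping: new Date() } },
        upsert: true,
        writeConcern: { w: "majority", wtimeout: 15000 }
    })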
2019-09-04T06:36:20.041+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:36:20.042+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:20.043+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:36:20.043+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:36:20.043+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:36:20.043+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:36:20.043+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:20.043+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:20.043+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:36:20.043+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:36:20.043+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:36:20.043+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:36:20.043+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:36:20.043+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:36:20.043+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:20.043+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:20.043+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:36:20.043+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578977, 2)
2019-09-04T06:36:20.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25294
2019-09-04T06:36:20.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25294
2019-09-04T06:36:20.043+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:20.043+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:36:20.043+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:36:20.043+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:20.044+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:36:20.044+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:36:20.044+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578977, 2)
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25297
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25297
2019-09-04T06:36:20.044+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:20.044+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:20.044+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:36:20.044+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:36:20.044+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578977, 2)
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25299
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25299
2019-09-04T06:36:20.044+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:570 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:20.044+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:36:20.044+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:36:20.044+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2019-09-04T06:36:20.044+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:36:20.044+0000 D2 COMMAND [conn90] command: listDatabases
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25302
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25302
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25303
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25303
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25304
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.044+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25304
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25305
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25305
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25306
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25306
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25307
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
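[Annotation, not part of the captured log.] The conn90 session above reads like a monitoring probe: SCRAM-SHA-1 auth as dba_root from 10.108.2.33, then serverStatus, replSetGetStatus, a count of jumbo chunks, shardConnPoolStats, oldest/newest oplog probes, and finally listDatabases, which drives the catalog lookups that follow. A sketch of the same pass from the shell (connection details assumed; every command below appears verbatim in the log):

    const admin = db.getSiblingDB("admin");
    admin.runCommand({ serverStatus: 1, recordStats: 0 });        // overall server metrics
    admin.runCommand({ replSetGetStatus: 1 });                    // configrs member states
    db.getSiblingDB("config").chunks.count({ jumbo: true });      // jumbo chunks (COLLSCAN above)
    admin.runCommand({ shardConnPoolStats: 1 });                  // shard connection pools
    const oplog = db.getSiblingDB("local").getCollection("oplog.rs");
    oplog.find({ ts: { $exists: true } }).sort({ $natural: 1 }).limit(1);   // oldest oplog entry
    oplog.find({ ts: { $exists: true } }).sort({ $natural: -1 }).limit(1);  // newest oplog entry
    admin.runCommand({ listDatabases: 1 });                       // sizes every database

The $natural sorts force the table-scan plans the planner logs above; the extra probe of local.oplog.$main (the pre-replica-set master-slave oplog name) simply gets an EOF plan because that collection does not exist here.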
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25307
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25308
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25308
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25309
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25309
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25310
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25310
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25311
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25311
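[Annotation, not part of the captured log.] Each collection visited by listDatabases produces a "looking up metadata ... fetched CCE metadata ... returning metadata" triple: the catalog entry (options, UUID, index specs, and the ident paths that exist because directoryPerDB and directoryForIndexes are on) is read from the _mdb_catalog inside a short WT snapshot that is rolled back once sizing is done. The same metadata, minus the internal ident/prefix fields, is client-visible; a hypothetical spot-check for one of the collections shown above:

    // Client-visible view of the catalog entry logged for config.lockpings.
    db.getSiblingDB("config").runCommand({ listCollections: 1, filter: { name: "lockpings" } });
    db.getSiblingDB("config").lockpings.getIndexes();   // ping_1 and _id_, as in the CCE dump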
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25312
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
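[Annotation, not part of the captured log.] This config.chunks catalog entry confirms why the earlier count of { jumbo: true } collection-scanned: all four indexes (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1, _id_) lead with fields other than jumbo, so the planner's "Rated tree" listed no relevant index and it output zero indexed solutions. Purely as an illustration, and only if such a probe ran often enough to matter, a partial index would cover it; the log shows no such index being created, and hand-modifying config collections is not something to do casually:

    // Hypothetical only -- NOT present in this cluster's log.
    db.getSiblingDB("config").chunks.createIndex(
        { jumbo: 1 },
        { partialFilterExpression: { jumbo: true } }
    );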
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25312
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25313
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25313
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25314
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25314
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25315
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25315
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25316
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25316
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25317
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25317
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25318
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:36:20.045+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25318
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25319
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25319
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25320
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25320
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25321
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25321
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25322
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25322
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25323
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25323
2019-09-04T06:36:20.046+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:36:20.046+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25325
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25325
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25326
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25326
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25327
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25327
2019-09-04T06:36:20.046+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:20.046+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25329
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25329
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25330
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25330
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25331
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25331
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25332
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25332
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25333
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25333
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25334
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25334
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25335
2019-09-04T06:36:20.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25335
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25336
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25336
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25337
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25337
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25338
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25338
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25339
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25339
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25340
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25340
2019-09-04T06:36:20.047+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:20.047+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25342
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25342
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25343
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25343
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25344
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25344
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25345
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25345
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25346
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25346
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25347
2019-09-04T06:36:20.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25347
2019-09-04T06:36:20.047+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:20.073+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:20.173+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:20.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:20.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:20.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:20.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:20.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:36:20.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999999 Now: 1000000000
2019-09-04T06:36:20.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, 70DA8DE8BC4F45A85717B5161992B81519461076), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:20.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:36:20.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, 70DA8DE8BC4F45A85717B5161992B81519461076), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:20.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, 70DA8DE8BC4F45A85717B5161992B81519461076), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:20.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984) }
2019-09-04T06:36:20.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, 70DA8DE8BC4F45A85717B5161992B81519461076), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:20.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:20.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:20.273+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:20.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:20.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:20.373+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:20.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:20.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:20.473+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:20.573+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:20.674+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:20.697+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:20.697+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:20.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:20.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:20.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:20.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:20.774+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:20.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:20.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:20.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:20.839+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:36:19.063+0000 2019-09-04T06:36:20.839+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:36:20.235+0000 2019-09-04T06:36:20.839+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:36:19.063+0000 2019-09-04T06:36:20.839+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:36:29.063+0000 2019-09-04T06:36:20.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:20.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1735) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:20.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1735 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:30.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:20.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:50.839+0000 2019-09-04T06:36:20.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:50.839+0000 2019-09-04T06:36:20.839+0000 D2 ASIO [Replication] Request 1735 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } 2019-09-04T06:36:20.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:20.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:20.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1735) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } 2019-09-04T06:36:20.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:20.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:36:29.789+0000 2019-09-04T06:36:20.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:36:31.185+0000 2019-09-04T06:36:20.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:20.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:22.839Z 2019-09-04T06:36:20.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:20.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:50.839+0000 2019-09-04T06:36:20.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:50.839+0000 2019-09-04T06:36:20.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:20.841+0000 D2 
REPL_HB [replexec-3] Sending heartbeat (requestId: 1736) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:20.841+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1736 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:30.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:20.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:50.839+0000 2019-09-04T06:36:20.841+0000 D2 ASIO [Replication] Request 1736 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } 2019-09-04T06:36:20.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:20.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:20.841+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1736) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } 2019-09-04T06:36:20.841+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:20.841+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:22.841Z 2019-09-04T06:36:20.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:50.839+0000 2019-09-04T06:36:20.854+0000 D2 COMMAND [conn494] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:20.855+0000 I COMMAND [conn494] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:20.874+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:20.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:20.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:20.974+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:21.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:21.003+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:21.003+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:21.003+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578977, 2) 2019-09-04T06:36:21.003+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25363 2019-09-04T06:36:21.003+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:21.003+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:21.003+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25363 2019-09-04T06:36:21.004+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25366 2019-09-04T06:36:21.004+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25366 2019-09-04T06:36:21.004+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578977, 2), t: 1 }({ ts: Timestamp(1567578977, 2), t: 1 }) 2019-09-04T06:36:21.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, 0D336B668D4F1BC58E4967A71C973D20D3962B30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:21.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:21.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, 
term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, 0D336B668D4F1BC58E4967A71C973D20D3962B30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:21.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, 0D336B668D4F1BC58E4967A71C973D20D3962B30), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:21.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984) } 2019-09-04T06:36:21.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, 0D336B668D4F1BC58E4967A71C973D20D3962B30), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:21.074+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:21.115+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35910 #544 (83 connections now open) 2019-09-04T06:36:21.115+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:21.115+0000 D2 COMMAND [conn544] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:21.115+0000 I NETWORK [conn544] received client metadata from 10.108.2.56:35910 conn544: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:21.115+0000 I COMMAND [conn544] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:21.118+0000 D2 COMMAND [conn544] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578971, 1), signature: { hash: BinData(0, BF9ABB3B036DCA9BFE2E0631E7AB0668EE141209), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), 
t: 92 } }, $db: "config" } 2019-09-04T06:36:21.118+0000 D1 REPL [conn544] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578977, 2), t: 1 } 2019-09-04T06:36:21.118+0000 D3 REPL [conn544] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.128+0000 2019-09-04T06:36:21.174+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:21.197+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:21.197+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:21.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:21.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:21.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:21.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:21.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:21.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:21.274+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:21.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:21.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:21.350+0000 D2 COMMAND [conn470] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:21.350+0000 I COMMAND [conn470] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:21.375+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:21.426+0000 D2 NETWORK [conn470] Session from 10.108.2.15:39266 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:21.426+0000 I NETWORK [conn470] end connection 10.108.2.15:39266 (82 connections now open) 2019-09-04T06:36:21.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:21.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:21.475+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:21.575+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:21.635+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38908 #545 (83 connections now open) 2019-09-04T06:36:21.635+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:21.635+0000 D2 COMMAND [conn545] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:21.635+0000 
I NETWORK [conn545] received client metadata from 10.108.2.44:38908 conn545: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:21.635+0000 I COMMAND [conn545] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:21.635+0000 D2 COMMAND [conn545] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, BCF1DE88D4E6F24645E4AC5DC68814FC1B8C75B8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:21.635+0000 D1 REPL [conn545] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578977, 2), t: 1 } 2019-09-04T06:36:21.635+0000 D3 REPL [conn545] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.645+0000 2019-09-04T06:36:21.650+0000 I NETWORK [listener] connection accepted from 10.108.2.48:42332 #546 (84 connections now open) 2019-09-04T06:36:21.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:21.650+0000 I NETWORK [listener] connection accepted from 10.108.2.72:45974 #547 (85 connections now open) 2019-09-04T06:36:21.650+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:21.650+0000 D2 COMMAND [conn547] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:21.650+0000 D2 COMMAND [conn546] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:21.650+0000 I NETWORK [conn547] received client metadata from 10.108.2.72:45974 conn547: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:21.650+0000 I NETWORK [conn546] received client metadata from 10.108.2.48:42332 conn546: { driver: { name: "NetworkInterfaceTL", 
version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:21.650+0000 I COMMAND [conn547] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:21.650+0000 I COMMAND [conn546] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:21.651+0000 D2 COMMAND [conn531] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 3B570DEA96F79DC959130E0D1F1E1335CDDFD78C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:21.651+0000 D1 REPL [conn531] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578977, 2), t: 1 } 2019-09-04T06:36:21.651+0000 D3 REPL [conn531] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.661+0000 2019-09-04T06:36:21.651+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49422 #548 (86 connections now open) 2019-09-04T06:36:21.651+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:21.651+0000 D2 COMMAND [conn548] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:21.651+0000 I NETWORK [conn548] received client metadata from 10.108.2.54:49422 conn548: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:21.651+0000 I COMMAND [conn548] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, 
hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:21.651+0000 D2 COMMAND [conn548] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, BCF1DE88D4E6F24645E4AC5DC68814FC1B8C75B8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:21.652+0000 D1 REPL [conn548] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578977, 2), t: 1 } 2019-09-04T06:36:21.652+0000 D3 REPL [conn548] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.661+0000 2019-09-04T06:36:21.652+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:21.652+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:21.659+0000 I NETWORK [listener] connection accepted from 10.108.2.47:56756 #549 (87 connections now open) 2019-09-04T06:36:21.659+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:21.659+0000 D2 COMMAND [conn549] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:21.659+0000 I NETWORK [conn549] received client metadata from 10.108.2.47:56756 conn549: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:21.659+0000 I COMMAND [conn549] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:21.663+0000 D2 COMMAND [conn126] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:36:21.663+0000 I COMMAND [conn126] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:21.664+0000 I COMMAND [conn523] Command on database config timed out waiting for read concern to be satisfied. 
2019-09-04T06:36:21.664+0000 I COMMAND [conn523] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:36:21.664+0000 I COMMAND [conn522] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, FD25E1CF7A4A1D067ADEB8019F5D3BD9C83758AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:36:21.664+0000 D1 - [conn523] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:36:21.664+0000 D1 - [conn522] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:36:21.664+0000 W - [conn522] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:21.664+0000 W - [conn523] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:21.664+0000 D2 COMMAND [conn549] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578975, 1), signature: { hash: BinData(0, ABF7625DF5CCC6209B46F27CBA9E5DCE8DF7AB16), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:36:21.664+0000 D1 REPL [conn549] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578977, 2), t: 1 }
2019-09-04T06:36:21.664+0000 D3 REPL [conn549] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.674+0000
2019-09-04T06:36:21.674+0000 D2 COMMAND [conn182] run command config.$cmd { ping: 1.0, lsid: { id: UUID("02492cc9-cb3a-4cd4-9c2e-0d7430e82ce2") }, $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:36:21.674+0000 I COMMAND [conn182] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("02492cc9-cb3a-4cd4-9c2e-0d7430e82ce2") }, $clusterTime: { clusterTime: Timestamp(1567578919, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:21.675+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:21.681+0000 I - [conn523] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc
0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : 
"/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, 
"buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:21.681+0000 D1 COMMAND [conn523] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:21.681+0000 D1 - [conn523] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:21.681+0000 W - [conn523] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:21.696+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:21.697+0000 I COMMAND [conn75] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:21.698+0000 I - [conn522] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { 
"b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : 
"/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:21.699+0000 D1 COMMAND [conn522] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, FD25E1CF7A4A1D067ADEB8019F5D3BD9C83758AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:21.699+0000 D1 - [conn522] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 
2019-09-04T06:36:21.699+0000 W - [conn522] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:21.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:21.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:21.735+0000 I - [conn522] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},
{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : 
"7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] 
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:36:21.736+0000 W COMMAND [conn522] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:36:21.736+0000 I COMMAND [conn522] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, FD25E1CF7A4A1D067ADEB8019F5D3BD9C83758AA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30048ms
2019-09-04T06:36:21.736+0000 D2 NETWORK [conn522] Session from 10.108.2.48:42316 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:36:21.736+0000 I NETWORK [conn522] end connection 10.108.2.48:42316 (86 connections now open)
2019-09-04T06:36:21.739+0000 I - [conn523] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:21.739+0000 W COMMAND [conn523] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:21.740+0000 I COMMAND [conn523] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms
2019-09-04T06:36:21.740+0000 D2 NETWORK [conn523] Session from 10.108.2.72:45954 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:36:21.740+0000 I NETWORK [conn523] end connection 10.108.2.72:45954 (85 connections now open)
2019-09-04T06:36:21.742+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:21.743+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:21.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47414 #550 (86 connections now open)
2019-09-04T06:36:21.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:36:21.743+0000 D2 COMMAND [conn550] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:36:21.743+0000 I NETWORK [conn550] received client metadata from 10.108.2.52:47414 conn550: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:36:21.743+0000 I COMMAND [conn550] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:36:21.744+0000 D2 COMMAND [conn550] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578972, 1), signature: { hash: BinData(0, 748CA4A915B339EE77A5A9AF1E7BC1F0A591620C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:36:21.744+0000 D1 REPL [conn550] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578977, 2), t: 1 }
2019-09-04T06:36:21.744+0000 D3 REPL [conn550] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.754+0000
2019-09-04T06:36:21.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
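conn522 and conn523 are closed by the peer as soon as their 30-second waits expire, while fresh connections such as #550 (and #551 just below) immediately re-issue the same read, so blocked operations keep accumulating. A sketch for listing them from a shell on this node; the field paths are taken from the commands logged above:

    // Sketch: list operations still blocked on the config.settings read.
    db.currentOp({ "command.find": "settings", "command.$db": "config" })
        .inprog.forEach(function (op) {
            print(op.opid, op.secs_running, op.desc);
        });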
2019-09-04T06:36:21.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:21.756+0000 I COMMAND [conn499] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:36:21.756+0000 D1 - [conn499] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:36:21.756+0000 W - [conn499] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:21.756+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48582 #551 (87 connections now open)
2019-09-04T06:36:21.756+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:36:21.756+0000 D2 COMMAND [conn551] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:36:21.756+0000 I NETWORK [conn551] received client metadata from 10.108.2.59:48582 conn551: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:36:21.756+0000 I COMMAND [conn551] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:36:21.771+0000 I COMMAND [conn525] Command on database config timed out waiting for read concern to be satisfied.
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578941, 1), signature: { hash: BinData(0, B607257475BAE42FA27A1C9FCFB0BFBA13A2A499), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:36:21.771+0000 D1 - [conn525] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:21.771+0000 W - [conn525] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:21.773+0000 I - [conn499] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:21.773+0000 D1 COMMAND [conn499] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:21.773+0000 D1 - [conn499] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:21.773+0000 W - [conn499] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:21.775+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:21.810+0000 I - [conn499] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15Servic
eExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : 
"/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:21.810+0000 I - [conn525] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F
5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" 
: 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] 
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d]
----- END BACKTRACE -----
2019-09-04T06:36:21.810+0000 W COMMAND [conn499] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:36:21.810+0000 I COMMAND [conn499] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578942, 1), signature: { hash: BinData(0, EAB2E9C123EA69D6E814C4F993AEAED9311438B2), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms
2019-09-04T06:36:21.810+0000 D1 COMMAND [conn525] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578941, 1), signature: { hash: BinData(0, B607257475BAE42FA27A1C9FCFB0BFBA13A2A499), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:21.810+0000 D1 - [conn525] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:36:21.810+0000 W - [conn525] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:21.810+0000 D2 NETWORK [conn499] Session from 10.108.2.52:47368 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:36:21.810+0000 I NETWORK [conn499] end connection 10.108.2.52:47368 (86 connections now open)
2019-09-04T06:36:21.830+0000 I - [conn525] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:21.830+0000 W COMMAND [conn525] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:21.830+0000 I COMMAND [conn525] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, 
$readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578941, 1), signature: { hash: BinData(0, B607257475BAE42FA27A1C9FCFB0BFBA13A2A499), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30052ms 2019-09-04T06:36:21.830+0000 D2 NETWORK [conn525] Session from 10.108.2.59:48564 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:21.830+0000 I NETWORK [conn525] end connection 10.108.2.59:48564 (85 connections now open) 2019-09-04T06:36:21.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:21.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:21.875+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:21.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:21.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:21.975+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:22.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:22.003+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:22.003+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:22.003+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578977, 2) 2019-09-04T06:36:22.003+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25400 2019-09-04T06:36:22.003+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:22.003+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:22.003+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25400 2019-09-04T06:36:22.004+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25403 2019-09-04T06:36:22.004+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25403 2019-09-04T06:36:22.004+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578977, 2), t: 1 }({ ts: Timestamp(1567578977, 2), t: 1 }) 2019-09-04T06:36:22.043+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50352 #552 (86 connections now open) 2019-09-04T06:36:22.043+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:22.043+0000 D2 COMMAND [conn552] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { 
minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:22.043+0000 I NETWORK [conn552] received client metadata from 10.108.2.50:50352 conn552: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:22.043+0000 I COMMAND [conn552] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:22.044+0000 D2 COMMAND [conn552] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, EB65E230B52E35DFBC6E0EB448D0124ABC83B686), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:22.044+0000 D1 REPL [conn552] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578977, 2), t: 1 } 2019-09-04T06:36:22.044+0000 D3 REPL [conn552] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:52.054+0000 2019-09-04T06:36:22.075+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:22.151+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:22.152+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:22.176+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:22.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:22.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:22.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:22.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:22.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 8472F633EEF20E52A6A3D4F717B4CB180711539F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:22.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:22.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 8472F633EEF20E52A6A3D4F717B4CB180711539F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:22.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 8472F633EEF20E52A6A3D4F717B4CB180711539F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:22.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984) } 2019-09-04T06:36:22.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 8472F633EEF20E52A6A3D4F717B4CB180711539F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:22.242+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:22.242+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:22.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:22.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:22.276+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:22.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:22.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:22.376+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:22.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:22.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:22.476+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:22.576+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:22.584+0000 I 
NETWORK [listener] connection accepted from 10.108.2.74:52018 #553 (87 connections now open) 2019-09-04T06:36:22.584+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:22.585+0000 D2 COMMAND [conn553] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:22.585+0000 I NETWORK [conn553] received client metadata from 10.108.2.74:52018 conn553: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:22.585+0000 I COMMAND [conn553] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:22.600+0000 I COMMAND [conn526] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578945, 1), signature: { hash: BinData(0, F316497B8BBC8A053887E7C33BE8F047E760F03A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:36:22.600+0000 D1 - [conn526] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:22.600+0000 W - [conn526] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:22.617+0000 I - [conn526] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:22.617+0000 D1 COMMAND [conn526] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578945, 1), signature: { hash: BinData(0, F316497B8BBC8A053887E7C33BE8F047E760F03A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:22.617+0000 D1 - [conn526] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:22.617+0000 W - [conn526] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:22.637+0000 I - [conn526] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" 
}, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:22.637+0000 W COMMAND [conn526] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:22.637+0000 I COMMAND [conn526] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: 
2019-09-04T06:36:22.637+0000 D2 NETWORK [conn526] Session from 10.108.2.74:51996 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:36:22.637+0000 I NETWORK [conn526] end connection 10.108.2.74:51996 (86 connections now open)
2019-09-04T06:36:22.676+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:22.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:22.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:22.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:22.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:22.776+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:22.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:22.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:22.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:22.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1737) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:22.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1737 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:32.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:22.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:50.839+0000
2019-09-04T06:36:22.839+0000 D2 ASIO [Replication] Request 1737 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) }
2019-09-04T06:36:22.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:36:22.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:22.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1737) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) }
2019-09-04T06:36:22.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary
2019-09-04T06:36:22.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:36:31.185+0000
2019-09-04T06:36:22.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:36:33.753+0000
2019-09-04T06:36:22.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:36:22.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:24.839Z
2019-09-04T06:36:22.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:22.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:52.839+0000
2019-09-04T06:36:22.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:52.839+0000
2019-09-04T06:36:22.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:22.841+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1738) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:22.841+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1738 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:32.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
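The replexec traffic above is routine: this secondary heartbeats each member roughly every two seconds and postpones its election timeout whenever the primary answers. The replSetHeartbeat command itself is internal, but the same member-state picture is available to an operator through replSetGetStatus; a small sketch, assuming connectivity to any configrs member (hostname and port taken from this log):

    # Hedged sketch: operator-level view of the member health negotiated above.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        # stateStr is PRIMARY/SECONDARY/...; a stale optimeDate flags a lagging member
        print(member["name"], member["stateStr"], member.get("optimeDate"))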
2019-09-04T06:36:22.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:52.839+0000
2019-09-04T06:36:22.841+0000 D2 ASIO [Replication] Request 1738 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) }
2019-09-04T06:36:22.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:22.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:22.841+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1738) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) }
2019-09-04T06:36:22.841+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:36:22.841+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:24.841Z
2019-09-04T06:36:22.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:52.839+0000
2019-09-04T06:36:22.876+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:22.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:22.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:22.977+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:23.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:23.004+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:23.004+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:23.004+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578977, 2)
2019-09-04T06:36:23.004+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25421
2019-09-04T06:36:23.004+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:23.004+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:23.004+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25421
2019-09-04T06:36:23.004+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25424
2019-09-04T06:36:23.004+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25424
2019-09-04T06:36:23.004+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578977, 2), t: 1 }({ ts: Timestamp(1567578977, 2), t: 1 })
2019-09-04T06:36:23.027+0000 D2 ASIO [RS] Request 1717 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpApplied: { ts: Timestamp(1567578977, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) }
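Request 1717 returning an empty nextBatch is the oplog fetcher's tailable cursor idling: the sync source cmodb804 had nothing newer than Timestamp(1567578977, 2), so the awaitable getMore simply came back empty and was rescheduled. The same tailing pattern can be sketched from a driver; hostname and resume point below are taken from this log, and oplog_replay is accepted by 4.2-era servers:

    # Hedged sketch: tail local.oplog.rs the way the oplog fetcher above does.
    from pymongo import MongoClient, CursorType
    from bson.timestamp import Timestamp

    client = MongoClient("cmodb804.togewa.com", 27019)  # current sync source
    last_applied = Timestamp(1567578977, 2)             # last applied optime in this log
    cursor = client.local["oplog.rs"].find(
        {"ts": {"$gt": last_applied}},
        cursor_type=CursorType.TAILABLE_AWAIT,  # server waits before returning an empty batch
        oplog_replay=True,
    )
    for entry in cursor:  # blocks, yielding new oplog entries as they commit
        print(entry["ts"], entry["op"], entry.get("ns"))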
2019-09-04T06:36:23.027+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpApplied: { ts: Timestamp(1567578977, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:23.027+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:23.028+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:36:23.028+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:36:33.753+0000
2019-09-04T06:36:23.028+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:36:33.185+0000
2019-09-04T06:36:23.028+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:23.028+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:52.839+0000
2019-09-04T06:36:23.028+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1739 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:33.028+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578977, 2), t: 1 } }
2019-09-04T06:36:23.028+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:48.004+0000
2019-09-04T06:36:23.041+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:23.041+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:23.041+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1740 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:53.041+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:23.041+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:48.004+0000
2019-09-04T06:36:23.041+0000 D2 ASIO [RS] Request 1740 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) }
2019-09-04T06:36:23.041+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:23.041+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:23.041+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:48.004+0000
2019-09-04T06:36:23.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 8472F633EEF20E52A6A3D4F717B4CB180711539F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:23.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:36:23.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 8472F633EEF20E52A6A3D4F717B4CB180711539F), keyId: 6727891476899954718 } }, $db: "admin" }
} }, $db: "admin" } 2019-09-04T06:36:23.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 8472F633EEF20E52A6A3D4F717B4CB180711539F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:23.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984) } 2019-09-04T06:36:23.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 8472F633EEF20E52A6A3D4F717B4CB180711539F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:23.077+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:23.177+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:23.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:23.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:23.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:23.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:23.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:23.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:23.277+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:23.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:23.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:23.377+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:23.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:23.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:23.477+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:23.577+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:23.677+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:23.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:23.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:23.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:23.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:23.778+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:23.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:23.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:23.878+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:23.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:23.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:23.978+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:24.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:24.004+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:24.004+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:24.004+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578977, 2) 2019-09-04T06:36:24.004+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25438 2019-09-04T06:36:24.004+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:24.004+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: 
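The steady isMaster drumbeat from conn5, conn6, conn13 and conn26 above (a run/summary pair roughly every 500ms per connection) is client and mongos topology monitoring, not application traffic. The probe is an ordinary admin command; a sketch, assuming connectivity to the node from this log:

    # Hedged sketch: the same topology probe the monitoring connections send above.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    reply = client.admin.command("isMaster")
    # Typical fields of interest in the reply for a replica-set member:
    print(reply["ismaster"], reply.get("setName"), reply.get("hosts"))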
2019-09-04T06:36:24.004+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:24.004+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25438
2019-09-04T06:36:24.004+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25441
2019-09-04T06:36:24.004+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25441
2019-09-04T06:36:24.004+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578977, 2), t: 1 }({ ts: Timestamp(1567578977, 2), t: 1 })
2019-09-04T06:36:24.078+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:24.141+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:24.141+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:24.142+0000 I NETWORK [listener] connection accepted from 10.108.2.46:41224 #554 (87 connections now open)
2019-09-04T06:36:24.142+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:36:24.142+0000 D2 COMMAND [conn554] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:36:24.142+0000 I NETWORK [conn554] received client metadata from 10.108.2.46:41224 conn554: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:36:24.142+0000 I COMMAND [conn554] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:36:24.143+0000 D2 COMMAND [conn554] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578975, 1), signature: { hash: BinData(0, ABF7625DF5CCC6209B46F27CBA9E5DCE8DF7AB16), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }
2019-09-04T06:36:24.143+0000 D1 REPL [conn554] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578977, 2), t: 1 }
2019-09-04T06:36:24.143+0000 D3 REPL [conn554] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:54.153+0000
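These two waitUntilOpTime lines are the heart of the failure. The mongos-supplied afterOpTime carries term 92 with a timestamp from 2019-08-22, while this config replica set's newest majority snapshot is at term 1 with a timestamp from 2019-09-04. Even though the requested timestamp is about two weeks older, the node keeps waiting, which implies the optimes here are ordered with the term dominant; under that ordering no snapshot from a set still in term 1 can ever reach term 92, so every read carrying this afterOpTime waits out its full 30-second maxTimeMS and fails, exactly as conn526, conn515 and conn554 do. The ordering can be illustrated with plain tuples (a toy model, not MongoDB code):

    # Hedged sketch: model an optime as (term, (secs, inc)) so term compares first.
    from datetime import datetime, timezone

    requested = (92, (1566459161, 3))  # afterOpTime demanded by mongos
    current = (1, (1567578977, 2))     # newest committed snapshot on this node

    print(datetime.fromtimestamp(1566459161, tz=timezone.utc))  # 2019-08-22, older
    print(datetime.fromtimestamp(1567578977, tz=timezone.utc))  # 2019-09-04, newer
    print(requested > current)  # True: the awaited optime is still "in the future"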
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578945, 1), signature: { hash: BinData(0, F316497B8BBC8A053887E7C33BE8F047E760F03A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:36:24.157+0000 D1 - [conn515] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:24.157+0000 W - [conn515] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:24.174+0000 I - [conn515] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:24.174+0000 D1 COMMAND [conn515] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578945, 1), signature: { hash: BinData(0, F316497B8BBC8A053887E7C33BE8F047E760F03A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:24.174+0000 D1 - [conn515] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:24.174+0000 W - [conn515] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:24.178+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:24.194+0000 I - [conn515] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15Servic
eExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : 
"/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:24.194+0000 W COMMAND [conn515] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:36:24.194+0000 I COMMAND [conn515] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578945, 1), signature: { hash: BinData(0, F316497B8BBC8A053887E7C33BE8F047E760F03A), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30032ms 2019-09-04T06:36:24.194+0000 D2 NETWORK [conn515] Session from 10.108.2.46:41200 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:24.194+0000 I NETWORK [conn515] end connection 10.108.2.46:41200 (86 connections now open) 2019-09-04T06:36:24.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:24.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:24.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:24.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:24.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 8472F633EEF20E52A6A3D4F717B4CB180711539F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:24.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:24.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 8472F633EEF20E52A6A3D4F717B4CB180711539F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:24.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 8472F633EEF20E52A6A3D4F717B4CB180711539F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:24.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984) } 2019-09-04T06:36:24.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 8472F633EEF20E52A6A3D4F717B4CB180711539F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:24.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:24.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:24.278+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:24.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:24.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:24.378+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:24.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:24.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:24.479+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:24.579+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:24.641+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:24.641+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:24.679+0000 D4 
STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:24.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:24.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:24.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:24.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:24.779+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:24.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:24.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:24.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:24.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1741) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:24.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1741 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:34.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:24.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:52.839+0000 2019-09-04T06:36:24.839+0000 D2 ASIO [Replication] Request 1741 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } 2019-09-04T06:36:24.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: 
Timestamp(1567578980, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:24.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:24.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1741) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } 2019-09-04T06:36:24.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:24.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:36:33.185+0000 2019-09-04T06:36:24.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:36:34.876+0000 2019-09-04T06:36:24.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:24.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:26.839Z 2019-09-04T06:36:24.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:24.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:54.839+0000 2019-09-04T06:36:24.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:54.839+0000 2019-09-04T06:36:24.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:24.841+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1742) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:24.841+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1742 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:34.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:24.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:54.839+0000 2019-09-04T06:36:24.841+0000 D2 ASIO [Replication] Request 1742 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new 
Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } 2019-09-04T06:36:24.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:24.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:24.841+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1742) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578977, 2) } 2019-09-04T06:36:24.841+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:24.841+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:26.841Z 2019-09-04T06:36:24.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:54.839+0000 2019-09-04T06:36:24.879+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:24.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:24.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:24.979+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:25.000+0000 D3 STORAGE [ftdc] setting timestamp read 
source: 1, provided timestamp: none 2019-09-04T06:36:25.004+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:25.004+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:25.004+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578977, 2) 2019-09-04T06:36:25.004+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25459 2019-09-04T06:36:25.004+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:25.004+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:25.004+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25459 2019-09-04T06:36:25.004+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25462 2019-09-04T06:36:25.004+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25462 2019-09-04T06:36:25.004+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578977, 2), t: 1 }({ ts: Timestamp(1567578977, 2), t: 1 }) 2019-09-04T06:36:25.049+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36898 #555 (87 connections now open) 2019-09-04T06:36:25.049+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:25.049+0000 D2 COMMAND [conn555] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:25.050+0000 I NETWORK [conn555] received client metadata from 10.108.2.55:36898 conn555: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:25.050+0000 I COMMAND [conn555] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:25.050+0000 D2 COMMAND [conn555] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578981, 1), signature: { hash: BinData(0, B1B9733EBFF2202C247A072FB8FE073A842B9ACF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 
} }, $db: "config" } 2019-09-04T06:36:25.050+0000 D1 REPL [conn555] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578977, 2), t: 1 } 2019-09-04T06:36:25.050+0000 D3 REPL [conn555] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:55.060+0000 2019-09-04T06:36:25.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 8472F633EEF20E52A6A3D4F717B4CB180711539F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:25.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:25.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 8472F633EEF20E52A6A3D4F717B4CB180711539F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:25.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 8472F633EEF20E52A6A3D4F717B4CB180711539F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:25.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), opTime: { ts: Timestamp(1567578977, 2), t: 1 }, wallTime: new Date(1567578977984) } 2019-09-04T06:36:25.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578980, 1), signature: { hash: BinData(0, 8472F633EEF20E52A6A3D4F717B4CB180711539F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:25.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:25.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:25.079+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:25.163+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:25.163+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:25.180+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:25.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:25.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:25.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:25.212+0000 I 
COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:25.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:25.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:25.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:25.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:25.280+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:25.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:25.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:25.380+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:25.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:25.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:25.480+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:25.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:25.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:25.580+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:25.663+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:25.663+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:25.680+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:25.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:25.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:25.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:25.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:25.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:25.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:25.780+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:25.789+0000 D2 ASIO [RS] Request 1739 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578985, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578985786), o: { $v: 1, $set: { ping: new Date(1567578985786) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 
2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpApplied: { ts: Timestamp(1567578985, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) } 2019-09-04T06:36:25.789+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578985, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578985786), o: { $v: 1, $set: { ping: new Date(1567578985786) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpApplied: { ts: Timestamp(1567578985, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578977, 2), $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:25.789+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:25.789+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578985, 1) and ending at ts: Timestamp(1567578985, 1) 2019-09-04T06:36:25.789+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:36:34.876+0000 2019-09-04T06:36:25.789+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:36:35.871+0000 2019-09-04T06:36:25.789+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:25.789+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:54.839+0000 2019-09-04T06:36:25.789+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578985, 1), t: 1 } 2019-09-04T06:36:25.789+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:25.789+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:25.789+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578977, 2) 2019-09-04T06:36:25.789+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25481 2019-09-04T06:36:25.789+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: 
UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:25.789+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:25.789+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25481 2019-09-04T06:36:25.789+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:36:25.789+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:25.789+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:25.789+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578985, 1) } 2019-09-04T06:36:25.789+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578977, 2) 2019-09-04T06:36:25.789+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25484 2019-09-04T06:36:25.789+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:25.789+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:25.789+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25484 2019-09-04T06:36:25.789+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25463 2019-09-04T06:36:25.789+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25463 2019-09-04T06:36:25.789+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25487 2019-09-04T06:36:25.789+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25487 2019-09-04T06:36:25.789+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:36:25.789+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 25489 2019-09-04T06:36:25.789+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567578985, 1) 2019-09-04T06:36:25.789+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567578985, 1) 2019-09-04T06:36:25.789+0000 D2 STORAGE [repl-writer-worker-11] WiredTigerSizeStorer::store Marking table:local/collection/16--6194257481163143499 dirty, numRecords: 1617, dataSize: 364744, use_count: 3 2019-09-04T06:36:25.789+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 25489 2019-09-04T06:36:25.789+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:36:25.789+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:36:25.789+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25488 2019-09-04T06:36:25.789+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25488 2019-09-04T06:36:25.789+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25491 2019-09-04T06:36:25.789+0000 D3 STORAGE [rsSync-0] WT 
rollback_transaction for snapshot id 25491 2019-09-04T06:36:25.789+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578985, 1), t: 1 }({ ts: Timestamp(1567578985, 1), t: 1 }) 2019-09-04T06:36:25.790+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578985, 1) 2019-09-04T06:36:25.790+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25492 2019-09-04T06:36:25.790+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578985, 1) } } ] } sort: {} projection: {} 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578985, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578985, 1) || First: notFirst: full path: ts 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578985, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578985, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578985, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578985, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:25.790+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25492 2019-09-04T06:36:25.790+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:36:25.790+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:25.790+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567578985, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567578985786), o: { $v: 1, $set: { ping: new Date(1567578985786) } } }, oplog application mode: Secondary 2019-09-04T06:36:25.790+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567578985, 1) 2019-09-04T06:36:25.790+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 25494 2019-09-04T06:36:25.790+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" } 2019-09-04T06:36:25.790+0000 D2 STORAGE [repl-writer-worker-7] WiredTigerSizeStorer::store Marking table:config/collection/28--6194257481163143499 dirty, numRecords: 24, dataSize: 2000, use_count: 3 2019-09-04T06:36:25.790+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:36:25.790+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 25494 2019-09-04T06:36:25.790+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:36:25.790+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578985, 1), t: 1 }({ ts: Timestamp(1567578985, 1), t: 1 }) 2019-09-04T06:36:25.790+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578985, 1) 2019-09-04T06:36:25.790+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25493 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:25.790+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:25.790+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:36:25.790+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25493 2019-09-04T06:36:25.790+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578985, 1) 2019-09-04T06:36:25.790+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25497 2019-09-04T06:36:25.790+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25497 2019-09-04T06:36:25.790+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578985, 1), t: 1 }({ ts: Timestamp(1567578985, 1), t: 1 }) 2019-09-04T06:36:25.790+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:36:25.790+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, appliedWallTime: new Date(1567578985786), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:25.790+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1743 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:55.790+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, appliedWallTime: new Date(1567578985786), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 2, cfgver: 2 } 
], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578977, 2), t: 1 }, lastCommittedWall: new Date(1567578977984), lastOpVisible: { ts: Timestamp(1567578977, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:25.790+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:55.790+0000
2019-09-04T06:36:25.791+0000 D2 ASIO [RS] Request 1743 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) }
2019-09-04T06:36:25.791+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:25.791+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:25.791+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:55.791+0000
2019-09-04T06:36:25.791+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578985, 1), t: 1 }
2019-09-04T06:36:25.791+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1744 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:35.791+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578977, 2), t: 1 } }
2019-09-04T06:36:25.791+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:55.791+0000
2019-09-04T06:36:25.791+0000 D2 ASIO [RS] Request 1744 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpApplied: { ts: Timestamp(1567578985, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) }
2019-09-04T06:36:25.791+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpApplied: { ts: Timestamp(1567578985, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:25.791+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:25.791+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:36:25.791+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.791+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.791+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578980, 1)
2019-09-04T06:36:25.791+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:36:35.871+0000
2019-09-04T06:36:25.791+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:36:36.578+0000
2019-09-04T06:36:25.791+0000 D3 REPL [conn524] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.791+0000 D3 REPL [conn524] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:44.708+0000
2019-09-04T06:36:25.791+0000 D3 REPL [conn541] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.791+0000 D3 REPL [conn541] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.319+0000
2019-09-04T06:36:25.791+0000 D3 REPL [conn537] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.791+0000 D3 REPL [conn537] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:44.691+0000
2019-09-04T06:36:25.791+0000 D3 REPL [conn548] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.791+0000 D3 REPL [conn548] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.661+0000
2019-09-04T06:36:25.791+0000 D3 REPL [conn549] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.791+0000 D3 REPL [conn549] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.674+0000
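The RemoteCommand 1744/1745 entries above show the oplog fetcher tailing the sync source's local.oplog.rs with an awaitData getMore and maxTimeMS: 5000. As a rough illustration only, the same tailing pattern can be reproduced from a client with pymongo; the connection URI and the printed fields below are assumptions for the sketch, not values taken from this log.

    # Sketch of the oplog-tailing pattern visible in RemoteCommand 1744/1745 above.
    # Assumptions: pymongo is installed and the sync source is reachable at this URI.
    from pymongo import CursorType, MongoClient

    client = MongoClient("mongodb://cmodb804.togewa.com:27019")
    oplog = client.local["oplog.rs"]

    # TAILABLE_AWAIT issues awaitData getMores like the fetcher's; max_await_time_ms
    # mirrors the maxTimeMS: 5000 seen in the logged getMore.
    cursor = oplog.find(cursor_type=CursorType.TAILABLE_AWAIT).max_await_time_ms(5000)
    for entry in cursor:
        print(entry["ts"], entry["op"], entry.get("ns"))
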
2019-09-04T06:36:25.792+0000 D3 REPL [conn554] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn554] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:54.153+0000
2019-09-04T06:36:25.792+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:25.792+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:54.839+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn516] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn516] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.245+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn538] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn538] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:49.841+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn550] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn550] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.754+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn552] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn552] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:52.054+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn555] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn555] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:55.060+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn527] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn527] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.139+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn529] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.792+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1745 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:35.792+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578985, 1), t: 1 } }
2019-09-04T06:36:25.792+0000 D3 REPL [conn529] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.079+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn530] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn530] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.223+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn543] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn543] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.226+0000
2019-09-04T06:36:25.792+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:55.791+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn532] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn532] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:31.316+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn544] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn544] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.128+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn545] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn545] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.645+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn531] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn531] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.661+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn521] Got notified of new snapshot: { ts: Timestamp(1567578985, 1), t: 1 }, 2019-09-04T06:36:25.786+0000
2019-09-04T06:36:25.792+0000 D3 REPL [conn521] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.082+0000
2019-09-04T06:36:25.810+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:36:25.810+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:25.810+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), appliedOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, appliedWallTime: new Date(1567578985786), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:25.810+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1746 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:55.810+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), appliedOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, appliedWallTime: new Date(1567578985786), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, durableWallTime: new Date(1567578977984), appliedOpTime: { ts: Timestamp(1567578977, 2), t: 1 }, appliedWallTime: new Date(1567578977984), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:25.810+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:36:55.791+0000
2019-09-04T06:36:25.811+0000 D2 ASIO [RS] Request 1746 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) }
2019-09-04T06:36:25.811+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:25.811+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:25.811+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:36:55.791+0000
2019-09-04T06:36:25.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:25.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:25.880+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:25.889+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578985, 1)
2019-09-04T06:36:25.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:25.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:25.981+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:26.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:26.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:26.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:26.081+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:26.162+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
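The steady isMaster traffic on conn5/conn6/conn13/conn17/conn18/conn19/conn26 above (one request per connection roughly every 500 ms, each answered with reslen:907) is client topology monitoring. A minimal sketch of issuing the same command yourself follows; the connection URI is an assumption for illustration.

    # Sketch: the same isMaster command the monitoring connections above send.
    # The URI is an assumption; "isMaster" is the pre-4.4 spelling of "hello".
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    reply = client.admin.command("isMaster")
    print(reply.get("ismaster"), reply.get("secondary"), reply.get("setName"))
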
"admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:26.181+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:26.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:26.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:26.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:26.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:26.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:26.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:26.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 385053C857959EE4CEE3FC56FC1AEFB1FB08A032), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:26.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:26.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 385053C857959EE4CEE3FC56FC1AEFB1FB08A032), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:26.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 385053C857959EE4CEE3FC56FC1AEFB1FB08A032), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:26.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), opTime: { ts: Timestamp(1567578985, 1), t: 1 }, wallTime: new Date(1567578985786) } 2019-09-04T06:36:26.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 385053C857959EE4CEE3FC56FC1AEFB1FB08A032), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:26.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:26.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:26.281+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:26.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:26.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:26.381+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:26.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:26.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:26.481+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:26.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:26.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:26.581+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:26.663+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:26.663+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:26.681+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:26.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:26.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:26.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:26.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:26.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:26.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:26.782+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:26.789+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:26.789+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:26.789+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578985, 1) 2019-09-04T06:36:26.789+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25517 2019-09-04T06:36:26.789+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:26.789+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:26.789+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25517 2019-09-04T06:36:26.790+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25520 2019-09-04T06:36:26.790+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25520 2019-09-04T06:36:26.790+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578985, 1), t: 1 }({ ts: Timestamp(1567578985, 1), t: 1 }) 2019-09-04T06:36:26.832+0000 D2 COMMAND [conn26] run command admin.$cmd 
{ isMaster: 1, $db: "admin" } 2019-09-04T06:36:26.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:26.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:26.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1747) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:26.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1747 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:36.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:26.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:54.839+0000 2019-09-04T06:36:26.839+0000 D2 ASIO [Replication] Request 1747 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), opTime: { ts: Timestamp(1567578985, 1), t: 1 }, wallTime: new Date(1567578985786), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) } 2019-09-04T06:36:26.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), opTime: { ts: Timestamp(1567578985, 1), t: 1 }, wallTime: new Date(1567578985786), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:26.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:26.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1747) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), opTime: { ts: Timestamp(1567578985, 1), t: 1 }, wallTime: new Date(1567578985786), $replData: { 
term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) } 2019-09-04T06:36:26.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:26.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:36:36.578+0000 2019-09-04T06:36:26.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:36:37.146+0000 2019-09-04T06:36:26.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:26.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:28.839Z 2019-09-04T06:36:26.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:26.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:56.839+0000 2019-09-04T06:36:26.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:56.839+0000 2019-09-04T06:36:26.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:26.841+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1748) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:26.841+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1748 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:36.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:26.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:56.839+0000 2019-09-04T06:36:26.841+0000 D2 ASIO [Replication] Request 1748 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), opTime: { ts: Timestamp(1567578985, 1), t: 1 }, wallTime: new Date(1567578985786), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) } 2019-09-04T06:36:26.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, 
durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), opTime: { ts: Timestamp(1567578985, 1), t: 1 }, wallTime: new Date(1567578985786), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:26.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:26.841+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1748) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), opTime: { ts: Timestamp(1567578985, 1), t: 1 }, wallTime: new Date(1567578985786), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) } 2019-09-04T06:36:26.841+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:26.841+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:28.841Z 2019-09-04T06:36:26.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:56.839+0000 2019-09-04T06:36:26.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:26.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:26.882+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:26.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:26.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:26.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:26.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:26.982+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:27.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:27.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 385053C857959EE4CEE3FC56FC1AEFB1FB08A032), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:27.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:27.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 385053C857959EE4CEE3FC56FC1AEFB1FB08A032), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:27.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 385053C857959EE4CEE3FC56FC1AEFB1FB08A032), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:27.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), opTime: { ts: Timestamp(1567578985, 1), t: 1 }, wallTime: new Date(1567578985786) } 2019-09-04T06:36:27.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, 385053C857959EE4CEE3FC56FC1AEFB1FB08A032), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:27.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:27.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:27.082+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:27.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:27.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:27.163+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:27.163+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:27.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:27.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:27.182+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:27.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:27.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:27.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. 
Num: 0 2019-09-04T06:36:27.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:27.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:27.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:27.282+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:27.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:27.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:27.382+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:27.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:27.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:27.482+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:27.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:27.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:27.583+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:27.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:27.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:27.662+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:27.663+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:27.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:27.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:27.683+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:27.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:27.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:27.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:27.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:27.783+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:27.789+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:27.790+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:27.790+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578985, 1) 2019-09-04T06:36:27.790+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25544 2019-09-04T06:36:27.790+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: 
"local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:27.790+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:27.790+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25544 2019-09-04T06:36:27.791+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25547 2019-09-04T06:36:27.791+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25547 2019-09-04T06:36:27.791+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578985, 1), t: 1 }({ ts: Timestamp(1567578985, 1), t: 1 }) 2019-09-04T06:36:27.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:27.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:27.883+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:27.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:27.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:27.983+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:28.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:28.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:28.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.083+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:28.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:28.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.162+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:28.163+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:28.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.183+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:28.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:28.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:28.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:28.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 875AFEBDECB6B331DE2953BBC1F1D23C11BCD7F6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:28.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:28.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 875AFEBDECB6B331DE2953BBC1F1D23C11BCD7F6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:28.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 875AFEBDECB6B331DE2953BBC1F1D23C11BCD7F6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:28.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), opTime: { ts: Timestamp(1567578985, 1), t: 1 }, wallTime: new Date(1567578985786) } 2019-09-04T06:36:28.236+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 875AFEBDECB6B331DE2953BBC1F1D23C11BCD7F6), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:28.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.283+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:28.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:28.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.384+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:28.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:28.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.484+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:28.564+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:36:28.564+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:36:28.564+0000 D2 COMMAND [conn90] run command admin.$cmd { ismaster: 1, $readPreference: { 
mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:28.564+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:36:28.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:28.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.584+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:28.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:28.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.663+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:28.663+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:28.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.684+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:28.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:28.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.741+0000 D2 COMMAND [conn528] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, D1304EA88BDDB05F2A6149D01308971F2D62D5C7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:28.742+0000 D1 REPL [conn528] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578985, 1), t: 1 } 2019-09-04T06:36:28.742+0000 D3 REPL [conn528] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:58.752+0000 2019-09-04T06:36:28.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:28.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:28.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.784+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:28.790+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:28.790+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:28.790+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578985, 1) 
2019-09-04T06:36:28.790+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25572 2019-09-04T06:36:28.790+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:28.790+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:28.790+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25572 2019-09-04T06:36:28.791+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25575 2019-09-04T06:36:28.791+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25575 2019-09-04T06:36:28.791+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578985, 1), t: 1 }({ ts: Timestamp(1567578985, 1), t: 1 }) 2019-09-04T06:36:28.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:28.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:28.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1749) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:28.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1749 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:38.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:28.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:56.839+0000 2019-09-04T06:36:28.839+0000 D2 ASIO [Replication] Request 1749 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), opTime: { ts: Timestamp(1567578985, 1), t: 1 }, wallTime: new Date(1567578985786), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) } 2019-09-04T06:36:28.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), opTime: { ts: Timestamp(1567578985, 1), t: 1 }, wallTime: new Date(1567578985786), 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:28.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:28.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1749) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), opTime: { ts: Timestamp(1567578985, 1), t: 1 }, wallTime: new Date(1567578985786), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) } 2019-09-04T06:36:28.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:28.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:36:37.146+0000 2019-09-04T06:36:28.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:36:39.319+0000 2019-09-04T06:36:28.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:28.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:28.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:30.839Z 2019-09-04T06:36:28.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:58.839+0000 2019-09-04T06:36:28.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:58.839+0000 2019-09-04T06:36:28.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:28.841+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1750) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:28.841+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1750 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:38.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:28.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:58.839+0000 2019-09-04T06:36:28.841+0000 
D2 ASIO [Replication] Request 1750 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), opTime: { ts: Timestamp(1567578985, 1), t: 1 }, wallTime: new Date(1567578985786), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) } 2019-09-04T06:36:28.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), opTime: { ts: Timestamp(1567578985, 1), t: 1 }, wallTime: new Date(1567578985786), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:28.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:28.841+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1750) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), opTime: { ts: Timestamp(1567578985, 1), t: 1 }, wallTime: new Date(1567578985786), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578985, 1) } 2019-09-04T06:36:28.841+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:28.841+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:30.841Z 2019-09-04T06:36:28.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:58.839+0000 2019-09-04T06:36:28.884+0000 D4 STORAGE 
[WTJournalFlusher] flushed journal 2019-09-04T06:36:28.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:28.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:28.984+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:29.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:29.063+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:29.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:36:28.839+0000 2019-09-04T06:36:29.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:36:28.841+0000 2019-09-04T06:36:29.063+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:36:28.839+0000 2019-09-04T06:36:29.063+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:36:38.839+0000 2019-09-04T06:36:29.063+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:58.839+0000 2019-09-04T06:36:29.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 875AFEBDECB6B331DE2953BBC1F1D23C11BCD7F6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:29.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:29.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 875AFEBDECB6B331DE2953BBC1F1D23C11BCD7F6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:29.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 875AFEBDECB6B331DE2953BBC1F1D23C11BCD7F6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:29.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), opTime: { ts: Timestamp(1567578985, 1), t: 1 }, wallTime: new Date(1567578985786) } 2019-09-04T06:36:29.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 875AFEBDECB6B331DE2953BBC1F1D23C11BCD7F6), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
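
The replSetHeartbeat traffic above is internal to the replication executor, but the membership view it maintains, the member states, sync sources, and the election timeout that each heartbeat from the primary pushes back, can be inspected from any client. A minimal pymongo sketch, reusing the host and port from this log and assuming authorization is still disabled as in the startup options:

from pymongo import MongoClient

# Direct connection to this config server (host/port taken from the log).
client = MongoClient("cmodb803.togewa.com", 27019)

# The same membership view the heartbeats keep up to date; 'syncingTo' is
# the 4.2 field name (later releases call it 'syncSourceHost').
for m in client.admin.command("replSetGetStatus")["members"]:
    print(m["name"], m["stateStr"], m.get("syncingTo", ""))

# The election timeout being rescheduled above is a replica set setting:
cfg = client.admin.command("replSetGetConfig")["config"]
print(cfg["settings"]["electionTimeoutMillis"])
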
2019-09-04T06:36:29.085+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:29.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.162+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.163+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.185+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:29.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:29.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:29.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.285+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:29.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.385+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:29.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.485+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:29.499+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.499+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
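
The isMaster commands arriving on many connections (conn5, conn6, conn13-conn26) at roughly 500 ms intervals are characteristic of driver and mongos topology monitoring rather than application traffic. In pymongo the corresponding poll interval is the heartbeatFrequencyMS option; a sketch, with the 10-second value chosen purely for illustration:

from pymongo import MongoClient

# Slow this client's topology monitor down to one isMaster every 10 s.
client = MongoClient("cmodb803.togewa.com", 27019, heartbeatFrequencyMS=10000)

# The same handshake the monitor threads send, issued by hand:
hello = client.admin.command("isMaster")
print(hello.get("setName"), hello.get("ismaster"), hello.get("secondary"))
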
2019-09-04T06:36:29.585+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:29.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.662+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.663+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.685+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:29.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.785+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:29.790+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:29.790+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:29.790+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578985, 1) 2019-09-04T06:36:29.790+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25602 2019-09-04T06:36:29.790+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:29.790+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:29.790+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25602 2019-09-04T06:36:29.791+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25605 2019-09-04T06:36:29.791+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25605 2019-09-04T06:36:29.791+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578985, 1), t: 1 }({ ts: Timestamp(1567578985, 1), t: 1 }) 
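
The ReplBatcher and rsSync-0 lines show the sync machinery consulting local.oplog.rs and the local.replset.minvalid consistency marker. Both are readable from a client (treat them as strictly read-only). A sketch that fetches the minvalid document and computes the oplog window with the same head/tail $natural probes conn90 issues a few entries below:

from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019)
local = client.local

# The consistency marker the rsSync-0 line above is returning.
print(local["replset.minvalid"].find_one())

# Oldest and newest oplog entries; the 'ts' gap is the replication window.
first = local["oplog.rs"].find().sort("$natural", 1).limit(1)[0]
last = local["oplog.rs"].find().sort("$natural", -1).limit(1)[0]
print("oplog window:", last["ts"].time - first["ts"].time, "seconds")
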
2019-09-04T06:36:29.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.886+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:29.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.978+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.978+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:29.986+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:29.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:29.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:30.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:30.002+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:36:30.002+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:36:30.002+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:36:30.009+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:36:30.009+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:36:30.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:36:30.010+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:36:30.010+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:36:30.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:36:30.012+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:30.012+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:30.016+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 
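
conn90 runs a full SCRAM-SHA-1 conversation, a saslStart plus two saslContinue rounds (payloads redacted as "xxx" in the log), before it starts probing; the serverStatus/replSetGetStatus sequence that follows looks like a monitoring agent. Drivers hide the same handshake behind a connection URI; a sketch with an obviously placeholder password:

from pymongo import MongoClient

uri = ("mongodb://dba_root:CHANGE_ME@cmodb803.togewa.com:27019/"
       "?authSource=admin&authMechanism=SCRAM-SHA-1")
client = MongoClient(uri)

# Two of the probes conn90 issues right after authenticating:
print(client.admin.command("serverStatus")["uptime"])
print(client.config.chunks.count_documents({"jumbo": True}))
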
2019-09-04T06:36:30.016+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:36:30.016+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:36:30.016+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:36:30.016+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:30.016+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:30.016+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:36:30.016+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:36:30.016+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:36:30.016+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:36:30.016+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:36:30.016+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:36:30.016+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:30.017+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:30.017+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:36:30.017+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578985, 1)
2019-09-04T06:36:30.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25617
2019-09-04T06:36:30.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25617
2019-09-04T06:36:30.017+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:30.017+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:36:30.017+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:36:30.017+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:36:30.017+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:30.017+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:36:30.017+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:36:30.017+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:36:30.017+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578985, 1)
2019-09-04T06:36:30.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25620
2019-09-04T06:36:30.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25620
2019-09-04T06:36:30.017+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:30.018+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:36:30.018+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:30.018+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:36:30.018+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:36:30.018+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:36:30.018+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578985, 1)
2019-09-04T06:36:30.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25622
2019-09-04T06:36:30.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25622
2019-09-04T06:36:30.018+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:30.035+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:36:30.035+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist.
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:36:30.035+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:36:30.046+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:30.046+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:36:30.046+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:36:30.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25625 2019-09-04T06:36:30.046+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:36:30.046+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:30.046+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:36:30.046+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:36:30.046+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25625 2019-09-04T06:36:30.046+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:36:30.046+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25626 2019-09-04T06:36:30.046+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:36:30.046+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:30.046+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25626 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25627 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25627 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25628 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25628 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25629 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25629 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25630 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
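
The long run of D3 STORAGE lines here is listDatabases walking every collection's catalog entry (ident, index idents, options) under short-lived WT snapshots. Clients get the same namespace-and-options view through listCollections; a sketch that prints each config-database collection together with the options visible in the CCE metadata above:

from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019)
for info in client.config.list_collections():
    # 'options' carries the capped/size/uuid settings from the catalog.
    print(info["name"], info.get("options", {}))
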
2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25630 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25631 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25631 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25632 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25632 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25633 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25633 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25634 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25634 
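
config.locks and config.lockpings back the distributed lock manager that 4.2-era config servers still run; the state_1_process_1 index above serves lookups by lock state. A read-only peek, assuming the usual 4.2 convention that state 2 marks a held lock:

from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019)
# state 2 = held (4.2 semantics); 'who'/'why' record the taking process.
for lock in client.config.locks.find({"state": 2}):
    print(lock["_id"], lock.get("who"), lock.get("why"))
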
2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25635 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
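
The catalog entry for config.chunks lists three unique indexes (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1) besides _id_, which also explains why the earlier jumbo-chunk count fell back to a COLLSCAN: none of them covers the jumbo field. Verifying the index set from a client:

from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019)
for name, spec in client.config.chunks.index_information().items():
    print(name, spec["key"], spec.get("unique", False))
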
2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25635 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25636 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25636 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25637 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25637 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25638 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25638 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:36:30.047+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25639 2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
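
config.shards is the cluster's shard registry; the unique host_1 index above is what prevents the same host string from being registered twice. Reading the collection directly works on a config server, while the supported route through a mongos is the listShards command. A sketch of both, the mongos address being a placeholder:

from pymongo import MongoClient

# Directly on the config server:
for shard in MongoClient("cmodb803.togewa.com", 27019).config.shards.find():
    print(shard["_id"], shard["host"])

# Via a mongos (address is hypothetical):
# print(MongoClient("mongos.togewa.com", 27017).admin.command("listShards"))
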
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25639
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25640
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25640
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25641
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25641
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25642
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25642
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25643
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25643
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25644
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25644
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25645
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25645
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25646
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25646
2019-09-04T06:36:30.048+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:36:30.048+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25648
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25648
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25649
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25649
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25650
2019-09-04T06:36:30.048+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25650
2019-09-04T06:36:30.049+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:30.049+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25652
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25652
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25653
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25653
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25654
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25654
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25655
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25655
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25656
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25656
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25657
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25657
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25658
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25658
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25659
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25659
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25660
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25660
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25661
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25661
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25662
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25662
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25663
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25663
2019-09-04T06:36:30.049+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:30.049+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25665
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25665
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25666
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25666
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25667
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25667
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25668
2019-09-04T06:36:30.049+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25668
2019-09-04T06:36:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25669
2019-09-04T06:36:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25669
2019-09-04T06:36:30.050+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 25670
2019-09-04T06:36:30.050+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 25670
2019-09-04T06:36:30.050+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:36:30.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.086+0000 D4 STORAGE [WTJournalFlusher] flushed journal
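The conn90 sequence above is a monitoring-style inventory pass: one listDatabases with a secondaryPreferred read preference, then one dbStats per database (admin, config, local), each opening and immediately rolling back short read-only WiredTiger transactions while it sizes collections. A minimal shell re-creation of that probe follows, assuming a mongo shell pointed at this node (cmodb803.togewa.com:27019); the client behind conn90 is not identified in the log, so this is an illustration, not the actual caller:

    // Hypothetical re-creation of the conn90 probe (assumed client; not taken from the log).
    db.getMongo().setReadPref("secondaryPreferred");      // matches $readPreference in the entries above
    const dbs = db.adminCommand({ listDatabases: 1 });    // the op_query listDatabases logged at 1ms
    dbs.databases.forEach(d =>
        printjson(db.getSiblingDB(d.name).runCommand({ dbStats: 1 })));  // one dbStats per database, as logged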
2019-09-04T06:36:30.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.162+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.162+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.186+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:30.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:36:30.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:36:30.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 875AFEBDECB6B331DE2953BBC1F1D23C11BCD7F6), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:30.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:36:30.236+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 875AFEBDECB6B331DE2953BBC1F1D23C11BCD7F6), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:30.236+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 875AFEBDECB6B331DE2953BBC1F1D23C11BCD7F6), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:30.236+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), opTime: { ts: Timestamp(1567578985, 1), t: 1 }, wallTime: new Date(1567578985786) }
2019-09-04T06:36:30.236+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578987, 1), signature: { hash: BinData(0, 875AFEBDECB6B331DE2953BBC1F1D23C11BCD7F6), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.286+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:30.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.386+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:30.464+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.464+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.467+0000 D2 ASIO [RS] Request 1745 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578990, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578990462), o: { $v: 1, $set: { ping: new Date(1567578990462) } } }, { ts: Timestamp(1567578990, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578990463), o: { $v: 1, $set: { ping: new Date(1567578990462) } } }, { ts: Timestamp(1567578990, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578990462), o: { $v: 1, $set: { ping: new Date(1567578990461) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpApplied: { ts: Timestamp(1567578990, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) }
2019-09-04T06:36:30.467+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578990, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578990462), o: { $v: 1, $set: { ping: new Date(1567578990462) } } }, { ts: Timestamp(1567578990, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578990463), o: { $v: 1, $set: { ping: new Date(1567578990462) } } }, { ts: Timestamp(1567578990, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578990462), o: { $v: 1, $set: { ping: new Date(1567578990461) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpApplied: { ts: Timestamp(1567578990, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:30.467+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:30.467+0000 D2 REPL [replication-0] oplog fetcher read 3 operations from remote oplog starting at ts: Timestamp(1567578990, 1) and ending at ts: Timestamp(1567578990, 3)
2019-09-04T06:36:30.467+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:36:39.319+0000
2019-09-04T06:36:30.468+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:36:41.687+0000
2019-09-04T06:36:30.468+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:30.468+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:58.839+0000
2019-09-04T06:36:30.468+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578990, 3), t: 1 }
2019-09-04T06:36:30.468+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:30.468+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:30.468+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578985, 1)
2019-09-04T06:36:30.468+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25685
2019-09-04T06:36:30.468+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:30.468+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:30.468+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25685
2019-09-04T06:36:30.468+0000 D2 REPL [rsSync-0] replication batch size is 3
2019-09-04T06:36:30.468+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:30.468+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:30.468+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578985, 1)
2019-09-04T06:36:30.468+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25688
2019-09-04T06:36:30.468+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578990, 1) }
2019-09-04T06:36:30.468+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:30.468+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:30.468+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25688
2019-09-04T06:36:30.468+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25606
2019-09-04T06:36:30.468+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25606
2019-09-04T06:36:30.468+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25691
2019-09-04T06:36:30.468+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25691
2019-09-04T06:36:30.468+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:36:30.468+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 25693
2019-09-04T06:36:30.468+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578990, 1)
2019-09-04T06:36:30.468+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578990, 1)
2019-09-04T06:36:30.468+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578990, 2)
2019-09-04T06:36:30.468+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578990, 2)
2019-09-04T06:36:30.468+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567578990, 3)
2019-09-04T06:36:30.468+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567578990, 3)
2019-09-04T06:36:30.468+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 25693
2019-09-04T06:36:30.468+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:36:30.468+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:36:30.468+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25692
2019-09-04T06:36:30.468+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25692
2019-09-04T06:36:30.468+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25695
2019-09-04T06:36:30.468+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25695
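This stretch is the secondary's batch-apply pipeline in miniature: the oplog fetcher hands rsSync-0 a three-operation batch, the node records an oplogTruncateAfterPoint at the batch's first optime (Timestamp(1567578990, 1)) so an unclean shutdown mid-batch can be truncated back to a consistent point, repl-writer-worker-13 inserts the three oplog records at their original timestamps, and the truncate point is cleared back to Timestamp(0, 0) once the oplog writes are in. The bookkeeping collections involved can be inspected from a shell; a read-only sketch (the document shapes are inferred from the log, not shown verbatim in it):

    // Illustrative read-only peek at the replication bookkeeping touched by this batch.
    const local = db.getSiblingDB("local");
    printjson(local.replset.oplogTruncateAfterPoint.findOne()); // cleared to Timestamp(0, 0) after the batch
    printjson(local.replset.minvalid.findOne());                // carries the { ts, t } minvalid optime set next
    local.oplog.rs.find({ ns: "config.lockpings" })             // the three lockping updates just copied
         .sort({ $natural: -1 }).limit(3).forEach(printjson);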
2019-09-04T06:36:30.468+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578990, 3), t: 1 }({ ts: Timestamp(1567578990, 3), t: 1 })
2019-09-04T06:36:30.468+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578990, 3)
2019-09-04T06:36:30.468+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25696
2019-09-04T06:36:30.468+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578990, 3) } } ] } sort: {} projection: {}
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578990, 3) Sort: {} Proj: {} =============================
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578990, 3) || First: notFirst: full path: ts
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578990, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578990, 3) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578990, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:30.468+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578990, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:36:30.469+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25696
2019-09-04T06:36:30.469+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:36:30.469+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:30.469+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:36:30.469+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567578990, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567578990462), o: { $v: 1, $set: { ping: new Date(1567578990462) } } }, oplog application mode: Secondary
2019-09-04T06:36:30.469+0000 D3 STORAGE [repl-writer-worker-3] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:30.469+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567578990, 1)
2019-09-04T06:36:30.469+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 25698
2019-09-04T06:36:30.469+0000 D3 REPL [repl-writer-worker-3] applying op: { ts: Timestamp(1567578990, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567578990462), o: { $v: 1, $set: { ping: new Date(1567578990461) } } }, oplog application mode: Secondary
2019-09-04T06:36:30.469+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }
2019-09-04T06:36:30.469+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567578990, 3)
2019-09-04T06:36:30.469+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 25699
2019-09-04T06:36:30.469+0000 D2 QUERY [repl-writer-worker-3] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }
2019-09-04T06:36:30.469+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:36:30.469+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 25698
2019-09-04T06:36:30.469+0000 D4 WRITE [repl-writer-worker-3] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:36:30.469+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 25699
2019-09-04T06:36:30.469+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:36:30.469+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:30.469+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:36:30.469+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567578990, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567578990463), o: { $v: 1, $set: { ping: new Date(1567578990462) } } }, oplog application mode: Secondary
2019-09-04T06:36:30.469+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567578990, 2)
2019-09-04T06:36:30.469+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 25702
2019-09-04T06:36:30.469+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }
2019-09-04T06:36:30.469+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:36:30.469+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 25702
2019-09-04T06:36:30.469+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:36:30.469+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:36:30.469+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578990, 3), t: 1 }({ ts: Timestamp(1567578990, 3), t: 1 })
2019-09-04T06:36:30.469+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578990, 3)
2019-09-04T06:36:30.469+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25697
2019-09-04T06:36:30.469+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:36:30.469+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:30.469+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:36:30.469+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:30.469+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
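The D5 QUERY block is the subplanner handling the minvalid read: the $or from "Running query as sub-queries" is split into its two children, each child is rated against the only index on the collection (_id), no indexed solution exists for either clause, and every plan falls back to a COLLSCAN, which is harmless since local.replset.minvalid holds a single document. The same conclusion can be reproduced with explain; a hypothetical illustration (the NumberLong wrapper for the term field is an assumption about its stored type):

    // Re-running the planner's $or through explain (illustration only).
    db.getSiblingDB("local").replset.minvalid.find({
        $or: [
            { t: NumberLong(1), ts: { $lt: Timestamp(1567578990, 3) } }, // "planning child 0 of 2"
            { t: { $lt: NumberLong(1) } }                                // "planning child 1 of 2"
        ]
    }).explain("queryPlanner");
    // Expect winningPlan.stage === "COLLSCAN", matching "Planner: outputting a collscan" above.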
2019-09-04T06:36:30.469+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:36:30.469+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25697
2019-09-04T06:36:30.469+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578990, 3)
2019-09-04T06:36:30.469+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:30.469+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25705
2019-09-04T06:36:30.469+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25705
2019-09-04T06:36:30.469+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), appliedOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, appliedWallTime: new Date(1567578985786), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), appliedOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, appliedWallTime: new Date(1567578990462), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), appliedOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, appliedWallTime: new Date(1567578985786), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:30.469+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1751 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:00.469+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), appliedOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, appliedWallTime: new Date(1567578985786), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), appliedOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, appliedWallTime: new Date(1567578990462), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), appliedOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, appliedWallTime: new Date(1567578985786), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:30.469+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:00.469+0000
2019-09-04T06:36:30.469+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578990, 3), t: 1 }({ ts: Timestamp(1567578990, 3), t: 1 })
2019-09-04T06:36:30.470+0000 D2 ASIO [RS] Request 1751 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) }
2019-09-04T06:36:30.470+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578990, 3), t: 1 }
2019-09-04T06:36:30.470+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:30.470+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1752 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:40.470+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578985, 1), t: 1 } }
2019-09-04T06:36:30.470+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:30.470+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:00.470+0000
2019-09-04T06:36:30.470+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:00.470+0000
2019-09-04T06:36:30.473+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:36:30.473+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:30.473+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), appliedOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, appliedWallTime: new Date(1567578985786), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), appliedOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, appliedWallTime: new Date(1567578990462), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), appliedOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, appliedWallTime: new Date(1567578985786), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:30.473+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1753 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:00.473+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), appliedOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, appliedWallTime: new Date(1567578985786), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), appliedOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, appliedWallTime: new Date(1567578990462), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, durableWallTime: new Date(1567578985786), appliedOpTime: { ts: Timestamp(1567578985, 1), t: 1 }, appliedWallTime: new Date(1567578985786), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:30.473+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:00.470+0000
2019-09-04T06:36:30.473+0000 D2 ASIO [RS] Request 1753 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) }
2019-09-04T06:36:30.473+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578985, 1), t: 1 }, lastCommittedWall: new Date(1567578985786), lastOpVisible: { ts: Timestamp(1567578985, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578985, 1), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:30.473+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:30.473+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:00.470+0000
2019-09-04T06:36:30.474+0000 D2 ASIO [RS] Request 1752 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpApplied: { ts: Timestamp(1567578990, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) }
2019-09-04T06:36:30.474+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpApplied: { ts: Timestamp(1567578990, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:30.474+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:30.474+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:36:30.474+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.474+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.474+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578985, 3)
2019-09-04T06:36:30.474+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:36:41.687+0000
2019-09-04T06:36:30.474+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:36:41.879+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn537] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn537] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:44.691+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn554] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.474+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1754 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:40.474+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578990, 3), t: 1 } }
2019-09-04T06:36:30.474+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:00.470+0000
2019-09-04T06:36:30.474+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:30.474+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:36:58.839+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn554] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:54.153+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn555] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn555] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:55.060+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn532] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn532] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:31.316+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn545] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn545] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.645+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn521] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn521] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.082+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn528] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn528] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:58.752+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn524] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn524] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:44.708+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn541] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn541] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.319+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn548] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn548] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.661+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn549] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn549] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.674+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn550] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn550] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.754+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn516] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.474+0000 D3 REPL [conn516] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.245+0000
2019-09-04T06:36:30.475+0000 D3 REPL [conn538] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.475+0000 D3 REPL [conn538] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:49.841+0000
2019-09-04T06:36:30.475+0000 D3 REPL [conn552] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.475+0000 D3 REPL [conn552] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:52.054+0000
2019-09-04T06:36:30.475+0000 D3 REPL [conn527] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.475+0000 D3 REPL [conn527] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.139+0000
2019-09-04T06:36:30.475+0000 D3 REPL [conn529] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.475+0000 D3 REPL [conn529] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.079+0000
2019-09-04T06:36:30.475+0000 D3 REPL [conn530] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.475+0000 D3 REPL [conn530] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.223+0000
2019-09-04T06:36:30.475+0000 D3 REPL [conn543] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.475+0000 D3 REPL [conn543] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.226+0000
2019-09-04T06:36:30.475+0000 D3 REPL [conn544] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.475+0000 D3 REPL [conn544] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.128+0000
2019-09-04T06:36:30.475+0000 D3 REPL [conn531] Got notified of new snapshot: { ts: Timestamp(1567578990, 3), t: 1 }, 2019-09-04T06:36:30.462+0000
2019-09-04T06:36:30.475+0000 D3 REPL [conn531] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.661+0000
2019-09-04T06:36:30.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.486+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:30.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.568+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578990, 3)
2019-09-04T06:36:30.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.586+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:30.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.663+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.663+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
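Everything in this stretch is the commit point advancing: the empty getMore batch confirms the sync source has nothing newer, the node moves _lastCommittedOpTimeAndWallTime and the stable optime to { ts: Timestamp(1567578990, 3), t: 1 } and bumps oldest_timestamp, and the long run of "Got notified of new snapshot" lines is parked waitUntilOpTime readers waking against the new majority snapshot. The per-member optimes that the replSetUpdatePosition messages push upstream are the same ones replSetGetStatus reports; a quick sketch for watching them from any member:

    // Illustrative check of the per-member progress carried by replSetUpdatePosition above.
    const status = db.adminCommand({ replSetGetStatus: 1 });
    status.members.forEach(m =>
        print(m.name + " " + m.stateStr + " applied=" + tojson(m.optime)));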
2019-09-04T06:36:30.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.687+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:30.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.743+0000 I NETWORK [listener] connection accepted from 10.108.2.52:47422 #556 (88 connections now open)
2019-09-04T06:36:30.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:36:30.743+0000 D2 COMMAND [conn556] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:36:30.743+0000 I NETWORK [conn556] received client metadata from 10.108.2.52:47422 conn556: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:36:30.743+0000 I COMMAND [conn556] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:36:30.743+0000 D2 COMMAND [conn556] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578982, 1), signature: { hash: BinData(0, 71D77AFD556EDF81B93114236E65E2BB26C765AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:36:30.743+0000 D1 REPL [conn556] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578990, 3), t: 1 }
2019-09-04T06:36:30.743+0000 D3 REPL [conn556] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:00.753+0000
2019-09-04T06:36:30.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:30.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:30.787+0000 D4 STORAGE [WTJournalFlusher] flushed journal
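conn556 is another cluster node (driver "NetworkInterfaceTL", wire versions 7-8) dialing in and immediately reading config.shards with readConcern { level: "majority", afterOpTime: ... }; because the requested optime is from an older term (t: 92), the server parks the read in waitUntilOpTime until the majority snapshot covers it. An external-client equivalent of that read, as a sketch (afterOpTime is an internal extension and is deliberately omitted):

    // Majority read of the shard registry, mirroring the logged find on config.shards.
    db.getSiblingDB("config").runCommand({
        find: "shards",
        readConcern: { level: "majority" },  // the internal client also pins afterOpTime here
        maxTimeMS: 30000
    });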
run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:30.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:30.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:30.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1755) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:30.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1755 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:40.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:30.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:36:58.839+0000 2019-09-04T06:36:30.839+0000 D2 ASIO [Replication] Request 1755 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), opTime: { ts: Timestamp(1567578990, 3), t: 1 }, wallTime: new Date(1567578990462), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) } 2019-09-04T06:36:30.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), opTime: { ts: Timestamp(1567578990, 3), t: 1 }, wallTime: new Date(1567578990462), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:30.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:30.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1755) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), opTime: { ts: Timestamp(1567578990, 3), t: 1 }, wallTime: new 
Date(1567578990462), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) } 2019-09-04T06:36:30.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:30.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:36:41.879+0000 2019-09-04T06:36:30.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:36:42.006+0000 2019-09-04T06:36:30.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:30.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:32.839Z 2019-09-04T06:36:30.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:30.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:00.839+0000 2019-09-04T06:36:30.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:00.839+0000 2019-09-04T06:36:30.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:30.841+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1756) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:30.841+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1756 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:40.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:30.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:00.839+0000 2019-09-04T06:36:30.841+0000 D2 ASIO [Replication] Request 1756 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), opTime: { ts: Timestamp(1567578990, 3), t: 1 }, wallTime: new Date(1567578990462), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) } 2019-09-04T06:36:30.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", 
term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), opTime: { ts: Timestamp(1567578990, 3), t: 1 }, wallTime: new Date(1567578990462), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:30.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:30.841+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1756) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), opTime: { ts: Timestamp(1567578990, 3), t: 1 }, wallTime: new Date(1567578990462), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) } 2019-09-04T06:36:30.841+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:30.841+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:32.841Z 2019-09-04T06:36:30.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:00.839+0000 2019-09-04T06:36:30.887+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:30.964+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:30.964+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:30.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:30.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:30.987+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:30.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:30.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:31.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:31.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, 
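The REPL_HB/ELECTION block above is one full heartbeat round: this node (cmodb803, fromId: 1) pings cmodb802, which answers state: 1 (PRIMARY), and cmodb804, which answers state: 2 (SECONDARY), schedules the next heartbeats two seconds out, and pushes its election timeout forward because the primary is still responding. The same member view is available interactively; a small sketch using the standard shell helper:

    // Sketch: summarize what this node believes about each replica set member.
    rs.status().members.forEach(function (m) {
        print(m.name + "  " + m.stateStr + "  optime=" + tojson(m.optime));
    });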
from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0A0A895804C31C10FCD2128FB57F8F38244AB1A8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:31.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:31.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0A0A895804C31C10FCD2128FB57F8F38244AB1A8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:31.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0A0A895804C31C10FCD2128FB57F8F38244AB1A8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:31.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), opTime: { ts: Timestamp(1567578990, 3), t: 1 }, wallTime: new Date(1567578990462) } 2019-09-04T06:36:31.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0A0A895804C31C10FCD2128FB57F8F38244AB1A8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:31.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:31.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:31.087+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:31.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:31.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:31.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:31.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:31.162+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:31.162+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:31.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:31.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:31.187+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:31.212+0000 D2 COMMAND [conn6] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:36:31.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:31.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:31.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:31.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:31.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:31.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:31.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:31.287+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:31.303+0000 I NETWORK [listener] connection accepted from 10.108.2.62:53660 #557 (89 connections now open) 2019-09-04T06:36:31.303+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:31.303+0000 D2 COMMAND [conn557] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:31.303+0000 I NETWORK [conn557] received client metadata from 10.108.2.62:53660 conn557: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:31.303+0000 I COMMAND [conn557] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:31.317+0000 I COMMAND [conn532] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578952, 1), signature: { hash: BinData(0, 1FAED7B25EC658F7E231F7EAA64E65CC737D0C58), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:31.317+0000 D1 - [conn532] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:31.317+0000 W - [conn532] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:31.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:31.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:31.334+0000 I - [conn532] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"5617
48F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:31.334+0000 D1 COMMAND [conn532] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578952, 1), signature: { hash: BinData(0, 1FAED7B25EC658F7E231F7EAA64E65CC737D0C58), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:31.334+0000 D1 - [conn532] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:31.334+0000 W - [conn532] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:31.354+0000 I - [conn532] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19Servi
ceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : 
"799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:31.354+0000 W COMMAND [conn532] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:31.354+0000 I COMMAND [conn532] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578952, 1), signature: { hash: BinData(0, 1FAED7B25EC658F7E231F7EAA64E65CC737D0C58), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:36:31.354+0000 D2 NETWORK [conn532] Session from 10.108.2.62:53636 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:31.354+0000 I NETWORK [conn532] end connection 10.108.2.62:53636 (88 connections now open) 2019-09-04T06:36:31.387+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:31.468+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:31.468+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:31.468+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578990, 3) 2019-09-04T06:36:31.468+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25738 2019-09-04T06:36:31.468+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:31.468+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:31.468+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25738 2019-09-04T06:36:31.469+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25741 2019-09-04T06:36:31.469+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25741 
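This failure sequence is the heart of the stretch: conn532's find on config.shards dies with MaxTimeMSExpired after 30027 ms because the requested afterOpTime { ts: Timestamp(1566459168, 1), t: 92 } cannot be reached. OpTimes compare by term first, and the earlier D1 REPL line shows the node's snapshot at { ts: Timestamp(1567578990, 3), t: 1 }: term 1 against the requested term 92, so the newer wall-clock timestamp does not help. That pattern is consistent with a client (an internal one, judging by NetworkInterfaceTL) still holding a config opTime from an earlier incarnation of this config server replica set. The two backtraces are diagnostic traces printed when the exception is thrown, not a crash: the first comes from waitForReadConcern, the second from the slow-operation logger failing to take the global lock for storage statistics. The mismatch is easy to confirm interactively; a sketch:

    // Sketch: the opTime this node can actually serve majority reads at.
    // Compare its term (t) with the t: 92 the client keeps asking for.
    print(tojson(rs.status().optimes.lastCommittedOpTime));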
2019-09-04T06:36:31.469+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578990, 3), t: 1 }({ ts: Timestamp(1567578990, 3), t: 1 })
2019-09-04T06:36:31.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:31.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:31.487+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:31.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:31.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:31.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:31.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:31.588+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:31.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:31.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:31.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:31.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:31.662+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:31.663+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:31.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:31.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:31.688+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:31.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:31.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:31.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:31.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:31.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:31.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:31.788+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:31.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:31.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:31.888+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:31.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:31.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:31.988+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:31.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:31.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:32.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:32.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:32.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:32.088+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:32.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:32.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:32.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:32.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:32.162+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:32.163+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:32.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:32.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:32.188+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:32.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:32.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:32.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:36:32.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:36:32.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0A0A895804C31C10FCD2128FB57F8F38244AB1A8), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:32.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:36:32.236+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0A0A895804C31C10FCD2128FB57F8F38244AB1A8), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:32.236+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0A0A895804C31C10FCD2128FB57F8F38244AB1A8), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:32.236+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), opTime: { ts: Timestamp(1567578990, 3), t: 1 }, wallTime: new Date(1567578990462) }
2019-09-04T06:36:32.236+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0A0A895804C31C10FCD2128FB57F8F38244AB1A8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:32.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:32.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:32.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:32.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:32.288+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:32.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:32.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:32.389+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:32.468+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:32.468+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:32.468+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578990, 3)
2019-09-04T06:36:32.468+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25768
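Between client batches, the ReplBatcher and rsSync-0 threads above open and immediately roll back WiredTiger snapshots while polling the oplog, re-reading the catalog metadata for local.oplog.rs each time; the options show a capped collection of 1073741824 bytes (1 GiB). The same catalog facts can be read from the shell; a sketch:

    // Sketch: confirm the oplog's capped-collection settings on this node.
    var s = db.getSiblingDB("local").oplog.rs.stats();
    print("capped=" + s.capped + "  maxSize=" + s.maxSize + " bytes");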
2019-09-04T06:36:32.468+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:32.468+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:32.468+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25768 2019-09-04T06:36:32.470+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25771 2019-09-04T06:36:32.470+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25771 2019-09-04T06:36:32.470+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578990, 3), t: 1 }({ ts: Timestamp(1567578990, 3), t: 1 }) 2019-09-04T06:36:32.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:32.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:32.489+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:32.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:32.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:32.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:32.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:32.589+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:32.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:32.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:32.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:32.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:32.662+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:32.663+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:32.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:32.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:32.689+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:32.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:32.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:32.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:32.749+0000 I COMMAND [conn5] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:32.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:32.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:32.789+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:32.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:32.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:32.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:32.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1757) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:32.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1757 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:42.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:32.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:00.839+0000 2019-09-04T06:36:32.839+0000 D2 ASIO [Replication] Request 1757 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), opTime: { ts: Timestamp(1567578990, 3), t: 1 }, wallTime: new Date(1567578990462), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) } 2019-09-04T06:36:32.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), opTime: { ts: Timestamp(1567578990, 3), t: 1 }, wallTime: new Date(1567578990462), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) } target: cmodb802.togewa.com:27019 
2019-09-04T06:36:32.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:32.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1757) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), opTime: { ts: Timestamp(1567578990, 3), t: 1 }, wallTime: new Date(1567578990462), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) }
2019-09-04T06:36:32.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary
2019-09-04T06:36:32.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:36:42.006+0000
2019-09-04T06:36:32.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:36:42.910+0000
2019-09-04T06:36:32.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:36:32.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:34.839Z
2019-09-04T06:36:32.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:32.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:02.839+0000
2019-09-04T06:36:32.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:02.839+0000
2019-09-04T06:36:32.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:32.841+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1758) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:32.841+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1758 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:42.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:32.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:02.839+0000
2019-09-04T06:36:32.841+0000 D2 ASIO [Replication] Request 1758 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), opTime: { ts: Timestamp(1567578990, 3), t: 1 }, wallTime: new Date(1567578990462), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) }
2019-09-04T06:36:32.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), opTime: { ts: Timestamp(1567578990, 3), t: 1 }, wallTime: new Date(1567578990462), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:32.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:32.841+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1758) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), opTime: { ts: Timestamp(1567578990, 3), t: 1 }, wallTime: new Date(1567578990462), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578990, 3) }
2019-09-04T06:36:32.841+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:36:32.841+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:34.841Z
2019-09-04T06:36:32.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:02.839+0000
2019-09-04T06:36:32.889+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:32.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:32.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:32.989+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:32.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:32.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:33.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0A0A895804C31C10FCD2128FB57F8F38244AB1A8), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:33.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:36:33.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0A0A895804C31C10FCD2128FB57F8F38244AB1A8), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:33.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0A0A895804C31C10FCD2128FB57F8F38244AB1A8), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:33.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), opTime: { ts: Timestamp(1567578990, 3), t: 1 }, wallTime: new Date(1567578990462) }
2019-09-04T06:36:33.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578990, 3), signature: { hash: BinData(0, 0A0A895804C31C10FCD2128FB57F8F38244AB1A8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.090+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:33.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.163+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.163+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.190+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:33.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:36:33.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:36:33.249+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.249+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.290+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:33.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.390+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:33.469+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:33.469+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:33.469+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578990, 3)
2019-09-04T06:36:33.469+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25798
2019-09-04T06:36:33.469+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:33.469+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:33.469+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25798
2019-09-04T06:36:33.470+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25801
2019-09-04T06:36:33.470+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25801
2019-09-04T06:36:33.470+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578990, 3), t: 1 }({ ts: Timestamp(1567578990, 3), t: 1 })
2019-09-04T06:36:33.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.490+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:33.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.499+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.524+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" }
2019-09-04T06:36:33.524+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:36:33.524+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:36:33.524+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms
2019-09-04T06:36:33.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.590+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:33.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.636+0000 D2 ASIO [RS] Request 1754 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578993, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578993634), o: { $v: 1, $set: { ping: new Date(1567578993628) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpApplied: { ts: Timestamp(1567578993, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) }
2019-09-04T06:36:33.636+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578993, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578993634), o: { $v: 1, $set: { ping: new Date(1567578993628) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpApplied: { ts: Timestamp(1567578993, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:33.636+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:33.636+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578993, 1) and ending at ts: Timestamp(1567578993, 1)
2019-09-04T06:36:33.636+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:36:42.910+0000
2019-09-04T06:36:33.636+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:36:43.979+0000
2019-09-04T06:36:33.636+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:33.636+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:02.839+0000
2019-09-04T06:36:33.636+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578993, 1), t: 1 }
2019-09-04T06:36:33.636+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:33.636+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:33.636+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578990, 3)
2019-09-04T06:36:33.636+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25810
2019-09-04T06:36:33.636+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:33.636+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:33.636+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25810
2019-09-04T06:36:33.636+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:33.636+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:33.636+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:36:33.636+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578990, 3)
2019-09-04T06:36:33.636+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25813
2019-09-04T06:36:33.636+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:33.636+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:33.636+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578993, 1) }
2019-09-04T06:36:33.636+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25813
2019-09-04T06:36:33.636+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25802
2019-09-04T06:36:33.636+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25802
2019-09-04T06:36:33.636+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25816
2019-09-04T06:36:33.636+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25816
2019-09-04T06:36:33.636+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:36:33.636+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 25818
2019-09-04T06:36:33.637+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567578993, 1)
2019-09-04T06:36:33.637+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567578993, 1)
2019-09-04T06:36:33.637+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 25818
2019-09-04T06:36:33.637+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:36:33.637+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:36:33.637+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25817
2019-09-04T06:36:33.637+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25817
2019-09-04T06:36:33.637+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25820
2019-09-04T06:36:33.637+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25820
2019-09-04T06:36:33.637+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578993, 1), t: 1 }({ ts: Timestamp(1567578993, 1), t: 1 })
2019-09-04T06:36:33.637+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578993, 1)
2019-09-04T06:36:33.637+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25821
2019-09-04T06:36:33.637+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578993, 1) } } ] } sort: {} projection: {}
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578993, 1) Sort: {} Proj: {} =============================
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578993, 1) || First: notFirst: full path: ts
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578993, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578993, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578993, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:33.637+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578993, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:36:33.637+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25821
2019-09-04T06:36:33.637+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:36:33.637+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:33.637+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567578993, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567578993634), o: { $v: 1, $set: { ping: new Date(1567578993628) } } }, oplog application mode: Secondary
2019-09-04T06:36:33.637+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567578993, 1)
2019-09-04T06:36:33.637+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 25823
2019-09-04T06:36:33.637+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }
2019-09-04T06:36:33.637+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:36:33.637+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 25823
2019-09-04T06:36:33.637+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:36:33.637+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578993, 1), t: 1 }({ ts: Timestamp(1567578993, 1), t: 1 })
2019-09-04T06:36:33.637+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578993, 1)
2019-09-04T06:36:33.637+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25822
2019-09-04T06:36:33.638+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:36:33.638+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:33.638+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:36:33.638+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:33.638+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:36:33.638+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:36:33.638+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25822
2019-09-04T06:36:33.638+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578993, 1)
2019-09-04T06:36:33.638+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25826
2019-09-04T06:36:33.638+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25826
2019-09-04T06:36:33.638+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578993, 1), t: 1 }({ ts: Timestamp(1567578993, 1), t: 1 })
2019-09-04T06:36:33.638+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:33.638+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), appliedOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, appliedWallTime: new Date(1567578990462), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), appliedOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, appliedWallTime: new Date(1567578990462), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:33.638+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1759 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:03.638+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), appliedOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, appliedWallTime: new Date(1567578990462), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), appliedOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, appliedWallTime: new Date(1567578990462), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:33.638+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:03.638+0000
2019-09-04T06:36:33.638+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578993, 1), t: 1 }
2019-09-04T06:36:33.638+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1760 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:43.638+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578990, 3), t: 1 } }
2019-09-04T06:36:33.638+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:03.638+0000
2019-09-04T06:36:33.638+0000 D2 ASIO [RS] Request 1759 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) }
2019-09-04T06:36:33.638+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578990, 3), t: 1 }, lastCommittedWall: new Date(1567578990462), lastOpVisible: { ts: Timestamp(1567578990, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578990, 3), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:33.638+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:33.638+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:03.638+0000
2019-09-04T06:36:33.639+0000 D2 ASIO [RS] Request 1760 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpApplied: { ts: Timestamp(1567578993, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) }
2019-09-04T06:36:33.639+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpApplied: { ts: Timestamp(1567578993, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:33.639+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:33.639+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:36:33.639+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.639+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578988, 1)
2019-09-04T06:36:33.640+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:36:43.979+0000
2019-09-04T06:36:33.640+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:36:43.772+0000
2019-09-04T06:36:33.640+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:33.640+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1761 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:43.640+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578993, 1), t: 1 } }
2019-09-04T06:36:33.640+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:02.839+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn555] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn555] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:55.060+0000
2019-09-04T06:36:33.640+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:03.638+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn545] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn545] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.645+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn528] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn528] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:58.752+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn548] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn548] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.661+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn550] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn550] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.754+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn516] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn516] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.245+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn544] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn544] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.128+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn529] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn529] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.079+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn543] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn543] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.226+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn531] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn531] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.661+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn556] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn556] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:00.753+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn537] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn537] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:44.691+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn554] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn554] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:54.153+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn521] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn521] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.082+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn524] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn524] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:44.708+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn541] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn541] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.319+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn549] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn549] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.674+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn538] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn538] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:49.841+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn527] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn527] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.139+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn530] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn530] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.223+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn552] Got notified of new snapshot: { ts: Timestamp(1567578993, 1), t: 1 }, 2019-09-04T06:36:33.634+0000
2019-09-04T06:36:33.640+0000 D3 REPL [conn552] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:52.054+0000
2019-09-04T06:36:33.645+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:36:33.645+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:33.645+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), appliedOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, appliedWallTime: new Date(1567578990462), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), appliedOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, appliedWallTime: new Date(1567578990462), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:33.645+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1762 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:03.645+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), appliedOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, appliedWallTime: new Date(1567578990462), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, durableWallTime: new Date(1567578990462), appliedOpTime: { ts: Timestamp(1567578990, 3), t: 1 }, appliedWallTime: new Date(1567578990462), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:33.645+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:03.638+0000
2019-09-04T06:36:33.645+0000 D2 ASIO [RS] Request 1762 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) }
2019-09-04T06:36:33.645+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:33.645+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:33.645+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:03.638+0000
2019-09-04T06:36:33.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.663+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.663+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.691+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:33.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.736+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578993, 1)
2019-09-04T06:36:33.749+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.749+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.791+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:33.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.891+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:33.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:33.991+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:33.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:33.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:34.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:34.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:34.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:34.091+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:34.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:34.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:34.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:34.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:34.162+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:34.163+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:34.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:34.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:34.191+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:34.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:34.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:34.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:36:34.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:36:34.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 8F42EEE89C062D34F1BD42616C943DF1BD65471E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:34.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:36:34.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 8F42EEE89C062D34F1BD42616C943DF1BD65471E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:34.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 8F42EEE89C062D34F1BD42616C943DF1BD65471E), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:34.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), opTime: { ts: Timestamp(1567578993, 1), t: 1 }, wallTime: new Date(1567578993634) }
2019-09-04T06:36:34.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 8F42EEE89C062D34F1BD42616C943DF1BD65471E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:34.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:34.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:34.291+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:34.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:34.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:34.391+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:34.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:34.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:34.492+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:34.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:34.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:34.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:34.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:34.592+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:34.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:34.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:34.636+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:34.636+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:34.636+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578993, 1) 2019-09-04T06:36:34.636+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25854 2019-09-04T06:36:34.637+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:34.637+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:34.637+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25854 2019-09-04T06:36:34.638+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25857 2019-09-04T06:36:34.638+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25857 2019-09-04T06:36:34.638+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578993, 1), t: 1 }({ ts: Timestamp(1567578993, 1), t: 1 }) 2019-09-04T06:36:34.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:34.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:34.662+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:34.663+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:34.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:34.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:34.692+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:34.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:34.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:34.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:34.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:34.792+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:34.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 
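
The once-per-second ReplBatcher/rsSync-0 block above (read source 4, a short-lived WiredTiger transaction, and a metadata fetch for local.oplog.rs) is the idle secondary re-checking its oplog tail; note the capped-collection options it prints (capped: true, size: 1073741824.0, i.e. a 1 GiB oplog). The same metadata and tail entry can be inspected directly; a hedged example, assuming direct pymongo access to this node:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    local = client.local

    stats = local.command("collStats", "oplog.rs")
    print(stats["capped"], stats["maxSize"])       # True 1073741824, matching the CCE metadata

    # Newest oplog entry -- the tail the ReplBatcher keeps re-reading.
    for doc in local["oplog.rs"].find().sort("$natural", -1).limit(1):
        print(doc["ts"], doc.get("op"), doc.get("ns"))
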
2019-09-04T06:36:34.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:34.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:34.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1763) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:34.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1763 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:44.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:34.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:02.839+0000 2019-09-04T06:36:34.839+0000 D2 ASIO [Replication] Request 1763 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), opTime: { ts: Timestamp(1567578993, 1), t: 1 }, wallTime: new Date(1567578993634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) } 2019-09-04T06:36:34.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), opTime: { ts: Timestamp(1567578993, 1), t: 1 }, wallTime: new Date(1567578993634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:34.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:34.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1763) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), opTime: { ts: Timestamp(1567578993, 1), t: 1 }, wallTime: new Date(1567578993634), $replData: { term: 1, lastOpCommitted: { ts: 
Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) } 2019-09-04T06:36:34.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:34.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:36:43.772+0000 2019-09-04T06:36:34.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:36:45.396+0000 2019-09-04T06:36:34.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:34.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:36.839Z 2019-09-04T06:36:34.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:34.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:04.839+0000 2019-09-04T06:36:34.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:04.839+0000 2019-09-04T06:36:34.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:34.841+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1764) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:34.841+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1764 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:44.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:34.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:04.839+0000 2019-09-04T06:36:34.841+0000 D2 ASIO [Replication] Request 1764 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), opTime: { ts: Timestamp(1567578993, 1), t: 1 }, wallTime: new Date(1567578993634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) } 2019-09-04T06:36:34.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578993, 
1), t: 1 }, durableWallTime: new Date(1567578993634), opTime: { ts: Timestamp(1567578993, 1), t: 1 }, wallTime: new Date(1567578993634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:34.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:34.841+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1764) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), opTime: { ts: Timestamp(1567578993, 1), t: 1 }, wallTime: new Date(1567578993634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) } 2019-09-04T06:36:34.841+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:34.841+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:36.841Z 2019-09-04T06:36:34.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:04.839+0000 2019-09-04T06:36:34.892+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:34.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:34.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:34.992+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:34.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:34.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:35.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 8F42EEE89C062D34F1BD42616C943DF1BD65471E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:35.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 
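
The ELECTION entries at 06:36:34.839 show why this secondary never stands for election while the primary is healthy: each good heartbeat response from cmodb802 (MemberId(0), state: 1) cancels the pending election-timeout callback and reschedules it about 10.5 s out. That gap is the default electionTimeoutMillis of 10 s plus a small per-node randomized offset; checking the arithmetic with the two timestamps copied from the log:

    from datetime import datetime

    fmt = "%Y-%m-%dT%H:%M:%S.%f"
    heartbeat    = datetime.strptime("2019-09-04T06:36:34.839", fmt)  # response from primary
    new_deadline = datetime.strptime("2019-09-04T06:36:45.396", fmt)  # rescheduled callback

    delay = (new_deadline - heartbeat).total_seconds()
    print(delay)         # 10.557 -> 10 s base election timeout...
    print(delay - 10.0)  # ...plus ~0.557 s of randomized offset
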
2019-09-04T06:36:35.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 8F42EEE89C062D34F1BD42616C943DF1BD65471E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:35.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 8F42EEE89C062D34F1BD42616C943DF1BD65471E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:35.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), opTime: { ts: Timestamp(1567578993, 1), t: 1 }, wallTime: new Date(1567578993634) } 2019-09-04T06:36:35.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 8F42EEE89C062D34F1BD42616C943DF1BD65471E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.092+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:35.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.162+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.163+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.193+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:35.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. 
Num: 0 2019-09-04T06:36:35.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:35.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.293+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:35.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.393+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:35.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.493+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:35.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.593+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:35.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.637+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:35.637+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:35.637+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578993, 1) 2019-09-04T06:36:35.637+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25883 2019-09-04T06:36:35.637+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:35.637+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:35.637+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25883 2019-09-04T06:36:35.638+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25886 2019-09-04T06:36:35.638+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25886 2019-09-04T06:36:35.638+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578993, 1), t: 1 }({ ts: Timestamp(1567578993, 1), t: 1 }) 2019-09-04T06:36:35.652+0000 D2 COMMAND [conn23] run command admin.$cmd { 
isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.662+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.662+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.693+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:35.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.793+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:35.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.893+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:35.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:35.994+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:35.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:35.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:36.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.094+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:36.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.119+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.119+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { 
isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.162+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.163+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.194+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:36.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:36.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:36.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 8F42EEE89C062D34F1BD42616C943DF1BD65471E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:36.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:36.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 8F42EEE89C062D34F1BD42616C943DF1BD65471E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:36.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 8F42EEE89C062D34F1BD42616C943DF1BD65471E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:36.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), opTime: { ts: Timestamp(1567578993, 1), t: 1 }, wallTime: new Date(1567578993634) } 2019-09-04T06:36:36.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 8F42EEE89C062D34F1BD42616C943DF1BD65471E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
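
Heartbeats here come in matched pairs: an outbound "Sending heartbeat (requestId: N)" on a replexec thread and, within the same millisecond in this section, a "Received response to heartbeat (requestId: N)". When working through a verbose log like this one, pairing the two by requestId gives per-target round-trip times; a hypothetical parser, with regexes written against the exact line shapes above:

    import re
    from datetime import datetime

    SEND = re.compile(r"^(\S+) D2 REPL_HB \S+ Sending heartbeat \(requestId: (\d+)\) to (\S+),")
    RECV = re.compile(r"^(\S+) D2 REPL_HB \S+ Received response to heartbeat \(requestId: (\d+)\) from")

    def ts(s):  # "2019-09-04T06:36:36.839+0000" -> datetime (UTC offset dropped)
        return datetime.strptime(s[:23], "%Y-%m-%dT%H:%M:%S.%f")

    def heartbeat_rtts(lines):
        pending = {}  # requestId -> (send time, target host)
        for line in lines:
            if m := SEND.match(line):
                pending[m.group(2)] = (ts(m.group(1)), m.group(3))
            elif (m := RECV.match(line)) and m.group(2) in pending:
                t0, target = pending.pop(m.group(2))
                yield m.group(2), target, (ts(m.group(1)) - t0).total_seconds()

    # e.g. requestIds 1763-1766 above all round-trip within the same millisecond.
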
2019-09-04T06:36:36.294+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:36.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.332+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.332+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.394+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:36.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.494+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:36.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.594+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:36.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.637+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:36.637+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:36.637+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578993, 1) 2019-09-04T06:36:36.637+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25915 2019-09-04T06:36:36.637+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:36.637+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:36.637+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 
25915 2019-09-04T06:36:36.638+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25918 2019-09-04T06:36:36.638+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25918 2019-09-04T06:36:36.638+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578993, 1), t: 1 }({ ts: Timestamp(1567578993, 1), t: 1 }) 2019-09-04T06:36:36.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.662+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.663+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.694+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:36.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.795+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:36.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.829+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.832+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.832+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:36.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1765) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:36.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1765 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:46.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:36.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:04.839+0000 2019-09-04T06:36:36.839+0000 D2 ASIO [Replication] Request 1765 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), opTime: { ts: Timestamp(1567578993, 1), t: 1 }, wallTime: new Date(1567578993634), $replData: { term: 1, 
lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) } 2019-09-04T06:36:36.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), opTime: { ts: Timestamp(1567578993, 1), t: 1 }, wallTime: new Date(1567578993634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:36.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:36.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1765) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), opTime: { ts: Timestamp(1567578993, 1), t: 1 }, wallTime: new Date(1567578993634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) } 2019-09-04T06:36:36.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:36.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:36:45.396+0000 2019-09-04T06:36:36.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:36:47.340+0000 2019-09-04T06:36:36.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:36.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:38.839Z 2019-09-04T06:36:36.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:36.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the 
earliest retirement date is 2019-09-04T06:37:06.839+0000 2019-09-04T06:36:36.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:06.839+0000 2019-09-04T06:36:36.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:36.841+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1766) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:36.841+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1766 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:46.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:36.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:06.839+0000 2019-09-04T06:36:36.841+0000 D2 ASIO [Replication] Request 1766 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), opTime: { ts: Timestamp(1567578993, 1), t: 1 }, wallTime: new Date(1567578993634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) } 2019-09-04T06:36:36.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), opTime: { ts: Timestamp(1567578993, 1), t: 1 }, wallTime: new Date(1567578993634), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:36.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:36.841+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1766) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), opTime: { ts: Timestamp(1567578993, 1), t: 1 }, wallTime: new Date(1567578993634), $replData: { term: 1, lastOpCommitted: 
{ ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578993, 1) } 2019-09-04T06:36:36.841+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:36.841+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:38.841Z 2019-09-04T06:36:36.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:06.839+0000 2019-09-04T06:36:36.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.895+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:36.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:36.995+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:36.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:36.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:37.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 8F42EEE89C062D34F1BD42616C943DF1BD65471E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:37.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:37.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 8F42EEE89C062D34F1BD42616C943DF1BD65471E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:37.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 8F42EEE89C062D34F1BD42616C943DF1BD65471E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:37.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 
}, durableWallTime: new Date(1567578993634), opTime: { ts: Timestamp(1567578993, 1), t: 1 }, wallTime: new Date(1567578993634) } 2019-09-04T06:36:37.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578993, 1), signature: { hash: BinData(0, 8F42EEE89C062D34F1BD42616C943DF1BD65471E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.095+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:37.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.162+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.162+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.195+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:37.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:37.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:37.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.295+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:37.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.329+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.395+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:37.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.495+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:37.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.596+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:37.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.637+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:37.637+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:37.637+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578993, 1) 2019-09-04T06:36:37.637+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25948 2019-09-04T06:36:37.637+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:37.637+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:37.637+0000 D3 STORAGE [ReplBatcher] WT 
rollback_transaction for snapshot id 25948 2019-09-04T06:36:37.638+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25951 2019-09-04T06:36:37.638+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25951 2019-09-04T06:36:37.638+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578993, 1), t: 1 }({ ts: Timestamp(1567578993, 1), t: 1 }) 2019-09-04T06:36:37.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.662+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.663+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.696+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:37.712+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.712+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.784+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.784+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.796+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:37.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.829+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.842+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38924 #558 (89 connections now open) 2019-09-04T06:36:37.842+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:37.842+0000 D2 COMMAND [conn558] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:37.842+0000 I NETWORK [conn558] received client metadata from 10.108.2.44:38924 conn558: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:37.842+0000 I COMMAND [conn558] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: 
"NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:37.842+0000 D2 COMMAND [conn558] run command config.$cmd { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578988, 1), signature: { hash: BinData(0, D80E87FECF943C28EA6B59DE3744F95F3A830FED), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:37.842+0000 D1 REPL [conn558] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578993, 1), t: 1 } 2019-09-04T06:36:37.842+0000 D3 REPL [conn558] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:07.852+0000 2019-09-04T06:36:37.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.896+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:37.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:37.996+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:37.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:37.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:38.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.096+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:38.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.162+0000 D2 
COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.163+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.196+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:38.212+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.212+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:38.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:38.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578996, 1), signature: { hash: BinData(0, B361C78B89EC6925BA8888CA3FCEC6CFBA750A77), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:38.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:38.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578996, 1), signature: { hash: BinData(0, B361C78B89EC6925BA8888CA3FCEC6CFBA750A77), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:38.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578996, 1), signature: { hash: BinData(0, B361C78B89EC6925BA8888CA3FCEC6CFBA750A77), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:38.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), opTime: { ts: Timestamp(1567578993, 1), t: 1 }, wallTime: new Date(1567578993634) } 2019-09-04T06:36:38.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578996, 1), signature: { hash: BinData(0, B361C78B89EC6925BA8888CA3FCEC6CFBA750A77), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 
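[editor's note] The steady drumbeat of isMaster round-trips above (one per connection roughly every 500 ms) is standard driver/member liveness traffic, and the replSetHeartbeat exchange with cmodb804.togewa.com:27019 is the replica-set heartbeat for "configrs". A minimal pymongo sketch that issues the same admin.$cmd handshake against this node; host and port are taken from the log, and it assumes network access with authorization disabled, as in this deployment:

```python
# Sketch: send the same isMaster handshake the drivers in this log send.
# Host/port come from the log above; directConnection (pymongo >= 3.11)
# skips replica-set discovery so we talk to this member only.
from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
reply = client.admin.command("isMaster")        # same command as admin.$cmd above
print(reply["ismaster"], reply["secondary"])    # this node's current role
print(reply.get("setName"), reply.get("me"))    # configrs membership info
```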
2019-09-04T06:36:38.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.296+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:38.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.396+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:38.451+0000 D2 ASIO [RS] Request 1761 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578998, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578998438), o: { $v: 1, $set: { ping: new Date(1567578998438) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpApplied: { ts: Timestamp(1567578998, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578998, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 1) } 2019-09-04T06:36:38.451+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578998, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578998438), o: { $v: 1, $set: { ping: new Date(1567578998438) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpApplied: { ts: Timestamp(1567578998, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578998, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:38.451+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:36:38.451+0000 D2 REPL 
[replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578998, 1) and ending at ts: Timestamp(1567578998, 1) 2019-09-04T06:36:38.451+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:36:47.340+0000 2019-09-04T06:36:38.451+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:36:49.126+0000 2019-09-04T06:36:38.451+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:38.451+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:06.839+0000 2019-09-04T06:36:38.451+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578998, 1), t: 1 } 2019-09-04T06:36:38.451+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:38.451+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:38.451+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578993, 1) 2019-09-04T06:36:38.451+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25980 2019-09-04T06:36:38.451+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:38.451+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:38.451+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25980 2019-09-04T06:36:38.451+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:38.451+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:36:38.452+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:38.452+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578993, 1) 2019-09-04T06:36:38.452+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578998, 1) } 2019-09-04T06:36:38.452+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 25983 2019-09-04T06:36:38.452+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:38.452+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:38.452+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 25983 2019-09-04T06:36:38.452+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25952 2019-09-04T06:36:38.452+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25952 2019-09-04T06:36:38.452+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25986 2019-09-04T06:36:38.452+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25986 
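[editor's note] Each "oplog fetcher read N operations" entry above is the result of a getMore on a tailable, awaitData cursor over local.oplog.rs on the sync source. A hedged client-side sketch of the same tailing pattern (the Timestamp is copied from the log's _lastOpTimeFetched; this is the analogue of the fetcher, not the server's internal code):

```python
# Sketch: tail local.oplog.rs the way the oplog fetcher above does
# (initial find, then awaitData getMores that block briefly for new ops).
from pymongo import MongoClient, CursorType
from bson.timestamp import Timestamp

client = MongoClient("cmodb804.togewa.com", 27019, directConnection=True)
oplog = client.local["oplog.rs"]

last_fetched = Timestamp(1567578998, 1)          # _lastOpTimeFetched from the log
cursor = oplog.find(
    {"ts": {"$gt": last_fetched}},
    cursor_type=CursorType.TAILABLE_AWAIT,       # server waits for new entries
    oplog_replay=True,                           # ts-based seek hint (<= 4.2)
)
for op in cursor:
    print(op["ts"], op["op"], op["ns"])          # e.g. Timestamp(...) u config.lockpings
```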
2019-09-04T06:36:38.452+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:36:38.452+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 25988 2019-09-04T06:36:38.452+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567578998, 1) 2019-09-04T06:36:38.452+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567578998, 1) 2019-09-04T06:36:38.452+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 25988 2019-09-04T06:36:38.452+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:36:38.452+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:36:38.452+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25987 2019-09-04T06:36:38.452+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25987 2019-09-04T06:36:38.452+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25990 2019-09-04T06:36:38.452+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25990 2019-09-04T06:36:38.452+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578998, 1), t: 1 }({ ts: Timestamp(1567578998, 1), t: 1 }) 2019-09-04T06:36:38.452+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578998, 1) 2019-09-04T06:36:38.452+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25991 2019-09-04T06:36:38.452+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578998, 1) } } ] } sort: {} projection: {} 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578998, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578998, 1) || First: notFirst: full path: ts 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578998, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578998, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578998, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
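[editor's note] The D5 QUERY lines above are the subplanner evaluating the minvalid bookkeeping filter { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: ... } } ] } branch by branch; with only the _id index on local.replset.minvalid, every branch rates zero indexed solutions and falls back to a COLLSCAN, which is harmless on a single-document collection. The same winning plan can be confirmed from a client, for example:

```python
# Sketch: reproduce the minvalid $or query from the planner trace above
# and ask the server for its winning plan; expect COLLSCAN, as in the log.
from pymongo import MongoClient
from bson.timestamp import Timestamp

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
minvalid = client.local["replset.minvalid"]

query = {"$or": [{"t": {"$lt": 1}},
                 {"t": 1, "ts": {"$lt": Timestamp(1567578998, 1)}}]}
plan = minvalid.find(query).explain()
print(plan["queryPlanner"]["winningPlan"]["stage"])   # expected: "COLLSCAN"
```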
2019-09-04T06:36:38.452+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578998, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:38.452+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25991 2019-09-04T06:36:38.452+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:36:38.452+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:38.452+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567578998, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567578998438), o: { $v: 1, $set: { ping: new Date(1567578998438) } } }, oplog application mode: Secondary 2019-09-04T06:36:38.452+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567578998, 1) 2019-09-04T06:36:38.452+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 25993 2019-09-04T06:36:38.452+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "ConfigServer" } 2019-09-04T06:36:38.452+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:36:38.452+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 25993 2019-09-04T06:36:38.452+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:36:38.452+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578998, 1), t: 1 }({ ts: Timestamp(1567578998, 1), t: 1 }) 2019-09-04T06:36:38.452+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578998, 1) 2019-09-04T06:36:38.452+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25992 2019-09-04T06:36:38.453+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:36:38.453+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:38.453+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:36:38.453+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:38.453+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:38.453+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:36:38.453+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25992 2019-09-04T06:36:38.453+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578998, 1) 2019-09-04T06:36:38.453+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25996 2019-09-04T06:36:38.453+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 25996 2019-09-04T06:36:38.453+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578998, 1), t: 1 }({ ts: Timestamp(1567578998, 1), t: 1 }) 2019-09-04T06:36:38.453+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:38.453+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578998, 1), t: 1 }, appliedWallTime: new Date(1567578998438), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:38.453+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1767 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:08.453+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578998, 1), t: 1 }, appliedWallTime: new Date(1567578998438), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:38.453+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.453+0000 2019-09-04T06:36:38.453+0000 D2 ASIO [RS] Request 1767 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578998, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 1) } 2019-09-04T06:36:38.453+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578993, 1), t: 1 }, lastCommittedWall: new Date(1567578993634), lastOpVisible: { ts: Timestamp(1567578993, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578993, 1), $clusterTime: { clusterTime: Timestamp(1567578998, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:38.453+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:38.453+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.453+0000 2019-09-04T06:36:38.453+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578998, 1), t: 1 } 2019-09-04T06:36:38.453+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1768 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:48.453+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578993, 1), t: 1 } } 2019-09-04T06:36:38.453+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.453+0000 2019-09-04T06:36:38.455+0000 D2 ASIO [RS] Request 1768 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 1), t: 1 }, lastCommittedWall: new Date(1567578998438), lastOpVisible: { ts: Timestamp(1567578998, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578998, 1), t: 1 }, lastCommittedWall: new Date(1567578998438), lastOpApplied: { ts: Timestamp(1567578998, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 1), $clusterTime: { clusterTime: Timestamp(1567578998, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 1) } 2019-09-04T06:36:38.455+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 1), t: 1 }, lastCommittedWall: new Date(1567578998438), lastOpVisible: { ts: Timestamp(1567578998, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578998, 1), t: 1 }, lastCommittedWall: new 
Date(1567578998438), lastOpApplied: { ts: Timestamp(1567578998, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 1), $clusterTime: { clusterTime: Timestamp(1567578998, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:38.455+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:38.455+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:36:38.455+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.455+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567578993, 1) 2019-09-04T06:36:38.456+0000 D3 REPL [conn530] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn530] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.223+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn550] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn550] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.754+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn516] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn516] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.245+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn554] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn554] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:54.153+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn531] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn531] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.661+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn537] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn537] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:44.691+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn521] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn521] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.082+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn541] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn541] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.319+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn538] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn538] waitUntilOpTime: waiting for a new snapshot until 
2019-09-04T06:36:49.841+0000 2019-09-04T06:36:38.456+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:36:49.126+0000 2019-09-04T06:36:38.456+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:36:48.479+0000 2019-09-04T06:36:38.456+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:38.456+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:06.839+0000 2019-09-04T06:36:38.456+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1769 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:48.456+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578998, 1), t: 1 } } 2019-09-04T06:36:38.456+0000 D3 REPL [conn548] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn548] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.661+0000 2019-09-04T06:36:38.456+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.453+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn545] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn545] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.645+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn555] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn555] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:55.060+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn544] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn544] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.128+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn543] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn543] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.226+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn556] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn556] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:00.753+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn529] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn529] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.079+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn524] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn524] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:44.708+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn549] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn549] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.674+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn527] Got notified of new snapshot: { ts: 
Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn527] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.139+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn552] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn552] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:52.054+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn558] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn558] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:07.852+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn528] Got notified of new snapshot: { ts: Timestamp(1567578998, 1), t: 1 }, 2019-09-04T06:36:38.438+0000 2019-09-04T06:36:38.456+0000 D3 REPL [conn528] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:58.752+0000 2019-09-04T06:36:38.457+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:36:38.457+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:36:38.457+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 1), t: 1 }, durableWallTime: new Date(1567578998438), appliedOpTime: { ts: Timestamp(1567578998, 1), t: 1 }, appliedWallTime: new Date(1567578998438), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 1), t: 1 }, lastCommittedWall: new Date(1567578998438), lastOpVisible: { ts: Timestamp(1567578998, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:38.457+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1770 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:08.457+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 1), t: 1 }, durableWallTime: new Date(1567578998438), appliedOpTime: { ts: Timestamp(1567578998, 1), t: 1 }, appliedWallTime: new Date(1567578998438), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 1), t: 1 }, lastCommittedWall: new Date(1567578998438), lastOpVisible: { ts: Timestamp(1567578998, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:38.457+0000 
D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.453+0000 2019-09-04T06:36:38.457+0000 D2 ASIO [RS] Request 1770 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 1), t: 1 }, lastCommittedWall: new Date(1567578998438), lastOpVisible: { ts: Timestamp(1567578998, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 1), $clusterTime: { clusterTime: Timestamp(1567578998, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 1) } 2019-09-04T06:36:38.457+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 1), t: 1 }, lastCommittedWall: new Date(1567578998438), lastOpVisible: { ts: Timestamp(1567578998, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 1), $clusterTime: { clusterTime: Timestamp(1567578998, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:38.457+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:38.457+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.453+0000 2019-09-04T06:36:38.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.497+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:38.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.551+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578998, 1) 2019-09-04T06:36:38.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.597+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:38.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 
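[editor's note] The replSetUpdatePosition requests above are how this secondary reports its durable and applied optimes for every member it knows about back to the upstream node after each batch. The same per-member optimes are visible through replSetGetStatus (the command behind rs.status()); a hedged sketch for watching that progress from outside:

```python
# Sketch: read the same per-member optimes this node reports upstream
# in the replSetUpdatePosition payloads logged above.
from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
status = client.admin.command("replSetGetStatus")
for m in status["members"]:
    # name, state, and the member's last applied optime timestamp
    print(m["name"], m["stateStr"], m["optime"]["ts"])
```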
2019-09-04T06:36:38.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.663+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.663+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.697+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:38.735+0000 D2 ASIO [RS] Request 1769 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567578998, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578998709), o: { $v: 1, $set: { ping: new Date(1567578998709) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 1), t: 1 }, lastCommittedWall: new Date(1567578998438), lastOpVisible: { ts: Timestamp(1567578998, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578998, 1), t: 1 }, lastCommittedWall: new Date(1567578998438), lastOpApplied: { ts: Timestamp(1567578998, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 1), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:38.736+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567578998, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578998709), o: { $v: 1, $set: { ping: new Date(1567578998709) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 1), t: 1 }, lastCommittedWall: new Date(1567578998438), lastOpVisible: { ts: Timestamp(1567578998, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578998, 1), t: 1 }, lastCommittedWall: new Date(1567578998438), lastOpApplied: { ts: Timestamp(1567578998, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 1), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:38.736+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:36:38.736+0000 
D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567578998, 2) and ending at ts: Timestamp(1567578998, 2) 2019-09-04T06:36:38.736+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:36:48.479+0000 2019-09-04T06:36:38.736+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:36:49.589+0000 2019-09-04T06:36:38.736+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:38.736+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:06.839+0000 2019-09-04T06:36:38.736+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567578998, 2), t: 1 } 2019-09-04T06:36:38.736+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:38.736+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:38.736+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578998, 1) 2019-09-04T06:36:38.736+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26008 2019-09-04T06:36:38.736+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:38.736+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:38.736+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26008 2019-09-04T06:36:38.736+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:38.736+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:36:38.736+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:38.736+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567578998, 2) } 2019-09-04T06:36:38.736+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578998, 1) 2019-09-04T06:36:38.736+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26011 2019-09-04T06:36:38.736+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:38.736+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:38.736+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 25997 2019-09-04T06:36:38.736+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26011 2019-09-04T06:36:38.736+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 25997 2019-09-04T06:36:38.736+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26014 2019-09-04T06:36:38.736+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26014 
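[editor's note] Note the bracketing pattern around each batch above: "setting oplog truncate after point to: { : Timestamp(1567578998, 2) }" before the oplog write, then back to { : Timestamp(0, 0) } once the entries are durable. That marker tells crash recovery where a partially persisted batch would need to be truncated. The marker lives in local.replset.oplogTruncateAfterPoint and can be read directly; the collection name is standard in 4.2, but the exact field layout shown here is an assumption:

```python
# Sketch: peek at the truncate-after point the batch application toggles.
# Collection name is standard in MongoDB 4.2; the document's field layout
# is an assumption in this sketch, not confirmed by the log.
from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
doc = client.local["replset.oplogTruncateAfterPoint"].find_one()
print(doc)   # expect Timestamp(0, 0) between batches, the batch ts mid-apply
```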
2019-09-04T06:36:38.736+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:36:38.736+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 26016 2019-09-04T06:36:38.736+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567578998, 2) 2019-09-04T06:36:38.736+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567578998, 2) 2019-09-04T06:36:38.736+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 26016 2019-09-04T06:36:38.736+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:36:38.736+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:36:38.736+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26015 2019-09-04T06:36:38.736+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 26015 2019-09-04T06:36:38.736+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26018 2019-09-04T06:36:38.736+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26018 2019-09-04T06:36:38.736+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567578998, 2), t: 1 }({ ts: Timestamp(1567578998, 2), t: 1 }) 2019-09-04T06:36:38.736+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578998, 2) 2019-09-04T06:36:38.736+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26019 2019-09-04T06:36:38.736+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567578998, 2) } } ] } sort: {} projection: {} 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567578998, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578998, 2) || First: notFirst: full path: ts 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567578998, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567578998, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567578998, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
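[editor's note] The repl-writer worker below applies the fetched op as an idhack update: an exact-_id lookup ("Using idhack") followed by the $set, yielding numMatched: 1, numDocsModified: 1. On the primary, the originating write was equivalent to the following client-side call (the _id and field names are copied from the oplog entry; this is the analogue of the original write, not the server's oplog-application path):

```python
# Sketch: client-side equivalent of the config.lockpings ping update whose
# oplog entry ({op: "u", o2: {_id: ...}, o: {$set: {ping: ...}}}) is
# applied by repl-writer-worker-0 below.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("cmodb804.togewa.com", 27019)   # the primary in this log
result = client.config.lockpings.update_one(
    {"_id": "cmodb810.togewa.com:27018:1566460779:1951479814477371466"},
    {"$set": {"ping": datetime.now(timezone.utc)}},
)
print(result.matched_count, result.modified_count)   # 1 1, matching UpdateResult
```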
2019-09-04T06:36:38.736+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567578998, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:38.736+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 26019 2019-09-04T06:36:38.736+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:36:38.736+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:38.736+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567578998, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567578998709), o: { $v: 1, $set: { ping: new Date(1567578998709) } } }, oplog application mode: Secondary 2019-09-04T06:36:38.737+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567578998, 2) 2019-09-04T06:36:38.737+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 26021 2019-09-04T06:36:38.737+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" } 2019-09-04T06:36:38.737+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:36:38.737+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 26021 2019-09-04T06:36:38.737+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:36:38.737+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567578998, 2), t: 1 }({ ts: Timestamp(1567578998, 2), t: 1 }) 2019-09-04T06:36:38.737+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567578998, 2) 2019-09-04T06:36:38.737+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26020 2019-09-04T06:36:38.737+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:36:38.737+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:38.737+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:36:38.737+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:38.737+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:38.737+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:36:38.737+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 26020 2019-09-04T06:36:38.737+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567578998, 2) 2019-09-04T06:36:38.737+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26024 2019-09-04T06:36:38.737+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26024 2019-09-04T06:36:38.737+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578998, 2), t: 1 }({ ts: Timestamp(1567578998, 2), t: 1 }) 2019-09-04T06:36:38.737+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:38.737+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 1), t: 1 }, durableWallTime: new Date(1567578998438), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 1), t: 1 }, lastCommittedWall: new Date(1567578998438), lastOpVisible: { ts: Timestamp(1567578998, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:38.737+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1771 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:08.737+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 1), t: 1 }, durableWallTime: new Date(1567578998438), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 1), t: 1 }, lastCommittedWall: new Date(1567578998438), lastOpVisible: { ts: Timestamp(1567578998, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:38.737+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.737+0000 2019-09-04T06:36:38.737+0000 D2 ASIO [RS] Request 1771 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 1), t: 1 }, lastCommittedWall: new Date(1567578998438), lastOpVisible: { ts: Timestamp(1567578998, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 1), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:38.737+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 1), t: 1 }, lastCommittedWall: new Date(1567578998438), lastOpVisible: { ts: Timestamp(1567578998, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 1), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:38.737+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:38.737+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.737+0000 2019-09-04T06:36:38.738+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567578998, 2), t: 1 } 2019-09-04T06:36:38.738+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1772 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:48.738+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578998, 1), t: 1 } } 2019-09-04T06:36:38.738+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.737+0000 2019-09-04T06:36:38.738+0000 D2 ASIO [RS] Request 1772 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpApplied: { ts: Timestamp(1567578998, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:38.738+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new 
Date(1567578998709), lastOpApplied: { ts: Timestamp(1567578998, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:38.738+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:36:38.738+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:36:38.738+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:36:38.738+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.738+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.738+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567578993, 2) 2019-09-04T06:36:38.738+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:36:49.589+0000 2019-09-04T06:36:38.738+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:36:48.754+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn516] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn516] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.245+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn556] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn556] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:00.753+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn531] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn531] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.661+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn545] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn545] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.645+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn543] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.738+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:38.738+0000 D3 REPL [conn543] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.226+0000 2019-09-04T06:36:38.738+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:38.738+0000 D3 REPL [conn529] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn529] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.079+0000 2019-09-04T06:36:38.738+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:06.839+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn549] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 
2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn549] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.674+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn552] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn552] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:52.054+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn528] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn528] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:58.752+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn530] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn530] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.223+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn550] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.738+0000 D3 REPL [conn550] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.754+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn554] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn554] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:54.153+0000 2019-09-04T06:36:38.738+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1773 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:48.738+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578998, 2), t: 1 } } 2019-09-04T06:36:38.739+0000 D3 REPL [conn544] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn544] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.128+0000 2019-09-04T06:36:38.739+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.738+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn537] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn537] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:44.691+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn541] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn541] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.319+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn524] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn524] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:44.708+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn527] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn527] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.139+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn548] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 
2019-09-04T06:36:38.739+0000 D3 REPL [conn548] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.661+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn555] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn555] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:55.060+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn521] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn521] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:43.082+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn538] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn538] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:49.841+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn558] Got notified of new snapshot: { ts: Timestamp(1567578998, 2), t: 1 }, 2019-09-04T06:36:38.709+0000 2019-09-04T06:36:38.739+0000 D3 REPL [conn558] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:07.852+0000 2019-09-04T06:36:38.739+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:38.739+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1774 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:08.739+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, durableWallTime: new Date(1567578993634), appliedOpTime: { ts: Timestamp(1567578993, 1), t: 1 }, appliedWallTime: new Date(1567578993634), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 
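
Each replSetUpdatePosition above carries per-member durable and applied optimes upstream toward the primary. The same optimes are visible to any client through replSetGetStatus; a small sketch, again assuming no authentication is required:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    status = client.admin.command("replSetGetStatus")
    for m in status["members"]:
        # name/stateStr/optimeDate mirror the memberId/appliedOpTime pairs in the log
        print(m["name"], m["stateStr"], m.get("optimeDate"))
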
2019-09-04T06:36:38.739+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.738+0000 2019-09-04T06:36:38.739+0000 D2 ASIO [RS] Request 1774 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:38.739+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:38.739+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:36:38.739+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.738+0000 2019-09-04T06:36:38.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.797+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:38.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.829+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.836+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567578998, 2) 2019-09-04T06:36:38.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:38.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:38.839+0000 D3 REPL [replexec-4] memberData lastupdate is: 2019-09-04T06:36:37.063+0000 2019-09-04T06:36:38.839+0000 D3 REPL [replexec-4] memberData lastupdate is: 2019-09-04T06:36:38.235+0000 2019-09-04T06:36:38.839+0000 D3 REPL [replexec-4] stalest member MemberId(0) date: 2019-09-04T06:36:37.063+0000 2019-09-04T06:36:38.839+0000 D3 REPL [replexec-4] scheduling next check at 2019-09-04T06:36:47.063+0000 2019-09-04T06:36:38.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the 
earliest retirement date is 2019-09-04T06:37:08.839+0000 2019-09-04T06:36:38.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1775) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:38.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1775 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:48.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:38.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.839+0000 2019-09-04T06:36:38.839+0000 D2 ASIO [Replication] Request 1775 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:38.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:38.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:38.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1775) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), 
primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:38.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:38.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:36:48.754+0000 2019-09-04T06:36:38.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:36:50.012+0000 2019-09-04T06:36:38.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:38.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:40.839Z 2019-09-04T06:36:38.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:38.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.839+0000 2019-09-04T06:36:38.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.839+0000 2019-09-04T06:36:38.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:38.841+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1776) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:38.841+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1776 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:48.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:38.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.839+0000 2019-09-04T06:36:38.841+0000 D2 ASIO [Replication] Request 1776 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:38.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: 
Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:38.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:38.841+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1776) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:38.841+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:38.841+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:40.841Z 2019-09-04T06:36:38.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.839+0000 2019-09-04T06:36:38.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.897+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:38.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:38.997+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:38.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:38.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:39.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $db: "admin" } 
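
The heartbeat exchanges above are what keep deferring an election (see the "Postponing election timeout due to heartbeat from primary" entry): each good heartbeat from the primary cancels and reschedules the timeout callback. The timeout itself comes from settings.electionTimeoutMillis in the replica set config, which a sketch like this could read back; the command and field are standard, the rest is illustrative:

    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019")
    conf = client.admin.command("replSetGetConfig")["config"]
    # Default is 10000 ms; the rescheduled callbacks above land ~10-11 s out
    # because a randomized offset is added on top of this value.
    print(conf["settings"]["electionTimeoutMillis"])
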
2019-09-04T06:36:39.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:39.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:39.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:39.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709) } 2019-09-04T06:36:39.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.097+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:39.112+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.162+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.162+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.197+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:39.235+0000 D4 STORAGE 
[FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:39.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:39.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.297+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:39.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.398+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:39.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.498+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:39.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.538+0000 D2 COMMAND [conn61] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578998, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $db: "config" } 2019-09-04T06:36:39.538+0000 D1 COMMAND [conn61] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578998, 2), t: 1 } } } 2019-09-04T06:36:39.538+0000 D3 STORAGE [conn61] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:36:39.538+0000 D1 COMMAND [conn61] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578998, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578998, 2) 2019-09-04T06:36:39.538+0000 D2 QUERY [conn61] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:36:39.538+0000 I COMMAND [conn61] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578998, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:36:39.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.598+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:39.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.662+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.662+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.698+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:39.736+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:39.736+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:39.736+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578998, 2) 2019-09-04T06:36:39.736+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26057 2019-09-04T06:36:39.736+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:39.736+0000 D3 STORAGE [ReplBatcher] returning 
metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:39.736+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26057 2019-09-04T06:36:39.737+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26060 2019-09-04T06:36:39.737+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26060 2019-09-04T06:36:39.737+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578998, 2), t: 1 }({ ts: Timestamp(1567578998, 2), t: 1 }) 2019-09-04T06:36:39.747+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.798+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:39.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.898+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:39.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:39.998+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:39.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:39.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:40.002+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:36:40.002+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:36:40.002+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:36:40.008+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:36:40.008+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 
1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:36:40.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:36:40.010+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:36:40.010+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:36:40.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:36:40.010+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:40.010+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:40.011+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:40.011+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:36:40.011+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:36:40.011+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:36:40.011+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:40.011+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:36:40.011+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:36:40.011+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:36:40.011+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:36:40.011+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:36:40.011+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:36:40.011+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:36:40.011+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 
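
The conn90 session above is a monitoring client: SCRAM-SHA-1 authentication as dba_root (payloads redacted as "xxx" in the log), then serverStatus, replSetGetStatus, and a count of jumbo chunks that the planner resolves to a COLLSCAN below. A rough pymongo equivalent, with a placeholder password since the log never contains one:

    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://cmodb803.togewa.com:27019",
        username="dba_root",
        password="<placeholder>",   # redacted in the log
        authSource="admin",
        authMechanism="SCRAM-SHA-1",
    )
    client.admin.command("serverStatus")
    client.admin.command("replSetGetStatus")
    # count_documents issues an aggregate rather than the legacy count seen here,
    # but answers the same question.
    print(client.config.chunks.count_documents({"jumbo": True}))
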
2019-09-04T06:36:40.011+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:40.011+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:36:40.012+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578998, 2) 2019-09-04T06:36:40.012+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26075 2019-09-04T06:36:40.012+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26075 2019-09-04T06:36:40.012+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:40.012+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:40.012+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:36:40.012+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:36:40.012+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:40.012+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:36:40.012+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:36:40.012+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. 
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:36:40.012+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578998, 2) 2019-09-04T06:36:40.012+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26078 2019-09-04T06:36:40.012+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26078 2019-09-04T06:36:40.012+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:40.013+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:40.014+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:36:40.014+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:36:40.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567578998, 2) 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26080 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26080 2019-09-04T06:36:40.014+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:40.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:36:40.014+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. 
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:36:40.014+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:36:40.014+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:40.014+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26083 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26083 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26084 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26084 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26085 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26085 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26086 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26086 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26087 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26087 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26088 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
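The conn90 entries above trace a single listDatabases command end to end: for each collection recorded in the catalog, the server fetches the CCE metadata inside a short-lived WiredTiger snapshot and rolls it back, since nothing is written. For reference, a minimal client-side sketch of the command that drives this scan (pymongo assumed; host and port are taken from this log, and no credentials are passed because this deployment runs with authorization disabled):

    # Sketch: client-side equivalent of the listDatabases command traced above.
    # Host and port come from this log; everything else is an assumption.
    from pymongo import MongoClient, ReadPreference

    client = MongoClient("cmodb803.togewa.com", 27019)

    # secondaryPreferred mirrors the $readPreference field in the logged command.
    result = client.admin.command(
        "listDatabases",
        read_preference=ReadPreference.SECONDARY_PREFERRED,
    )
    print([d["name"] for d in result["databases"]])
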
2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26088 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26089 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:36:40.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26089 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26090 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26090 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26091 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26091 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26092 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26092 
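Every metadata read in this scan is bracketed by a WT begin_transaction / rollback_transaction pair (snapshot ids 26083 through 26092 so far); rollback rather than commit is expected here, because the reads modify nothing. When triaging a log like this one, a quick sanity check is that every opened snapshot is closed. A rough sketch, assuming the log has been split to one timestamped entry per line and a hypothetical local file name:

    # Rough sketch: tally WT snapshot opens vs. closes per log context.
    # Assumes one timestamped entry per line and a local "mongod.log" file.
    import re
    from collections import Counter

    pat = re.compile(
        r"\[([\w-]+)\] WT (begin|rollback|commit)_transaction for snapshot id \d+"
    )
    opened, closed = Counter(), Counter()
    with open("mongod.log") as f:
        for line in f:
            m = pat.search(line)
            if m is None:
                continue
            ctx, kind = m.group(1), m.group(2)
            (opened if kind == "begin" else closed)[ctx] += 1

    # Report any context (conn90, ReplBatcher, ...) with unbalanced snapshots.
    for ctx in opened:
        if opened[ctx] != closed[ctx]:
            print(f"{ctx}: {opened[ctx]} opened, {closed[ctx]} closed")
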
2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26093 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
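The config.chunks entry above is the densest in the scan: three unique compound indexes (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1) plus the mandatory _id_ index. The same specs can be read back from a client with list_indexes; a sketch, again using the host and port from this log:

    # Sketch: read back the config.chunks index specs shown in the metadata above.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    for spec in client.config.chunks.list_indexes():
        # Prints e.g. ns_1_min_1 {'ns': 1, 'min': 1} unique
        print(spec["name"], dict(spec["key"]), "unique" if spec.get("unique") else "")
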
2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26093 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26094 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26094 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26095 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26095 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26096 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26096 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26097 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
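config.shards and config.tags follow the same pattern as config.chunks: a unique secondary index (host_1 and ns_1_min_1 respectively) alongside _id_, all recorded with backgroundSecondary: true. These system indexes are built by the config server itself, but the spec shape is the ordinary unique compound index a client would declare; purely as an illustration, on a hypothetical user collection:

    # Illustration only: an equivalent unique compound index declared from a
    # client, on a hypothetical collection (the config.* indexes above are
    # built by the config server itself, not by clients).
    from pymongo import ASCENDING, MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    client.test.mycoll.create_index(
        [("ns", ASCENDING), ("min", ASCENDING)],
        unique=True,
        name="ns_1_min_1",
    )
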
2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26097 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26098 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26098 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26099 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26099 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26100 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26100 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26101 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 26101 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26102 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26102 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26103 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26103 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26104 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:40.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26104 2019-09-04T06:36:40.016+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:36:40.016+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26106 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26106 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26107 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26107 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26108 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26108 2019-09-04T06:36:40.016+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:40.016+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26110 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26110 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26111 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26111 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26112 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26112 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26113 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26113 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26114 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26114 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26115 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26115 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26116 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26116 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 26117 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26117 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26118 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26118 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26119 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26119 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26120 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26120 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26121 2019-09-04T06:36:40.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26121 2019-09-04T06:36:40.016+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:40.017+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:36:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26123 2019-09-04T06:36:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26123 2019-09-04T06:36:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26124 2019-09-04T06:36:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26124 2019-09-04T06:36:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26125 2019-09-04T06:36:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26125 2019-09-04T06:36:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26126 2019-09-04T06:36:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26126 2019-09-04T06:36:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26127 2019-09-04T06:36:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26127 2019-09-04T06:36:40.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26128 2019-09-04T06:36:40.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26128 2019-09-04T06:36:40.017+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:40.038+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.038+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.055+0000 D2 COMMAND [conn70] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.056+0000 I 
COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.098+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:40.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.162+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.162+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.198+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:40.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.217+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:40.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:40.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:40.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:40.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:40.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:40.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709) } 2019-09-04T06:36:40.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.246+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.246+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.299+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:40.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:36:40.399+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:40.465+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578998, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $db: "config" } 2019-09-04T06:36:40.465+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578998, 2), t: 1 } } } 2019-09-04T06:36:40.465+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:36:40.465+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578998, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578998, 2) 2019-09-04T06:36:40.465+0000 D2 QUERY [conn71] Collection config.settings does not exist. Using EOF plan: query: { _id: "autosplit" } sort: {} projection: {} limit: 1 2019-09-04T06:36:40.465+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578998, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:36:40.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.499+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:40.572+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.572+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, 
$db: "admin" } 2019-09-04T06:36:40.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.599+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:40.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.662+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.662+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.699+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:40.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.736+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:40.736+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:40.736+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578998, 2) 2019-09-04T06:36:40.736+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26158 2019-09-04T06:36:40.736+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:40.736+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:40.736+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26158 2019-09-04T06:36:40.737+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26161 2019-09-04T06:36:40.737+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26161 2019-09-04T06:36:40.737+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578998, 2), t: 1 }({ ts: Timestamp(1567578998, 2), t: 1 }) 2019-09-04T06:36:40.746+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.747+0000 I COMMAND [conn51] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.799+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:40.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:40.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1777) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:40.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1777 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:50.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:40.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.839+0000 2019-09-04T06:36:40.839+0000 D2 ASIO [Replication] Request 1777 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:40.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, 
lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:40.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:40.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1777) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:40.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:40.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:36:50.012+0000 2019-09-04T06:36:40.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:36:51.296+0000 2019-09-04T06:36:40.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:40.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:42.839Z 2019-09-04T06:36:40.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:40.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:10.839+0000 2019-09-04T06:36:40.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:10.839+0000 2019-09-04T06:36:40.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:40.841+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1778) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:40.841+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1778 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:50.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:40.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:10.839+0000 2019-09-04T06:36:40.841+0000 D2 ASIO [Replication] Request 1778 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, 
lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:40.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:40.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:40.841+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1778) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:40.841+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:40.841+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:42.841Z 2019-09-04T06:36:40.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:10.839+0000 2019-09-04T06:36:40.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.899+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:40.944+0000 D2 COMMAND [conn73] run command 
admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.944+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.983+0000 D2 COMMAND [conn71] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578998, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $db: "config" } 2019-09-04T06:36:40.983+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578998, 2), t: 1 } } } 2019-09-04T06:36:40.983+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:36:40.983+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578998, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578998, 2) 2019-09-04T06:36:40.983+0000 D5 QUERY [conn71] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:40.983+0000 D5 QUERY [conn71] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:36:40.983+0000 D5 QUERY [conn71] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:36:40.983+0000 D5 QUERY [conn71] Rated tree: $and 2019-09-04T06:36:40.983+0000 D5 QUERY [conn71] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:40.983+0000 D5 QUERY [conn71] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:40.983+0000 D2 QUERY [conn71] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:36:40.983+0000 D3 STORAGE [conn71] WT begin_transaction for snapshot id 26171 2019-09-04T06:36:40.983+0000 D3 STORAGE [conn71] WT rollback_transaction for snapshot id 26171 2019-09-04T06:36:40.983+0000 I COMMAND [conn71] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578998, 2), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:36:40.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:40.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:40.999+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:41.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:41.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:41.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:41.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:41.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:41.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709) } 2019-09-04T06:36:41.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 
45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.072+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.099+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:41.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.162+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.162+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.199+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:41.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:41.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:41.246+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.247+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.273+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.299+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:41.304+0000 D2 COMMAND [conn113] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.304+0000 I COMMAND [conn113] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.400+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:41.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.500+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:41.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.600+0000 D4 STORAGE [WTJournalFlusher] 
flushed journal 2019-09-04T06:36:41.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.662+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.663+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.700+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:41.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.736+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:41.736+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:41.736+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578998, 2) 2019-09-04T06:36:41.736+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26204 2019-09-04T06:36:41.736+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:41.736+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:41.736+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26204 2019-09-04T06:36:41.737+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26207 2019-09-04T06:36:41.737+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26207 2019-09-04T06:36:41.737+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578998, 2), t: 1 }({ ts: Timestamp(1567578998, 2), t: 1 }) 2019-09-04T06:36:41.746+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.765+0000 I 
COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.797+0000 D2 COMMAND [conn81] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578985, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578985, 1), t: 1 } }, $db: "config" } 2019-09-04T06:36:41.797+0000 D1 COMMAND [conn81] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578985, 1), t: 1 } } } 2019-09-04T06:36:41.797+0000 D3 STORAGE [conn81] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:36:41.797+0000 D1 COMMAND [conn81] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578985, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578985, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567578998, 2) 2019-09-04T06:36:41.797+0000 D5 QUERY [conn81] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:41.797+0000 D5 QUERY [conn81] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:36:41.797+0000 D5 QUERY [conn81] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:36:41.797+0000 D5 QUERY [conn81] Rated tree: $and 2019-09-04T06:36:41.797+0000 D5 QUERY [conn81] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:41.797+0000 D5 QUERY [conn81] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:41.797+0000 D2 QUERY [conn81] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:36:41.797+0000 D3 STORAGE [conn81] WT begin_transaction for snapshot id 26212 2019-09-04T06:36:41.797+0000 D3 STORAGE [conn81] WT rollback_transaction for snapshot id 26212 2019-09-04T06:36:41.797+0000 I COMMAND [conn81] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567578985, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 2), signature: { hash: BinData(0, 45993FB0D8068315D1E5A9744B03547BBD384841), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567578985, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:36:41.800+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:41.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.900+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:41.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:41.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:41.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:42.000+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:42.072+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.072+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.100+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:42.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.112+0000 I COMMAND [conn14] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.162+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.163+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.200+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:42.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:42.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:42.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578999, 1), signature: { hash: BinData(0, 8A796DE5D9354F250ED966584A0ACEDB505E104E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:42.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:42.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578999, 1), signature: { hash: BinData(0, 8A796DE5D9354F250ED966584A0ACEDB505E104E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:42.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578999, 1), signature: { hash: BinData(0, 8A796DE5D9354F250ED966584A0ACEDB505E104E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:42.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709) } 2019-09-04T06:36:42.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: 
"cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578999, 1), signature: { hash: BinData(0, 8A796DE5D9354F250ED966584A0ACEDB505E104E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.246+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.247+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.300+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:42.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.401+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:42.445+0000 D2 COMMAND [conn143] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:36:42.445+0000 I COMMAND [conn143] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.454+0000 D2 COMMAND [conn206] run command config.$cmd { ping: 1.0, lsid: { id: UUID("2fef7d2a-ea06-44d7-a315-b0e911b7f5bf") }, $clusterTime: { clusterTime: Timestamp(1567578939, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:36:42.454+0000 I COMMAND [conn206] command config.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("2fef7d2a-ea06-44d7-a315-b0e911b7f5bf") }, $clusterTime: { clusterTime: Timestamp(1567578939, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.498+0000 I COMMAND [conn15] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.501+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:42.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.572+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.601+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:42.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.662+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.663+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.701+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:42.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.737+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:42.737+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:42.737+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578998, 2) 2019-09-04T06:36:42.737+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26248 2019-09-04T06:36:42.737+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:42.737+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:42.737+0000 D3 STORAGE [ReplBatcher] WT 
rollback_transaction for snapshot id 26248 2019-09-04T06:36:42.737+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26251 2019-09-04T06:36:42.738+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26251 2019-09-04T06:36:42.738+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578998, 2), t: 1 }({ ts: Timestamp(1567578998, 2), t: 1 }) 2019-09-04T06:36:42.747+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.801+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:42.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:42.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1779) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:42.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1779 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:52.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:42.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:10.839+0000 2019-09-04T06:36:42.839+0000 D2 ASIO [Replication] Request 1779 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578999, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:42.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: 
Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578999, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:42.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:42.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1779) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578999, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:42.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:42.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:36:51.296+0000 2019-09-04T06:36:42.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:36:53.156+0000 2019-09-04T06:36:42.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:42.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:44.839Z 2019-09-04T06:36:42.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:42.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:12.839+0000 2019-09-04T06:36:42.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:12.839+0000 2019-09-04T06:36:42.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:42.841+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1780) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:42.841+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1780 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:52.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 
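
[Annotation] The ELECTION entries above show the steady-state pattern on a healthy secondary: each heartbeat from the primary cancels the pending election-timeout callback and schedules a new one roughly electionTimeoutMillis in the future (default 10000 ms, plus a small randomized offset), so an election only starts after that long without hearing from a primary. A minimal sketch of checking the configured timeout from a client, assuming pymongo and network access to a member (hostname taken from the log):

    from pymongo import MongoClient

    # Connect to one config-server member; host/port are taken from the log above.
    client = MongoClient("cmodb803.togewa.com", 27019)

    # replSetGetConfig returns the replica-set configuration document, whose
    # settings.electionTimeoutMillis governs how long a member waits without
    # a primary heartbeat before calling an election.
    cfg = client.admin.command("replSetGetConfig")["config"]
    print(cfg["settings"].get("electionTimeoutMillis"))  # 10000 unless overridden

Consistent with that, the callback rescheduled at 06:36:42.839 is set to fire at 06:36:53.156, a little over ten seconds later.
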
2019-09-04T06:36:42.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:12.839+0000 2019-09-04T06:36:42.841+0000 D2 ASIO [Replication] Request 1780 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578999, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:42.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578999, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:42.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:42.841+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1780) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567578999, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:42.841+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:42.841+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:44.841Z 
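
[Annotation] Requests 1779 and 1780 above make up one full heartbeat round: this node (cmodb803, fromId: 1) polls the primary (cmodb802, state: 1) and the other secondary (cmodb804, state: 2, syncingTo the primary) every two seconds, and the responses carry the optimes that feed the commit point. The same view is exposed to clients through replSetGetStatus; a sketch, assuming pymongo, with field names as returned by a 4.2 server:

    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)  # member from the log

    # replSetGetStatus summarizes the latest heartbeat results: each member's
    # state (1 = PRIMARY, 2 = SECONDARY), its optimes, and its sync source.
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        print(member["name"], member["stateStr"], member.get("syncingTo", "-"))
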
2019-09-04T06:36:42.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:12.839+0000 2019-09-04T06:36:42.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.901+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:42.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:42.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:42.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:43.001+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:43.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578999, 1), signature: { hash: BinData(0, 8A796DE5D9354F250ED966584A0ACEDB505E104E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:43.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:43.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578999, 1), signature: { hash: BinData(0, 8A796DE5D9354F250ED966584A0ACEDB505E104E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:43.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578999, 1), signature: { hash: BinData(0, 8A796DE5D9354F250ED966584A0ACEDB505E104E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:43.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709) } 2019-09-04T06:36:43.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578999, 1), signature: { hash: BinData(0, 8A796DE5D9354F250ED966584A0ACEDB505E104E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 
locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.072+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.072+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.083+0000 I COMMAND [conn521] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578971, 1), signature: { hash: BinData(0, BF9ABB3B036DCA9BFE2E0631E7AB0668EE141209), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:43.083+0000 D1 - [conn521] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:43.083+0000 W - [conn521] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:43.084+0000 I COMMAND [conn529] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578971, 1), signature: { hash: BinData(0, BF9ABB3B036DCA9BFE2E0631E7AB0668EE141209), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:43.084+0000 D1 - [conn529] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:43.084+0000 W - [conn529] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:43.099+0000 I - [conn521] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:43.100+0000 D1 COMMAND [conn521] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578971, 1), signature: { hash: BinData(0, BF9ABB3B036DCA9BFE2E0631E7AB0668EE141209), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:43.100+0000 D1 - [conn521] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:43.100+0000 W - [conn521] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:43.101+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:43.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:36:43.117+0000 I - [conn529] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- [backtrace payload elided: byte-identical to conn521's waitForReadConcern backtrace above] ----- END BACKTRACE ----- 2019-09-04T06:36:43.117+0000 D1 COMMAND [conn529] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578971, 1), signature: { hash: BinData(0, BF9ABB3B036DCA9BFE2E0631E7AB0668EE141209), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:43.117+0000 D1 - [conn529] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:43.117+0000 W - 
[conn529] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:43.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.132+0000 I NETWORK [listener] connection accepted from 10.108.2.61:38154 #559 (90 connections now open) 2019-09-04T06:36:43.132+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:43.132+0000 D2 COMMAND [conn559] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:43.132+0000 I NETWORK [conn559] received client metadata from 10.108.2.61:38154 conn559: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:43.132+0000 I COMMAND [conn559] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:43.137+0000 I - [conn521] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:43.137+0000 W COMMAND [conn521] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:43.137+0000 I COMMAND [conn521] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578971, 1), signature: { hash: BinData(0, BF9ABB3B036DCA9BFE2E0631E7AB0668EE141209), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30027ms 2019-09-04T06:36:43.137+0000 D2 NETWORK [conn521] Session from 10.108.2.56:35890 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:43.137+0000 I NETWORK [conn521] end connection 10.108.2.56:35890 (89 connections now open) 2019-09-04T06:36:43.142+0000 I COMMAND [conn527] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578971, 1), signature: { hash: BinData(0, BF9ABB3B036DCA9BFE2E0631E7AB0668EE141209), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:43.142+0000 D1 - [conn527] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:43.143+0000 W - [conn527] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:43.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.157+0000 I - [conn529] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:43.157+0000 W COMMAND [conn529] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:43.157+0000 I COMMAND [conn529] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578971, 1), signature: { hash: BinData(0, BF9ABB3B036DCA9BFE2E0631E7AB0668EE141209), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30048ms 2019-09-04T06:36:43.157+0000 D2 NETWORK [conn529] Session from 10.108.2.55:36878 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:43.157+0000 I NETWORK [conn529] end connection 10.108.2.55:36878 (88 connections now open) 2019-09-04T06:36:43.162+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.162+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.173+0000 I - [conn527] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- [backtrace payload elided: byte-identical to conn521's waitForReadConcern backtrace above] ----- END BACKTRACE ----- 2019-09-04T06:36:43.173+0000 D1 COMMAND [conn527] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578971, 1), signature: { hash: BinData(0, BF9ABB3B036DCA9BFE2E0631E7AB0668EE141209), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:43.173+0000 D1 - [conn527] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:43.173+0000 W - [conn527] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:43.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.193+0000 I - [conn527] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:43.193+0000 W COMMAND [conn527] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:43.193+0000 I COMMAND [conn527] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578971, 1), signature: { hash: BinData(0, BF9ABB3B036DCA9BFE2E0631E7AB0668EE141209), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30044ms 2019-09-04T06:36:43.193+0000 D2 NETWORK [conn527] Session from 10.108.2.61:38128 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:43.193+0000 I NETWORK [conn527] end connection 10.108.2.61:38128 (87 connections now open) 2019-09-04T06:36:43.201+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:43.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:43.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:43.246+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.246+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.262+0000 D2 COMMAND [conn533] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579002, 1), signature: { hash: BinData(0, 38AD2B934DDEB2D907B8E3A2E411898B9145843C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:36:43.262+0000 D1 REPL [conn533] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578998, 2), t: 1 } 2019-09-04T06:36:43.262+0000 D3 REPL [conn533] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.272+0000 2019-09-04T06:36:43.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.265+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.265+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.267+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.267+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.269+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36906 #560 (88 connections now open) 2019-09-04T06:36:43.269+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 
2019-09-04T06:36:43.270+0000 D2 COMMAND [conn560] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:43.270+0000 I NETWORK [conn560] received client metadata from 10.108.2.55:36906 conn560: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:43.270+0000 I COMMAND [conn560] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:43.270+0000 D2 COMMAND [conn560] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579001, 1), signature: { hash: BinData(0, 7537361D776A054A83C9137EA9B4D3F225E292EA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:43.270+0000 D1 REPL [conn560] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578998, 2), t: 1 } 2019-09-04T06:36:43.270+0000 D3 REPL [conn560] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.280+0000 2019-09-04T06:36:43.278+0000 D2 COMMAND [conn534] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 1), signature: { hash: BinData(0, 452433E92E6007DECC7F3453A9B73B44CC5A0C2C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:43.278+0000 D1 REPL [conn534] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578998, 2), t: 1 } 2019-09-04T06:36:43.278+0000 D3 REPL [conn534] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.288+0000 2019-09-04T06:36:43.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.301+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
2019-09-04T06:36:43.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.329+0000 D2 COMMAND [conn536] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579002, 1), signature: { hash: BinData(0, 38AD2B934DDEB2D907B8E3A2E411898B9145843C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:43.329+0000 D1 REPL [conn536] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578998, 2), t: 1 } 2019-09-04T06:36:43.329+0000 D3 REPL [conn536] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.339+0000 2019-09-04T06:36:43.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.398+0000 D2 COMMAND [conn540] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 1), signature: { hash: BinData(0, 452433E92E6007DECC7F3453A9B73B44CC5A0C2C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:43.398+0000 D1 REPL [conn540] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578998, 2), t: 1 } 2019-09-04T06:36:43.398+0000 D3 REPL [conn540] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.408+0000 2019-09-04T06:36:43.401+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:43.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.478+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.502+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:43.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.572+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.578+0000 
D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.602+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:43.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.662+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.662+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.702+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:43.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.737+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:43.737+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:43.737+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578998, 2) 2019-09-04T06:36:43.737+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26298 2019-09-04T06:36:43.737+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:43.737+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:43.737+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26298 2019-09-04T06:36:43.738+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26301 2019-09-04T06:36:43.738+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26301 2019-09-04T06:36:43.738+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578998, 2), t: 1 }({ ts: Timestamp(1567578998, 2), t: 1 }) 2019-09-04T06:36:43.738+0000 D2 ASIO [RS] Request 1773 finished with response: { cursor: { nextBatch: [], 
id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpApplied: { ts: Timestamp(1567578998, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:43.738+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpApplied: { ts: Timestamp(1567578998, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:43.738+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:43.738+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:36:43.738+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:36:53.156+0000 2019-09-04T06:36:43.738+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:36:54.198+0000 2019-09-04T06:36:43.738+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:43.738+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:12.839+0000 2019-09-04T06:36:43.738+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1781 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:53.738+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578998, 2), t: 1 } } 2019-09-04T06:36:43.738+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.738+0000 2019-09-04T06:36:43.739+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:36:43.739+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: 
Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:43.739+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1782 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:13.739+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:43.739+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.738+0000 2019-09-04T06:36:43.739+0000 D2 ASIO [RS] Request 1782 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:43.739+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, 
operationTime: Timestamp(1567578998, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:43.739+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:43.739+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:08.738+0000 2019-09-04T06:36:43.746+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.764+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.764+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.764+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.767+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.767+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.802+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:43.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.902+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:43.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:43.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:43.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:44.002+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:44.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.072+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" 
} numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.102+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:44.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.162+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.163+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.202+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:44.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:44.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:44.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 97E3EBAC47CB22864F4E1615F5C4BD477E5E0606), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:44.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:44.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 97E3EBAC47CB22864F4E1615F5C4BD477E5E0606), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:44.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 97E3EBAC47CB22864F4E1615F5C4BD477E5E0606), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:44.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709) } 2019-09-04T06:36:44.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 97E3EBAC47CB22864F4E1615F5C4BD477E5E0606), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.246+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.246+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.302+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:44.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.329+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:36:44.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.402+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:44.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.503+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:44.572+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.572+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.603+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:44.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.662+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.663+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.686+0000 D2 COMMAND [conn551] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579001, 1), signature: { hash: BinData(0, 7537361D776A054A83C9137EA9B4D3F225E292EA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 
2019-09-04T06:36:44.686+0000 D1 REPL [conn551] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578998, 2), t: 1 } 2019-09-04T06:36:44.686+0000 D3 REPL [conn551] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:14.696+0000 2019-09-04T06:36:44.696+0000 I COMMAND [conn537] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578972, 1), signature: { hash: BinData(0, 748CA4A915B339EE77A5A9AF1E7BC1F0A591620C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:44.696+0000 D1 - [conn537] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:44.696+0000 W - [conn537] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:44.703+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:44.713+0000 I - [conn537] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:44.713+0000 D1 COMMAND [conn537] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578972, 1), signature: { hash: BinData(0, 748CA4A915B339EE77A5A9AF1E7BC1F0A591620C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:44.713+0000 D1 - [conn537] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:44.713+0000 W - [conn537] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:44.713+0000 I COMMAND [conn524] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 1), signature: { hash: BinData(0, FBB800930BE44BC3F2A298082833F31CC117AACA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:44.714+0000 D1 - [conn524] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:44.714+0000 W - [conn524] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:44.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.733+0000 I - [conn537] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEEN
S0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : 
"084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:44.733+0000 W COMMAND [conn537] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:44.733+0000 I COMMAND [conn537] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578972, 1), signature: { hash: BinData(0, 748CA4A915B339EE77A5A9AF1E7BC1F0A591620C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:36:44.733+0000 D2 NETWORK [conn537] Session from 10.108.2.52:47410 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:44.733+0000 I NETWORK [conn537] end connection 10.108.2.52:47410 (87 connections now open) 2019-09-04T06:36:44.737+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:44.737+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:44.737+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578998, 2) 2019-09-04T06:36:44.737+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26342 2019-09-04T06:36:44.737+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:44.737+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:44.737+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26342 2019-09-04T06:36:44.738+0000 D3 STORAGE [rsSync-0] WT 
begin_transaction for snapshot id 26345 2019-09-04T06:36:44.738+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26345 2019-09-04T06:36:44.738+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578998, 2), t: 1 }({ ts: Timestamp(1567578998, 2), t: 1 }) 2019-09-04T06:36:44.746+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.750+0000 I - [conn524] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" 
: "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : 
"F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:44.750+0000 D1 COMMAND [conn524] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 
1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 1), signature: { hash: BinData(0, FBB800930BE44BC3F2A298082833F31CC117AACA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:44.750+0000 D1 - [conn524] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:44.750+0000 W - [conn524] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:44.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.769+0000 I - [conn524] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGu
ardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { 
"b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:44.770+0000 W COMMAND [conn524] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:44.770+0000 I COMMAND [conn524] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578968, 1), signature: { hash: BinData(0, FBB800930BE44BC3F2A298082833F31CC117AACA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30051ms 2019-09-04T06:36:44.770+0000 D2 NETWORK [conn524] Session from 10.108.2.54:49402 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:44.770+0000 I NETWORK [conn524] end connection 10.108.2.54:49402 (86 connections now open) 2019-09-04T06:36:44.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.803+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:44.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:44.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1783) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:44.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1783 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:54.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:44.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:12.839+0000 
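
Note on the two MaxTimeMSExpired entries above (conn537 and conn524): both are the periodic refresh of the cluster's HMAC signing keys, a find on admin.system.keys with readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } } and maxTimeMS: 30000, and both burn the full ~30 s (30031 ms and 30051 ms) before failing with errCode:50. The client-visible shape of that command can be approximated from a driver; below is a minimal pymongo sketch, assuming direct, unauthenticated access to this node on port 27019 (authorization is disabled in this deployment), and noting that the internal fields ($replData, $configServerState, readConcern.afterOpTime) are injected by cluster components and cannot be set from a driver.

    # Minimal pymongo sketch of the failing key-refresh query (assumptions noted above).
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout
    from pymongo.read_concern import ReadConcern
    from pymongo.read_preferences import ReadPreference
    from bson.timestamp import Timestamp

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    coll = client.get_database(
        "admin",
        read_concern=ReadConcern("majority"),
        read_preference=ReadPreference.NEAREST,
    ).get_collection("system.keys")

    try:
        # Same filter, sort, and server-side time limit as the logged command.
        keys = list(
            coll.find({"purpose": "HMAC", "expiresAt": {"$gt": Timestamp(1579858365, 0)}})
            .sort("expiresAt", 1)
            .max_time_ms(30000)
        )
    except ExecutionTimeout:
        # pymongo maps server error code 50 (MaxTimeMSExpired) to ExecutionTimeout,
        # matching errName:MaxTimeMSExpired errCode:50 in the slow-op lines above.
        pass
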
2019-09-04T06:36:44.839+0000 D2 ASIO [Replication] Request 1783 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:44.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:44.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:44.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1783) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:44.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:44.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:36:54.198+0000 2019-09-04T06:36:44.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:36:55.974+0000 
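
The replSetHeartbeat traffic here (outbound requests 1783/1784 and the inbound heartbeats from cmodb802) is internal member-to-member protocol that drivers cannot send; the supported client-side view of the same state (member roles, optimes, sync sources) is the replSetGetStatus command. A minimal sketch against the same node, under the same no-auth assumption as above:

    # Client-side view of the heartbeat state via replSetGetStatus.
    from pymongo import MongoClient

    client = MongoClient("mongodb://cmodb803.togewa.com:27019/")
    status = client.admin.command("replSetGetStatus")
    print(status["set"], "term", status.get("term"))
    for m in status["members"]:
        # Per the responses above, expect cmodb802 PRIMARY and cmodb803/cmodb804
        # SECONDARY, all at optime { ts: Timestamp(1567578998, 2), t: 1 }.
        print(m["name"], m["stateStr"], m.get("optime"), m.get("syncingTo", ""))

All three members report the same durable optime { ts: Timestamp(1567578998, 2), t: 1 } in the heartbeats above, so replication within configrs is healthy; the 30 s find timeouts are not caused by replication lag.
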
2019-09-04T06:36:44.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:44.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:46.839Z 2019-09-04T06:36:44.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:44.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:14.839+0000 2019-09-04T06:36:44.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:14.839+0000 2019-09-04T06:36:44.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:44.841+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1784) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:44.841+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1784 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:54.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:44.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:14.839+0000 2019-09-04T06:36:44.841+0000 D2 ASIO [Replication] Request 1784 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:44.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:44.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:44.841+0000 D2 REPL_HB 
[replexec-3] Received response to heartbeat (requestId: 1784) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } 2019-09-04T06:36:44.841+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:44.841+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:46.841Z 2019-09-04T06:36:44.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:14.839+0000 2019-09-04T06:36:44.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.903+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:44.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:44.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:44.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:45.003+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:45.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 97E3EBAC47CB22864F4E1615F5C4BD477E5E0606), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:45.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:45.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 97E3EBAC47CB22864F4E1615F5C4BD477E5E0606), keyId: 6727891476899954718 } }, $db: 
"admin" } 2019-09-04T06:36:45.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 97E3EBAC47CB22864F4E1615F5C4BD477E5E0606), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:45.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709) } 2019-09-04T06:36:45.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 97E3EBAC47CB22864F4E1615F5C4BD477E5E0606), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.078+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.078+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.103+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:45.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.162+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.162+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.182+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.182+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.203+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:45.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:36:45.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:45.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:45.246+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.247+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.303+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:45.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.403+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:45.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.504+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:45.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.572+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.578+0000 D2 COMMAND [conn17] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.578+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.579+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.579+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.583+0000 D2 COMMAND [conn557] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: 
Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579002, 1), signature: { hash: BinData(0, 38AD2B934DDEB2D907B8E3A2E411898B9145843C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:45.583+0000 D1 REPL [conn557] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578998, 2), t: 1 } 2019-09-04T06:36:45.583+0000 D3 REPL [conn557] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:15.593+0000 2019-09-04T06:36:45.604+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:45.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.662+0000 D2 COMMAND [conn19] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.662+0000 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.682+0000 D2 COMMAND [conn18] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.682+0000 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.704+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:45.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.737+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:45.737+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:45.737+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578998, 2) 2019-09-04T06:36:45.737+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26384 2019-09-04T06:36:45.737+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:45.737+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], 
prefix: -1 } 2019-09-04T06:36:45.737+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26384 2019-09-04T06:36:45.738+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26387 2019-09-04T06:36:45.738+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26387 2019-09-04T06:36:45.738+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578998, 2), t: 1 }({ ts: Timestamp(1567578998, 2), t: 1 }) 2019-09-04T06:36:45.746+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.804+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:45.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.904+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:45.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:45.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:45.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:46.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:46.004+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:46.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:46.072+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:46.079+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:46.079+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:46.104+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
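
The conn557 waitUntilOpTime lines above pinpoint why these finds time out: the command must wait until afterOpTime { ts: Timestamp(1566459168, 1), t: 92 } is in a majority snapshot, but the node's current snapshot is { ts: Timestamp(1567578998, 2), t: 1 }. Protocol-version-1 optimes compare by term first, so a set running in term 1 can never reach an optime in term 92, and the wait can only end when maxTimeMS expires. The mismatch suggests the requesters are carrying a config opTime from an earlier incarnation of configrs: the replicaSetId ObjectId('5d5e459bac9313827bdd88e9') seen in the heartbeats embeds a creation time of about 2019-08-22T07:34:51Z, roughly two minutes after the requested opTime's timestamp. A short sketch decoding the two optimes (assuming a BSON Timestamp's first field is seconds since the Unix epoch):

    # Decode the optimes from the waitUntilOpTime lines; term-first ordering is
    # assumed, matching MongoDB's protocol-version-1 optime comparison.
    from datetime import datetime, timezone

    requested = {"ts_secs": 1566459168, "term": 92}  # afterOpTime sent by clients
    snapshot = {"ts_secs": 1567578998, "term": 1}    # this node's majority snapshot

    for label, op in (("requested afterOpTime", requested),
                      ("current snapshot     ", snapshot)):
        wall = datetime.fromtimestamp(op["ts_secs"], tz=timezone.utc)
        print(label, wall.isoformat(), "term", op["term"])

    # requested afterOpTime 2019-08-22T07:32:48+00:00 term 92
    # current snapshot      2019-09-04T06:36:38+00:00 term 1

If that reading is right, the stale { t: 92 } opTime is cached on the requesting mongos/shard nodes, so the remedy lies with them rather than with this config server.
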
2019-09-04T06:36:46.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.112+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.204+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:46.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:36:46.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:36:46.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 97E3EBAC47CB22864F4E1615F5C4BD477E5E0606), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:46.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:36:46.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 97E3EBAC47CB22864F4E1615F5C4BD477E5E0606), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:46.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 97E3EBAC47CB22864F4E1615F5C4BD477E5E0606), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:46.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709) }
2019-09-04T06:36:46.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 97E3EBAC47CB22864F4E1615F5C4BD477E5E0606), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.246+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.246+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.304+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:46.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.401+0000 I NETWORK [listener] connection accepted from 10.108.2.57:34478 #561 (87 connections now open)
2019-09-04T06:36:46.401+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode
2019-09-04T06:36:46.401+0000 D2 COMMAND [conn561] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" }
2019-09-04T06:36:46.401+0000 I NETWORK [conn561] received client metadata from 10.108.2.57:34478 conn561: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2019-09-04T06:36:46.401+0000 I COMMAND [conn561] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms
2019-09-04T06:36:46.404+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:46.405+0000 D2 COMMAND [conn561] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 1), signature: { hash: BinData(0, 452433E92E6007DECC7F3453A9B73B44CC5A0C2C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }
2019-09-04T06:36:46.405+0000 D1 REPL [conn561] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567578998, 2), t: 1 }
2019-09-04T06:36:46.405+0000 D3 REPL [conn561] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:16.415+0000
2019-09-04T06:36:46.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.504+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:46.553+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.553+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.572+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.605+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:46.611+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.611+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.705+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:46.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.737+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:46.738+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:46.738+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578998, 2)
2019-09-04T06:36:46.738+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26422
2019-09-04T06:36:46.738+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:46.738+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:46.738+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26422
2019-09-04T06:36:46.738+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26425
2019-09-04T06:36:46.738+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26425
2019-09-04T06:36:46.738+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567578998, 2), t: 1 }({ ts: Timestamp(1567578998, 2), t: 1 })
2019-09-04T06:36:46.746+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.805+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:46.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:46.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1785) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:46.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1785 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:56.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:46.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:14.839+0000
2019-09-04T06:36:46.839+0000 D2 ASIO [Replication] Request 1785 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) }
2019-09-04T06:36:46.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } target: cmodb802.togewa.com:27019
2019-09-04T06:36:46.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:46.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1785) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) }
2019-09-04T06:36:46.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary
2019-09-04T06:36:46.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:36:55.974+0000
2019-09-04T06:36:46.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:36:57.237+0000
2019-09-04T06:36:46.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:36:46.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:48.839Z
2019-09-04T06:36:46.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:46.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:16.839+0000
2019-09-04T06:36:46.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:16.839+0000
2019-09-04T06:36:46.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:46.841+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1786) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:46.841+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1786 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:56.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:46.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:16.839+0000
2019-09-04T06:36:46.841+0000 D2 ASIO [Replication] Request 1786 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) }
2019-09-04T06:36:46.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:46.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:46.841+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1786) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578998, 2) }
2019-09-04T06:36:46.841+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:36:46.841+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:48.841Z
2019-09-04T06:36:46.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:16.839+0000
2019-09-04T06:36:46.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.905+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:46.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:46.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:46.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:47.005+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:47.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.063+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:47.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:36:46.839+0000
2019-09-04T06:36:47.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:36:46.841+0000
2019-09-04T06:36:47.063+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:36:46.839+0000
2019-09-04T06:36:47.063+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:36:56.839+0000
2019-09-04T06:36:47.063+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:16.839+0000
2019-09-04T06:36:47.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 97E3EBAC47CB22864F4E1615F5C4BD477E5E0606), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:47.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:36:47.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 97E3EBAC47CB22864F4E1615F5C4BD477E5E0606), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:47.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 97E3EBAC47CB22864F4E1615F5C4BD477E5E0606), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:47.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), opTime: { ts: Timestamp(1567578998, 2), t: 1 }, wallTime: new Date(1567578998709) }
2019-09-04T06:36:47.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 97E3EBAC47CB22864F4E1615F5C4BD477E5E0606), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.105+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:47.111+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.111+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.205+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:47.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:36:47.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:36:47.246+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.247+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.264+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.305+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:47.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.405+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:47.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.505+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:47.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.605+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:47.612+0000 D2 COMMAND [conn14] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.612+0000 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.706+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:47.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.733+0000 D2 ASIO [RS] Request 1781 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567579007, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567579007731), o: { $v: 1, $set: { ping: new Date(1567579007731) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpApplied: { ts: Timestamp(1567579007, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579007, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579007, 1) }
2019-09-04T06:36:47.733+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567579007, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567579007731), o: { $v: 1, $set: { ping: new Date(1567579007731) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpApplied: { ts: Timestamp(1567579007, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579007, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579007, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:47.733+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:47.733+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567579007, 1) and ending at ts: Timestamp(1567579007, 1)
2019-09-04T06:36:47.733+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:36:57.237+0000
2019-09-04T06:36:47.733+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:36:58.732+0000
2019-09-04T06:36:47.733+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:47.733+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567579007, 1), t: 1 }
2019-09-04T06:36:47.733+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:47.733+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:47.733+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578998, 2)
2019-09-04T06:36:47.733+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26460
2019-09-04T06:36:47.733+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:47.733+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:47.733+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26460
2019-09-04T06:36:47.733+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:47.733+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:36:47.733+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:16.839+0000
2019-09-04T06:36:47.733+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:47.733+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567578998, 2)
2019-09-04T06:36:47.733+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26463
2019-09-04T06:36:47.733+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567579007, 1) }
2019-09-04T06:36:47.733+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:47.733+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:47.733+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26463
2019-09-04T06:36:47.733+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26426
2019-09-04T06:36:47.734+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 26426
2019-09-04T06:36:47.734+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26466
2019-09-04T06:36:47.734+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26466
2019-09-04T06:36:47.734+0000 D3 EXECUTOR [repl-writer-worker-1] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:36:47.734+0000 D3 STORAGE [repl-writer-worker-1] WT begin_transaction for snapshot id 26468
2019-09-04T06:36:47.734+0000 D4 STORAGE [repl-writer-worker-1] inserting record with timestamp Timestamp(1567579007, 1)
2019-09-04T06:36:47.734+0000 D3 STORAGE [repl-writer-worker-1] WT set timestamp of future write operations to Timestamp(1567579007, 1)
2019-09-04T06:36:47.734+0000 D3 STORAGE [repl-writer-worker-1] WT commit_transaction for snapshot id 26468
2019-09-04T06:36:47.734+0000 D3 EXECUTOR [repl-writer-worker-1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:36:47.734+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:36:47.734+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26467
2019-09-04T06:36:47.734+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 26467
2019-09-04T06:36:47.734+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26470
2019-09-04T06:36:47.734+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26470
2019-09-04T06:36:47.734+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567579007, 1), t: 1 }({ ts: Timestamp(1567579007, 1), t: 1 })
2019-09-04T06:36:47.734+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567579007, 1)
2019-09-04T06:36:47.734+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26471
2019-09-04T06:36:47.734+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567579007, 1) } } ] } sort: {} projection: {}
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567579007, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Rated tree:
$and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567579007, 1) || First: notFirst: full path: ts
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567579007, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Rated tree:
t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan:
COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567579007, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Rated tree:
$or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567579007, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567579007, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:47.734+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 26471 2019-09-04T06:36:47.734+0000 D3 EXECUTOR [repl-writer-worker-15] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:36:47.734+0000 D3 STORAGE [repl-writer-worker-15] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:47.734+0000 D3 REPL [repl-writer-worker-15] applying op: { ts: Timestamp(1567579007, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" }, wall: new Date(1567579007731), o: { $v: 1, $set: { ping: new Date(1567579007731) } } }, oplog application mode: Secondary 2019-09-04T06:36:47.734+0000 D3 STORAGE [repl-writer-worker-15] WT set timestamp of future write operations to Timestamp(1567579007, 1) 2019-09-04T06:36:47.734+0000 D3 STORAGE [repl-writer-worker-15] WT begin_transaction for snapshot id 26473 2019-09-04T06:36:47.734+0000 D2 QUERY [repl-writer-worker-15] Using idhack: { _id: "cmodb812.togewa.com:27018:1566462580:6072223747994412197" } 2019-09-04T06:36:47.734+0000 D4 WRITE [repl-writer-worker-15] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:36:47.734+0000 D3 STORAGE [repl-writer-worker-15] WT commit_transaction for snapshot id 26473 2019-09-04T06:36:47.734+0000 D3 EXECUTOR [repl-writer-worker-15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:36:47.734+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567579007, 1), t: 1 }({ ts: Timestamp(1567579007, 1), t: 1 }) 2019-09-04T06:36:47.734+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567579007, 1) 2019-09-04T06:36:47.734+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26472 2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:47.734+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:47.734+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:36:47.734+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 26472 2019-09-04T06:36:47.734+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567579007, 1) 2019-09-04T06:36:47.734+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26476 2019-09-04T06:36:47.734+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26476 2019-09-04T06:36:47.734+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579007, 1), t: 1 }({ ts: Timestamp(1567579007, 1), t: 1 }) 2019-09-04T06:36:47.734+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:47.734+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567579007, 1), t: 1 }, appliedWallTime: new Date(1567579007731), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:47.734+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1787 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:17.734+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567579007, 1), t: 1 }, appliedWallTime: new Date(1567579007731), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:47.735+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:17.734+0000 2019-09-04T06:36:47.735+0000 D2 ASIO [RS] Request 1787 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579007, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579007, 1) }
2019-09-04T06:36:47.735+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579007, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579007, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:47.735+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:47.735+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:17.735+0000
2019-09-04T06:36:47.735+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567579007, 1), t: 1 }
2019-09-04T06:36:47.735+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1788 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:57.735+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567578998, 2), t: 1 } }
2019-09-04T06:36:47.735+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:17.735+0000
2019-09-04T06:36:47.739+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:36:47.739+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:47.740+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579007, 1), t: 1 }, durableWallTime: new Date(1567579007731), appliedOpTime: { ts: Timestamp(1567579007, 1), t: 1 }, appliedWallTime: new Date(1567579007731), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:47.740+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1789 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:17.740+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579007, 1), t: 1 }, durableWallTime: new Date(1567579007731), appliedOpTime: { ts: Timestamp(1567579007, 1), t: 1 }, appliedWallTime: new Date(1567579007731), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:47.740+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:17.735+0000
2019-09-04T06:36:47.740+0000 D2 ASIO [RS] Request 1789 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579007, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579007, 1) }
2019-09-04T06:36:47.740+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567578998, 2), t: 1 }, lastCommittedWall: new Date(1567578998709), lastOpVisible: { ts: Timestamp(1567578998, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578998, 2), $clusterTime: { clusterTime: Timestamp(1567579007, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579007, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:47.740+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:47.740+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:17.735+0000
2019-09-04T06:36:47.740+0000 D2 ASIO [RS] Request 1788 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579007, 1), t: 1 }, lastCommittedWall: new Date(1567579007731), lastOpVisible: { ts: Timestamp(1567579007, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579007, 1), t: 1 }, lastCommittedWall: new Date(1567579007731), lastOpApplied: { ts: Timestamp(1567579007, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579007, 1), $clusterTime: { clusterTime: Timestamp(1567579007, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579007, 1) }
2019-09-04T06:36:47.740+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579007, 1), t: 1 }, lastCommittedWall: new Date(1567579007731), lastOpVisible: { ts: Timestamp(1567579007, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579007, 1), t: 1 }, lastCommittedWall: new Date(1567579007731), lastOpApplied: { ts: Timestamp(1567579007, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579007, 1), $clusterTime: { clusterTime: Timestamp(1567579007, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579007, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:47.740+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:47.740+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:36:47.740+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.740+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567579002, 1)
2019-09-04T06:36:47.741+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:36:58.732+0000
2019-09-04T06:36:47.741+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:36:58.028+0000
2019-09-04T06:36:47.741+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1790 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:57.741+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567579007, 1), t: 1 } }
2019-09-04T06:36:47.741+0000 D3 REPL [conn543] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn543] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.226+0000
2019-09-04T06:36:47.741+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:17.735+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn552] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn552] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:52.054+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn545] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn545] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.645+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn549] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn549] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.674+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn544] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn544] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.128+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn555] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn555] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:55.060+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn557] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn557] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:15.593+0000
2019-09-04T06:36:47.741+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:47.741+0000 D3 REPL [conn558] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn558] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:07.852+0000
2019-09-04T06:36:47.741+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:16.839+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn533] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn533] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.272+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn536] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn536] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.339+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn540] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn540] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.408+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn516] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn516] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.245+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn531] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn531] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.661+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn556] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn556] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:00.753+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn530] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn530] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.223+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn550] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn550] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.754+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn528] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn528] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:58.752+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn554] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn554] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:54.153+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn541] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn541] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.319+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn548] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn548] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.661+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn560] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn560] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.280+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn534] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn534] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.288+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn551] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn551] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:14.696+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn538] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn538] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:49.841+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn561] Got notified of new snapshot: { ts: Timestamp(1567579007, 1), t: 1 }, 2019-09-04T06:36:47.731+0000
2019-09-04T06:36:47.741+0000 D3 REPL [conn561] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:16.415+0000
2019-09-04T06:36:47.746+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.764+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.806+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:47.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.833+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567579007, 1)
2019-09-04T06:36:47.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.906+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:47.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:47.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:47.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:48.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:48.006+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:48.034+0000 D2 ASIO [RS] Request 1790 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567579008, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567579008032), o: { $v: 1, $set: { ping: new Date(1567579008025) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579007, 1), t: 1 }, lastCommittedWall: new Date(1567579007731), lastOpVisible: { ts: Timestamp(1567579007, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579007, 1), t: 1 }, lastCommittedWall: new Date(1567579007731), lastOpApplied: { ts: Timestamp(1567579008, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579007, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) }
2019-09-04T06:36:48.034+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567579008, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567579008032), o: { $v: 1, $set: { ping: new Date(1567579008025) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579007, 1), t: 1 }, lastCommittedWall: new Date(1567579007731), lastOpVisible: { ts: Timestamp(1567579007, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579007, 1), t: 1 }, lastCommittedWall: new Date(1567579007731), lastOpApplied: { ts: Timestamp(1567579008, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579007, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:48.034+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:48.034+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567579008, 1) and ending at ts: Timestamp(1567579008, 1)
2019-09-04T06:36:48.034+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:36:58.028+0000
2019-09-04T06:36:48.034+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:36:58.287+0000
2019-09-04T06:36:48.034+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:48.034+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:16.839+0000
2019-09-04T06:36:48.034+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567579008, 1), t: 1 }
2019-09-04T06:36:48.034+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:48.034+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:48.034+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579007, 1)
2019-09-04T06:36:48.034+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26489
2019-09-04T06:36:48.034+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:48.034+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:48.034+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26489
2019-09-04T06:36:48.034+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:48.034+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:36:48.034+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:48.034+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567579008, 1) }
2019-09-04T06:36:48.034+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579007, 1)
2019-09-04T06:36:48.034+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26492
2019-09-04T06:36:48.034+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:48.034+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:48.034+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26492
2019-09-04T06:36:48.034+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26477
2019-09-04T06:36:48.034+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 26477
2019-09-04T06:36:48.034+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26495
2019-09-04T06:36:48.034+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26495
2019-09-04T06:36:48.034+0000 D3 EXECUTOR [repl-writer-worker-5] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:36:48.034+0000 D3 STORAGE [repl-writer-worker-5] WT begin_transaction for snapshot id 26497
2019-09-04T06:36:48.034+0000 D4 STORAGE [repl-writer-worker-5] inserting record with timestamp Timestamp(1567579008, 1)
2019-09-04T06:36:48.034+0000 D3 STORAGE [repl-writer-worker-5] WT set timestamp of future write operations to Timestamp(1567579008, 1)
2019-09-04T06:36:48.034+0000 D3 STORAGE [repl-writer-worker-5] WT commit_transaction for snapshot id 26497
2019-09-04T06:36:48.034+0000 D3 EXECUTOR [repl-writer-worker-5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:36:48.034+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:36:48.034+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26496
2019-09-04T06:36:48.034+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 26496
2019-09-04T06:36:48.034+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26499
2019-09-04T06:36:48.034+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26499
2019-09-04T06:36:48.034+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567579008, 1), t: 1 }({ ts: Timestamp(1567579008, 1), t: 1 })
2019-09-04T06:36:48.034+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567579008, 1)
2019-09-04T06:36:48.034+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26500
2019-09-04T06:36:48.034+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567579008, 1) } } ] } sort: {} projection: {}
2019-09-04T06:36:48.034+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:48.034+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:36:48.034+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
    t $eq 1
    ts $lt Timestamp(1567579008, 1)
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:48.034+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:48.034+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:48.034+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:36:48.034+0000 D5 QUERY [rsSync-0] Rated tree: $and
    t $eq 1 || First: notFirst: full path: t
    ts $lt Timestamp(1567579008, 1) || First: notFirst: full path: ts
2019-09-04T06:36:48.034+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:48.034+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and t $eq 1 ts $lt Timestamp(1567579008, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:48.034+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:48.034+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:36:48.034+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:48.034+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:48.034+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:48.034+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:48.034+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:48.034+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:48.035+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:48.035+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567579008, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:48.035+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:48.035+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:48.035+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:36:48.035+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567579008, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:48.035+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:48.035+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or $and t $eq 1 ts $lt Timestamp(1567579008, 1) t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:48.035+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 26500
2019-09-04T06:36:48.035+0000 D3 EXECUTOR [repl-writer-worker-9] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:36:48.035+0000 D3 STORAGE [repl-writer-worker-9] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:48.035+0000 D3 REPL [repl-writer-worker-9] applying op: { ts: Timestamp(1567579008, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }, wall: new Date(1567579008032), o: { $v: 1, $set: { ping: new Date(1567579008025) } } }, oplog application mode: Secondary
2019-09-04T06:36:48.035+0000 D3 STORAGE [repl-writer-worker-9] WT set timestamp of future write operations to Timestamp(1567579008, 1)
2019-09-04T06:36:48.035+0000 D3 STORAGE [repl-writer-worker-9] WT begin_transaction for snapshot id 26502
2019-09-04T06:36:48.035+0000 D2 QUERY [repl-writer-worker-9] Using idhack: { _id: "cmodb813.togewa.com:27018:1566462580:-30629627797338295" }
2019-09-04T06:36:48.035+0000 D4 WRITE [repl-writer-worker-9] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:36:48.035+0000 D3 STORAGE [repl-writer-worker-9] WT commit_transaction for snapshot id 26502
2019-09-04T06:36:48.035+0000 D3 EXECUTOR [repl-writer-worker-9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:36:48.035+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567579008, 1), t: 1 }({ ts: Timestamp(1567579008, 1), t: 1 })
2019-09-04T06:36:48.035+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567579008, 1)
2019-09-04T06:36:48.035+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26501
2019-09-04T06:36:48.035+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:48.035+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:48.035+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:36:48.035+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:48.035+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:48.035+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:36:48.035+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 26501
2019-09-04T06:36:48.035+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567579008, 1)
2019-09-04T06:36:48.035+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26506
2019-09-04T06:36:48.035+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26506
2019-09-04T06:36:48.035+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579008, 1), t: 1 }({ ts: Timestamp(1567579008, 1), t: 1 })
2019-09-04T06:36:48.035+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:48.035+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579007, 1), t: 1 }, durableWallTime: new Date(1567579007731), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579007, 1), t: 1 }, lastCommittedWall: new Date(1567579007731), lastOpVisible: { ts: Timestamp(1567579007, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:48.035+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1791 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:18.035+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579007, 1), t: 1 }, durableWallTime: new Date(1567579007731), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579007, 1), t: 1 }, lastCommittedWall: new Date(1567579007731), lastOpVisible: { ts: Timestamp(1567579007, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:48.035+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:18.035+0000
2019-09-04T06:36:48.035+0000 D2 ASIO [RS] Request 1791 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) }
2019-09-04T06:36:48.035+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:48.035+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:48.035+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:18.035+0000
2019-09-04T06:36:48.036+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:36:48.036+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:48.036+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579007, 1), t: 1 }, lastCommittedWall: new Date(1567579007731), lastOpVisible: { ts: Timestamp(1567579007, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:48.036+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1792 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:18.036+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, durableWallTime: new Date(1567578998709), appliedOpTime: { ts: Timestamp(1567578998, 2), t: 1 }, appliedWallTime: new Date(1567578998709), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579007, 1), t: 1 }, lastCommittedWall: new Date(1567579007731), lastOpVisible: { ts: Timestamp(1567579007, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:48.036+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:18.036+0000
2019-09-04T06:36:48.036+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567579008, 1), t: 1 }
2019-09-04T06:36:48.036+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1793 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:58.036+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567579007, 1), t: 1 } }
2019-09-04T06:36:48.036+0000 D2 ASIO [RS] Request 1792 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) }
2019-09-04T06:36:48.036+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:18.036+0000
2019-09-04T06:36:48.036+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:48.036+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:48.036+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:18.036+0000
2019-09-04T06:36:48.036+0000 D2 ASIO [RS] Request 1793 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpApplied: { ts: Timestamp(1567579008, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) }
2019-09-04T06:36:48.036+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpApplied: { ts: Timestamp(1567579008, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:48.036+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:48.036+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:36:48.036+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.036+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.036+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567579003, 1)
2019-09-04T06:36:48.036+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:36:58.287+0000
2019-09-04T06:36:48.036+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:36:58.794+0000
2019-09-04T06:36:48.036+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1794 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:36:58.036+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567579008, 1), t: 1 } }
2019-09-04T06:36:48.036+0000 D3 REPL [conn552] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.036+0000 D3 REPL [conn552] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:52.054+0000
2019-09-04T06:36:48.036+0000 D3 REPL [conn549] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.036+0000 D3 REPL [conn549] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.674+0000
2019-09-04T06:36:48.036+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:18.036+0000
2019-09-04T06:36:48.037+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:48.036+0000 D3 REPL [conn557] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn557] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:15.593+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn558] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn558] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:07.852+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn538] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn538] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:49.841+0000
2019-09-04T06:36:48.037+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:16.839+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn516] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn516] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.245+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn531] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn531] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.661+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn530] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn530] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.223+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn528] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn528] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:58.752+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn541] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn541] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.319+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn560] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn560] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.280+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn551] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn551] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:14.696+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn561] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn561] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:16.415+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn543] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn543] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:48.226+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn544] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn544] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.128+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn555] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn555] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:55.060+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn545] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn545] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.645+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn548] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn548] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.661+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn554] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn554] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:54.153+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn556] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn556] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:00.753+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn550] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn550] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:51.754+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn536] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn536] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.339+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn533] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn533] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.272+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn540] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn540] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.408+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn534] Got notified of new snapshot: { ts: Timestamp(1567579008, 1), t: 1 }, 2019-09-04T06:36:48.032+0000
2019-09-04T06:36:48.037+0000 D3 REPL [conn534] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.288+0000
2019-09-04T06:36:48.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:48.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:48.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:48.072+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:48.106+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:48.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:48.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:48.134+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567579008, 1)
2019-09-04T06:36:48.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:48.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:48.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:48.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:48.206+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:48.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:48.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:48.223+0000 I COMMAND [conn530] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, F33D70B8F63242DD78E8976C6A303C81C5BC1B74), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }
2019-09-04T06:36:48.223+0000 D1 - [conn530] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:36:48.223+0000 W - [conn530] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:48.226+0000 I COMMAND [conn543] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578975, 1), signature: { hash: BinData(0, ABF7625DF5CCC6209B46F27CBA9E5DCE8DF7AB16), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }
2019-09-04T06:36:48.226+0000 D1 - [conn543] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:36:48.226+0000 W - [conn543] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:48.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:36:48.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:36:48.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 776C898CD130420B19F1AC62E088D036EE6F4DDE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:48.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:36:48.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 776C898CD130420B19F1AC62E088D036EE6F4DDE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:48.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 776C898CD130420B19F1AC62E088D036EE6F4DDE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:48.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032) }
2019-09-04T06:36:48.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 776C898CD130420B19F1AC62E088D036EE6F4DDE), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:48.241+0000 I - [conn530] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81]
mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74]
mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7]
mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085]
mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba]
mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521]
mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070]
mongod(+0x10FBF24) [0x56174a083f24]
mongod(+0x10FCE0E) [0x56174a084e0e]
mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b]
mongod(+0x10F5E9C) [0x56174a07de9c]
mongod(+0x1ED7BAB) [0x56174ae5fbab]
mongod(+0x2511C94) [0x56174b499c94]
libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5]
libc.so.6(clone+0x6D) [0x7f0ed833102d]
-----  END BACKTRACE  -----
2019-09-04T06:36:48.242+0000 D1 COMMAND [conn530] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, F33D70B8F63242DD78E8976C6A303C81C5BC1B74), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:48.242+0000 D1 - [conn530] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:36:48.242+0000 W - [conn530] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:36:48.245+0000 I COMMAND [conn516] Command on database admin timed out waiting for read concern to be satisfied.
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578975, 1), signature: { hash: BinData(0, ABF7625DF5CCC6209B46F27CBA9E5DCE8DF7AB16), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:48.245+0000 D1 - [conn516] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:48.245+0000 W - [conn516] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:48.246+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.247+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.258+0000 I - [conn543] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19
ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : 
"45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] 
mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:48.258+0000 D1 COMMAND [conn543] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578975, 1), signature: { hash: BinData(0, ABF7625DF5CCC6209B46F27CBA9E5DCE8DF7AB16), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:48.258+0000 D1 - [conn543] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:48.258+0000 W - [conn543] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:48.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.276+0000 I - [conn516] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
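These failing reads are all the same shape: periodic cluster-time signing-key refreshes, in which other cluster nodes fetch admin.system.keys from this config server with readConcern level "majority" (pinned to an older config opTime via the internal afterOpTime field) and a 30-second maxTimeMS. A minimal shell reproduction of that read (a sketch only: the driver-internal $replData, $clusterTime, and $configServerState fields are dropped, and the Timestamp is taken from the log) would be:

  db.getSiblingDB("admin").runCommand({
      find: "system.keys",
      filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } },  // signing keys not yet expired
      sort: { expiresAt: 1 },
      readConcern: { level: "majority" },  // the wait that is timing out here
      maxTimeMS: 30000
  })

As the waitForReadConcern frame in the backtrace above shows, the server never reaches the query itself: each attempt spends its full 30 seconds waiting for the majority read concern to be satisfied and then fails with MaxTimeMSExpired.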
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:48.276+0000 D1 COMMAND [conn516] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578975, 1), signature: { hash: BinData(0, ABF7625DF5CCC6209B46F27CBA9E5DCE8DF7AB16), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:48.276+0000 D1 - [conn516] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:48.276+0000 W - [conn516] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:48.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.296+0000 I - [conn543] 0x56174b707c81 
0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : 
"E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : 
"/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) 
[0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:36:48.296+0000 W COMMAND [conn543] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:36:48.296+0000 I COMMAND [conn543] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578975, 1), signature: { hash: BinData(0, ABF7625DF5CCC6209B46F27CBA9E5DCE8DF7AB16), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30041ms
2019-09-04T06:36:48.296+0000 D2 NETWORK [conn543] Session from 10.108.2.46:41220 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:36:48.296+0000 I NETWORK [conn543] end connection 10.108.2.46:41220 (86 connections now open)
2019-09-04T06:36:48.306+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:48.318+0000 I - [conn516] ----- BEGIN BACKTRACE ----- [stack addresses, backtrace JSON, and somap identical to the conn543 backtrace at 06:36:48.296 above; duplicate omitted] ----- END BACKTRACE -----
2019-09-04T06:36:48.318+0000 W COMMAND [conn516] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:36:48.318+0000 I COMMAND [conn516] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578975, 1), signature: { hash: BinData(0, ABF7625DF5CCC6209B46F27CBA9E5DCE8DF7AB16), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30041ms
2019-09-04T06:36:48.318+0000 D2 NETWORK [conn516] Session from 10.108.2.47:56734 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:36:48.318+0000 I NETWORK [conn516] end connection 10.108.2.47:56734 (85 connections now open)
2019-09-04T06:36:48.319+0000 I COMMAND [conn541] Command on database admin timed out waiting for read concern to be satisfied.
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, BCF1DE88D4E6F24645E4AC5DC68814FC1B8C75B8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:48.319+0000 D1 - [conn541] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:48.319+0000 W - [conn541] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:48.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.336+0000 I - [conn530] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEEN
S0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : 
"084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:48.336+0000 W COMMAND [conn530] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:36:48.336+0000 I COMMAND [conn530] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578969, 1), signature: { hash: BinData(0, F33D70B8F63242DD78E8976C6A303C81C5BC1B74), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:36:48.336+0000 D2 NETWORK [conn530] Session from 10.108.2.50:50336 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:48.336+0000 I NETWORK [conn530] end connection 10.108.2.50:50336 (84 connections now open) 2019-09-04T06:36:48.353+0000 I - [conn541] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:48.353+0000 D1 COMMAND [conn541] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, BCF1DE88D4E6F24645E4AC5DC68814FC1B8C75B8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:48.353+0000 D1 - [conn541] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:48.353+0000 W - [conn541] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:48.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.373+0000 I - [conn541] 0x56174b707c81 
0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : 
"E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : 
"/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) 
[0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:48.373+0000 W COMMAND [conn541] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:36:48.373+0000 I COMMAND [conn541] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, BCF1DE88D4E6F24645E4AC5DC68814FC1B8C75B8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30044ms 2019-09-04T06:36:48.373+0000 D2 NETWORK [conn541] Session from 10.108.2.44:38900 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:48.373+0000 I NETWORK [conn541] end connection 10.108.2.44:38900 (83 connections now open) 2019-09-04T06:36:48.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.406+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:48.414+0000 I NETWORK [listener] connection accepted from 10.108.2.58:52390 #562 (84 connections now open) 2019-09-04T06:36:48.414+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:48.414+0000 D2 COMMAND [conn562] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:48.414+0000 I NETWORK [conn562] received client metadata from 10.108.2.58:52390 conn562: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:48.414+0000 I COMMAND [conn562] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:48.415+0000 D2 COMMAND [conn562] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579003, 1), signature: { hash: BinData(0, 
A289170F1BAE9FA08647DE97A5B8C27ACEDB2865), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:48.415+0000 D1 REPL [conn562] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567579008, 1), t: 1 } 2019-09-04T06:36:48.415+0000 D3 REPL [conn562] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.425+0000 2019-09-04T06:36:48.416+0000 I NETWORK [listener] connection accepted from 10.108.2.46:41240 #563 (85 connections now open) 2019-09-04T06:36:48.416+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:48.416+0000 D2 COMMAND [conn563] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:48.416+0000 I NETWORK [conn563] received client metadata from 10.108.2.46:41240 conn563: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:48.416+0000 I COMMAND [conn563] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:48.416+0000 D2 COMMAND [conn563] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579005, 1), signature: { hash: BinData(0, 375553C631FDD70A0763FB36DD9A69D312E32B86), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:36:48.416+0000 D1 REPL [conn563] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567579008, 1), t: 1 } 2019-09-04T06:36:48.416+0000 D3 REPL [conn563] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.426+0000 2019-09-04T06:36:48.416+0000 I NETWORK [listener] connection accepted from 10.108.2.51:59382 #564 (86 connections now open) 2019-09-04T06:36:48.416+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:48.417+0000 D2 COMMAND [conn564] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], 
internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:48.417+0000 I NETWORK [conn564] received client metadata from 10.108.2.51:59382 conn564: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:48.417+0000 I COMMAND [conn564] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:48.420+0000 D2 COMMAND [conn564] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578999, 1), signature: { hash: BinData(0, C8771EE9C7A10B42498AC50E6183BD33E56835EB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:48.420+0000 D1 REPL [conn564] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567579008, 1), t: 1 } 2019-09-04T06:36:48.420+0000 D3 REPL [conn564] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.430+0000 2019-09-04T06:36:48.428+0000 D2 COMMAND [conn539] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579000, 1), signature: { hash: BinData(0, 7F1B36D37E33518926A2E3AC697EEB045EB3E700), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:48.428+0000 D1 REPL [conn539] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567579008, 1), t: 1 } 2019-09-04T06:36:48.428+0000 D3 REPL [conn539] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.438+0000 2019-09-04T06:36:48.434+0000 I NETWORK [listener] connection accepted from 10.108.2.64:46850 #565 (87 connections now open) 2019-09-04T06:36:48.434+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:48.434+0000 D2 COMMAND [conn565] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 
2019-09-04T06:36:48.434+0000 I NETWORK [conn565] received client metadata from 10.108.2.64:46850 conn565: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:48.434+0000 I COMMAND [conn565] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:48.437+0000 D2 COMMAND [conn565] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579005, 1), signature: { hash: BinData(0, 375553C631FDD70A0763FB36DD9A69D312E32B86), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:48.437+0000 D1 REPL [conn565] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567579008, 1), t: 1 } 2019-09-04T06:36:48.437+0000 D3 REPL [conn565] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.447+0000 2019-09-04T06:36:48.446+0000 I NETWORK [listener] connection accepted from 10.108.2.45:36774 #566 (88 connections now open) 2019-09-04T06:36:48.446+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:48.446+0000 D2 COMMAND [conn566] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:48.446+0000 I NETWORK [conn566] received client metadata from 10.108.2.45:36774 conn566: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:48.446+0000 I COMMAND [conn566] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:48.451+0000 D2 COMMAND [conn566] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: 
"majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 7BDD2443E6EB09988B11F09F05FE4E15377E0BBD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:48.451+0000 D1 REPL [conn566] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567579008, 1), t: 1 } 2019-09-04T06:36:48.451+0000 D3 REPL [conn566] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.461+0000 2019-09-04T06:36:48.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.506+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:48.517+0000 D2 COMMAND [conn542] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579002, 1), signature: { hash: BinData(0, 38AD2B934DDEB2D907B8E3A2E411898B9145843C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:36:48.517+0000 D1 REPL [conn542] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567579008, 1), t: 1 } 2019-09-04T06:36:48.517+0000 D3 REPL [conn542] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.527+0000 2019-09-04T06:36:48.524+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:36:48.524+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:36:48.524+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:48.524+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:36:48.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.572+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.606+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:48.618+0000 D2 COMMAND [conn45] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.707+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:48.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.746+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.765+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.765+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.807+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:48.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:48.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1795) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:48.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1795 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:36:58.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:48.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:16.839+0000 2019-09-04T06:36:48.839+0000 D2 ASIO [Replication] Request 1795 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: 
Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:48.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:48.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:48.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1795) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:48.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:48.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:36:58.794+0000 2019-09-04T06:36:48.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:36:59.662+0000 2019-09-04T06:36:48.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:48.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:50.839Z 2019-09-04T06:36:48.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:48.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:18.839+0000 2019-09-04T06:36:48.839+0000 D3 EXECUTOR [replexec-4] Not reaping 
because the earliest retirement date is 2019-09-04T06:37:18.839+0000 2019-09-04T06:36:48.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:48.841+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1796) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:48.841+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1796 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:36:58.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:48.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:18.839+0000 2019-09-04T06:36:48.841+0000 D2 ASIO [Replication] Request 1796 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:48.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:48.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:48.841+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1796) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: 
Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:48.841+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:48.841+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:50.841Z 2019-09-04T06:36:48.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:18.839+0000 2019-09-04T06:36:48.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.907+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:48.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:48.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:48.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:49.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:49.007+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:49.034+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:49.034+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:49.034+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579008, 1) 2019-09-04T06:36:49.034+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26553 2019-09-04T06:36:49.034+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:49.034+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:49.034+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26553 2019-09-04T06:36:49.035+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26556 2019-09-04T06:36:49.035+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26556 2019-09-04T06:36:49.035+0000 D3 REPL [rsSync-0] returning minvalid: { ts: 
Timestamp(1567579008, 1), t: 1 }({ ts: Timestamp(1567579008, 1), t: 1 }) 2019-09-04T06:36:49.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:49.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:49.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 776C898CD130420B19F1AC62E088D036EE6F4DDE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:49.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:49.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 776C898CD130420B19F1AC62E088D036EE6F4DDE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:49.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 776C898CD130420B19F1AC62E088D036EE6F4DDE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:49.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032) } 2019-09-04T06:36:49.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 776C898CD130420B19F1AC62E088D036EE6F4DDE), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:49.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:49.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:49.107+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:49.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:49.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:49.152+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:49.152+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:49.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:49.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } 
numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.207+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:49.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:36:49.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:36:49.246+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.247+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.265+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.265+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.307+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:49.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.329+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.329+0000 D2 WRITE [startPeriodicThreadToAbortExpiredTransactions] Beginning scanSessions. Scanning 0 sessions.
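The repeated isMaster round-trips above are not application queries: every connected driver and mongos polls each member it knows about to track the topology, which is why the same handful of connections reappears every ~500 ms. A rough client-side equivalent of one monitoring round, sketched with PyMongo (host and port are the ones this log was taken from; the manual polling loop is illustrative only, real drivers do this internally per the server-discovery-and-monitoring spec):

import time
from pymongo import MongoClient

# Connect straight to the member from this log; directConnection avoids
# replica-set discovery so we observe just this node, as a monitor would.
client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True,
                     serverSelectionTimeoutMS=5000)

for _ in range(3):
    # The same command the conn* contexts are logging ("hello" on 4.4+).
    reply = client.admin.command("isMaster")
    print(reply.get("setName"), reply.get("ismaster"), reply.get("secondary"))
    time.sleep(0.5)  # comparable cadence to the stream seen in this log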
2019-09-04T06:36:49.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.370+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0003
2019-09-04T06:36:49.370+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry
2019-09-04T06:36:49.370+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1797 -- target:[cmodb812.togewa.com:27018] db:admin expDate:2019-09-04T06:36:54.370+0000 cmd:{ isMaster: 1 }
2019-09-04T06:36:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:36:49.370+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1798 -- target:[cmodb813.togewa.com:27018] db:admin expDate:2019-09-04T06:36:54.370+0000 cmd:{ isMaster: 1 }
2019-09-04T06:36:49.370+0000 D2 COMMAND [shard-registry-reload] run command config.$cmd { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" }
2019-09-04T06:36:49.370+0000 D3 STORAGE [shard-registry-reload] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:49.370+0000 D5 QUERY [shard-registry-reload] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:49.370+0000 D5 QUERY [shard-registry-reload] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:36:49.370+0000 D5 QUERY [shard-registry-reload] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:36:49.370+0000 D5 QUERY [shard-registry-reload] Rated tree: $and
2019-09-04T06:36:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:49.370+0000 D5 QUERY [shard-registry-reload] Planner: outputting a collscan:
COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
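The planner trace above is the shard-registry reload reading config.shards: the find has no predicate, so neither the host_1 index nor the _id_ index applies and the planner falls back to a collection scan over the four shard documents. A way to see the same plan from a client, sketched with PyMongo (explain output shape varies slightly across server versions):

from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)

# Same unfiltered read of config.shards that the registry reload runs;
# {explain: {find: ...}} asks the server for the plan instead of the docs.
plan = client.config.command("explain", {"find": "shards", "filter": {}})
print(plan["queryPlanner"]["winningPlan"])  # expect a COLLSCAN stage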
2019-09-04T06:36:49.370+0000 D2 QUERY [shard-registry-reload] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:36:49.370+0000 D3 STORAGE [shard-registry-reload] begin_transaction on local snapshot Timestamp(1567579008, 1)
2019-09-04T06:36:49.370+0000 D3 STORAGE [shard-registry-reload] WT begin_transaction for snapshot id 26574
2019-09-04T06:36:49.370+0000 D3 STORAGE [shard-registry-reload] WT rollback_transaction for snapshot id 26574
2019-09-04T06:36:49.370+0000 I COMMAND [shard-registry-reload] command config.shards command: find { find: "shards", $readPreference: { mode: "nearest", tags: [] }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:756 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:36:49.370+0000 D1 SHARDING [shard-registry-reload] found 4 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1567579008, 1), t: 1 }
2019-09-04T06:36:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018
2019-09-04T06:36:49.370+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1797 finished with response: { hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb812.togewa.com:27018", me: "cmodb812.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567579006, 1), t: 1 }, lastWriteDate: new Date(1567579006000), majorityOpTime: { ts: Timestamp(1567579006, 1), t: 1 }, majorityWriteDate: new Date(1567579006000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567579009370), logicalSessionTimeoutMinutes: 30, connectionId: 21713, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579006, 1), $configServerState: { opTime: { ts: Timestamp(1567579007, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579006, 1) }
2019-09-04T06:36:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0000, with CS shard0000/cmodb806.togewa.com:27018,cmodb807.togewa.com:27018
2019-09-04T06:36:49.370+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb812.togewa.com:27018", me: "cmodb812.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567579006, 1), t: 1 }, lastWriteDate: new Date(1567579006000), majorityOpTime: { ts: Timestamp(1567579006, 1), t: 1 }, majorityWriteDate: new Date(1567579006000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567579009370), logicalSessionTimeoutMinutes: 30, connectionId: 21713, minWireVersion: 0, maxWireVersion:
8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579006, 1), $configServerState: { opTime: { ts: Timestamp(1567579007, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579006, 1) } target: cmodb812.togewa.com:27018 2019-09-04T06:36:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:36:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0001, with CS shard0001/cmodb808.togewa.com:27018,cmodb809.togewa.com:27018 2019-09-04T06:36:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:36:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0002, with CS shard0002/cmodb810.togewa.com:27018,cmodb811.togewa.com:27018 2019-09-04T06:36:49.370+0000 D1 NETWORK [shard-registry-reload] Started targeter for shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018 2019-09-04T06:36:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard shard0003, with CS shard0003/cmodb812.togewa.com:27018,cmodb813.togewa.com:27018 2019-09-04T06:36:49.370+0000 D3 SHARDING [shard-registry-reload] Adding shard config, with CS 2019-09-04T06:36:49.374+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1798 finished with response: { hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb812.togewa.com:27018", me: "cmodb813.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567579006, 1), t: 1 }, lastWriteDate: new Date(1567579006000), majorityOpTime: { ts: Timestamp(1567579006, 1), t: 1 }, majorityWriteDate: new Date(1567579006000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567579009368), logicalSessionTimeoutMinutes: 30, connectionId: 13308, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579006, 1), $configServerState: { opTime: { ts: Timestamp(1567579008, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579006, 1) } 2019-09-04T06:36:49.374+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb812.togewa.com:27018", "cmodb813.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27025" ], setName: "shard0003", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb812.togewa.com:27018", me: "cmodb813.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567579006, 1), t: 1 }, lastWriteDate: new Date(1567579006000), majorityOpTime: { ts: Timestamp(1567579006, 1), t: 1 }, majorityWriteDate: new Date(1567579006000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567579009368), logicalSessionTimeoutMinutes: 30, connectionId: 13308, minWireVersion: 0, 
maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579006, 1), $configServerState: { opTime: { ts: Timestamp(1567579008, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579006, 1) } target: cmodb813.togewa.com:27018 2019-09-04T06:36:49.374+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0003 took 4ms 2019-09-04T06:36:49.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:49.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0002 2019-09-04T06:36:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1799 -- target:[cmodb810.togewa.com:27018] db:admin expDate:2019-09-04T06:36:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:36:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1800 -- target:[cmodb811.togewa.com:27018] db:admin expDate:2019-09-04T06:36:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:36:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0001 2019-09-04T06:36:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1801 -- target:[cmodb808.togewa.com:27018] db:admin expDate:2019-09-04T06:36:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:36:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1802 -- target:[cmodb809.togewa.com:27018] db:admin expDate:2019-09-04T06:36:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:36:49.385+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set shard0000 2019-09-04T06:36:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1803 -- target:[cmodb806.togewa.com:27018] db:admin expDate:2019-09-04T06:36:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:36:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Scheduling remote command request: RemoteCommand 1804 -- target:[cmodb807.togewa.com:27018] db:admin expDate:2019-09-04T06:36:54.385+0000 cmd:{ isMaster: 1 } 2019-09-04T06:36:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1799 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578999, 1), t: 1 }, lastWriteDate: new Date(1567578999000), majorityOpTime: { ts: Timestamp(1567578999, 1), t: 1 }, majorityWriteDate: new Date(1567578999000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567579009386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, 
$gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578999, 1), $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567579006, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578999, 1) } 2019-09-04T06:36:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb810.togewa.com:27018", me: "cmodb810.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567578999, 1), t: 1 }, lastWriteDate: new Date(1567578999000), majorityOpTime: { ts: Timestamp(1567578999, 1), t: 1 }, majorityWriteDate: new Date(1567578999000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567579009386), logicalSessionTimeoutMinutes: 30, connectionId: 20469, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567578999, 1), $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567579006, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578999, 1) } target: cmodb810.togewa.com:27018 2019-09-04T06:36:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1803 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567579003, 1), t: 1 }, lastWriteDate: new Date(1567579003000), majorityOpTime: { ts: Timestamp(1567579003, 1), t: 1 }, majorityWriteDate: new Date(1567579003000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567579009385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579003, 1), $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567579006, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579003, 1) } 2019-09-04T06:36:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb806.togewa.com:27018", me: "cmodb806.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567579003, 1), t: 1 }, 
lastWriteDate: new Date(1567579003000), majorityOpTime: { ts: Timestamp(1567579003, 1), t: 1 }, majorityWriteDate: new Date(1567579003000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567579009385), logicalSessionTimeoutMinutes: 30, connectionId: 16400, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579003, 1), $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567579006, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579003, 1) } target: cmodb806.togewa.com:27018 2019-09-04T06:36:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1801 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567579007, 1), t: 1 }, lastWriteDate: new Date(1567579007000), majorityOpTime: { ts: Timestamp(1567579007, 1), t: 1 }, majorityWriteDate: new Date(1567579007000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567579009386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579007, 1), $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567579007, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579007, 1) } 2019-09-04T06:36:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: true, secondary: false, primary: "cmodb808.togewa.com:27018", me: "cmodb808.togewa.com:27018", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1567579007, 1), t: 1 }, lastWriteDate: new Date(1567579007000), majorityOpTime: { ts: Timestamp(1567579007, 1), t: 1 }, majorityWriteDate: new Date(1567579007000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567579009386), logicalSessionTimeoutMinutes: 30, connectionId: 18183, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579007, 1), $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567579007, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579007, 1) } target: cmodb808.togewa.com:27018 2019-09-04T06:36:49.385+0000 D2 ASIO 
[ReplicaSetMonitor-TaskExecutor] Request 1804 finished with response: { hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567579003, 1), t: 1 }, lastWriteDate: new Date(1567579003000), majorityOpTime: { ts: Timestamp(1567579003, 1), t: 1 }, majorityWriteDate: new Date(1567579003000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567579009386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579003, 1), $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567579006, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579003, 1) } 2019-09-04T06:36:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb806.togewa.com:27018", "cmodb807.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27022" ], setName: "shard0000", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb806.togewa.com:27018", me: "cmodb807.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567579003, 1), t: 1 }, lastWriteDate: new Date(1567579003000), majorityOpTime: { ts: Timestamp(1567579003, 1), t: 1 }, majorityWriteDate: new Date(1567579003000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567579009386), logicalSessionTimeoutMinutes: 30, connectionId: 17074, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579003, 1), $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567579006, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579003, 1) } target: cmodb807.togewa.com:27018 2019-09-04T06:36:49.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0000 took 0ms 2019-09-04T06:36:49.385+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1802 finished with response: { hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567579007, 1), t: 1 }, lastWriteDate: new Date(1567579007000), majorityOpTime: { ts: Timestamp(1567579007, 1), t: 1 }, majorityWriteDate: new Date(1567579007000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567579009386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579007, 1), $configServerState: { opTime: { ts: Timestamp(1567578990, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567579007, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579007, 1) } 2019-09-04T06:36:49.385+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb808.togewa.com:27018", "cmodb809.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27023" ], setName: "shard0001", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb808.togewa.com:27018", me: "cmodb809.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567579007, 1), t: 1 }, lastWriteDate: new Date(1567579007000), majorityOpTime: { ts: Timestamp(1567579007, 1), t: 1 }, majorityWriteDate: new Date(1567579007000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567579009386), logicalSessionTimeoutMinutes: 30, connectionId: 13302, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579007, 1), $configServerState: { opTime: { ts: Timestamp(1567578990, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567579007, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579007, 1) } target: cmodb809.togewa.com:27018 2019-09-04T06:36:49.385+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0001 took 0ms 2019-09-04T06:36:49.390+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] Request 1800 finished with response: { hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578999, 1), t: 1 }, lastWriteDate: new Date(1567578999000), majorityOpTime: { ts: Timestamp(1567578999, 1), t: 1 }, majorityWriteDate: new Date(1567578999000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567579009386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578999, 1), $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567579006, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578999, 1) } 2019-09-04T06:36:49.390+0000 D3 EXECUTOR [ReplicaSetMonitor-TaskExecutor] Received remote response: RemoteOnAnyResponse -- cmd:{ hosts: [ "cmodb810.togewa.com:27018", "cmodb811.togewa.com:27018" ], arbiters: [ "cmodb805.togewa.com:27024" ], setName: "shard0002", setVersion: 2, ismaster: false, secondary: true, primary: "cmodb810.togewa.com:27018", me: "cmodb811.togewa.com:27018", lastWrite: { opTime: { ts: Timestamp(1567578999, 1), t: 1 }, lastWriteDate: new Date(1567578999000), majorityOpTime: { ts: Timestamp(1567578999, 1), t: 1 }, 
majorityWriteDate: new Date(1567578999000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1567579009386), logicalSessionTimeoutMinutes: 30, connectionId: 13284, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ "snappy", "zstd", "zlib" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567578999, 1), $configServerState: { opTime: { ts: Timestamp(1567578998, 2), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1567579006, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567578999, 1) } target: cmodb811.togewa.com:27018
2019-09-04T06:36:49.390+0000 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set shard0002 took 5ms
2019-09-04T06:36:49.407+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:49.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.498+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.498+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.507+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:49.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.572+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.608+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:49.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.652+0000 D2 COMMAND [conn23] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.652+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.708+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:49.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
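What follows is the distributed-lock pinger on this node attempting its periodic upsert into config.lockpings. This member is a secondary (state 2 in the heartbeats above), and findAndModify is a write, so the command is refused with NotMaster; because traceAllExceptions is enabled in systemLog, this otherwise routine refusal also dumps a full backtrace. The same ping written the safe way, through a replica-set connection that targets the primary, sketched with PyMongo (URI hosts and field values mirror this log; hand-writing config collections on a live cluster is not something to actually do):

import datetime
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

# replicaSet= makes the driver discover the configrs primary and route
# the write there, unlike the local pinger thread that asserts below.
client = MongoClient(
    "mongodb://cmodb802.togewa.com:27019,cmodb803.togewa.com:27019,"
    "cmodb804.togewa.com:27019/?replicaSet=configrs")

lockpings = client.config.get_collection(
    "lockpings",
    write_concern=WriteConcern(w="majority", wtimeout=15000))

# Same document shape as the logged findAndModify on config.lockpings.
lockpings.find_one_and_update(
    {"_id": "ConfigServer"},
    {"$set": {"ping": datetime.datetime.utcnow()}},
    upsert=True)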
writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } 2019-09-04T06:36:49.721+0000 D4 - [replSetDistLockPinger] Taking ticket. Available: 1000000000 2019-09-04T06:36:49.721+0000 D1 - [replSetDistLockPinger] User Assertion: NotMaster: Not primary while running findAndModify command on collection config.lockpings src/mongo/db/commands/find_and_modify.cpp 178 2019-09-04T06:36:49.721+0000 W - [replSetDistLockPinger] DBException thrown :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.lockpings 2019-09-04T06:36:49.740+0000 I - [replSetDistLockPinger] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x561749a63043 0x56174a33a606 0x56174a33ba55 0x56174b117894 0x56174a082899 0x56174a083f53 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174af452ee 0x56174af457fa 0x56174b0c25e2 0x56174a244e7b 0x56174a243c1e 0x56174a42b1dc 0x56174a23b7b1 0x56174a232a0a 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"ADB043"},{"b":"561748F88000","o":"13B2606"},{"b":"561748F88000","o":"13B3A55"},{"b":"561748F88000","o":"218F894","s":"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE"},{"b":"561748F88000","o":"10FA899"},{"b":"561748F88000","o":"10FBF53"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"1FBD2EE"},{"b":"561748F88000","o":"1FBD7FA","s":"_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"561748F88000","o":"213A5E2","s":"_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE"},{"b":"561748F88000","o":"12BCE7B","s":"_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE"},{"b":"561748F88000","o":"12BBC1E","s":"_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE"},{"b":"561748F88000","o":"14A31DC","s":"_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE"},{"b":"561748F88000","o":"12B37B1","s":"_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE"},{"b":"561748F88000","o":"12AAA0A","s":"_ZN5mongo22ReplSetDistLockManager6doTaskEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : 
"3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, 
{ "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xADB043) [0x561749a63043] mongod(+0x13B2606) [0x56174a33a606] mongod(+0x13B3A55) [0x56174a33ba55] mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE+0x74) [0x56174b117894] mongod(+0x10FA899) [0x56174a082899] mongod(+0x10FBF53) [0x56174a083f53] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(+0x1FBD2EE) [0x56174af452ee] mongod(_ZN5mongo14DBDirectClient4callERNS_7MessageES2_bPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3A) [0x56174af457fa] mongod(_ZN5mongo12DBClientBase20runCommandWithTargetENS_12OpMsgRequestE+0x1F2) [0x56174b0c25e2] mongod(_ZN5mongo13RSLocalClient14runCommandOnceEPNS_16OperationContextENS_10StringDataERKNS_7BSONObjE+0x4FB) [0x56174a244e7b] mongod(_ZN5mongo10ShardLocal11_runCommandEPNS_16OperationContextERKNS_21ReadPreferenceSettingENS_10StringDataENS_8DurationISt5ratioILl1ELl1000EEEERKNS_7BSONObjE+0x2E) [0x56174a243c1e] mongod(_ZN5mongo5Shard32runCommandWithFixedRetryAttemptsEPNS_16OperationContextERKNS_21ReadPreferenceSettingERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_7BSONObjENS_8DurationISt5ratioILl1ELl1000EEEENS0_11RetryPolicyE+0xDC) [0x56174a42b1dc] mongod(_ZN5mongo19DistLockCatalogImpl4pingEPNS_16OperationContextENS_10StringDataENS_6Date_tE+0x571) [0x56174a23b7b1] mongod(_ZN5mongo22ReplSetDistLockManager6doTaskEv+0x27A) [0x56174a232a0a] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:49.740+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. 
2019-09-04T06:36:49.740+0000 D2 REPL [replSetDistLockPinger] Waiting for write concern. OpTime: { ts: Timestamp(1567579008, 1), t: 1 }, write concern: { w: "majority", wtimeout: 15000 }
2019-09-04T06:36:49.740+0000 D4 STORAGE [replSetDistLockPinger] flushed journal
2019-09-04T06:36:49.740+0000 D1 COMMAND [replSetDistLockPinger] assertion while executing command 'findAndModify' on database 'config' with arguments '{ findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567579009721) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" }': NotMaster: Not primary while running findAndModify command on collection config.lockpings
2019-09-04T06:36:49.740+0000 I COMMAND [replSetDistLockPinger] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "ConfigServer" }, update: { $set: { ping: new Date(1567579009721) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } numYields:0 ok:0 errMsg:"Not primary while running findAndModify command on collection config.lockpings" errName:NotMaster errCode:10107 reslen:527 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 1 } } } flowControl:{ acquireCount: 1 } protocol:op_msg 19ms
2019-09-04T06:36:49.746+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:49.808+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:49.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:49.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
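The conn538 failure below is different in kind from the pinger noise above: a remote client asked for readConcern majority with afterOpTime at term 92 (Timestamp(1566459168, 1), roughly two weeks older than this log), while this config set is at term 1, so the server apparently can never satisfy the opTime gate and the 30-second maxTimeMS budget expires instead, surfacing as MaxTimeMSExpired. The client-side shape of that deadline, sketched with PyMongo (afterOpTime itself is internal mongos-to-config-server plumbing, not something application code sets):

from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout
from pymongo.read_concern import ReadConcern

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)

shards = client.config.get_collection(
    "shards", read_concern=ReadConcern("majority"))

try:
    # max_time_ms sets the same server-side budget as the logged
    # maxTimeMS: 30000; if any stage of the command (including the
    # read-concern wait) overruns it, the server aborts the operation.
    docs = list(shards.find({}, max_time_ms=30000))
except ExecutionTimeout:
    print("MaxTimeMSExpired: operation exceeded time limit")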
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, EB65E230B52E35DFBC6E0EB448D0124ABC83B686), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:49.842+0000 D1 - [conn538] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:49.842+0000 W - [conn538] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:49.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:49.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:49.859+0000 I - [conn538] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"5617
48F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : 
"A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) 
[0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:49.859+0000 D1 COMMAND [conn538] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, EB65E230B52E35DFBC6E0EB448D0124ABC83B686), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:49.859+0000 D1 - [conn538] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:49.859+0000 W - [conn538] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:49.879+0000 I - [conn538] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19Servi
ceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : 
"799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:49.879+0000 W COMMAND [conn538] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:36:49.879+0000 I COMMAND [conn538] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, EB65E230B52E35DFBC6E0EB448D0124ABC83B686), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30028ms 2019-09-04T06:36:49.880+0000 D2 NETWORK [conn538] Session from 10.108.2.51:59364 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:49.880+0000 I NETWORK [conn538] end connection 10.108.2.51:59364 (87 connections now open) 2019-09-04T06:36:49.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:49.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:49.908+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:49.977+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:49.977+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:49.998+0000 D2 COMMAND [conn15] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:49.998+0000 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:50.000+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:36:50.000+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:36:50.000+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 
reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:36:50.008+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:50.012+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:36:50.012+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:36:50.016+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:36:50.016+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:36:50.016+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:36:50.016+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:36:50.016+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:50.017+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:50.018+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:50.018+0000 D2 COMMAND [conn90] command: replSetGetStatus 2019-09-04T06:36:50.018+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:36:50.018+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:36:50.018+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:50.018+0000 D5 QUERY [conn90] Beginning planning... 
============================= Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT Canonical query: ns=config.chunksTree: jumbo $eq true Sort: {} Proj: {} ============================= 2019-09-04T06:36:50.018+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:36:50.018+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:36:50.018+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:36:50.018+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:36:50.018+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:36:50.018+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:36:50.018+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:36:50.018+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN ---ns = config.chunks ---filter = jumbo $eq true ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:36:50.018+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:36:50.018+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567579008, 1) 2019-09-04T06:36:50.018+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26600 2019-09-04T06:36:50.018+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26600 2019-09-04T06:36:50.018+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:50.034+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:50.034+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:50.034+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579008, 1) 2019-09-04T06:36:50.034+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26602 2019-09-04T06:36:50.034+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:50.034+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:50.034+0000 D3 
STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26602 2019-09-04T06:36:50.035+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:50.035+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:36:50.040+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26606 2019-09-04T06:36:50.040+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26606 2019-09-04T06:36:50.040+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579008, 1), t: 1 }({ ts: Timestamp(1567579008, 1), t: 1 }) 2019-09-04T06:36:50.040+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:36:50.040+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:50.040+0000 D5 QUERY [conn90] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: 1 } Proj: {} ============================= 2019-09-04T06:36:50.040+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:36:50.040+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:36:50.040+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567579008, 1) 2019-09-04T06:36:50.040+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26608 2019-09-04T06:36:50.040+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26608 2019-09-04T06:36:50.041+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:50.042+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:36:50.042+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:50.042+0000 D5 QUERY [conn90] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.oplog.rs batchSize=1 limit=1Tree: ts exists Sort: { $natural: -1 } Proj: {} ============================= 2019-09-04T06:36:50.042+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:36:50.042+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:36:50.042+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567579008, 1) 2019-09-04T06:36:50.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26610 2019-09-04T06:36:50.042+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26610 2019-09-04T06:36:50.042+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:570 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:50.042+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:36:50.042+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. 
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:36:50.042+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:36:50.042+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:50.042+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:36:50.042+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:36:50.042+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26613 2019-09-04T06:36:50.042+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26613 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26614 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26614 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26615 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26615 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26616 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26616 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26617 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26617 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26618 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
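The conn90 session above is a monitoring client: it authenticates with SCRAM-SHA-1 as dba_root, then issues serverStatus, replSetGetStatus, and listDatabases; the listDatabases call is what drives the per-collection catalog metadata lookups recorded here. A minimal pymongo sketch of that same command sequence, assuming a reachable config-server member; the URI, port, and password are placeholders, not values taken from this log:

    # Sketch of the conn90 monitoring sequence: SCRAM-SHA-1 auth as
    # dba_root, then serverStatus, replSetGetStatus and listDatabases.
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://config-host:27019/",  # placeholder URI
        username="dba_root",
        password="<password>",           # placeholder
        authSource="admin",
        authMechanism="SCRAM-SHA-1",
    )

    server_status = client.admin.command("serverStatus")
    rs_status = client.admin.command("replSetGetStatus")
    databases = client.admin.command("listDatabases")
    print([d["name"] for d in databases["databases"]])
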
2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26618 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26619 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26619 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26620 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26620 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26621 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26621 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26622 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26622 
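The jumbo-chunk check logged earlier on conn90 ({ count: "chunks", query: { jumbo: true } }) ran as a COLLSCAN because, per the D5 QUERY lines, none of the config.chunks indexes cover the jumbo field ("Planner: outputted 0 indexed solutions"). A sketch of the same check in pymongo, with a placeholder client and the log's secondaryPreferred read preference:

    # Count jumbo chunks the way conn90 does; with no index on "jumbo"
    # the server falls back to a collection scan, as the planner output
    # above shows.
    from pymongo import MongoClient, ReadPreference

    client = MongoClient("mongodb://config-host:27019/")  # placeholder URI
    chunks = client.config.get_collection(
        "chunks", read_preference=ReadPreference.SECONDARY_PREFERRED
    )
    print("jumbo chunks:", chunks.count_documents({"jumbo": True}))
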
2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26623 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
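The config.chunks metadata above lists three unique secondary indexes (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1) alongside _id_. Purely as an illustration of how those specs map onto index definitions (these are system indexes the config server creates and owns, not something to build by hand):

    # Illustration only: the config.chunks index specs from the metadata
    # above, expressed as create_index() calls. Do not run this against a
    # live cluster; the config server manages these indexes itself.
    from pymongo import ASCENDING, MongoClient

    client = MongoClient("mongodb://config-host:27019/")  # placeholder URI
    specs = [
        ([("ns", ASCENDING), ("min", ASCENDING)], "ns_1_min_1"),
        ([("ns", ASCENDING), ("shard", ASCENDING), ("min", ASCENDING)],
         "ns_1_shard_1_min_1"),
        ([("ns", ASCENDING), ("lastmod", ASCENDING)], "ns_1_lastmod_1"),
    ]
    for keys, name in specs:
        client.config.chunks.create_index(keys, unique=True, name=name)
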
2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26623 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26624 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26624 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26625 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26625 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26626 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:36:50.043+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26626 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26627 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
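Each "fetched CCE metadata" document above is an entry from the storage engine's durable catalog: it maps a namespace to its WiredTiger idents (the config/collection/... file under ident plus one config/index/... file per index under idxIdent) and embeds the index definitions under md.indexes[].spec. The supported way to read the same index definitions without D3 STORAGE logging is listIndexes; the sketch below is a minimal client-side illustration, assuming pymongo plus the host, port, and disabled authentication suggested by this log, none of which the log itself can confirm for another environment.

    from pymongo import MongoClient

    # Connect straight to the config-server member seen in this log;
    # directConnection skips replica-set discovery and allows reads on
    # a secondary member (assumption: no authentication is required).
    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)

    # list_indexes returns the same specs (v, key, name) that the catalog
    # entries above carry under md.indexes[].spec for config.tags.
    for spec in client["config"]["tags"].list_indexes():
        print(spec["name"], dict(spec["key"]))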
2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26627 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26628 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26628 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26629 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26629 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26630 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26630 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26631 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 26631 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26632 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26632 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26633 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26633 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26634 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26634 2019-09-04T06:36:50.044+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:36:50.044+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26636 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26636 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26637 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26637 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26638 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26638 2019-09-04T06:36:50.044+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:50.044+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:36:50.044+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26640 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26640 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26641 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26641 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26642 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26642 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26643 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26643 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26644 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26644 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26645 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26645 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26646 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26646 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 26647 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26647 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26648 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26648 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26649 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26649 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26650 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26650 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26651 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26651 2019-09-04T06:36:50.045+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:50.045+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26653 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26653 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26654 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26654 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26655 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26655 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26656 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26656 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26657 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26657 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 26658 2019-09-04T06:36:50.045+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 26658 2019-09-04T06:36:50.045+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:36:50.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.072+0000 I 
COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.108+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:50.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.208+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:50.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:50.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 999999999 Now: 1000000000 2019-09-04T06:36:50.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 776C898CD130420B19F1AC62E088D036EE6F4DDE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:50.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:50.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 776C898CD130420B19F1AC62E088D036EE6F4DDE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:50.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 776C898CD130420B19F1AC62E088D036EE6F4DDE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:50.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032) } 2019-09-04T06:36:50.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 776C898CD130420B19F1AC62E088D036EE6F4DDE), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.247+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.247+0000 I COMMAND [conn51] 
command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.308+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:50.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.408+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:50.477+0000 D2 COMMAND [conn29] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.477+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.509+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:50.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.572+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.609+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:50.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.663+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.709+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:50.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.747+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 
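The isMaster round-trips above recur roughly every 500 ms per connection (conn58 at :50.052 and :50.552, conn52 at :50.071 and :50.571, and so on). This is the standard topology-monitoring poll that drivers and cluster peers run against every member rather than application traffic, which is consistent with each completing in 0ms with locks:{}. A minimal sketch of issuing the same command from a client, again assuming pymongo and the host, port, and disabled auth implied by this log:

    from pymongo import MongoClient

    # Direct connection to one member, mirroring the per-connection
    # monitoring polls in the log (assumption: no authentication).
    client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)

    # The same command the conn* threads are answering; reslen:907 in the
    # log entries is the size of this reply as serialized on the wire.
    reply = client.admin.command("isMaster")
    print(reply.get("setName"),
          "primary:", reply.get("ismaster"),
          "secondary:", reply.get("secondary"))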
2019-09-04T06:36:50.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.809+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:50.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:50.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1805) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:50.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1805 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:37:00.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:50.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:18.839+0000 2019-09-04T06:36:50.839+0000 D2 ASIO [Replication] Request 1805 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:50.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), 
electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:50.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:50.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1805) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:50.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:50.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:36:59.662+0000 2019-09-04T06:36:50.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:37:01.510+0000 2019-09-04T06:36:50.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:50.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:52.839Z 2019-09-04T06:36:50.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:50.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:20.839+0000 2019-09-04T06:36:50.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:20.839+0000 2019-09-04T06:36:50.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:50.841+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1806) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:50.841+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1806 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:00.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:50.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:20.839+0000 2019-09-04T06:36:50.841+0000 D2 ASIO [Replication] Request 1806 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new 
Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:50.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:50.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:50.841+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1806) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:50.841+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:50.841+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:52.841Z 2019-09-04T06:36:50.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:20.839+0000 2019-09-04T06:36:50.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:50.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" 
} 2019-09-04T06:36:50.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:50.909+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:51.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:51.009+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:51.035+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:51.035+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:51.035+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579008, 1) 2019-09-04T06:36:51.035+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26686 2019-09-04T06:36:51.035+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:51.035+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:51.035+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26686 2019-09-04T06:36:51.040+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26689 2019-09-04T06:36:51.040+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26689 2019-09-04T06:36:51.040+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579008, 1), t: 1 }({ ts: Timestamp(1567579008, 1), t: 1 }) 2019-09-04T06:36:51.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 776C898CD130420B19F1AC62E088D036EE6F4DDE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:51.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:51.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 776C898CD130420B19F1AC62E088D036EE6F4DDE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:51.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 776C898CD130420B19F1AC62E088D036EE6F4DDE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:51.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: 
"configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032) } 2019-09-04T06:36:51.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 776C898CD130420B19F1AC62E088D036EE6F4DDE), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.072+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.109+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:51.115+0000 I NETWORK [listener] connection accepted from 10.108.2.56:35926 #567 (88 connections now open) 2019-09-04T06:36:51.115+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:51.115+0000 D2 COMMAND [conn567] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:51.115+0000 I NETWORK [conn567] received client metadata from 10.108.2.56:35926 conn567: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:51.115+0000 I COMMAND [conn567] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:51.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.131+0000 I COMMAND [conn544] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578971, 1), signature: { hash: BinData(0, BF9ABB3B036DCA9BFE2E0631E7AB0668EE141209), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:51.131+0000 D1 - [conn544] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:51.131+0000 W - [conn544] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.148+0000 I - [conn544] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:51.148+0000 D1 COMMAND [conn544] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578971, 1), signature: { hash: BinData(0, BF9ABB3B036DCA9BFE2E0631E7AB0668EE141209), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.148+0000 D1 - [conn544] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:51.148+0000 W - [conn544] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.163+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.168+0000 I - [conn544] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo
19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : 
"799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:51.168+0000 W COMMAND [conn544] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:36:51.168+0000 I COMMAND [conn544] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578971, 1), signature: { hash: BinData(0, BF9ABB3B036DCA9BFE2E0631E7AB0668EE141209), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:36:51.168+0000 D2 NETWORK [conn544] Session from 10.108.2.56:35910 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:51.168+0000 I NETWORK [conn544] end connection 10.108.2.56:35910 (87 connections now open) 2019-09-04T06:36:51.201+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.209+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:51.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:51.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:51.246+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.247+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.309+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:51.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.410+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:51.510+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:51.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.572+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.610+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:51.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.648+0000 I COMMAND [conn545] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, BCF1DE88D4E6F24645E4AC5DC68814FC1B8C75B8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:51.648+0000 D1 - [conn545] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:51.649+0000 W - [conn545] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.650+0000 D2 COMMAND [conn546] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 3C684505E6B30E411E62B4304F3BF68143E41091), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:51.650+0000 D1 REPL [conn546] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567579008, 1), t: 1 } 2019-09-04T06:36:51.650+0000 D3 REPL [conn546] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.660+0000 2019-09-04T06:36:51.650+0000 D2 COMMAND [conn547] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579002, 1), signature: { hash: BinData(0, 38AD2B934DDEB2D907B8E3A2E411898B9145843C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:51.650+0000 D1 REPL [conn547] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567579008, 1), t: 1 } 2019-09-04T06:36:51.650+0000 D3 REPL [conn547] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.660+0000 2019-09-04T06:36:51.651+0000 I NETWORK [listener] connection accepted from 10.108.2.54:49442 #568 (88 connections now open) 2019-09-04T06:36:51.651+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:51.651+0000 D2 COMMAND [conn568] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:51.651+0000 I NETWORK [conn568] received client metadata from 10.108.2.54:49442 conn568: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:51.651+0000 I COMMAND 
[conn568] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:51.652+0000 I NETWORK [listener] connection accepted from 10.108.2.73:52404 #569 (89 connections now open) 2019-09-04T06:36:51.652+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:51.652+0000 D2 COMMAND [conn569] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:51.652+0000 I NETWORK [conn569] received client metadata from 10.108.2.73:52404 conn569: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:51.652+0000 I COMMAND [conn569] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:51.652+0000 D2 COMMAND [conn569] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579002, 1), signature: { hash: BinData(0, 38AD2B934DDEB2D907B8E3A2E411898B9145843C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:36:51.652+0000 D1 REPL [conn569] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567579008, 1), t: 1 } 2019-09-04T06:36:51.652+0000 D3 REPL [conn569] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.662+0000 2019-09-04T06:36:51.659+0000 I NETWORK [listener] connection accepted from 10.108.2.47:56772 #570 (90 connections now open) 2019-09-04T06:36:51.659+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:51.659+0000 D2 COMMAND [conn570] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, 
hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:51.659+0000 I NETWORK [conn570] received client metadata from 10.108.2.47:56772 conn570: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:51.659+0000 I COMMAND [conn570] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:51.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.663+0000 I COMMAND [conn531] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 3B570DEA96F79DC959130E0D1F1E1335CDDFD78C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:51.663+0000 D1 - [conn531] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:51.664+0000 W - [conn531] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.664+0000 I COMMAND [conn548] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, BCF1DE88D4E6F24645E4AC5DC68814FC1B8C75B8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:51.664+0000 D1 - [conn548] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:51.665+0000 W - [conn548] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.665+0000 I - [conn545] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:51.665+0000 D1 COMMAND [conn545] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, BCF1DE88D4E6F24645E4AC5DC68814FC1B8C75B8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.666+0000 D1 - [conn545] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:51.666+0000 W - [conn545] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.677+0000 I COMMAND [conn549] Command on database config timed out waiting for read concern to be satisfied. Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578975, 1), signature: { hash: BinData(0, ABF7625DF5CCC6209B46F27CBA9E5DCE8DF7AB16), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:51.678+0000 D1 - [conn549] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:51.678+0000 W - [conn549] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.683+0000 I - [conn531] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:51.683+0000 D1 COMMAND [conn531] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 3B570DEA96F79DC959130E0D1F1E1335CDDFD78C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.683+0000 D1 - [conn531] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:51.683+0000 W - [conn531] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.699+0000 I - [conn549] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 
0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : 
"EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : 
"9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:51.699+0000 D1 COMMAND [conn549] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578975, 1), signature: { hash: BinData(0, ABF7625DF5CCC6209B46F27CBA9E5DCE8DF7AB16), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.700+0000 D1 - [conn549] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:51.700+0000 W - [conn549] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.700+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.710+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
2019-09-04T06:36:51.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.728+0000 I - [conn545] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", 
"gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { 
"b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:51.728+0000 W COMMAND [conn545] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:36:51.728+0000 I COMMAND [conn545] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, BCF1DE88D4E6F24645E4AC5DC68814FC1B8C75B8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:36:51.728+0000 D2 NETWORK [conn545] Session from 10.108.2.44:38908 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:51.728+0000 I NETWORK [conn545] end connection 10.108.2.44:38908 (89 connections now open) 2019-09-04T06:36:51.746+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.750+0000 I - [conn549] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:51.750+0000 W COMMAND [conn549] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:51.750+0000 I COMMAND [conn549] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, 
$clusterTime: { clusterTime: Timestamp(1567578975, 1), signature: { hash: BinData(0, ABF7625DF5CCC6209B46F27CBA9E5DCE8DF7AB16), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30035ms 2019-09-04T06:36:51.750+0000 D2 NETWORK [conn549] Session from 10.108.2.47:56756 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:51.750+0000 I NETWORK [conn549] end connection 10.108.2.47:56756 (88 connections now open) 2019-09-04T06:36:51.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.756+0000 I NETWORK [listener] connection accepted from 10.108.2.59:48608 #571 (89 connections now open) 2019-09-04T06:36:51.756+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:51.756+0000 D2 COMMAND [conn571] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:51.756+0000 I NETWORK [conn571] received client metadata from 10.108.2.59:48608 conn571: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:51.756+0000 I COMMAND [conn571] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:51.757+0000 D2 COMMAND [conn571] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579001, 1), signature: { hash: BinData(0, 7537361D776A054A83C9137EA9B4D3F225E292EA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:36:51.757+0000 D1 REPL [conn571] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567579008, 1), t: 1 } 2019-09-04T06:36:51.757+0000 D3 REPL [conn571] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.767+0000 2019-09-04T06:36:51.757+0000 I COMMAND [conn550] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578972, 1), signature: { hash: BinData(0, 748CA4A915B339EE77A5A9AF1E7BC1F0A591620C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:51.758+0000 D1 - [conn550] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:51.758+0000 W - [conn550] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.767+0000 I - [conn548] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:51.767+0000 D1 COMMAND [conn548] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, BCF1DE88D4E6F24645E4AC5DC68814FC1B8C75B8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.767+0000 D1 - [conn548] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:51.767+0000 W - [conn548] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.777+0000 I - [conn531] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"
b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : 
"DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) 
[0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:51.777+0000 W COMMAND [conn531] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:36:51.777+0000 I COMMAND [conn531] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578973, 1), signature: { hash: BinData(0, 3B570DEA96F79DC959130E0D1F1E1335CDDFD78C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:36:51.777+0000 D2 NETWORK [conn531] Session from 10.108.2.58:52362 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:51.777+0000 I NETWORK [conn531] end connection 10.108.2.58:52362 (88 connections now open) 2019-09-04T06:36:51.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.793+0000 I - [conn550] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:51.793+0000 D1 COMMAND [conn550] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578972, 1), signature: { hash: BinData(0, 748CA4A915B339EE77A5A9AF1E7BC1F0A591620C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.793+0000 D1 - [conn550] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:51.793+0000 W - [conn550] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:51.810+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:51.813+0000 I - [conn548] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 
0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : 
"7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : 
"/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:51.813+0000 W COMMAND [conn548] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:51.813+0000 I COMMAND [conn548] command config.$cmd command: find { 
find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578978, 1), signature: { hash: BinData(0, BCF1DE88D4E6F24645E4AC5DC68814FC1B8C75B8), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30115ms 2019-09-04T06:36:51.813+0000 D2 NETWORK [conn548] Session from 10.108.2.54:49422 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:51.813+0000 I NETWORK [conn548] end connection 10.108.2.54:49422 (87 connections now open) 2019-09-04T06:36:51.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.833+0000 I - [conn550] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9t
ransport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : 
"/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:51.833+0000 W COMMAND [conn550] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:36:51.833+0000 I COMMAND [conn550] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578972, 1), signature: { hash: BinData(0, 748CA4A915B339EE77A5A9AF1E7BC1F0A591620C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30049ms 2019-09-04T06:36:51.833+0000 D2 NETWORK [conn550] Session from 10.108.2.52:47414 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:51.833+0000 I NETWORK [conn550] end connection 10.108.2.52:47414 (86 connections now open) 2019-09-04T06:36:51.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:51.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:51.910+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:52.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:52.010+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:52.035+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:52.035+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:52.035+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579008, 1) 2019-09-04T06:36:52.035+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26728 
2019-09-04T06:36:52.035+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:52.035+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:52.035+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26728 2019-09-04T06:36:52.041+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26731 2019-09-04T06:36:52.041+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26731 2019-09-04T06:36:52.041+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579008, 1), t: 1 }({ ts: Timestamp(1567579008, 1), t: 1 }) 2019-09-04T06:36:52.043+0000 I NETWORK [listener] connection accepted from 10.108.2.50:50370 #572 (87 connections now open) 2019-09-04T06:36:52.043+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:36:52.043+0000 D2 COMMAND [conn572] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:36:52.043+0000 I NETWORK [conn572] received client metadata from 10.108.2.50:50370 conn572: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:36:52.043+0000 I COMMAND [conn572] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:36:52.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.056+0000 I COMMAND [conn552] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, EB65E230B52E35DFBC6E0EB448D0124ABC83B686), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:52.057+0000 D1 - [conn552] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:52.057+0000 W - [conn552] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:52.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.073+0000 I - [conn552] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b"
:"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : 
"/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] 
mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:52.073+0000 D1 COMMAND [conn552] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, EB65E230B52E35DFBC6E0EB448D0124ABC83B686), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:52.073+0000 D1 - [conn552] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:52.073+0000 W - [conn552] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:52.093+0000 I - [conn552] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13
ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : 
"084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:52.093+0000 W COMMAND [conn552] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:36:52.093+0000 I COMMAND [conn552] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578979, 1), signature: { hash: BinData(0, EB65E230B52E35DFBC6E0EB448D0124ABC83B686), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:36:52.093+0000 D2 NETWORK [conn552] Session from 10.108.2.50:50352 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:52.093+0000 I NETWORK [conn552] end connection 10.108.2.50:50352 (86 connections now open) 2019-09-04T06:36:52.110+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:52.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.200+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.210+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:52.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:36:52.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:52.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:52.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579009, 1), signature: { hash: BinData(0, 62A49CFB61D4D6AD80FB0A1E5FF2538A28A25B7E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:52.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:52.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579009, 1), signature: { hash: BinData(0, 62A49CFB61D4D6AD80FB0A1E5FF2538A28A25B7E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:52.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579009, 1), signature: { hash: BinData(0, 62A49CFB61D4D6AD80FB0A1E5FF2538A28A25B7E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:52.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032) } 2019-09-04T06:36:52.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579009, 1), signature: { hash: BinData(0, 62A49CFB61D4D6AD80FB0A1E5FF2538A28A25B7E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.246+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.246+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.310+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:52.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, 
$db: "admin" } 2019-09-04T06:36:52.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.410+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:52.511+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:52.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.584+0000 D2 COMMAND [conn553] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579005, 1), signature: { hash: BinData(0, 375553C631FDD70A0763FB36DD9A69D312E32B86), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:36:52.584+0000 D1 REPL [conn553] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459161, 3), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567579008, 1), t: 1 } 2019-09-04T06:36:52.584+0000 D3 REPL [conn553] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:22.594+0000 2019-09-04T06:36:52.611+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:52.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.700+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.711+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:52.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.746+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.753+0000 D2 COMMAND [conn60] 
run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.811+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:52.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:52.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1807) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:52.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1807 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:37:02.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:52.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:20.839+0000 2019-09-04T06:36:52.839+0000 D2 ASIO [Replication] Request 1807 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:52.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:52.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:52.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1807) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:52.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:52.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:37:01.510+0000 2019-09-04T06:36:52.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:37:02.899+0000 2019-09-04T06:36:52.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:52.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:54.839Z 2019-09-04T06:36:52.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:52.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:22.839+0000 2019-09-04T06:36:52.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:22.839+0000 2019-09-04T06:36:52.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:52.841+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1808) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:52.841+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1808 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:02.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:52.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:22.839+0000 2019-09-04T06:36:52.841+0000 D2 ASIO [Replication] Request 1808 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 
1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:52.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:52.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:52.841+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1808) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:52.841+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:52.841+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:54.841Z 2019-09-04T06:36:52.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:22.839+0000 2019-09-04T06:36:52.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:52.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:52.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
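The conn553 entry above shows what the balancer round looks like from the config server's side: mongos sends a find on config.settings with readConcern { level: "majority", afterOpTime: ... }, and the server parks the operation in waitUntilOpTime until a majority-committed snapshot at or past that opTime exists. Below is a minimal sketch, assuming the pymongo driver, that replays the user-visible part of that read; the afterOpTime field is internal to sharding and not exposed by drivers, so the sketch issues an ordinary majority read with the same 30-second maxTimeMS cap. Host and namespace are taken from this log.

```python
# Minimal sketch (pymongo assumed): the balancer-settings read that conn553
# runs above, as a plain majority read. If the majority-committed snapshot
# cannot satisfy the read concern before maxTimeMS elapses, the server fails
# the command with MaxTimeMSExpired (code 50), as happens at 06:36:54.156
# later in this log.
from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout
from pymongo.read_concern import ReadConcern

client = MongoClient("mongodb://cmodb803.togewa.com:27019/")  # host from this log

settings = client["config"].get_collection(
    "settings", read_concern=ReadConcern("majority")
)

try:
    # max_time_ms mirrors the maxTimeMS: 30000 in the logged command.
    print(settings.find_one({"_id": "balancer"}, max_time_ms=30000))
except ExecutionTimeout:
    print("server reported MaxTimeMSExpired: operation exceeded time limit")
```

Note the snapshot gap in the conn553 entry: the requested afterOpTime is { ts: Timestamp(1566459161, 3), t: 92 } (term 92) while the current snapshot is at term 1, so the wait can never be satisfied and the 30-second timeout below is the expected outcome rather than a transient stall.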
2019-09-04T06:36:52.911+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:53.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:53.011+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:53.035+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:53.035+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:53.035+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579008, 1) 2019-09-04T06:36:53.035+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26762 2019-09-04T06:36:53.035+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:53.035+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:53.035+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26762 2019-09-04T06:36:53.036+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:53.036+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:53.036+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1809 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:23.036+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: 
Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:36:53.036+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:18.036+0000 2019-09-04T06:36:53.036+0000 D2 ASIO [RS] Request 1809 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:53.036+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:53.036+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:36:53.036+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:18.036+0000 2019-09-04T06:36:53.036+0000 D2 ASIO [RS] Request 1794 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpApplied: { ts: Timestamp(1567579008, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:53.036+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, 
$oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpApplied: { ts: Timestamp(1567579008, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:53.036+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:36:53.036+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:36:53.036+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:37:02.899+0000 2019-09-04T06:36:53.036+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:37:04.202+0000 2019-09-04T06:36:53.036+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:53.036+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:22.839+0000 2019-09-04T06:36:53.036+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1810 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:37:03.036+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567579008, 1), t: 1 } } 2019-09-04T06:36:53.036+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:18.036+0000 2019-09-04T06:36:53.041+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26765 2019-09-04T06:36:53.041+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26765 2019-09-04T06:36:53.041+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579008, 1), t: 1 }({ ts: Timestamp(1567579008, 1), t: 1 }) 2019-09-04T06:36:53.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 001B609A8CD6F71946AECDAEC3F9499D06413DB4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:53.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:53.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 001B609A8CD6F71946AECDAEC3F9499D06413DB4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:53.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: 
BinData(0, 001B609A8CD6F71946AECDAEC3F9499D06413DB4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:53.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032) } 2019-09-04T06:36:53.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 001B609A8CD6F71946AECDAEC3F9499D06413DB4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.111+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:53.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.200+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.211+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:53.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:53.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:53.246+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.247+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.311+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:53.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.333+0000 D2 COMMAND [conn101] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.333+0000 I COMMAND [conn101] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.349+0000 D2 COMMAND [conn157] run command admin.$cmd { ping: 1, $db: "admin" } 2019-09-04T06:36:53.349+0000 I COMMAND [conn157] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.361+0000 D2 COMMAND [conn218] run command admin.$cmd { ping: 1.0, lsid: { id: UUID("ac8e303f-4e60-4a79-b9a4-f7cba7354076") }, $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } 2019-09-04T06:36:53.361+0000 I COMMAND [conn218] command admin.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("ac8e303f-4e60-4a79-b9a4-f7cba7354076") }, $clusterTime: { clusterTime: Timestamp(1567578950, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.411+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:53.511+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:53.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.572+0000 I COMMAND [conn52] command admin.$cmd 
command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.612+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:53.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.700+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.712+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:53.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.746+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.812+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:53.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:53.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:53.912+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:54.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:54.012+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:54.035+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:54.035+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:54.035+0000 D3 STORAGE [ReplBatcher] 
begin_transaction on local snapshot Timestamp(1567579008, 1) 2019-09-04T06:36:54.035+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26797 2019-09-04T06:36:54.035+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:54.035+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:54.035+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26797 2019-09-04T06:36:54.041+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26800 2019-09-04T06:36:54.041+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26800 2019-09-04T06:36:54.041+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579008, 1), t: 1 }({ ts: Timestamp(1567579008, 1), t: 1 }) 2019-09-04T06:36:54.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.072+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.112+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:54.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.156+0000 I COMMAND [conn554] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578975, 1), signature: { hash: BinData(0, ABF7625DF5CCC6209B46F27CBA9E5DCE8DF7AB16), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } 2019-09-04T06:36:54.157+0000 D1 - [conn554] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:54.157+0000 W - [conn554] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:54.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.173+0000 I - [conn554] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b"
:"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : 
"/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] 
mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:54.173+0000 D1 COMMAND [conn554] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578975, 1), signature: { hash: BinData(0, ABF7625DF5CCC6209B46F27CBA9E5DCE8DF7AB16), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:54.173+0000 D1 - [conn554] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:54.173+0000 W - [conn554] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:54.193+0000 I - [conn554] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13
ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : 
"084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:54.193+0000 W COMMAND [conn554] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:36:54.193+0000 I COMMAND [conn554] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578975, 1), signature: { hash: BinData(0, ABF7625DF5CCC6209B46F27CBA9E5DCE8DF7AB16), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:36:54.193+0000 D2 NETWORK [conn554] Session from 10.108.2.46:41224 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:54.193+0000 I NETWORK [conn554] end connection 10.108.2.46:41224 (85 connections now open) 2019-09-04T06:36:54.200+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.212+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:54.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:54.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:54.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 001B609A8CD6F71946AECDAEC3F9499D06413DB4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:54.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:54.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 001B609A8CD6F71946AECDAEC3F9499D06413DB4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:54.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 001B609A8CD6F71946AECDAEC3F9499D06413DB4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:54.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032) } 2019-09-04T06:36:54.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 001B609A8CD6F71946AECDAEC3F9499D06413DB4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.246+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.247+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.312+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:54.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} 
protocol:op_msg 0ms 2019-09-04T06:36:54.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.412+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:54.512+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:54.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.612+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:54.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.700+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.712+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:54.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.747+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.812+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:54.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:54.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1811) to 
cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:54.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1811 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:37:04.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:54.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:22.839+0000 2019-09-04T06:36:54.839+0000 D2 ASIO [Replication] Request 1811 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:54.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:54.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:54.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1811) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:54.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:54.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:37:04.202+0000 2019-09-04T06:36:54.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:37:05.917+0000 2019-09-04T06:36:54.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:54.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:56.839Z 2019-09-04T06:36:54.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:54.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:24.839+0000 2019-09-04T06:36:54.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:24.839+0000 2019-09-04T06:36:54.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:54.841+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1812) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:54.841+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1812 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:04.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:54.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:24.839+0000 2019-09-04T06:36:54.841+0000 D2 ASIO [Replication] Request 1812 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:54.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:54.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:54.841+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1812) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579008, 1) } 2019-09-04T06:36:54.841+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:54.841+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:56.841Z 2019-09-04T06:36:54.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:24.839+0000 2019-09-04T06:36:54.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:54.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:54.913+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:55.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:55.013+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:55.035+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:55.035+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:55.035+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579008, 1) 2019-09-04T06:36:55.035+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26830 2019-09-04T06:36:55.036+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:55.036+0000 D3 STORAGE [ReplBatcher] 
returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:55.036+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26830 2019-09-04T06:36:55.041+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26833 2019-09-04T06:36:55.041+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26833 2019-09-04T06:36:55.041+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579008, 1), t: 1 }({ ts: Timestamp(1567579008, 1), t: 1 }) 2019-09-04T06:36:55.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 001B609A8CD6F71946AECDAEC3F9499D06413DB4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:55.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:55.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 001B609A8CD6F71946AECDAEC3F9499D06413DB4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:55.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 001B609A8CD6F71946AECDAEC3F9499D06413DB4), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:55.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), opTime: { ts: Timestamp(1567579008, 1), t: 1 }, wallTime: new Date(1567579008032) } 2019-09-04T06:36:55.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579010, 1), signature: { hash: BinData(0, 001B609A8CD6F71946AECDAEC3F9499D06413DB4), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.064+0000 I COMMAND [conn555] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578981, 1), signature: { hash: BinData(0, B1B9733EBFF2202C247A072FB8FE073A842B9ACF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:55.064+0000 D1 - [conn555] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:55.064+0000 W - [conn555] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:55.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.081+0000 I - [conn555] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b"
:"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : 
"/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] 
mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:55.081+0000 D1 COMMAND [conn555] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578981, 1), signature: { hash: BinData(0, B1B9733EBFF2202C247A072FB8FE073A842B9ACF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:55.081+0000 D1 - [conn555] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:55.081+0000 W - [conn555] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:55.101+0000 I - [conn555] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13
ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : 
"084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] 
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:55.101+0000 W COMMAND [conn555] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:36:55.101+0000 I COMMAND [conn555] command config.$cmd command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578981, 1), signature: { hash: BinData(0, B1B9733EBFF2202C247A072FB8FE073A842B9ACF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30031ms 2019-09-04T06:36:55.101+0000 D2 NETWORK [conn555] Session from 10.108.2.55:36898 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:55.101+0000 I NETWORK [conn555] end connection 10.108.2.55:36898 (84 connections now open) 2019-09-04T06:36:55.113+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:55.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.201+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.213+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:55.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 
2019-09-04T06:36:55.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:55.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:36:55.246+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.247+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.313+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:55.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.413+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:55.513+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:55.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.571+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.613+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:55.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.701+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.713+0000 D4 STORAGE [WTJournalFlusher] flushed journal 
2019-09-04T06:36:55.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.746+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:55.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:55.794+0000 D2 ASIO [RS] Request 1810 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567579015, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567579015792), o: { $v: 1, $set: { ping: new Date(1567579015791) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpApplied: { ts: Timestamp(1567579015, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) } 2019-09-04T06:36:55.794+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567579015, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567579015792), o: { $v: 1, $set: { ping: new Date(1567579015791) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpApplied: { ts: Timestamp(1567579015, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579008, 1), $clusterTime: { clusterTime: Timestamp(1567579015, 1), 
signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:55.794+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:36:55.794+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567579015, 1) and ending at ts: Timestamp(1567579015, 1) 2019-09-04T06:36:55.794+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:37:05.917+0000 2019-09-04T06:36:55.794+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:37:06.454+0000 2019-09-04T06:36:55.794+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:55.794+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567579015, 1), t: 1 } 2019-09-04T06:36:55.794+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:24.839+0000 2019-09-04T06:36:55.794+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:55.794+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:55.794+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579008, 1) 2019-09-04T06:36:55.794+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26860 2019-09-04T06:36:55.794+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:55.794+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:55.794+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26860 2019-09-04T06:36:55.794+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:55.794+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:36:55.794+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:55.794+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579008, 1) 2019-09-04T06:36:55.794+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567579015, 1) } 2019-09-04T06:36:55.794+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26863 2019-09-04T06:36:55.794+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:55.794+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:55.794+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26863 2019-09-04T06:36:55.794+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot 
id 26834 2019-09-04T06:36:55.794+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 26834 2019-09-04T06:36:55.794+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26866 2019-09-04T06:36:55.794+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26866 2019-09-04T06:36:55.794+0000 D3 EXECUTOR [repl-writer-worker-11] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:36:55.794+0000 D3 STORAGE [repl-writer-worker-11] WT begin_transaction for snapshot id 26868 2019-09-04T06:36:55.794+0000 D4 STORAGE [repl-writer-worker-11] inserting record with timestamp Timestamp(1567579015, 1) 2019-09-04T06:36:55.794+0000 D3 STORAGE [repl-writer-worker-11] WT set timestamp of future write operations to Timestamp(1567579015, 1) 2019-09-04T06:36:55.794+0000 D3 STORAGE [repl-writer-worker-11] WT commit_transaction for snapshot id 26868 2019-09-04T06:36:55.794+0000 D3 EXECUTOR [repl-writer-worker-11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:36:55.794+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:36:55.794+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26867 2019-09-04T06:36:55.794+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 26867 2019-09-04T06:36:55.794+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26870 2019-09-04T06:36:55.794+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26870 2019-09-04T06:36:55.794+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567579015, 1), t: 1 }({ ts: Timestamp(1567579015, 1), t: 1 }) 2019-09-04T06:36:55.794+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567579015, 1) 2019-09-04T06:36:55.794+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26871 2019-09-04T06:36:55.794+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567579015, 1) } } ] } sort: {} projection: {} 2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567579015, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567579015, 1) || First: notFirst: full path: ts 2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
    t $eq 1
    ts $lt Timestamp(1567579015, 1)
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $or
    $and
        t $eq 1
        ts $lt Timestamp(1567579015, 1)
    t $lt 1
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Rated tree: $or
    $and
        t $eq 1 || First: notFirst: full path: t
        ts $lt Timestamp(1567579015, 1) || First: notFirst: full path: ts
    t $lt 1 || First: notFirst: full path: t
2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:55.794+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $or
    $and
        t $eq 1
        ts $lt Timestamp(1567579015, 1)
    t $lt 1
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:55.794+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 26871
2019-09-04T06:36:55.795+0000 D3 EXECUTOR [repl-writer-worker-7] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:36:55.795+0000 D3 STORAGE [repl-writer-worker-7] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:55.795+0000 D3 REPL [repl-writer-worker-7] applying op: { ts: Timestamp(1567579015, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }, wall: new Date(1567579015792), o: { $v: 1, $set: { ping: new Date(1567579015791) } } }, oplog application mode: Secondary
2019-09-04T06:36:55.795+0000 D3 STORAGE [repl-writer-worker-7] WT set timestamp of future write operations to Timestamp(1567579015, 1)
2019-09-04T06:36:55.795+0000 D3 STORAGE [repl-writer-worker-7] WT begin_transaction for snapshot id 26873
2019-09-04T06:36:55.795+0000 D2 QUERY [repl-writer-worker-7] Using idhack: { _id: "cmodb807.togewa.com:27018:1566460180:7657529699693886924" }
2019-09-04T06:36:55.795+0000 D4 WRITE [repl-writer-worker-7] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:36:55.795+0000 D3 STORAGE [repl-writer-worker-7] WT commit_transaction for snapshot id 26873
2019-09-04T06:36:55.795+0000 D3 EXECUTOR [repl-writer-worker-7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:36:55.795+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567579015, 1), t: 1 }({ ts: Timestamp(1567579015, 1), t: 1 })
2019-09-04T06:36:55.795+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567579015, 1)
2019-09-04T06:36:55.795+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26872
2019-09-04T06:36:55.795+0000 D5 QUERY [rsSync-0] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.replset.minvalid
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:55.795+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:36:55.795+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:36:55.795+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:55.795+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN
---ns = local.replset.minvalid
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:55.795+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:36:55.795+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 26872
2019-09-04T06:36:55.795+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567579015, 1)
2019-09-04T06:36:55.795+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:55.795+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26876
2019-09-04T06:36:55.795+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, appliedWallTime: new Date(1567579015792), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:55.795+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1813 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:25.795+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, appliedWallTime: new Date(1567579015792), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:55.795+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:25.795+0000
2019-09-04T06:36:55.795+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26876
2019-09-04T06:36:55.795+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579015, 1), t: 1 }({ ts: Timestamp(1567579015, 1), t: 1 })
2019-09-04T06:36:55.795+0000 D2 ASIO [RS] Request 1813 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) }
2019-09-04T06:36:55.795+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:55.795+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:55.795+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:25.795+0000
2019-09-04T06:36:55.796+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567579015, 1), t: 1 }
2019-09-04T06:36:55.796+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1814 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:37:05.796+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567579008, 1), t: 1 } }
2019-09-04T06:36:55.796+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:25.795+0000
2019-09-04T06:36:55.796+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:36:55.796+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:55.796+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), appliedOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, appliedWallTime: new Date(1567579015792), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:55.796+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1815 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:25.796+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), appliedOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, appliedWallTime: new Date(1567579015792), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, durableWallTime: new Date(1567579008032), appliedOpTime: { ts: Timestamp(1567579008, 1), t: 1 }, appliedWallTime: new Date(1567579008032), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579008, 1), t: 1 }, lastCommittedWall: new Date(1567579008032), lastOpVisible: { ts: Timestamp(1567579008, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:36:55.796+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:25.795+0000
2019-09-04T06:36:55.796+0000 D2 ASIO [RS] Request 1814 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpApplied: { ts: Timestamp(1567579015, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) }
2019-09-04T06:36:55.796+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpApplied: { ts: Timestamp(1567579015, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:55.796+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:36:55.796+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:36:55.796+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.796+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.796+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567579010, 1)
2019-09-04T06:36:55.796+0000 D3 REPL [conn553] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn553] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:22.594+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn534] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn534] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.288+0000
2019-09-04T06:36:55.796+0000 D2 ASIO [RS] Request 1815 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) }
2019-09-04T06:36:55.796+0000 D3 REPL [conn562] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn562] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.425+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn536] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn536] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.339+0000
2019-09-04T06:36:55.796+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:55.796+0000 D3 REPL [conn540] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn540] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.408+0000
2019-09-04T06:36:55.796+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:36:55.796+0000 D3 REPL [conn563] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.796+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:25.796+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn563] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.426+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn564] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn564] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.430+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn539] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn539] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.438+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn566] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn566] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.461+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn546] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn546] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.660+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn547] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.796+0000 D3 REPL [conn547] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.660+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn569] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn569] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.662+0000
2019-09-04T06:36:55.797+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:37:06.454+0000
2019-09-04T06:36:55.797+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:37:06.972+0000
2019-09-04T06:36:55.797+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:55.797+0000 D3 REPL [conn528] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.797+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:24.839+0000
2019-09-04T06:36:55.797+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1816 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:37:05.797+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567579015, 1), t: 1 } }
2019-09-04T06:36:55.797+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:25.796+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn528] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:36:58.752+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn560] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn560] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.280+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn561] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn561] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:16.415+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn533] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn533] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.272+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn551] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn551] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:14.696+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn557] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn557] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:15.593+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn556] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn556] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:00.753+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn565] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn565] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.447+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn542] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn542] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.527+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn571] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn571] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.767+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn558] Got notified of new snapshot: { ts: Timestamp(1567579015, 1), t: 1 }, 2019-09-04T06:36:55.792+0000
2019-09-04T06:36:55.797+0000 D3 REPL [conn558] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:07.852+0000
2019-09-04T06:36:55.813+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:55.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:55.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:55.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:55.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:55.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:55.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:55.894+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567579015, 1)
2019-09-04T06:36:55.913+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:56.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:56.014+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:56.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.072+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.114+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:56.118+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.118+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.201+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.214+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:56.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:36:56.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:36:56.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 7B4054D60801D8B9452FDC01C7CAE3FBB733C353), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:56.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:36:56.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 7B4054D60801D8B9452FDC01C7CAE3FBB733C353), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:56.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 7B4054D60801D8B9452FDC01C7CAE3FBB733C353), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:56.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), opTime: { ts: Timestamp(1567579015, 1), t: 1 }, wallTime: new Date(1567579015792) }
2019-09-04T06:36:56.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 7B4054D60801D8B9452FDC01C7CAE3FBB733C353), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.246+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.246+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.314+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:56.328+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.328+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.359+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.359+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.414+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:56.514+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:56.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.572+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.601+0000 D2 COMMAND [conn49] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.601+0000 I COMMAND [conn49] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.614+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:56.618+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.618+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.663+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.695+0000 D2 COMMAND [conn49] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579015, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 7B4054D60801D8B9452FDC01C7CAE3FBB733C353), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567579015, 1), t: 1 } }, $db: "config" }
2019-09-04T06:36:56.695+0000 D1 COMMAND [conn49] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579015, 1), t: 1 } } }
2019-09-04T06:36:56.695+0000 D3 STORAGE [conn49] setting timestamp read source: 2, provided timestamp: none
2019-09-04T06:36:56.695+0000 D1 COMMAND [conn49] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579015, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 7B4054D60801D8B9452FDC01C7CAE3FBB733C353), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567579015, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567579015, 1)
2019-09-04T06:36:56.695+0000 D5 QUERY [conn49] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=config.shards
Tree: $and
Sort: {}
Proj: {}
=============================
2019-09-04T06:36:56.695+0000 D5 QUERY [conn49] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2019-09-04T06:36:56.695+0000 D5 QUERY [conn49] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2019-09-04T06:36:56.695+0000 D5 QUERY [conn49] Rated tree: $and
2019-09-04T06:36:56.695+0000 D5 QUERY [conn49] Planner: outputted 0 indexed solutions.
2019-09-04T06:36:56.695+0000 D5 QUERY [conn49] Planner: outputting a collscan: COLLSCAN
---ns = config.shards
---filter = $and
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:36:56.695+0000 D2 QUERY [conn49] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:36:56.695+0000 D3 STORAGE [conn49] WT begin_transaction for snapshot id 26902
2019-09-04T06:36:56.695+0000 D3 STORAGE [conn49] WT rollback_transaction for snapshot id 26902
2019-09-04T06:36:56.695+0000 I COMMAND [conn49] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579015, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 7B4054D60801D8B9452FDC01C7CAE3FBB733C353), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567579015, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms
2019-09-04T06:36:56.700+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.714+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:56.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.747+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.794+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:56.794+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:56.794+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579015, 1)
2019-09-04T06:36:56.794+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26909
2019-09-04T06:36:56.794+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:56.794+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:56.794+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26909
2019-09-04T06:36:56.795+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26912
2019-09-04T06:36:56.795+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26912
2019-09-04T06:36:56.795+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579015, 1), t: 1 }({ ts: Timestamp(1567579015, 1), t: 1 })
2019-09-04T06:36:56.814+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:56.828+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.828+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:56.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:56.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1817) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:56.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1817 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:37:06.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:56.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:26.839+0000
2019-09-04T06:36:56.839+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:36:55.063+0000
2019-09-04T06:36:56.839+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:36:56.235+0000
2019-09-04T06:36:56.839+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:36:55.063+0000
2019-09-04T06:36:56.839+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:37:05.063+0000
2019-09-04T06:36:56.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:26.839+0000
2019-09-04T06:36:56.839+0000 D2 ASIO [Replication] Request 1817 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), opTime: { ts: Timestamp(1567579015, 1), t: 1 }, wallTime: new Date(1567579015792), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) }
2019-09-04T06:36:56.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), opTime: { ts: Timestamp(1567579015, 1), t: 1 }, wallTime: new Date(1567579015792), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:36:56.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:56.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1817) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), opTime: { ts: Timestamp(1567579015, 1), t: 1 }, wallTime: new Date(1567579015792), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) }
2019-09-04T06:36:56.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary
2019-09-04T06:36:56.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:37:06.972+0000
2019-09-04T06:36:56.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:37:07.522+0000
2019-09-04T06:36:56.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:56.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:26.839+0000
2019-09-04T06:36:56.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:36:56.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:36:58.839Z
2019-09-04T06:36:56.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:26.839+0000
2019-09-04T06:36:56.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:36:56.841+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1818) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:56.841+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1818 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:06.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:36:56.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:26.839+0000
2019-09-04T06:36:56.841+0000 D2 ASIO [Replication] Request 1818 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), opTime: { ts: Timestamp(1567579015, 1), t: 1 }, wallTime: new Date(1567579015792), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) }
2019-09-04T06:36:56.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), opTime: { ts: Timestamp(1567579015, 1), t: 1 }, wallTime: new Date(1567579015792), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:36:56.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:36:56.841+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1818) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), opTime: { ts: Timestamp(1567579015, 1), t: 1 }, wallTime: new Date(1567579015792), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) }
2019-09-04T06:36:56.841+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:36:56.841+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:36:58.841Z
2019-09-04T06:36:56.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:26.839+0000
2019-09-04T06:36:56.859+0000 D2 COMMAND [conn46] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.859+0000 I COMMAND [conn46] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.876+0000 D2 COMMAND [conn47] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.876+0000 I COMMAND [conn47] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.911+0000 D2 COMMAND [conn48] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:56.911+0000 I COMMAND [conn48] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:56.915+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:57.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:57.015+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:57.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 7B4054D60801D8B9452FDC01C7CAE3FBB733C353), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:57.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:36:57.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 7B4054D60801D8B9452FDC01C7CAE3FBB733C353), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:57.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 7B4054D60801D8B9452FDC01C7CAE3FBB733C353), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:36:57.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), opTime: { ts: Timestamp(1567579015, 1), t: 1 }, wallTime: new Date(1567579015792) }
2019-09-04T06:36:57.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579015, 1), signature: { hash: BinData(0, 7B4054D60801D8B9452FDC01C7CAE3FBB733C353), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.072+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.072+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.115+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:57.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.163+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.201+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.215+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:57.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:36:57.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:36:57.247+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.247+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.311+0000 D2 COMMAND [conn160] run command admin.$cmd { ping: 1, $db: "admin" }
2019-09-04T06:36:57.311+0000 I COMMAND [conn160] command admin.$cmd appName: "robo3t" command: ping { ping: 1, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.315+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:57.320+0000 D2 COMMAND [conn220] run command admin.$cmd { ping: 1.0, lsid: { id: UUID("23af97f8-66f0-4a27-b5f1-59167651ca5f") }, $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" }
2019-09-04T06:36:57.320+0000 I COMMAND [conn220] command admin.$cmd appName: "MongoDB Shell" command: ping { ping: 1.0, lsid: { id: UUID("23af97f8-66f0-4a27-b5f1-59167651ca5f") }, $clusterTime: { clusterTime: Timestamp(1567578955, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:252 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.415+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:57.515+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:57.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.572+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.572+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.615+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:57.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.663+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.701+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.715+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:57.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.747+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.783+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.783+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.794+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:36:57.794+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:36:57.794+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579015, 1)
2019-09-04T06:36:57.794+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26941
2019-09-04T06:36:57.794+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:36:57.794+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:36:57.794+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26941
2019-09-04T06:36:57.795+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26944
2019-09-04T06:36:57.795+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26944
2019-09-04T06:36:57.795+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579015, 1), t: 1 }({ ts: Timestamp(1567579015, 1), t: 1 })
2019-09-04T06:36:57.816+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:57.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:57.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:57.916+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:58.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:36:58.016+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:58.023+0000 D2 COMMAND [conn50] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:58.023+0000 I COMMAND [conn50] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:58.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:58.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:58.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:58.072+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:58.116+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:58.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:58.163+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:58.200+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:58.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:58.216+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:36:58.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:36:58.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:36:58.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:36:58.235+0000 D4 - [FlowControlRefresher] Refreshing tickets.
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:58.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579017, 1), signature: { hash: BinData(0, E2ADA65F1A367D91344C54B20D909EB133905BEE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:58.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:36:58.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579017, 1), signature: { hash: BinData(0, E2ADA65F1A367D91344C54B20D909EB133905BEE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:58.236+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579017, 1), signature: { hash: BinData(0, E2ADA65F1A367D91344C54B20D909EB133905BEE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:58.236+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), opTime: { ts: Timestamp(1567579015, 1), t: 1 }, wallTime: new Date(1567579015792) } 2019-09-04T06:36:58.236+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579017, 1), signature: { hash: BinData(0, E2ADA65F1A367D91344C54B20D909EB133905BEE), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:58.246+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:58.247+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:58.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:58.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:58.283+0000 D2 COMMAND [conn33] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:58.283+0000 I COMMAND [conn33] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:58.316+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:58.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:58.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:58.416+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:58.516+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:58.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 
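
Everything above is steady-state background traffic: each pooled client connection (conn22, conn31, conn51/52, conn58/59/60, conn75, plus the robo3t and MongoDB Shell sessions) re-polls isMaster at a fixed ~500 ms cadence to track topology, WTJournalFlusher records a journal flush about every 100 ms, and the configrs members trade replSetHeartbeat requests carrying $clusterTime plus the sender's durable and applied opTimes. The same state can be inspected by hand; a minimal sketch in the mongo shell, assuming a direct connection to this node on port 27019:

    // The same topology probe the drivers above issue twice a second
    db.adminCommand({ isMaster: 1 })
    // Replica-set summary maintained by the replSetHeartbeat exchanges
    db.adminCommand({ replSetGetStatus: 1 })

Both return in well under a millisecond here (protocol:op_msg 0ms), so this polling is noise rather than load.
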
2019-09-04T06:36:58.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:58.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:58.572+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:58.617+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:58.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:58.663+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:58.700+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:58.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:58.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:58.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:58.717+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:58.747+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:58.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:58.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:58.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:58.755+0000 I COMMAND [conn528] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, D1304EA88BDDB05F2A6149D01308971F2D62D5C7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:36:58.755+0000 D1 - [conn528] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:36:58.755+0000 W - [conn528] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:58.772+0000 I - [conn528] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:58.772+0000 D1 COMMAND [conn528] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, D1304EA88BDDB05F2A6149D01308971F2D62D5C7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:58.772+0000 D1 - [conn528] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:36:58.772+0000 W - [conn528] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:36:58.793+0000 I - [conn528] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"5617
48F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : 
"BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:36:58.793+0000 W COMMAND [conn528] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:36:58.793+0000 I COMMAND [conn528] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578985, 1), signature: { hash: BinData(0, D1304EA88BDDB05F2A6149D01308971F2D62D5C7), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:36:58.793+0000 D2 NETWORK [conn528] Session from 10.108.2.64:46822 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:36:58.793+0000 I NETWORK [conn528] end connection 10.108.2.64:46822 (83 connections now open) 2019-09-04T06:36:58.794+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:58.795+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:58.795+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579015, 1) 2019-09-04T06:36:58.795+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26967 2019-09-04T06:36:58.795+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:58.795+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:58.795+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26967 2019-09-04T06:36:58.795+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26970 2019-09-04T06:36:58.796+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26970 2019-09-04T06:36:58.796+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579015, 1), t: 1 }({ ts: Timestamp(1567579015, 1), t: 1 }) 2019-09-04T06:36:58.817+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:58.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 
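
The two backtraces above record the one real fault in this stretch. conn528 asked for config.shards with readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, but this replica set is running in term 1 with lastOpCommitted at Timestamp(1567579015, 1); because opTime ordering is dominated by term, an afterOpTime from term 92 can never be satisfied in term 1, so waitForReadConcern blocks until the 30000 ms maxTimeMS budget expires (the slow-op line reports 30030ms, errCode 50, MaxTimeMSExpired). The $configServerState the client sent — opTime { ts: Timestamp(1566459168, 1), t: 92 } — shows the requesting router still holds a config opTime from an earlier incarnation of this config server replica set, whose terms apparently restarted at 1. The second backtrace is collateral: with the deadline already expired, CurOp::completeAndLogOperation cannot acquire the global lock to gather storage statistics for the slow-query log line, hence the "Unable to gather storage statistics" warning and the empty locks:{} fields. Full BEGIN/END BACKTRACE dumps for an expected user assertion like this are characteristic of running with traceAllExceptions enabled (note the DBException13traceIfNeeded frame). A sketch of the failing request from the mongo shell against this node — afterOpTime is an internal read-concern field normally set only by sharding components, mirrored here purely to reproduce the wait:

    // Gated on an opTime from term 92 while the set is in term 1: the wait
    // cannot complete, so the command consumes its whole 30 s budget and fails.
    db.getSiblingDB("config").runCommand({
      find: "shards",
      readConcern: { level: "majority",
                     afterOpTime: { ts: Timestamp(1566459168, 1), t: NumberLong(92) } },
      maxTimeMS: 30000
    })
    // => { ok: 0, errmsg: "operation exceeded time limit",
    //      code: 50, codeName: "MaxTimeMSExpired", ... }

After the failure the client simply hangs up, which mongod records below as HostUnreachable ("Connection closed by peer") during SourceMessage, followed by end connection (83 connections now open).
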
2019-09-04T06:36:58.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1819) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:58.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1819 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:37:08.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:58.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:26.839+0000 2019-09-04T06:36:58.839+0000 D2 ASIO [Replication] Request 1819 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), opTime: { ts: Timestamp(1567579015, 1), t: 1 }, wallTime: new Date(1567579015792), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579017, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) } 2019-09-04T06:36:58.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), opTime: { ts: Timestamp(1567579015, 1), t: 1 }, wallTime: new Date(1567579015792), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579017, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:36:58.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:58.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1819) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), opTime: { ts: Timestamp(1567579015, 1), t: 1 }, wallTime: new Date(1567579015792), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: 
Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579017, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) } 2019-09-04T06:36:58.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:36:58.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:37:07.522+0000 2019-09-04T06:36:58.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:37:08.861+0000 2019-09-04T06:36:58.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:36:58.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:58.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:37:00.839Z 2019-09-04T06:36:58.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:28.839+0000 2019-09-04T06:36:58.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:28.839+0000 2019-09-04T06:36:58.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:36:58.841+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1820) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:58.841+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1820 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:08.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:36:58.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:28.839+0000 2019-09-04T06:36:58.841+0000 D2 ASIO [Replication] Request 1820 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), opTime: { ts: Timestamp(1567579015, 1), t: 1 }, wallTime: new Date(1567579015792), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579017, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) } 2019-09-04T06:36:58.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), opTime: { ts: Timestamp(1567579015, 1), t: 1 }, wallTime: new Date(1567579015792), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), 
lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579017, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:36:58.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:36:58.841+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1820) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), opTime: { ts: Timestamp(1567579015, 1), t: 1 }, wallTime: new Date(1567579015792), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579017, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579015, 1) } 2019-09-04T06:36:58.841+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:36:58.841+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:37:00.841Z 2019-09-04T06:36:58.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:28.839+0000 2019-09-04T06:36:58.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:58.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:58.917+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:59.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:36:59.017+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:59.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:59.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:59.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579017, 1), signature: { hash: BinData(0, E2ADA65F1A367D91344C54B20D909EB133905BEE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:59.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:36:59.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: 
Timestamp(1567579017, 1), signature: { hash: BinData(0, E2ADA65F1A367D91344C54B20D909EB133905BEE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:59.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579017, 1), signature: { hash: BinData(0, E2ADA65F1A367D91344C54B20D909EB133905BEE), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:36:59.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), opTime: { ts: Timestamp(1567579015, 1), t: 1 }, wallTime: new Date(1567579015792) } 2019-09-04T06:36:59.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579017, 1), signature: { hash: BinData(0, E2ADA65F1A367D91344C54B20D909EB133905BEE), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:59.065+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:36:59.065+0000 I COMMAND [conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:36:59.065+0000 D2 COMMAND [conn90] run command admin.$cmd { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:36:59.065+0000 I COMMAND [conn90] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:866 locks:{} protocol:op_query 0ms 2019-09-04T06:36:59.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:59.071+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:59.117+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:59.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:59.163+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:59.201+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:59.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:59.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:59.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:59.217+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:59.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:36:59.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:36:59.247+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:59.247+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:59.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:59.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:59.317+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:59.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:59.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:59.417+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:59.518+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:59.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:59.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:59.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:59.572+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:59.618+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:59.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:59.663+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:59.701+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:59.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:59.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:59.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:59.718+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:59.746+0000 D2 COMMAND [conn51] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:59.747+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:59.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:59.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:59.795+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:36:59.795+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:36:59.795+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579015, 1) 2019-09-04T06:36:59.795+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 26995 
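
The heartbeat round-trip above also shows the election bookkeeping that keeps this secondary from standing for election: each healthy response from the primary (state: 1 from cmodb802) cancels the pending election timeout callback (06:37:07.522) and reschedules it roughly ten seconds out (06:37:08.861), while the next heartbeat to each member is queued two seconds ahead (06:37:00.839/06:37:00.841) — the stock 2 s heartbeat / 10 s election-timeout cadence. In parallel, ReplBatcher opens a read-only WiredTiger snapshot on local.oplog.rs about once a second looking for new entries to batch; with the set idle, every snapshot rolls back and lastOpCommitted stays pinned at Timestamp(1567579015, 1). A sketch for confirming the cadence from the shell, assuming default replica-set settings (which match the intervals seen here):

    rs.conf().settings    // expect heartbeatIntervalMillis: 2000, electionTimeoutMillis: 10000
    rs.status().optimes   // lastCommittedOpTime stays at Timestamp(1567579015, 1) while idle
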
2019-09-04T06:36:59.795+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:36:59.795+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:36:59.795+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 26995 2019-09-04T06:36:59.796+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26998 2019-09-04T06:36:59.796+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 26998 2019-09-04T06:36:59.796+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579015, 1), t: 1 }({ ts: Timestamp(1567579015, 1), t: 1 }) 2019-09-04T06:36:59.818+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:36:59.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:36:59.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:36:59.918+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:00.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:37:00.002+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:37:00.002+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:37:00.002+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:37:00.009+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:37:00.009+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:37:00.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:37:00.010+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:37:00.010+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:37:00.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:37:00.010+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:37:00.011+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } 
2019-09-04T06:37:00.012+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:37:00.012+0000 D2 COMMAND [conn90] command: replSetGetStatus
2019-09-04T06:37:00.012+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms
2019-09-04T06:37:00.012+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:37:00.012+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:37:00.012+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:37:00.012+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2019-09-04T06:37:00.012+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2019-09-04T06:37:00.013+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2019-09-04T06:37:00.013+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2019-09-04T06:37:00.013+0000 D5 QUERY [conn90] Predicate over field 'jumbo'
2019-09-04T06:37:00.013+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo
2019-09-04T06:37:00.013+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions.
2019-09-04T06:37:00.013+0000 D5 QUERY [conn90] Planner: outputting a collscan: COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:37:00.013+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:37:00.013+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567579015, 1)
2019-09-04T06:37:00.013+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27007
2019-09-04T06:37:00.013+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27007
2019-09-04T06:37:00.013+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
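The jumbo-chunk check above is a plain count on config.chunks: none of the four indexes rated by the planner covers the jumbo field, so the only plan is a COLLSCAN (cheap here, with docsExamined:1). A sketch of the equivalent client-side check, reusing the hypothetical client handle from the previous snippet:

    # Count jumbo chunks the way conn90 does; `client` is the hypothetical
    # MongoClient from the previous sketch, not something from the log.
    jumbo = client.config.chunks.count_documents({"jumbo": True})
    print("jumbo chunks:", jumbo)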
2019-09-04T06:37:00.013+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:37:00.013+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2019-09-04T06:37:00.013+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:37:00.013+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:37:00.013+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:37:00.013+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:37:00.013+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:37:00.013+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567579015, 1)
2019-09-04T06:37:00.013+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27010
2019-09-04T06:37:00.013+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27010
2019-09-04T06:37:00.013+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:37:00.013+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:37:00.013+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:37:00.013+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:37:00.013+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural
2019-09-04T06:37:00.013+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached. query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN
2019-09-04T06:37:00.013+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567579015, 1)
2019-09-04T06:37:00.013+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27012
2019-09-04T06:37:00.013+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27012
2019-09-04T06:37:00.013+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
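The two $natural-sorted finds above read the first and the last document of local.oplog.rs; the gap between their ts values is the oplog window the monitor is measuring. A sketch of the same calculation, again assuming the client handle from the first snippet:

    # Oplog window: seconds between the first and last oplog entries,
    # mirroring the two $natural-sorted finds above. `client` as before.
    oplog = client.local["oplog.rs"]
    first = oplog.find({"ts": {"$exists": True}}).sort("$natural", 1).limit(1).next()
    last = oplog.find({"ts": {"$exists": True}}).sort("$natural", -1).limit(1).next()
    print("oplog window (s):", last["ts"].time - first["ts"].time)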
2019-09-04T06:37:00.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:37:00.014+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist. Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1
2019-09-04T06:37:00.014+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms
2019-09-04T06:37:00.014+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:37:00.014+0000 D2 COMMAND [conn90] command: listDatabases
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27015
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" }
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11)
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27015
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27016
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" }
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12)
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27016
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27017
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" }
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13)
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27017
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27018
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" }
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14)
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27018
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27019
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" }
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15)
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27019
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27020
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" }
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16)
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27020
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27021
2019-09-04T06:37:00.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27021
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27022
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27022
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27023
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27023
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27024
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27024
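The listDatabases walk above reads every collection's catalog entry (options, index specs, and WiredTiger idents) out of the durable catalog under a short WT transaction each. From a client, the options and index specs, though not the storage idents, are visible via listCollections and listIndexes; a rough equivalent, assuming the same hypothetical client handle as before:

    # Client-visible slice of the catalog metadata dumped above:
    # collection options plus index specs (idents stay server-side).
    db = client.config
    for info in db.list_collections():
        print(info["name"], info.get("options", {}))
        for idx in db[info["name"]].list_indexes():
            print("   index:", idx["name"], dict(idx["key"]))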
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27025
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27025
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27026
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27026
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27027
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27027
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27028
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27028
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27029
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27029
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27030
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27030
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27031
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: "local/collection/8--6194257481163143499" }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27031
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27032
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27032
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27033
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27033
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27034
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27034
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:37:00.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27035
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" }
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4)
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27035
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27036
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" }
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3)
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27036
2019-09-04T06:37:00.016+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms
2019-09-04T06:37:00.016+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" }
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27038
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27038
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27039
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27039
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27040
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27040
2019-09-04T06:37:00.016+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:37:00.016+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" }
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27042
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27042
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27043
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27043
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27044
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27044
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27045
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27045
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27046
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27046
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27047
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27047
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27048
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27048
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27049
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27049
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27050
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27050
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27051
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27051
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27052
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27052
2019-09-04T06:37:00.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27053
2019-09-04T06:37:00.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27053
2019-09-04T06:37:00.017+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
2019-09-04T06:37:00.017+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" }
2019-09-04T06:37:00.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27055
2019-09-04T06:37:00.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27055
2019-09-04T06:37:00.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27056
2019-09-04T06:37:00.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27056
2019-09-04T06:37:00.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27057
2019-09-04T06:37:00.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27057
2019-09-04T06:37:00.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27058
2019-09-04T06:37:00.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27058
2019-09-04T06:37:00.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27059
2019-09-04T06:37:00.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27059
2019-09-04T06:37:00.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27060
2019-09-04T06:37:00.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27060
2019-09-04T06:37:00.017+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms
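The poll finishes with per-database dbStats for admin, config, and local; each one briefly opens and rolls back a WT transaction per collection to gather sizes. A client-side sketch of the listDatabases-plus-dbStats sequence, using the same assumed client handle:

    # listDatabases followed by dbStats per database, as in the log.
    for name in client.list_database_names():  # expect admin, config, local
        stats = client[name].command("dbStats")
        print(name, "collections:", stats["collections"], "dataSize:", stats["dataSize"])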
2019-09-04T06:37:00.018+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:00.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:00.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:00.071+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:00.072+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:00.118+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:00.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:00.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:00.201+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:00.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:00.216+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:00.216+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:00.218+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:00.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:37:00.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:37:00.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579017, 1), signature: { hash: BinData(0, E2ADA65F1A367D91344C54B20D909EB133905BEE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:37:00.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:37:00.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579017, 1), signature: { hash: BinData(0, E2ADA65F1A367D91344C54B20D909EB133905BEE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:37:00.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579017, 1), signature: { hash: BinData(0, E2ADA65F1A367D91344C54B20D909EB133905BEE), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:37:00.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), opTime: { ts: Timestamp(1567579015, 1), t: 1 }, wallTime: new Date(1567579015792) }
2019-09-04T06:37:00.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579017, 1), signature: { hash: BinData(0, E2ADA65F1A367D91344C54B20D909EB133905BEE), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
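The replSetHeartbeat exchange with cmodb804.togewa.com:27019 is internal replica-set traffic: the generated response shows this node as state: 2 (SECONDARY), syncing from cmodb804, with durable and applied optimes at Timestamp(1567579015, 1). Clients cannot send heartbeats themselves, but the same member state surfaces in replSetGetStatus; a sketch, reusing the assumed client handle:

    # Heartbeat-derived member state, as reported by replSetGetStatus.
    rs = client.admin.command("replSetGetStatus")
    print(rs["set"], "myState:", rs["myState"])  # configrs, 2 == SECONDARY
    for m in rs["members"]:
        print(m["name"], m["stateStr"], m["optime"]["ts"])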
admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:00.247+0000 I COMMAND [conn51] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:00.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:00.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:00.318+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:00.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:00.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:00.418+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:00.468+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:00.468+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:00.478+0000 D2 ASIO [RS] Request 1816 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567579020, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567579020475), o: { $v: 1, $set: { ping: new Date(1567579020473) } } }, { ts: Timestamp(1567579020, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567579020475), o: { $v: 1, $set: { ping: new Date(1567579020475) } } }, { ts: Timestamp(1567579020, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567579020475), o: { $v: 1, $set: { ping: new Date(1567579020475) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpApplied: { ts: Timestamp(1567579020, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } 2019-09-04T06:37:00.478+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567579020, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567579020475), o: { $v: 1, $set: { ping: new Date(1567579020473) } } }, { ts: Timestamp(1567579020, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: 
UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567579020475), o: { $v: 1, $set: { ping: new Date(1567579020475) } } }, { ts: Timestamp(1567579020, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567579020475), o: { $v: 1, $set: { ping: new Date(1567579020475) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpApplied: { ts: Timestamp(1567579020, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579015, 1), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:00.478+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:37:00.478+0000 D2 REPL [replication-0] oplog fetcher read 3 operations from remote oplog starting at ts: Timestamp(1567579020, 1) and ending at ts: Timestamp(1567579020, 3) 2019-09-04T06:37:00.478+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:37:08.861+0000 2019-09-04T06:37:00.478+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:37:11.369+0000 2019-09-04T06:37:00.478+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:00.478+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567579020, 3), t: 1 } 2019-09-04T06:37:00.478+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:00.478+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:37:00.478+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579015, 1) 2019-09-04T06:37:00.478+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27074 2019-09-04T06:37:00.478+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:37:00.478+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:37:00.478+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27074 2019-09-04T06:37:00.478+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:00.478+0000 D2 REPL [rsSync-0] replication batch size is 3 2019-09-04T06:37:00.478+0000 D3 STORAGE [ReplBatcher] looking up 
metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:37:00.478+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579015, 1) 2019-09-04T06:37:00.478+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567579020, 1) } 2019-09-04T06:37:00.478+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27077 2019-09-04T06:37:00.478+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:37:00.478+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:37:00.478+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27077 2019-09-04T06:37:00.478+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 26999 2019-09-04T06:37:00.478+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:28.839+0000 2019-09-04T06:37:00.478+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 26999 2019-09-04T06:37:00.478+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27080 2019-09-04T06:37:00.478+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27080 2019-09-04T06:37:00.478+0000 D3 EXECUTOR [repl-writer-worker-13] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:37:00.479+0000 D3 STORAGE [repl-writer-worker-13] WT begin_transaction for snapshot id 27082 2019-09-04T06:37:00.479+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567579020, 1) 2019-09-04T06:37:00.479+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567579020, 1) 2019-09-04T06:37:00.479+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567579020, 2) 2019-09-04T06:37:00.479+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567579020, 2) 2019-09-04T06:37:00.479+0000 D4 STORAGE [repl-writer-worker-13] inserting record with timestamp Timestamp(1567579020, 3) 2019-09-04T06:37:00.479+0000 D3 STORAGE [repl-writer-worker-13] WT set timestamp of future write operations to Timestamp(1567579020, 3) 2019-09-04T06:37:00.479+0000 D3 STORAGE [repl-writer-worker-13] WT commit_transaction for snapshot id 27082 2019-09-04T06:37:00.479+0000 D3 EXECUTOR [repl-writer-worker-13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:37:00.479+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:37:00.479+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27081 2019-09-04T06:37:00.479+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 27081 2019-09-04T06:37:00.479+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27084 2019-09-04T06:37:00.479+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27084 2019-09-04T06:37:00.479+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567579020, 3), t: 1 }({ ts: Timestamp(1567579020, 3), t: 1 }) 2019-09-04T06:37:00.479+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write 
operations to Timestamp(1567579020, 3) 2019-09-04T06:37:00.479+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27085 2019-09-04T06:37:00.479+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567579020, 3) } } ] } sort: {} projection: {} 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567579020, 3) Sort: {} Proj: {} ============================= 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567579020, 3) || First: notFirst: full path: ts 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567579020, 3) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567579020, 3) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567579020, 3) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567579020, 3) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:37:00.479+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 27085 2019-09-04T06:37:00.479+0000 D3 EXECUTOR [repl-writer-worker-14] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:37:00.479+0000 D3 STORAGE [repl-writer-worker-14] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:37:00.479+0000 D3 EXECUTOR [repl-writer-worker-12] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:37:00.479+0000 D3 STORAGE [repl-writer-worker-12] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:37:00.479+0000 D3 REPL [repl-writer-worker-14] applying op: { ts: Timestamp(1567579020, 3), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" }, wall: new Date(1567579020475), o: { $v: 1, $set: { ping: new Date(1567579020475) } } }, oplog application mode: Secondary 2019-09-04T06:37:00.479+0000 D3 REPL [repl-writer-worker-12] applying op: { ts: Timestamp(1567579020, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" }, wall: new Date(1567579020475), o: { $v: 1, $set: { ping: new Date(1567579020473) } } }, oplog application mode: Secondary 2019-09-04T06:37:00.479+0000 D3 STORAGE [repl-writer-worker-14] WT set timestamp of future write operations to Timestamp(1567579020, 3) 2019-09-04T06:37:00.479+0000 D3 STORAGE [repl-writer-worker-12] WT set timestamp of future write operations to Timestamp(1567579020, 1) 2019-09-04T06:37:00.479+0000 D3 STORAGE [repl-writer-worker-14] WT begin_transaction for snapshot id 27087 2019-09-04T06:37:00.479+0000 D3 STORAGE [repl-writer-worker-12] WT begin_transaction for snapshot id 27088 2019-09-04T06:37:00.479+0000 D2 QUERY [repl-writer-worker-12] Using idhack: { _id: "cmodb806.togewa.com:27018:1566460180:5935759852999151728" } 2019-09-04T06:37:00.479+0000 D2 QUERY [repl-writer-worker-14] Using idhack: { _id: "cmodb809.togewa.com:27018:1566460586:-775195962064398460" } 2019-09-04T06:37:00.479+0000 D4 WRITE [repl-writer-worker-12] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:37:00.479+0000 D4 WRITE [repl-writer-worker-14] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 
numMatched: 1 2019-09-04T06:37:00.479+0000 D3 STORAGE [repl-writer-worker-12] WT commit_transaction for snapshot id 27088 2019-09-04T06:37:00.479+0000 D3 STORAGE [repl-writer-worker-14] WT commit_transaction for snapshot id 27087 2019-09-04T06:37:00.479+0000 D3 EXECUTOR [repl-writer-worker-3] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:37:00.479+0000 D3 STORAGE [repl-writer-worker-3] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:37:00.479+0000 D3 EXECUTOR [repl-writer-worker-12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:37:00.479+0000 D3 EXECUTOR [repl-writer-worker-14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:37:00.479+0000 D3 REPL [repl-writer-worker-3] applying op: { ts: Timestamp(1567579020, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" }, wall: new Date(1567579020475), o: { $v: 1, $set: { ping: new Date(1567579020475) } } }, oplog application mode: Secondary 2019-09-04T06:37:00.479+0000 D3 STORAGE [repl-writer-worker-3] WT set timestamp of future write operations to Timestamp(1567579020, 2) 2019-09-04T06:37:00.479+0000 D3 STORAGE [repl-writer-worker-3] WT begin_transaction for snapshot id 27091 2019-09-04T06:37:00.479+0000 D2 QUERY [repl-writer-worker-3] Using idhack: { _id: "cmodb808.togewa.com:27018:1566460586:6100684078630646260" } 2019-09-04T06:37:00.479+0000 D4 WRITE [repl-writer-worker-3] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:37:00.479+0000 D3 STORAGE [repl-writer-worker-3] WT commit_transaction for snapshot id 27091 2019-09-04T06:37:00.479+0000 D3 EXECUTOR [repl-writer-worker-3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:37:00.479+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567579020, 3), t: 1 }({ ts: Timestamp(1567579020, 3), t: 1 }) 2019-09-04T06:37:00.479+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567579020, 3) 2019-09-04T06:37:00.479+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27086 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:37:00.479+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:37:00.479+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:37:00.479+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 27086 2019-09-04T06:37:00.479+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567579020, 3) 2019-09-04T06:37:00.480+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:37:00.480+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27094 2019-09-04T06:37:00.480+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27094 2019-09-04T06:37:00.480+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), appliedOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, appliedWallTime: new Date(1567579015792), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), appliedOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, appliedWallTime: new Date(1567579020475), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), appliedOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, appliedWallTime: new Date(1567579015792), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:37:00.480+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579020, 3), t: 1 }({ ts: Timestamp(1567579020, 3), t: 1 }) 2019-09-04T06:37:00.480+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1821 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:30.480+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), appliedOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, appliedWallTime: new Date(1567579015792), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), appliedOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, appliedWallTime: new Date(1567579020475), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), appliedOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, appliedWallTime: new Date(1567579015792), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579015, 1), t: 1 }, lastCommittedWall: new Date(1567579015792), lastOpVisible: { ts: Timestamp(1567579015, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:37:00.480+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:30.480+0000 2019-09-04T06:37:00.480+0000 D2 ASIO [RS] Request 1821 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: 
{ lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } 2019-09-04T06:37:00.480+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:00.480+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:37:00.480+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:30.480+0000 2019-09-04T06:37:00.480+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567579020, 3), t: 1 } 2019-09-04T06:37:00.480+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1822 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:37:10.480+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567579015, 1), t: 1 } } 2019-09-04T06:37:00.480+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:30.480+0000 2019-09-04T06:37:00.480+0000 D2 ASIO [RS] Request 1822 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpApplied: { ts: Timestamp(1567579020, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } 2019-09-04T06:37:00.480+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new 
Date(1567579020475), lastOpApplied: { ts: Timestamp(1567579020, 3), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:00.480+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:37:00.481+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:37:00.481+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567579015, 3) 2019-09-04T06:37:00.481+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:37:11.369+0000 2019-09-04T06:37:00.481+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:37:11.291+0000 2019-09-04T06:37:00.481+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:00.481+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:28.839+0000 2019-09-04T06:37:00.481+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1823 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:37:10.481+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567579020, 3), t: 1 } } 2019-09-04T06:37:00.481+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:30.480+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn534] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn534] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.288+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn557] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn557] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:15.593+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn565] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn565] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.447+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn546] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn546] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.660+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn569] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn569] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.662+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn561] Got notified of new snapshot: { ts: 
Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn561] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:16.415+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn551] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn551] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:14.696+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn556] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn556] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:00.753+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn542] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn542] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.527+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn558] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn558] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:07.852+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn553] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn553] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:22.594+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn536] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn536] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.339+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn562] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn562] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.425+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn540] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn540] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.408+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn564] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn564] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.430+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn566] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn566] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.461+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn547] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn547] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.660+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn560] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn560] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.280+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn533] Got notified of new snapshot: { ts: 
Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn533] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.272+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn563] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn563] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.426+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn539] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn539] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.438+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn571] Got notified of new snapshot: { ts: Timestamp(1567579020, 3), t: 1 }, 2019-09-04T06:37:00.475+0000 2019-09-04T06:37:00.481+0000 D3 REPL [conn571] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.767+0000 2019-09-04T06:37:00.483+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:37:00.483+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:37:00.483+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), appliedOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, appliedWallTime: new Date(1567579015792), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), appliedOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, appliedWallTime: new Date(1567579020475), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), appliedOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, appliedWallTime: new Date(1567579015792), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:37:00.483+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1824 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:30.483+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), appliedOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, appliedWallTime: new Date(1567579015792), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), appliedOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, appliedWallTime: new Date(1567579020475), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, durableWallTime: new Date(1567579015792), appliedOpTime: { ts: Timestamp(1567579015, 1), t: 1 }, appliedWallTime: new Date(1567579015792), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:37:00.483+0000 
D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:30.480+0000 2019-09-04T06:37:00.483+0000 D2 ASIO [RS] Request 1824 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } 2019-09-04T06:37:00.483+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:00.483+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:37:00.483+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:30.480+0000 2019-09-04T06:37:00.518+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:00.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:00.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:00.571+0000 D2 COMMAND [conn52] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:00.572+0000 I COMMAND [conn52] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:00.578+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567579020, 3) 2019-09-04T06:37:00.619+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:00.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:00.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:00.701+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:00.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:00.716+0000 D2 COMMAND [conn31] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:00.716+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:00.719+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:00.743+0000 I NETWORK 
[listener] connection accepted from 10.108.2.52:47440 #573 (84 connections now open) 2019-09-04T06:37:00.743+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:37:00.743+0000 D2 COMMAND [conn573] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:37:00.743+0000 I NETWORK [conn573] received client metadata from 10.108.2.52:47440 conn573: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:37:00.743+0000 I COMMAND [conn573] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:37:00.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:00.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:00.753+0000 I COMMAND [conn556] Command on database config timed out waiting for read concern to be satisfied. 
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578982, 1), signature: { hash: BinData(0, 71D77AFD556EDF81B93114236E65E2BB26C765AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:37:00.753+0000 D1 - [conn556] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:00.753+0000 W - [conn556] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:37:00.770+0000 I - [conn556] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], 
"uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : 
"3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:00.770+0000 D1 COMMAND [conn556] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578982, 1), signature: { hash: BinData(0, 71D77AFD556EDF81B93114236E65E2BB26C765AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:37:00.770+0000 D1 - [conn556] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:37:00.770+0000 W - [conn556] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:37:00.790+0000 I - [conn556] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"5617
48F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : 
"BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:00.790+0000 W COMMAND [conn556] Unable to gather storage statistics for a slow operation due to lock acquire timeout 2019-09-04T06:37:00.790+0000 I COMMAND [conn556] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578982, 1), signature: { hash: BinData(0, 71D77AFD556EDF81B93114236E65E2BB26C765AF), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30026ms 2019-09-04T06:37:00.790+0000 D2 NETWORK [conn556] Session from 10.108.2.52:47422 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:37:00.790+0000 I NETWORK [conn556] end connection 10.108.2.52:47422 (83 connections now open) 2019-09-04T06:37:00.819+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:00.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:00.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1825) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:00.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1825 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:37:10.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:00.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:28.839+0000 2019-09-04T06:37:00.839+0000 D2 ASIO [Replication] Request 1825 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), opTime: { ts: Timestamp(1567579020, 3), t: 1 }, wallTime: new Date(1567579020475), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') },
lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } 2019-09-04T06:37:00.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), opTime: { ts: Timestamp(1567579020, 3), t: 1 }, wallTime: new Date(1567579020475), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:37:00.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:00.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1825) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), opTime: { ts: Timestamp(1567579020, 3), t: 1 }, wallTime: new Date(1567579020475), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } 2019-09-04T06:37:00.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:37:00.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:37:11.291+0000 2019-09-04T06:37:00.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:37:11.383+0000 2019-09-04T06:37:00.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:37:00.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:37:02.839Z 2019-09-04T06:37:00.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:00.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:30.839+0000 2019-09-04T06:37:00.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:30.839+0000 2019-09-04T06:37:00.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:00.841+0000 D2 REPL_HB [replexec-4] Sending heartbeat 
(requestId: 1826) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:00.841+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1826 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:10.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:00.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:30.839+0000 2019-09-04T06:37:00.841+0000 D2 ASIO [Replication] Request 1826 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), opTime: { ts: Timestamp(1567579020, 3), t: 1 }, wallTime: new Date(1567579020475), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } 2019-09-04T06:37:00.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), opTime: { ts: Timestamp(1567579020, 3), t: 1 }, wallTime: new Date(1567579020475), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:00.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:00.841+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1826) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), opTime: { ts: Timestamp(1567579020, 3), t: 1 }, wallTime: new Date(1567579020475), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } 2019-09-04T06:37:00.841+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:37:00.841+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:37:02.841Z 2019-09-04T06:37:00.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:30.839+0000 2019-09-04T06:37:00.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:00.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:00.919+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:00.968+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:00.968+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:01.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:37:01.019+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:01.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:01.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:01.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 2F2D0E3F548EEC5812CA69EFF565FB54E237A004), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:01.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:37:01.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 2F2D0E3F548EEC5812CA69EFF565FB54E237A004), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:01.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 2F2D0E3F548EEC5812CA69EFF565FB54E237A004), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:01.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), opTime: { ts: Timestamp(1567579020, 3), t: 1 }, wallTime: new Date(1567579020475) } 2019-09-04T06:37:01.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, 
$replData: 1, $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 2F2D0E3F548EEC5812CA69EFF565FB54E237A004), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:01.119+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:01.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:01.163+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:01.201+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:01.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:01.219+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:01.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:37:01.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:37:01.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:01.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:01.304+0000 I NETWORK [listener] connection accepted from 10.108.2.62:53684 #574 (84 connections now open) 2019-09-04T06:37:01.304+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:37:01.304+0000 D2 COMMAND [conn574] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:37:01.304+0000 I NETWORK [conn574] received client metadata from 10.108.2.62:53684 conn574: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:37:01.304+0000 I COMMAND [conn574] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:37:01.308+0000 D2 COMMAND [conn574] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579012, 1), signature: { hash: BinData(0, 03F942D7850679A43066F511B52016CE3558C974), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:37:01.308+0000 D1 REPL [conn574] waitUntilOpTime: waiting for optime:{ ts: 
Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567579020, 3), t: 1 } 2019-09-04T06:37:01.308+0000 D3 REPL [conn574] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:31.318+0000 2019-09-04T06:37:01.319+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:01.383+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:01.383+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:01.419+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:01.468+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:01.468+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:01.478+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:01.478+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:37:01.479+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579020, 3) 2019-09-04T06:37:01.479+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27117 2019-09-04T06:37:01.479+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:37:01.479+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:37:01.479+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27117 2019-09-04T06:37:01.480+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27120 2019-09-04T06:37:01.480+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27120 2019-09-04T06:37:01.480+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579020, 3), t: 1 }({ ts: Timestamp(1567579020, 3), t: 1 }) 2019-09-04T06:37:01.519+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:01.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:01.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:01.619+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:01.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:01.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:01.700+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:01.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:01.720+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:01.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:01.753+0000 I COMMAND [conn60] 
command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:01.820+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:01.883+0000 D2 COMMAND [conn22] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:01.883+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:01.920+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:01.968+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:01.968+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:02.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:37:02.020+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:02.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:02.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:02.120+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:02.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:02.163+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:02.201+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:02.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:02.220+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:02.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:37:02.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:37:02.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 2F2D0E3F548EEC5812CA69EFF565FB54E237A004), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:02.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:37:02.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 2F2D0E3F548EEC5812CA69EFF565FB54E237A004), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:02.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 2F2D0E3F548EEC5812CA69EFF565FB54E237A004), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:02.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), opTime: { ts: Timestamp(1567579020, 3), t: 1 }, wallTime: new Date(1567579020475) } 2019-09-04T06:37:02.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 2F2D0E3F548EEC5812CA69EFF565FB54E237A004), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:02.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:02.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:02.320+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:02.420+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:02.468+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:02.468+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:02.479+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:02.479+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:37:02.479+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579020, 3) 2019-09-04T06:37:02.479+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27136 2019-09-04T06:37:02.479+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, 
autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:37:02.479+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:37:02.479+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27136 2019-09-04T06:37:02.480+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27139 2019-09-04T06:37:02.480+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27139 2019-09-04T06:37:02.480+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579020, 3), t: 1 }({ ts: Timestamp(1567579020, 3), t: 1 }) 2019-09-04T06:37:02.520+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:02.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:02.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:02.621+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:02.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:02.663+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:02.701+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:02.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:02.721+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:02.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:02.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:02.821+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:02.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:02.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1827) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:02.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1827 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:37:12.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:02.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:30.839+0000 2019-09-04T06:37:02.839+0000 D2 ASIO [Replication] Request 1827 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), opTime: { ts: Timestamp(1567579020, 3), t: 1 }, wallTime: new Date(1567579020475), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, 
$gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } 2019-09-04T06:37:02.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), opTime: { ts: Timestamp(1567579020, 3), t: 1 }, wallTime: new Date(1567579020475), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } target: cmodb802.togewa.com:27019 2019-09-04T06:37:02.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:02.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1827) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), opTime: { ts: Timestamp(1567579020, 3), t: 1 }, wallTime: new Date(1567579020475), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } 2019-09-04T06:37:02.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:37:02.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:37:11.383+0000 2019-09-04T06:37:02.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:37:13.180+0000 2019-09-04T06:37:02.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:37:02.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:37:04.839Z 2019-09-04T06:37:02.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:02.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:32.839+0000 2019-09-04T06:37:02.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:32.839+0000 2019-09-04T06:37:02.841+0000 D3 EXECUTOR [replexec-3] Executing a task on 
behalf of pool replexec 2019-09-04T06:37:02.841+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1828) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:02.841+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1828 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:12.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:02.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:32.839+0000 2019-09-04T06:37:02.841+0000 D2 ASIO [Replication] Request 1828 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), opTime: { ts: Timestamp(1567579020, 3), t: 1 }, wallTime: new Date(1567579020475), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } 2019-09-04T06:37:02.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), opTime: { ts: Timestamp(1567579020, 3), t: 1 }, wallTime: new Date(1567579020475), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:02.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:02.841+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1828) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), opTime: { ts: Timestamp(1567579020, 3), t: 1 }, wallTime: new Date(1567579020475), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579020, 3) } 2019-09-04T06:37:02.841+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:37:02.841+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:37:04.841Z 2019-09-04T06:37:02.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:32.839+0000 2019-09-04T06:37:02.921+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:02.968+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:02.968+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:03.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:37:03.021+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:03.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:03.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:03.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 2F2D0E3F548EEC5812CA69EFF565FB54E237A004), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:03.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:37:03.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 2F2D0E3F548EEC5812CA69EFF565FB54E237A004), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:03.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 2F2D0E3F548EEC5812CA69EFF565FB54E237A004), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:03.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), opTime: { ts: Timestamp(1567579020, 3), t: 1 }, wallTime: new Date(1567579020475) } 2019-09-04T06:37:03.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579020, 3), signature: { hash: BinData(0, 2F2D0E3F548EEC5812CA69EFF565FB54E237A004), keyId: 6727891476899954718 } }, $db: "admin" } 
numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:03.121+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:03.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:03.163+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:03.201+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:03.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:03.221+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:03.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:37:03.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:37:03.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:03.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:03.254+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:03.254+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:03.321+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:03.421+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:03.468+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:03.468+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:03.479+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:03.479+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:37:03.479+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579020, 3) 2019-09-04T06:37:03.479+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27156 2019-09-04T06:37:03.479+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:37:03.479+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:37:03.479+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27156 2019-09-04T06:37:03.480+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27159 2019-09-04T06:37:03.480+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27159 2019-09-04T06:37:03.480+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579020, 3), t: 1 }({ ts: Timestamp(1567579020, 3), t: 1 }) 2019-09-04T06:37:03.522+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:03.525+0000 D2 COMMAND [conn90] run command admin.$cmd { logout: 1, $db: "admin" } 2019-09-04T06:37:03.525+0000 I COMMAND 
[conn90] command admin.$cmd command: logout { logout: 1, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:37:03.525+0000 D2 COMMAND [conn90] run command admin.$cmd { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:37:03.525+0000 I COMMAND [conn90] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:267 locks:{} protocol:op_query 0ms 2019-09-04T06:37:03.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:03.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:03.622+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:03.647+0000 D2 ASIO [RS] Request 1823 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567579023, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567579023645), o: { $v: 1, $set: { ping: new Date(1567579023640) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpApplied: { ts: Timestamp(1567579023, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) } 2019-09-04T06:37:03.647+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567579023, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567579023645), o: { $v: 1, $set: { ping: new Date(1567579023640) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpApplied: { ts: Timestamp(1567579023, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:03.647+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of 
pool replication
2019-09-04T06:37:03.647+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567579023, 1) and ending at ts: Timestamp(1567579023, 1)
2019-09-04T06:37:03.647+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:37:13.180+0000
2019-09-04T06:37:03.647+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:37:13.805+0000
2019-09-04T06:37:03.647+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:37:03.647+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567579023, 1), t: 1 }
2019-09-04T06:37:03.647+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:37:03.647+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:37:03.647+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579020, 3)
2019-09-04T06:37:03.647+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27165
2019-09-04T06:37:03.647+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:37:03.647+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:37:03.647+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27165
2019-09-04T06:37:03.647+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:37:03.647+0000 D2 REPL [rsSync-0] replication batch size is 1
2019-09-04T06:37:03.647+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:37:03.647+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579020, 3)
2019-09-04T06:37:03.647+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27168
2019-09-04T06:37:03.647+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567579023, 1) }
2019-09-04T06:37:03.647+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:32.839+0000
2019-09-04T06:37:03.647+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:37:03.647+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:37:03.647+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27160
2019-09-04T06:37:03.647+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27168
2019-09-04T06:37:03.647+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 27160
2019-09-04T06:37:03.647+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27171
2019-09-04T06:37:03.647+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27171
2019-09-04T06:37:03.647+0000 D3 EXECUTOR [repl-writer-worker-10] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:37:03.648+0000 D3 STORAGE [repl-writer-worker-10] WT begin_transaction for snapshot id 27173
2019-09-04T06:37:03.648+0000 D4 STORAGE [repl-writer-worker-10] inserting record with timestamp Timestamp(1567579023, 1)
2019-09-04T06:37:03.648+0000 D3 STORAGE [repl-writer-worker-10] WT set timestamp of future write operations to Timestamp(1567579023, 1)
2019-09-04T06:37:03.648+0000 D3 STORAGE [repl-writer-worker-10] WT commit_transaction for snapshot id 27173
2019-09-04T06:37:03.648+0000 D3 EXECUTOR [repl-writer-worker-10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:37:03.648+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) }
2019-09-04T06:37:03.648+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27172
2019-09-04T06:37:03.648+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 27172
2019-09-04T06:37:03.648+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27175
2019-09-04T06:37:03.648+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27175
2019-09-04T06:37:03.648+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567579023, 1), t: 1 }({ ts: Timestamp(1567579023, 1), t: 1 })
2019-09-04T06:37:03.648+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567579023, 1)
2019-09-04T06:37:03.648+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27176
2019-09-04T06:37:03.648+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567579023, 1) } } ] } sort: {} projection: {}
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567579023, 1) Sort: {} Proj: {} =============================
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567579023, 1) || First: notFirst: full path: ts
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567579023, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567579023, 1) t $lt 1 Sort: {} Proj: {} =============================
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Predicate over field 'ts'
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Predicate over field 't'
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567579023, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567579023, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:37:03.648+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 27176
2019-09-04T06:37:03.648+0000 D3 EXECUTOR [repl-writer-worker-4] Executing a task on behalf of pool repl writer worker Pool
2019-09-04T06:37:03.648+0000 D3 STORAGE [repl-writer-worker-4] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:37:03.648+0000 D3 REPL [repl-writer-worker-4] applying op: { ts: Timestamp(1567579023, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }, wall: new Date(1567579023645), o: { $v: 1, $set: { ping: new Date(1567579023640) } } }, oplog application mode: Secondary
2019-09-04T06:37:03.648+0000 D3 STORAGE [repl-writer-worker-4] WT set timestamp of future write operations to Timestamp(1567579023, 1)
2019-09-04T06:37:03.648+0000 D3 STORAGE [repl-writer-worker-4] WT begin_transaction for snapshot id 27178
2019-09-04T06:37:03.648+0000 D2 QUERY [repl-writer-worker-4] Using idhack: { _id: "cmodb811.togewa.com:27018:1566460779:627962261717024944" }
2019-09-04T06:37:03.648+0000 D4 WRITE [repl-writer-worker-4] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1
2019-09-04T06:37:03.648+0000 D3 STORAGE [repl-writer-worker-4] WT commit_transaction for snapshot id 27178
2019-09-04T06:37:03.648+0000 D3 EXECUTOR [repl-writer-worker-4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16
2019-09-04T06:37:03.648+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567579023, 1), t: 1 }({ ts: Timestamp(1567579023, 1), t: 1 })
2019-09-04T06:37:03.648+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567579023, 1)
2019-09-04T06:37:03.648+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27177
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} =============================
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Rated tree: $and
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions.
2019-09-04T06:37:03.648+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = []
2019-09-04T06:37:03.648+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2019-09-04T06:37:03.648+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 27177
2019-09-04T06:37:03.648+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567579023, 1)
2019-09-04T06:37:03.648+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27181
2019-09-04T06:37:03.648+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27181
2019-09-04T06:37:03.648+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579023, 1), t: 1 }({ ts: Timestamp(1567579023, 1), t: 1 })
2019-09-04T06:37:03.648+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:37:03.649+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), appliedOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, appliedWallTime: new Date(1567579020475), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), appliedOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, appliedWallTime: new Date(1567579020475), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:37:03.649+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1829 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:33.649+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), appliedOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, appliedWallTime: new Date(1567579020475), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), appliedOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, appliedWallTime: new Date(1567579020475), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:37:03.649+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:33.648+0000
2019-09-04T06:37:03.649+0000 D2 ASIO [RS] Request 1829 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) }
2019-09-04T06:37:03.649+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579020, 3), t: 1 }, lastCommittedWall: new Date(1567579020475), lastOpVisible: { ts: Timestamp(1567579020, 3), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579020, 3), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:37:03.649+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:37:03.649+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:33.649+0000
2019-09-04T06:37:03.649+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567579023, 1), t: 1 }
2019-09-04T06:37:03.649+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1830 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:37:13.649+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567579020, 3), t: 1 } }
2019-09-04T06:37:03.649+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:33.649+0000
2019-09-04T06:37:03.652+0000 D2 ASIO [RS] Request 1830 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpApplied: { ts: Timestamp(1567579023, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) }
2019-09-04T06:37:03.652+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpApplied: { ts: Timestamp(1567579023, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:37:03.653+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:37:03.653+0000 D2 REPL [replication-1] oplog fetcher read 0 operations from remote oplog
2019-09-04T06:37:03.653+0000 D2 REPL [replication-1] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D2 REPL [replication-1] Setting replication's stable optime to { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D2 STORAGE [replication-1] oldest_timestamp set to Timestamp(1567579018, 1)
2019-09-04T06:37:03.653+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:37:13.805+0000
2019-09-04T06:37:03.653+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:37:13.976+0000
2019-09-04T06:37:03.653+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1831 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:37:13.653+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567579023, 1), t: 1 } }
2019-09-04T06:37:03.653+0000 D3 REPL [conn546] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn546] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.660+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn557] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn557] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:15.593+0000
2019-09-04T06:37:03.653+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:33.649+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn565] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn565] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.447+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn553] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn553] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:22.594+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn540] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn540] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.408+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn533] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn533] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.272+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn560] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn560] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.280+0000
2019-09-04T06:37:03.653+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:37:03.653+0000 D3 REPL [conn563] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn563] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.426+0000
2019-09-04T06:37:03.653+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:32.839+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn571] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn571] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.767+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn534] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn534] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.288+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn561] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn561] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:16.415+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn558] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn558] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:07.852+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn569] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn569] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.662+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn551] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn551] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:14.696+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn542] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn542] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.527+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn536] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn536] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.339+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn562] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn562] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.425+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn564] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn564] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.430+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn547] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn547] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.660+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn566] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn566] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.461+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn539] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn539] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.438+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn574] Got notified of new snapshot: { ts: Timestamp(1567579023, 1), t: 1 }, 2019-09-04T06:37:03.645+0000
2019-09-04T06:37:03.653+0000 D3 REPL [conn574] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:31.318+0000
2019-09-04T06:37:03.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:03.663+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:03.681+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal
2019-09-04T06:37:03.681+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-09-04T06:37:03.681+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), appliedOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, appliedWallTime: new Date(1567579020475), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), appliedOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, appliedWallTime: new Date(1567579020475), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:37:03.682+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1832 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:33.682+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), appliedOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, appliedWallTime: new Date(1567579020475), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, durableWallTime: new Date(1567579020475), appliedOpTime: { ts: Timestamp(1567579020, 3), t: 1 }, appliedWallTime: new Date(1567579020475), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } }
2019-09-04T06:37:03.682+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:33.649+0000
2019-09-04T06:37:03.682+0000 D2 ASIO [RS] Request 1832 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) }
2019-09-04T06:37:03.682+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:37:03.682+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication
2019-09-04T06:37:03.682+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:33.649+0000
2019-09-04T06:37:03.700+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:03.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:03.722+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:03.747+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567579023, 1)
2019-09-04T06:37:03.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:03.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:03.754+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:03.754+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:03.822+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:03.922+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:03.968+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:03.968+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:04.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:37:04.022+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:04.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:04.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:04.122+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:04.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:04.163+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:04.200+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:04.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:04.222+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:04.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:37:04.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:37:04.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 4EE11E92D2D335D742705167D7E9102F24ABE03F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:37:04.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:37:04.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 4EE11E92D2D335D742705167D7E9102F24ABE03F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:37:04.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 4EE11E92D2D335D742705167D7E9102F24ABE03F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:37:04.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), opTime: { ts: Timestamp(1567579023, 1), t: 1 }, wallTime: new Date(1567579023645) }
2019-09-04T06:37:04.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 4EE11E92D2D335D742705167D7E9102F24ABE03F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:04.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:04.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:04.254+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:04.254+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:04.322+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:04.422+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:04.468+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:04.468+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:04.523+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:04.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:04.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:04.623+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:04.648+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:37:04.648+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:37:04.648+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579023, 1)
2019-09-04T06:37:04.648+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27200
2019-09-04T06:37:04.648+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:37:04.648+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:37:04.648+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27200
2019-09-04T06:37:04.648+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27203
2019-09-04T06:37:04.649+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27203
2019-09-04T06:37:04.649+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579023, 1), t: 1 }({ ts: Timestamp(1567579023, 1), t: 1 })
2019-09-04T06:37:04.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:04.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:04.701+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:04.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:04.723+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:04.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:04.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:04.754+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:04.754+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:04.823+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:04.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:37:04.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1833) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:37:04.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1833 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:37:14.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:37:04.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:32.839+0000
2019-09-04T06:37:04.839+0000 D2 ASIO [Replication] Request 1833 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), opTime: { ts: Timestamp(1567579023, 1), t: 1 }, wallTime: new Date(1567579023645), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) }
2019-09-04T06:37:04.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), opTime: { ts: Timestamp(1567579023, 1), t: 1 }, wallTime: new Date(1567579023645), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) } target: cmodb802.togewa.com:27019
2019-09-04T06:37:04.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:37:04.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1833) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), opTime: { ts: Timestamp(1567579023, 1), t: 1 }, wallTime: new Date(1567579023645), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) }
2019-09-04T06:37:04.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary
2019-09-04T06:37:04.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:37:13.976+0000
2019-09-04T06:37:04.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:37:15.381+0000
2019-09-04T06:37:04.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:37:04.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0)
2019-09-04T06:37:04.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:34.839+0000
2019-09-04T06:37:04.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:37:06.839Z
2019-09-04T06:37:04.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:34.839+0000
2019-09-04T06:37:04.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:37:04.841+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1834) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:37:04.841+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1834 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:14.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 }
2019-09-04T06:37:04.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:34.839+0000
2019-09-04T06:37:04.841+0000 D2 ASIO [Replication] Request 1834 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), opTime: { ts: Timestamp(1567579023, 1), t: 1 }, wallTime: new Date(1567579023645), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) }
2019-09-04T06:37:04.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), opTime: { ts: Timestamp(1567579023, 1), t: 1 }, wallTime: new Date(1567579023645), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) } target: cmodb804.togewa.com:27019
2019-09-04T06:37:04.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec
2019-09-04T06:37:04.841+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1834) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), opTime: { ts: Timestamp(1567579023, 1), t: 1 }, wallTime: new Date(1567579023645), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) }
2019-09-04T06:37:04.841+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2)
2019-09-04T06:37:04.841+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:37:06.841Z
2019-09-04T06:37:04.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:34.839+0000
2019-09-04T06:37:04.923+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:04.968+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:04.968+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:05.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:37:05.023+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:05.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:05.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:05.063+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec
2019-09-04T06:37:05.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:37:04.839+0000
2019-09-04T06:37:05.063+0000 D3 REPL [replexec-3] memberData lastupdate is: 2019-09-04T06:37:04.841+0000
2019-09-04T06:37:05.063+0000 D3 REPL [replexec-3] stalest member MemberId(0) date: 2019-09-04T06:37:04.839+0000
2019-09-04T06:37:05.063+0000 D3 REPL [replexec-3] scheduling next check at 2019-09-04T06:37:14.839+0000
2019-09-04T06:37:05.063+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:34.839+0000
2019-09-04T06:37:05.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 4EE11E92D2D335D742705167D7E9102F24ABE03F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:37:05.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat
2019-09-04T06:37:05.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 4EE11E92D2D335D742705167D7E9102F24ABE03F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:37:05.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 4EE11E92D2D335D742705167D7E9102F24ABE03F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:37:05.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), opTime: { ts: Timestamp(1567579023, 1), t: 1 }, wallTime: new Date(1567579023645) }
2019-09-04T06:37:05.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 4EE11E92D2D335D742705167D7E9102F24ABE03F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:05.123+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:05.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:05.163+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:05.201+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:05.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:05.223+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:05.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:37:05.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:37:05.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:05.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:05.254+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:05.254+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:05.323+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:05.423+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:05.468+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:05.468+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:05.524+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:05.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:05.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:05.624+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:05.648+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none
2019-09-04T06:37:05.648+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10)
2019-09-04T06:37:05.648+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579023, 1)
2019-09-04T06:37:05.648+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27220
2019-09-04T06:37:05.648+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } }
2019-09-04T06:37:05.648+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 }
2019-09-04T06:37:05.648+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27220
2019-09-04T06:37:05.649+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27223
2019-09-04T06:37:05.649+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27223
2019-09-04T06:37:05.649+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579023, 1), t: 1 }({ ts: Timestamp(1567579023, 1), t: 1 })
2019-09-04T06:37:05.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:05.663+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:05.701+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:05.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:05.724+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:05.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:05.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:05.754+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:05.754+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:05.824+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:05.924+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:05.968+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:05.968+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:06.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none
2019-09-04T06:37:06.024+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:06.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:06.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:06.124+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:06.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:06.163+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:06.201+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:06.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:06.224+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:06.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0
2019-09-04T06:37:06.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000
2019-09-04T06:37:06.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 4EE11E92D2D335D742705167D7E9102F24ABE03F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:37:06.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat
2019-09-04T06:37:06.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 4EE11E92D2D335D742705167D7E9102F24ABE03F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:37:06.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 4EE11E92D2D335D742705167D7E9102F24ABE03F), keyId: 6727891476899954718 } }, $db: "admin" }
2019-09-04T06:37:06.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), opTime: { ts: Timestamp(1567579023, 1), t: 1 }, wallTime: new Date(1567579023645) }
2019-09-04T06:37:06.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 4EE11E92D2D335D742705167D7E9102F24ABE03F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:06.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:06.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:06.254+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:06.254+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:06.324+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:06.337+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:06.337+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:06.425+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:06.468+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:06.468+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms
2019-09-04T06:37:06.525+0000 D4 STORAGE [WTJournalFlusher] flushed journal
2019-09-04T06:37:06.552+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" }
2019-09-04T06:37:06.552+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:06.625+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:06.648+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:06.648+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:37:06.648+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579023, 1) 2019-09-04T06:37:06.648+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27241 2019-09-04T06:37:06.648+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:37:06.648+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:37:06.648+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27241 2019-09-04T06:37:06.649+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27244 2019-09-04T06:37:06.649+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27244 2019-09-04T06:37:06.649+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579023, 1), t: 1 }({ ts: Timestamp(1567579023, 1), t: 1 }) 2019-09-04T06:37:06.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:06.663+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:06.700+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:06.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:06.725+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:06.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:06.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:06.754+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:06.754+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:06.825+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:06.837+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:06.837+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:06.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:06.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1835) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:06.839+0000 D3 EXECUTOR 
[replexec-4] Scheduling remote command request: RemoteCommand 1835 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:37:16.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:06.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:34.839+0000 2019-09-04T06:37:06.839+0000 D2 ASIO [Replication] Request 1835 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), opTime: { ts: Timestamp(1567579023, 1), t: 1 }, wallTime: new Date(1567579023645), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) } 2019-09-04T06:37:06.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), opTime: { ts: Timestamp(1567579023, 1), t: 1 }, wallTime: new Date(1567579023645), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) } target: cmodb802.togewa.com:27019 2019-09-04T06:37:06.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:06.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1835) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), opTime: { ts: Timestamp(1567579023, 1), t: 1 }, wallTime: new Date(1567579023645), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: 
Timestamp(1567579023, 1) } 2019-09-04T06:37:06.839+0000 D4 ELECTION [replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:37:06.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:37:15.381+0000 2019-09-04T06:37:06.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:37:17.484+0000 2019-09-04T06:37:06.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:37:06.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:37:08.839Z 2019-09-04T06:37:06.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:06.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:36.839+0000 2019-09-04T06:37:06.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:36.839+0000 2019-09-04T06:37:06.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:06.841+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1836) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:06.841+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1836 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:16.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:06.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:36.839+0000 2019-09-04T06:37:06.841+0000 D2 ASIO [Replication] Request 1836 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), opTime: { ts: Timestamp(1567579023, 1), t: 1 }, wallTime: new Date(1567579023645), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) } 2019-09-04T06:37:06.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), opTime: { ts: Timestamp(1567579023, 1), t: 1 }, wallTime: new Date(1567579023645), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: 
Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:06.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:06.841+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1836) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), opTime: { ts: Timestamp(1567579023, 1), t: 1 }, wallTime: new Date(1567579023645), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579023, 1) } 2019-09-04T06:37:06.841+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:37:06.841+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:37:08.841Z 2019-09-04T06:37:06.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:36.839+0000 2019-09-04T06:37:06.925+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:06.968+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:06.968+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:07.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:37:07.025+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:07.052+0000 D2 COMMAND [conn58] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:07.052+0000 I COMMAND [conn58] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:07.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 4EE11E92D2D335D742705167D7E9102F24ABE03F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:07.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:37:07.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 4EE11E92D2D335D742705167D7E9102F24ABE03F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:07.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", 
configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 4EE11E92D2D335D742705167D7E9102F24ABE03F), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:07.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), opTime: { ts: Timestamp(1567579023, 1), t: 1 }, wallTime: new Date(1567579023645) } 2019-09-04T06:37:07.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579023, 1), signature: { hash: BinData(0, 4EE11E92D2D335D742705167D7E9102F24ABE03F), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:07.125+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:07.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:07.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:07.201+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:07.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:07.225+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:07.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:37:07.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:37:07.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:07.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:07.254+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:07.254+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:07.326+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:07.337+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:07.337+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:07.426+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:07.468+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:07.468+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:07.526+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:07.626+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:07.648+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:07.648+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:37:07.648+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579023, 1) 2019-09-04T06:37:07.648+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27263 2019-09-04T06:37:07.648+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:37:07.648+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:37:07.648+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27263 2019-09-04T06:37:07.649+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27266 2019-09-04T06:37:07.649+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27266 2019-09-04T06:37:07.649+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579023, 1), t: 1 }({ ts: Timestamp(1567579023, 1), t: 1 }) 2019-09-04T06:37:07.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:07.663+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:07.700+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:07.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:07.713+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:07.713+0000 I COMMAND [conn6] command 
admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:07.726+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:07.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:07.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:07.754+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:07.754+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:07.826+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:07.837+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:07.837+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:07.841+0000 I NETWORK [listener] connection accepted from 10.108.2.44:38948 #575 (85 connections now open) 2019-09-04T06:37:07.841+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:37:07.842+0000 D2 COMMAND [conn575] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:37:07.842+0000 I NETWORK [conn575] received client metadata from 10.108.2.44:38948 conn575: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:37:07.842+0000 I COMMAND [conn575] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:37:07.855+0000 I COMMAND [conn558] Command on database config timed out waiting for read concern to be satisfied. 
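Note: the warning above marks the only real fault in this capture. conn558, a find against config.collections forwarded by another cluster node (it carries $configServerState), asked for readConcern { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }. That optime is about 13 days older than the current clusterTime and names term 92, while every heartbeat here agrees this set is in term 1, which suggests the config replica set was re-initiated and the requester is holding stale state. An afterOpTime from a term this set never reaches can never become majority-committed, so waitForReadConcern blocks until the 30000 ms maxTimeMS budget runs out and the command fails with MaxTimeMSExpired. The two backtraces that follow are exception traces for that assertion (the second fires again in the slow-op logger while it tries to take the global lock for storage statistics); they are diagnostics, not crashes, and the HostUnreachable at the end of the episode is the client hanging up after the 30029 ms command line is logged, not a node outage. A hedged reconstruction of the failing read, assuming pymongo plus bson, and noting that afterOpTime is normally sent only by internal clients, so a plain driver connection may be rejected before it ever waits:

    from bson.timestamp import Timestamp
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout

    db = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)["config"]
    try:
        db.command(
            "find", "collections",
            filter={"_id": "config.system.sessions"},
            limit=1,
            maxTimeMS=30000,
            # Term 92 does not exist in this set (it is in term 1), so the
            # majority wait cannot complete and maxTimeMS fires, as logged.
            readConcern={"level": "majority",
                         "afterOpTime": {"ts": Timestamp(1566459168, 1), "t": 92}},
        )
    except ExecutionTimeout as exc:
        print("MaxTimeMSExpired:", exc)

The command as the server recorded it follows.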
Command: { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578988, 1), signature: { hash: BinData(0, D80E87FECF943C28EA6B59DE3744F95F3A830FED), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:37:07.856+0000 D1 - [conn558] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:07.856+0000 W - [conn558] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:37:07.872+0000 I - [conn558] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : 
"a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : 
"7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:07.872+0000 D1 COMMAND [conn558] assertion while executing command 'find' on database 'config' 
with arguments '{ find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578988, 1), signature: { hash: BinData(0, D80E87FECF943C28EA6B59DE3744F95F3A830FED), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:37:07.872+0000 D1 - [conn558] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:37:07.872+0000 W - [conn558] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:37:07.892+0000 I - [conn558] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0
_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : 
"DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) 
[0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:07.892+0000 W COMMAND [conn558] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:37:07.892+0000 I COMMAND [conn558] command config.$cmd command: find { find: "collections", filter: { _id: "config.system.sessions" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578988, 1), signature: { hash: BinData(0, D80E87FECF943C28EA6B59DE3744F95F3A830FED), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30029ms 2019-09-04T06:37:07.892+0000 D2 NETWORK [conn558] Session from 10.108.2.44:38924 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:37:07.892+0000 I NETWORK [conn558] end connection 10.108.2.44:38924 (84 connections now open) 2019-09-04T06:37:07.926+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:07.968+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:07.968+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:08.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:37:08.026+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:08.126+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:08.162+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:08.162+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:08.201+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:08.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:08.213+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:08.213+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:08.226+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:08.235+0000 D4 
STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:37:08.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:37:08.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579026, 1), signature: { hash: BinData(0, 8C0BACB0894F287A06DF883D8BE160E36AAE79F8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:08.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:37:08.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579026, 1), signature: { hash: BinData(0, 8C0BACB0894F287A06DF883D8BE160E36AAE79F8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:08.236+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579026, 1), signature: { hash: BinData(0, 8C0BACB0894F287A06DF883D8BE160E36AAE79F8), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:08.236+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), opTime: { ts: Timestamp(1567579023, 1), t: 1 }, wallTime: new Date(1567579023645) } 2019-09-04T06:37:08.236+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579026, 1), signature: { hash: BinData(0, 8C0BACB0894F287A06DF883D8BE160E36AAE79F8), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:08.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:08.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:08.254+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:08.254+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:08.326+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:08.337+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:08.337+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:08.427+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:08.459+0000 D2 ASIO [RS] Request 1831 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567579028, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567579028456), o: { $v: 1, $set: { 
ping: new Date(1567579028456) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpApplied: { ts: Timestamp(1567579028, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579028, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 1) } 2019-09-04T06:37:08.459+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567579028, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567579028456), o: { $v: 1, $set: { ping: new Date(1567579028456) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpApplied: { ts: Timestamp(1567579028, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579023, 1), $clusterTime: { clusterTime: Timestamp(1567579028, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:08.459+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:37:08.459+0000 D2 REPL [replication-0] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567579028, 1) and ending at ts: Timestamp(1567579028, 1) 2019-09-04T06:37:08.459+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:37:17.484+0000 2019-09-04T06:37:08.459+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:37:19.801+0000 2019-09-04T06:37:08.459+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:08.459+0000 D3 REPL [replication-0] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567579028, 1), t: 1 } 2019-09-04T06:37:08.459+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:36.839+0000 2019-09-04T06:37:08.459+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:08.459+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:37:08.459+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579023, 1) 2019-09-04T06:37:08.459+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for 
snapshot id 27286 2019-09-04T06:37:08.459+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:37:08.459+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:37:08.459+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27286 2019-09-04T06:37:08.459+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:08.459+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:37:08.459+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:37:08.459+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579023, 1) 2019-09-04T06:37:08.459+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27289 2019-09-04T06:37:08.459+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567579028, 1) } 2019-09-04T06:37:08.459+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:37:08.459+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:37:08.459+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27289 2019-09-04T06:37:08.459+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27267 2019-09-04T06:37:08.459+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 27267 2019-09-04T06:37:08.459+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27292 2019-09-04T06:37:08.459+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27292 2019-09-04T06:37:08.459+0000 D3 EXECUTOR [repl-writer-worker-6] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:37:08.459+0000 D3 STORAGE [repl-writer-worker-6] WT begin_transaction for snapshot id 27294 2019-09-04T06:37:08.459+0000 D4 STORAGE [repl-writer-worker-6] inserting record with timestamp Timestamp(1567579028, 1) 2019-09-04T06:37:08.459+0000 D3 STORAGE [repl-writer-worker-6] WT set timestamp of future write operations to Timestamp(1567579028, 1) 2019-09-04T06:37:08.460+0000 D3 STORAGE [repl-writer-worker-6] WT commit_transaction for snapshot id 27294 2019-09-04T06:37:08.460+0000 D3 EXECUTOR [repl-writer-worker-6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:37:08.460+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:37:08.460+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27293 2019-09-04T06:37:08.460+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 27293 2019-09-04T06:37:08.460+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27296 2019-09-04T06:37:08.460+0000 D3 STORAGE [rsSync-0] WT 
rollback_transaction for snapshot id 27296 2019-09-04T06:37:08.460+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567579028, 1), t: 1 }({ ts: Timestamp(1567579028, 1), t: 1 }) 2019-09-04T06:37:08.460+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567579028, 1) 2019-09-04T06:37:08.460+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27297 2019-09-04T06:37:08.460+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567579028, 1) } } ] } sort: {} projection: {} 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567579028, 1) Sort: {} Proj: {} ============================= 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567579028, 1) || First: notFirst: full path: ts 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567579028, 1) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567579028, 1) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567579028, 1) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567579028, 1) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:37:08.460+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 27297 2019-09-04T06:37:08.460+0000 D3 EXECUTOR [repl-writer-worker-8] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:37:08.460+0000 D3 STORAGE [repl-writer-worker-8] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:37:08.460+0000 D3 REPL [repl-writer-worker-8] applying op: { ts: Timestamp(1567579028, 1), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "ConfigServer" }, wall: new Date(1567579028456), o: { $v: 1, $set: { ping: new Date(1567579028456) } } }, oplog application mode: Secondary 2019-09-04T06:37:08.460+0000 D3 STORAGE [repl-writer-worker-8] WT set timestamp of future write operations to Timestamp(1567579028, 1) 2019-09-04T06:37:08.460+0000 D3 STORAGE [repl-writer-worker-8] WT begin_transaction for snapshot id 27299 2019-09-04T06:37:08.460+0000 D2 QUERY [repl-writer-worker-8] Using idhack: { _id: "ConfigServer" } 2019-09-04T06:37:08.460+0000 D4 WRITE [repl-writer-worker-8] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:37:08.460+0000 D3 STORAGE [repl-writer-worker-8] WT commit_transaction for snapshot id 27299 2019-09-04T06:37:08.460+0000 D3 EXECUTOR [repl-writer-worker-8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:37:08.460+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567579028, 1), t: 1 }({ ts: Timestamp(1567579028, 1), t: 1 }) 2019-09-04T06:37:08.460+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567579028, 1) 2019-09-04T06:37:08.460+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27298 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 
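The stretch above is one complete pass of the secondary's apply loop for a single oplog entry: the fetcher reads the config.lockpings update from cmodb804.togewa.com:27019, the batcher hands it to rsSync-0, the entry is written into the local oplog with oplogTruncateAfterPoint set around the write, a repl-writer worker applies it via the _id fast path (idhack), and minvalid/appliedThrough advance to { ts: Timestamp(1567579028, 1), t: 1 }. The COLLSCAN plans are expected: local.replset.minvalid is a single-document collection carrying only the _id index, so the planner has no indexed solution to output. Below is a minimal sketch of the fetcher's read pattern using Python with pymongo, purely for illustration; this is the driver-side equivalent, not the server's internal fetcher, and the host, starting optime, and 5000 ms await time are copied from the surrounding log entries.

```python
# Hedged sketch: tail local.oplog.rs from the sync source the way the oplog
# fetcher above does -- a tailable awaitData cursor resuming after the last
# fetched optime, with the maxTimeMS: 5000 seen on the getMore commands.
from pymongo import MongoClient, CursorType
from bson.timestamp import Timestamp

client = MongoClient("cmodb804.togewa.com", 27019)   # sync source in this log
oplog = client.local["oplog.rs"]

last_ts = Timestamp(1567579028, 1)                   # _lastOpTimeFetched above
cursor = oplog.find(
    {"ts": {"$gt": last_ts}},
    cursor_type=CursorType.TAILABLE_AWAIT,
    max_await_time_ms=5000,
)
for entry in cursor:
    # Entries mirror the documents in the log: op "u" on config.lockpings,
    # o2 carrying the target _id, o carrying the { $set: { ping: ... } } payload.
    print(entry["ts"], entry["op"], entry["ns"])
    last_ts = entry["ts"]
```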
2019-09-04T06:37:08.460+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:37:08.460+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:37:08.460+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 27298 2019-09-04T06:37:08.460+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567579028, 1) 2019-09-04T06:37:08.460+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:37:08.460+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27302 2019-09-04T06:37:08.460+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579028, 1), t: 1 }, appliedWallTime: new Date(1567579028456), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:37:08.460+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27302 2019-09-04T06:37:08.460+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579028, 1), t: 1 }({ ts: Timestamp(1567579028, 1), t: 1 }) 2019-09-04T06:37:08.460+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1837 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:38.460+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579028, 1), t: 1 }, appliedWallTime: new Date(1567579028456), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:37:08.460+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.460+0000 2019-09-04T06:37:08.461+0000 D2 ASIO [RS] Request 1837 finished with response: 
{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpVisible: { ts: Timestamp(1567579028, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 1), $clusterTime: { clusterTime: Timestamp(1567579028, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 1) } 2019-09-04T06:37:08.461+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpVisible: { ts: Timestamp(1567579028, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 1), $clusterTime: { clusterTime: Timestamp(1567579028, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:08.461+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:37:08.461+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.461+0000 2019-09-04T06:37:08.461+0000 D3 REPL [replication-0] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567579028, 1), t: 1 } 2019-09-04T06:37:08.461+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1838 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:37:18.461+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567579023, 1), t: 1 } } 2019-09-04T06:37:08.461+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.461+0000 2019-09-04T06:37:08.461+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:37:08.461+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:37:08.461+0000 D2 REPL [replication-1] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579028, 1), t: 1 }, durableWallTime: new Date(1567579028456), appliedOpTime: { ts: Timestamp(1567579028, 1), t: 1 }, appliedWallTime: new Date(1567579028456), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: 
ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:37:08.461+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1839 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:38.461+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579028, 1), t: 1 }, durableWallTime: new Date(1567579028456), appliedOpTime: { ts: Timestamp(1567579028, 1), t: 1 }, appliedWallTime: new Date(1567579028456), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579023, 1), t: 1 }, lastCommittedWall: new Date(1567579023645), lastOpVisible: { ts: Timestamp(1567579023, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:37:08.461+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.461+0000 2019-09-04T06:37:08.461+0000 D2 ASIO [RS] Request 1838 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpVisible: { ts: Timestamp(1567579028, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpApplied: { ts: Timestamp(1567579028, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 1), $clusterTime: { clusterTime: Timestamp(1567579028, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 1) } 2019-09-04T06:37:08.461+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpVisible: { ts: Timestamp(1567579028, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpApplied: { ts: Timestamp(1567579028, 1), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 1), $clusterTime: { clusterTime: Timestamp(1567579028, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:08.461+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool 
replication 2019-09-04T06:37:08.461+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:37:08.462+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D2 ASIO [RS] Request 1839 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpVisible: { ts: Timestamp(1567579028, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 1), $clusterTime: { clusterTime: Timestamp(1567579028, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 1) } 2019-09-04T06:37:08.462+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpVisible: { ts: Timestamp(1567579028, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 1), $clusterTime: { clusterTime: Timestamp(1567579028, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 1) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:08.462+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:37:08.462+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.462+0000 2019-09-04T06:37:08.462+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567579023, 1) 2019-09-04T06:37:08.462+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:37:19.801+0000 2019-09-04T06:37:08.462+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:37:19.490+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn557] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1840 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:37:18.462+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567579028, 1), t: 1 } } 2019-09-04T06:37:08.462+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:08.462+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.462+0000 2019-09-04T06:37:08.462+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:36.839+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn557] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:15.593+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn553] 
Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn553] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:22.594+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn533] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn533] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.272+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn534] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn534] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.288+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn551] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn551] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:14.696+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn536] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn536] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.339+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn564] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn564] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.430+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn565] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn565] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.447+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn540] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn540] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.408+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn560] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn560] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.280+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn563] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn563] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.426+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn571] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn571] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.767+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn561] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn561] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:16.415+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn566] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn566] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.461+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn569] Got notified of 
new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn569] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.662+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn574] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn574] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:31.318+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn542] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn542] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.527+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn562] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn562] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.425+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn547] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn547] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.660+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn539] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn539] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.438+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn546] Got notified of new snapshot: { ts: Timestamp(1567579028, 1), t: 1 }, 2019-09-04T06:37:08.456+0000 2019-09-04T06:37:08.462+0000 D3 REPL [conn546] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.660+0000 2019-09-04T06:37:08.468+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:08.468+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:08.527+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:08.559+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567579028, 1) 2019-09-04T06:37:08.627+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:08.662+0000 D2 COMMAND [conn59] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:08.662+0000 I COMMAND [conn59] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:08.700+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:08.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:08.713+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:08.713+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:08.727+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:08.742+0000 D2 ASIO [RS] Request 1840 finished with response: { cursor: { nextBatch: [ { ts: Timestamp(1567579028, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: 
"cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567579028739), o: { $v: 1, $set: { ping: new Date(1567579028738) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpVisible: { ts: Timestamp(1567579028, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpApplied: { ts: Timestamp(1567579028, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 1), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } 2019-09-04T06:37:08.742+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp(1567579028, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567579028739), o: { $v: 1, $set: { ping: new Date(1567579028738) } } } ], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpVisible: { ts: Timestamp(1567579028, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpApplied: { ts: Timestamp(1567579028, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 1), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:08.742+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:37:08.742+0000 D2 REPL [replication-1] oplog fetcher read 1 operations from remote oplog starting at ts: Timestamp(1567579028, 2) and ending at ts: Timestamp(1567579028, 2) 2019-09-04T06:37:08.742+0000 D4 REPL [replication-1] Canceling election timeout callback at 2019-09-04T06:37:19.490+0000 2019-09-04T06:37:08.742+0000 D4 ELECTION [replication-1] Scheduling election timeout callback at 2019-09-04T06:37:19.114+0000 2019-09-04T06:37:08.742+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:08.742+0000 D3 REPL [replication-1] batch resetting _lastOpTimeFetched: { ts: Timestamp(1567579028, 2), t: 1 } 2019-09-04T06:37:08.742+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:08.742+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:37:08.742+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579028, 1) 2019-09-04T06:37:08.742+0000 D3 STORAGE [ReplBatcher] WT 
begin_transaction for snapshot id 27310 2019-09-04T06:37:08.742+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:37:08.742+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:37:08.742+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27310 2019-09-04T06:37:08.742+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:08.742+0000 D2 REPL [rsSync-0] replication batch size is 1 2019-09-04T06:37:08.742+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:37:08.742+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:36.839+0000 2019-09-04T06:37:08.743+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(1567579028, 2) } 2019-09-04T06:37:08.743+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579028, 1) 2019-09-04T06:37:08.743+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27313 2019-09-04T06:37:08.743+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27303 2019-09-04T06:37:08.743+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:37:08.743+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:37:08.743+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27313 2019-09-04T06:37:08.743+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 27303 2019-09-04T06:37:08.743+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27316 2019-09-04T06:37:08.743+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27316 2019-09-04T06:37:08.743+0000 D3 EXECUTOR [repl-writer-worker-2] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:37:08.743+0000 D3 STORAGE [repl-writer-worker-2] WT begin_transaction for snapshot id 27318 2019-09-04T06:37:08.743+0000 D4 STORAGE [repl-writer-worker-2] inserting record with timestamp Timestamp(1567579028, 2) 2019-09-04T06:37:08.743+0000 D3 STORAGE [repl-writer-worker-2] WT set timestamp of future write operations to Timestamp(1567579028, 2) 2019-09-04T06:37:08.743+0000 D3 STORAGE [repl-writer-worker-2] WT commit_transaction for snapshot id 27318 2019-09-04T06:37:08.743+0000 D3 EXECUTOR [repl-writer-worker-2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:37:08.743+0000 D3 REPL [rsSync-0] setting oplog truncate after point to: { : Timestamp(0, 0) } 2019-09-04T06:37:08.743+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27317 2019-09-04T06:37:08.743+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot 
id 27317 2019-09-04T06:37:08.743+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27320 2019-09-04T06:37:08.743+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27320 2019-09-04T06:37:08.743+0000 D3 REPL [rsSync-0] setting minvalid to at least: { ts: Timestamp(1567579028, 2), t: 1 }({ ts: Timestamp(1567579028, 2), t: 1 }) 2019-09-04T06:37:08.743+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567579028, 2) 2019-09-04T06:37:08.743+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27321 2019-09-04T06:37:08.743+0000 D2 QUERY [rsSync-0] Running query as sub-queries: query: { $or: [ { t: { $lt: 1 } }, { t: 1, ts: { $lt: Timestamp(1567579028, 2) } } ] } sort: {} projection: {} 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Subplanner: index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Subplanner: planning child 0 of 2 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and t $eq 1 ts $lt Timestamp(1567579028, 2) Sort: {} Proj: {} ============================= 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Rated tree: $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567579028, 2) || First: notFirst: full path: ts 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and t $eq 1 ts $lt Timestamp(1567579028, 2) ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Subplanner: planning child 1 of 2 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Rated tree: t $lt 1 || First: notFirst: full path: t 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Subplanner: got 1 solutions 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $or $and t $eq 1 ts $lt Timestamp(1567579028, 2) t $lt 1 Sort: {} Proj: {} ============================= 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Predicate over field 'ts' 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Predicate over field 't' 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Rated tree: $or $and t $eq 1 || First: notFirst: full path: t ts $lt Timestamp(1567579028, 2) || First: notFirst: full path: ts t $lt 1 || First: notFirst: full path: t 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $or $and t $eq 1 ts $lt Timestamp(1567579028, 2) t $lt 1 ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:37:08.743+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 27321 2019-09-04T06:37:08.743+0000 D3 EXECUTOR [repl-writer-worker-0] Executing a task on behalf of pool repl writer worker Pool 2019-09-04T06:37:08.743+0000 D3 STORAGE [repl-writer-worker-0] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:37:08.743+0000 D3 REPL [repl-writer-worker-0] applying op: { ts: Timestamp(1567579028, 2), t: 1, h: 0, v: 2, op: "u", ns: "config.lockpings", ui: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce"), o2: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" }, wall: new Date(1567579028739), o: { $v: 1, $set: { ping: new Date(1567579028738) } } }, oplog application mode: Secondary 2019-09-04T06:37:08.743+0000 D3 STORAGE [repl-writer-worker-0] WT set timestamp of future write operations to Timestamp(1567579028, 2) 2019-09-04T06:37:08.743+0000 D3 STORAGE [repl-writer-worker-0] WT begin_transaction for snapshot id 27323 2019-09-04T06:37:08.743+0000 D2 QUERY [repl-writer-worker-0] Using idhack: { _id: "cmodb810.togewa.com:27018:1566460779:1951479814477371466" } 2019-09-04T06:37:08.743+0000 D4 WRITE [repl-writer-worker-0] UpdateResult -- upserted: {} modifiers: 1 existing: 1 numDocsModified: 1 numMatched: 1 2019-09-04T06:37:08.743+0000 D3 STORAGE [repl-writer-worker-0] WT commit_transaction for snapshot id 27323 2019-09-04T06:37:08.743+0000 D3 EXECUTOR [repl-writer-worker-0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2019-09-04T06:37:08.743+0000 D3 REPL [rsSync-0] setting appliedThrough to: { ts: Timestamp(1567579028, 2), t: 1 }({ ts: Timestamp(1567579028, 2), t: 1 }) 2019-09-04T06:37:08.743+0000 D3 STORAGE [rsSync-0] WT set timestamp of future write operations to Timestamp(1567579028, 2) 2019-09-04T06:37:08.743+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27322 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Beginning planning... 
============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Rated tree: $and 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Planner: outputted 0 indexed solutions. 2019-09-04T06:37:08.743+0000 D5 QUERY [rsSync-0] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:37:08.743+0000 D2 QUERY [rsSync-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:37:08.743+0000 D3 STORAGE [rsSync-0] WT commit_transaction for snapshot id 27322 2019-09-04T06:37:08.743+0000 D2 STORAGE [rsSync-0] Setting new oplogReadTimestamp: Timestamp(1567579028, 2) 2019-09-04T06:37:08.743+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27327 2019-09-04T06:37:08.744+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27327 2019-09-04T06:37:08.744+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579028, 2), t: 1 }({ ts: Timestamp(1567579028, 2), t: 1 }) 2019-09-04T06:37:08.744+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:37:08.744+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579028, 1), t: 1 }, durableWallTime: new Date(1567579028456), appliedOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, appliedWallTime: new Date(1567579028739), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpVisible: { ts: Timestamp(1567579028, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:37:08.744+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1841 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:38.744+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579028, 1), t: 1 }, durableWallTime: new Date(1567579028456), appliedOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, appliedWallTime: new Date(1567579028739), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 2, cfgver: 2 } 
], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpVisible: { ts: Timestamp(1567579028, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:37:08.744+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.743+0000 2019-09-04T06:37:08.744+0000 D2 ASIO [RS] Request 1841 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpVisible: { ts: Timestamp(1567579028, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 1), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } 2019-09-04T06:37:08.744+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpVisible: { ts: Timestamp(1567579028, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 1), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:08.744+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:37:08.744+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.744+0000 2019-09-04T06:37:08.744+0000 D3 REPL [replication-1] oplog fetcher setting last fetched optime ahead after batch: { ts: Timestamp(1567579028, 2), t: 1 } 2019-09-04T06:37:08.744+0000 D3 EXECUTOR [replication-1] Scheduling remote command request: RemoteCommand 1842 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:37:18.744+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567579028, 1), t: 1 } } 2019-09-04T06:37:08.744+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.744+0000 2019-09-04T06:37:08.746+0000 D4 STORAGE [ApplyBatchFinalizerForJournal] flushed journal 2019-09-04T06:37:08.746+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:37:08.746+0000 D2 REPL [replication-0] Reporter sending slave oplog progress to upstream updater cmodb804.togewa.com:27019: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), appliedOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, appliedWallTime: new 
Date(1567579028739), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpVisible: { ts: Timestamp(1567579028, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:37:08.746+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1843 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:38.746+0000 cmd:{ replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 0, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), appliedOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, appliedWallTime: new Date(1567579028739), memberId: 1, cfgver: 2 }, { durableOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, durableWallTime: new Date(1567579023645), appliedOpTime: { ts: Timestamp(1567579023, 1), t: 1 }, appliedWallTime: new Date(1567579023645), memberId: 2, cfgver: 2 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpVisible: { ts: Timestamp(1567579028, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 2 } } 2019-09-04T06:37:08.746+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.744+0000 2019-09-04T06:37:08.746+0000 D2 ASIO [RS] Request 1843 finished with response: { ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpVisible: { ts: Timestamp(1567579028, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 1), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } 2019-09-04T06:37:08.746+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 1), t: 1 }, lastCommittedWall: new Date(1567579028456), lastOpVisible: { ts: Timestamp(1567579028, 1), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 1), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:08.746+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:37:08.746+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement 
date is 2019-09-04T06:37:38.744+0000 2019-09-04T06:37:08.747+0000 D2 ASIO [RS] Request 1842 finished with response: { cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpApplied: { ts: Timestamp(1567579028, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } 2019-09-04T06:37:08.747+0000 D3 EXECUTOR [RS] Received remote response: RemoteOnAnyResponse -- cmd:{ cursor: { nextBatch: [], id: 2779728788818727477, ns: "local.oplog.rs" }, ok: 1.0, $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpApplied: { ts: Timestamp(1567579028, 2), t: 1 }, rbid: 1, primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:08.747+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:37:08.747+0000 D2 REPL [replication-0] oplog fetcher read 0 operations from remote oplog 2019-09-04T06:37:08.747+0000 D2 REPL [replication-0] Updating _lastCommittedOpTimeAndWallTime to { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D2 REPL [replication-0] Setting replication's stable optime to { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D2 STORAGE [replication-0] oldest_timestamp set to Timestamp(1567579023, 2) 2019-09-04T06:37:08.747+0000 D4 REPL [replication-0] Canceling election timeout callback at 2019-09-04T06:37:19.114+0000 2019-09-04T06:37:08.747+0000 D4 ELECTION [replication-0] Scheduling election timeout callback at 2019-09-04T06:37:18.966+0000 2019-09-04T06:37:08.747+0000 D3 EXECUTOR [replication-0] Scheduling remote command request: RemoteCommand 1844 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:37:18.747+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567579028, 2), t: 1 } } 2019-09-04T06:37:08.747+0000 D3 REPL [conn534] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn534] waitUntilOpTime: 
waiting for a new snapshot until 2019-09-04T06:37:13.288+0000 2019-09-04T06:37:08.747+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.744+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn536] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn536] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.339+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn565] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn565] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.447+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn547] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn547] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.660+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn564] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn564] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.430+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn566] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn566] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.461+0000 2019-09-04T06:37:08.747+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:08.747+0000 D3 REPL [conn574] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:36.839+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn574] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:31.318+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn562] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn562] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.425+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn539] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn539] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.438+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn553] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn553] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:22.594+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn557] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn557] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:15.593+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn561] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn561] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:16.415+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn540] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 
2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn540] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.408+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn542] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn542] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.527+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn569] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn569] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.662+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn563] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn563] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:18.426+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn560] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn560] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.280+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn571] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn571] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.767+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn551] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn551] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:14.696+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn533] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn533] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:13.272+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn546] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:08.747+0000 D3 REPL [conn546] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:21.660+0000 2019-09-04T06:37:08.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:08.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:08.754+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:08.754+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:08.827+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:08.837+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:08.837+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:08.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:08.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1845) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 
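
The getMore scheduled above as RemoteCommand 1844 is the oplog fetcher at work: this secondary tails local.oplog.rs on its sync source (cmodb804) with an awaitData cursor, so each getMore blocks on the server for up to maxTimeMS (5000 ms), and an empty nextBatch, as in Request 1842, simply means no new writes arrived in that window. The oversized batchSize (13981010) is effectively "as many ops as fit"; the real bound is the 16 MB reply size. A minimal PyMongo sketch of the same tailing pattern, run as an ordinary client (host and resume timestamp are taken from this log; the server's internal fetcher is C++, not driver code, so this only mirrors the cursor semantics):

    # Sketch: tail local.oplog.rs the way the fetcher's getMore does.
    from pymongo import MongoClient
    from pymongo.cursor import CursorType
    from bson.timestamp import Timestamp

    client = MongoClient("cmodb804.togewa.com", 27019)
    oplog = client.local["oplog.rs"]

    last_seen = Timestamp(1567579028, 2)        # lastOpApplied in the response above
    cursor = oplog.find(
        {"ts": {"$gt": last_seen}},
        cursor_type=CursorType.TAILABLE_AWAIT,  # server blocks each getMore...
        oplog_replay=True,                      # ...and seeks straight to the ts bound
    ).max_await_time_ms(5000)                   # mirrors maxTimeMS: 5000 in the log

    for entry in cursor:                        # an empty batch just means "keep waiting"
        last_seen = entry["ts"]
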
2019-09-04T06:37:08.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1845 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:37:18.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:08.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:36.839+0000 2019-09-04T06:37:08.839+0000 D2 ASIO [Replication] Request 1845 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } 2019-09-04T06:37:08.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:37:08.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:08.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1845) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } 2019-09-04T06:37:08.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:37:08.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:37:18.966+0000 2019-09-04T06:37:08.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:37:20.272+0000 2019-09-04T06:37:08.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:37:08.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:37:10.839Z 2019-09-04T06:37:08.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:08.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.839+0000 2019-09-04T06:37:08.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.839+0000 2019-09-04T06:37:08.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:08.841+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1846) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:08.841+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1846 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:18.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:08.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.839+0000 2019-09-04T06:37:08.841+0000 D2 ASIO [Replication] Request 1846 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } 2019-09-04T06:37:08.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), 
electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:08.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:08.841+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1846) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } 2019-09-04T06:37:08.841+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:37:08.841+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:37:10.841Z 2019-09-04T06:37:08.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.839+0000 2019-09-04T06:37:08.842+0000 D2 STORAGE [WTOplogJournalThread] No new oplog entries were made visible: Timestamp(1567579028, 2) 2019-09-04T06:37:08.927+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:08.968+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:08.968+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:09.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:37:09.027+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:09.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:09.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:37:09.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:09.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, 
term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:09.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739) } 2019-09-04T06:37:09.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:09.127+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:09.201+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:09.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:09.213+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:09.213+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:09.227+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:09.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:37:09.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:37:09.253+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:09.253+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:09.254+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:09.254+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:09.327+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:09.337+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:09.337+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:09.428+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:09.468+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:09.468+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:09.528+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:09.628+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:09.700+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:09.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:09.713+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:09.713+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:09.728+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:09.743+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:09.743+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:37:09.743+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579028, 2) 2019-09-04T06:37:09.743+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27345 2019-09-04T06:37:09.743+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:37:09.743+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:37:09.743+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27345 2019-09-04T06:37:09.744+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27348 2019-09-04T06:37:09.744+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27348 2019-09-04T06:37:09.744+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579028, 2), t: 1 }({ ts: Timestamp(1567579028, 2), t: 1 }) 2019-09-04T06:37:09.753+0000 D2 COMMAND [conn60] run command admin.$cmd { isMaster: 1, 
$db: "admin" } 2019-09-04T06:37:09.753+0000 I COMMAND [conn60] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:09.754+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:09.754+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:09.828+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:09.837+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:09.837+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:09.928+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:09.968+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:09.968+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:10.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:37:10.002+0000 D2 COMMAND [conn90] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:37:10.002+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:37:10.002+0000 I COMMAND [conn90] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:385 locks:{} protocol:op_query 0ms 2019-09-04T06:37:10.008+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:37:10.008+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:338 locks:{} protocol:op_query 0ms 2019-09-04T06:37:10.010+0000 D2 COMMAND [conn90] run command admin.$cmd { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } 2019-09-04T06:37:10.010+0000 D1 ACCESS [conn90] Returning user dba_root@admin from cache 2019-09-04T06:37:10.010+0000 I ACCESS [conn90] Successfully authenticated as principal dba_root on admin from client 10.108.2.33:45456 2019-09-04T06:37:10.010+0000 I COMMAND [conn90] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, mechanism: "SCRAM-SHA-1", payload: "xxx", $db: "admin" } numYields:0 reslen:308 locks:{} protocol:op_query 0ms 2019-09-04T06:37:10.010+0000 D2 COMMAND [conn90] run command admin.$cmd { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:37:10.010+0000 I COMMAND [conn90] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:35151 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:37:10.011+0000 D2 COMMAND [conn90] run command admin.$cmd { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:37:10.011+0000 D2 COMMAND [conn90] command: replSetGetStatus 
2019-09-04T06:37:10.011+0000 I COMMAND [conn90] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:2250 locks:{} protocol:op_query 0ms 2019-09-04T06:37:10.011+0000 D2 COMMAND [conn90] run command config.$cmd { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:37:10.011+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:10.011+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION IS_COUNT SPLIT_LIMITED_SORT
Canonical query:
ns=config.chunks
Tree: jumbo $eq true
Sort: {}
Proj: {}
=============================
2019-09-04T06:37:10.011+0000 D5 QUERY [conn90] Index 0 is kp: { ns: 1, min: 1 } unique name: '(ns_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } 2019-09-04T06:37:10.011+0000 D5 QUERY [conn90] Index 1 is kp: { ns: 1, shard: 1, min: 1 } unique name: '(ns_1_shard_1_min_1, )' io: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } 2019-09-04T06:37:10.011+0000 D5 QUERY [conn90] Index 2 is kp: { ns: 1, lastmod: 1 } unique name: '(ns_1_lastmod_1, )' io: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } 2019-09-04T06:37:10.011+0000 D5 QUERY [conn90] Index 3 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } 2019-09-04T06:37:10.011+0000 D5 QUERY [conn90] Predicate over field 'jumbo' 2019-09-04T06:37:10.012+0000 D5 QUERY [conn90] Rated tree: jumbo $eq true || First: notFirst: full path: jumbo 2019-09-04T06:37:10.012+0000 D5 QUERY [conn90] Planner: outputted 0 indexed solutions. 2019-09-04T06:37:10.012+0000 D5 QUERY [conn90] Planner: outputting a collscan:
COLLSCAN
---ns = config.chunks
---filter = jumbo $eq true
---fetched = 1
---sortedByDiskLoc = 0
---getSort = []
2019-09-04T06:37:10.012+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { jumbo: true } sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:37:10.012+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567579028, 2) 2019-09-04T06:37:10.012+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27360 2019-09-04T06:37:10.012+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27360 2019-09-04T06:37:10.012+0000 I COMMAND [conn90] command config.chunks command: count { count: "chunks", query: { jumbo: true }, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 numYields:0 queryHash:D593D06D planCacheKey:D593D06D reslen:274 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:37:10.012+0000 D2 COMMAND [conn90] run command admin.$cmd { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:37:10.012+0000 I COMMAND [conn90] command admin.$cmd command: shardConnPoolStats { shardConnPoolStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2019-09-04T06:37:10.012+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:37:10.012+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:10.012+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: 1 }
Proj: {}
=============================
2019-09-04T06:37:10.012+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:37:10.012+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:37:10.012+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567579028, 2) 2019-09-04T06:37:10.012+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27363 2019-09-04T06:37:10.012+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27363 2019-09-04T06:37:10.012+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:F5CE282E planCacheKey:F5CE282E reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:37:10.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:10.014+0000 D5 QUERY [conn90] Beginning planning...
=============================
Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT
Canonical query:
ns=local.oplog.rs batchSize=1 limit=1
Tree: ts exists
Sort: { $natural: -1 }
Proj: {}
=============================
2019-09-04T06:37:10.014+0000 D5 QUERY [conn90] Forcing a table scan due to hinted $natural 2019-09-04T06:37:10.014+0000 D2 QUERY [conn90] Only one plan is available; it will be run but will not be cached.
query: { ts: { $exists: true } } sort: { $natural: -1 } projection: {} batchSize: 1 limit: 1, planSummary: COLLSCAN 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] begin_transaction on local snapshot Timestamp(1567579028, 2) 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27365 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27365 2019-09-04T06:37:10.014+0000 I COMMAND [conn90] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $exists: true } }, sort: { $natural: -1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 queryHash:27434851 planCacheKey:27434851 reslen:571 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:37:10.014+0000 D2 COMMAND [conn90] run command local.$cmd { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:37:10.014+0000 D2 QUERY [conn90] Collection local.oplog.$main does not exist.
Using EOF plan: query: { ts: { $exists: true } } sort: { $natural: 1 } projection: {} batchSize: 1 limit: 1 2019-09-04T06:37:10.014+0000 I COMMAND [conn90] command local.oplog.$main command: find { find: "oplog.$main", filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:335 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_query 0ms 2019-09-04T06:37:10.014+0000 D2 COMMAND [conn90] run command admin.$cmd { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:37:10.014+0000 D2 COMMAND [conn90] command: listDatabases 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27368 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { user_1_db_1: "admin/index/18--6194257481163143499", _id_: "admin/index/19--6194257481163143499" }, ns: "admin.system.users", ident: "admin/collection/17--6194257481163143499" } 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.users", options: { uuid: UUID("1c65b785-f989-45d0-a6f4-6a4233f87231") }, indexes: [ { spec: { v: 2, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { user: BinData(0, 00), db: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.users" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.users @ RecordId(11) 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27368 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27369 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: 
"admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/21--6194257481163143499" }, ns: "admin.system.version", ident: "admin/collection/20--6194257481163143499" } 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.version", options: { uuid: UUID("4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.version @ RecordId(12) 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27369 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27370 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "admin/index/23--6194257481163143499" }, ns: "admin.system.keys", ident: "admin/collection/22--6194257481163143499" } 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "admin.system.keys", options: { uuid: UUID("6fa72c52-1098-49d5-8075-97e44ea0d586") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] looking up metadata for: admin.system.keys @ RecordId(13) 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27370 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27371 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/27--6194257481163143499" }, ns: "config.changelog", ident: "config/collection/26--6194257481163143499" } 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.changelog", options: { uuid: UUID("0196ba23-ca72-4f67-b3ac-b305f18a38e3"), capped: true, size: 209715200 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: 
-1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.changelog @ RecordId(14) 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27371 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27372 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ping_1: "config/index/29--6194257481163143499", _id_: "config/index/31--6194257481163143499" }, ns: "config.lockpings", ident: "config/collection/28--6194257481163143499" } 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.lockpings", options: { uuid: UUID("0e9c403c-5a7d-421c-a744-6abbab57bdce") }, indexes: [ { spec: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { ping: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.lockpings @ RecordId(15) 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27372 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27373 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/35--6194257481163143499" }, ns: "config.transactions", ident: "config/collection/34--6194257481163143499" } 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.transactions", options: { uuid: UUID("1614ccf0-7860-48e4-ab95-3aaa4633e218") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 
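
The jumbo-chunk probe at 06:37:10.011 above ran as a COLLSCAN for a simple reason the planner spells out: it rated all four config.chunks indexes, outputted 0 indexed solutions because none of them leads with the jumbo field, and fell back to scanning the lone chunk document (docsExamined:1). The same check sketched client-side in PyMongo (count_documents runs an aggregate rather than the legacy count command shown in the log):

    # Sketch: the jumbo-chunk check. None of config.chunks' indexes
    # (ns_1_min_1, ns_1_shard_1_min_1, ns_1_lastmod_1, _id_) covers 'jumbo',
    # hence the collection scan; with one chunk that is cheap.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    jumbo = client.config.chunks.count_documents({"jumbo": True})
    if jumbo:
        print(f"{jumbo} jumbo chunk(s) that the balancer cannot move")
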
2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.transactions @ RecordId(16) 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27373 2019-09-04T06:37:10.014+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27374 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/39--6194257481163143499" }, ns: "config.mongos", ident: "config/collection/38--6194257481163143499" } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.mongos", options: { uuid: UUID("1734bd4e-af6d-441a-8751-93e269784617") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.mongos @ RecordId(17) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27374 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27375 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ts_1: "config/index/43--6194257481163143499", state_1_process_1: "config/index/45--6194257481163143499", _id_: "config/index/47--6194257481163143499" }, ns: "config.locks", ident: "config/collection/42--6194257481163143499" } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.locks", options: { uuid: UUID("1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287") }, indexes: [ { spec: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { ts: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { state: BinData(0, 00), process: BinData(0, 00) }, head: 
0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.locks @ RecordId(18) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27375 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27376 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/51--6194257481163143499" }, ns: "config.version", ident: "config/collection/50--6194257481163143499" } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.version", options: { uuid: UUID("20d8341b-073f-4dea-b0c5-c2626b006feb") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.version @ RecordId(19) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27376 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27377 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/55--6194257481163143499" }, ns: "config.collections", ident: "config/collection/54--6194257481163143499" } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.collections", options: { uuid: UUID("5c6c3426-ae2d-4c69-bf22-b1d2601211ff") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.collections" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.collections @ RecordId(20) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27377 
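
The pair of single-document finds on local.oplog.rs at 06:37:10.012-014 above, forced to forward and reverse $natural collection scans, is the standard probe for the oldest and newest oplog entries, i.e. the replication window. A PyMongo sketch of the same probe (assumes a non-empty oplog; bson Timestamp.time is seconds since the epoch):

    # Sketch: oldest and newest oplog entries via forced $natural scans,
    # mirroring the two finds above; the ts delta approximates the oplog window.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    oplog = client.local["oplog.rs"]

    first = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", 1)])
    last = oplog.find_one({"ts": {"$exists": True}}, sort=[("$natural", -1)])
    print("oplog window ~%ds" % (last["ts"].time - first["ts"].time))
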
2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27378 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/59--6194257481163143499", ns_1_shard_1_min_1: "config/index/62--6194257481163143499", ns_1_lastmod_1: "config/index/65--6194257481163143499", _id_: "config/index/68--6194257481163143499" }, ns: "config.chunks", ident: "config/collection/58--6194257481163143499" } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.chunks", options: { uuid: UUID("925b3d05-7eb4-4b6d-b339-82784de07cbe") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), shard: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), lastmod: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.chunks @ RecordId(21) 
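
The config.chunks catalog entry above pairs each logical index with its on-disk ident (ns_1_min_1 with config/index/59--6194257481163143499, and so on). The storage idents stay internal to WiredTiger; what a driver can see of the same definitions is just the index specs:

    # Sketch: the driver-visible view of the config.chunks index specs listed
    # in the catalog metadata above (on-disk idents are not exposed here).
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    for name, spec in client.config.chunks.index_information().items():
        print(name, dict(spec["key"]), spec.get("unique", False))
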
2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27378 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27379 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "config/index/72--6194257481163143499" }, ns: "config.system.sessions", ident: "config/collection/71--6194257481163143499" } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.system.sessions", options: { uuid: UUID("a6938268-0b91-476c-a0f3-aaac5e5117ed") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.system.sessions" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.system.sessions @ RecordId(22) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27379 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27380 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/76--6194257481163143499", _id_: "config/index/79--6194257481163143499" }, ns: "config.migrations", ident: "config/collection/75--6194257481163143499" } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.migrations", options: { uuid: UUID("b8de9e4c-de38-4698-9ceb-7e686f580e61") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ RecordId(23) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.migrations @ 
RecordId(23) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27380 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27381 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { host_1: "config/index/83--6194257481163143499", _id_: "config/index/86--6194257481163143499" }, ns: "config.shards", ident: "config/collection/82--6194257481163143499" } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.shards", options: { uuid: UUID("cc5f25a3-25cf-4a45-b674-6595d24d7e9a") }, indexes: [ { spec: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { host: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.shards @ RecordId(24) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27381 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27382 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { ns_1_min_1: "config/index/90--6194257481163143499", ns_1_tag_1: "config/index/93--6194257481163143499", _id_: "config/index/95--6194257481163143499" }, ns: "config.tags", ident: "config/collection/89--6194257481163143499" } 
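
This whole run of catalog lookups on conn90, one WiredTiger begin/rollback transaction per namespace across admin.*, config.* and local.*, is the listDatabases command from 06:37:10.014 sizing every collection in every database, which is presumably why each namespace gets its own metadata fetch. The client side of it is a single call:

    # Sketch: the listDatabases call driving this per-namespace catalog walk.
    from pymongo import MongoClient

    client = MongoClient("cmodb803.togewa.com", 27019)
    for db in client.admin.command("listDatabases")["databases"]:
        print(db["name"], db["sizeOnDisk"], db["empty"])
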
2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "config.tags", options: { uuid: UUID("f71519e3-c8e3-42c8-9579-254e000a6c18") }, indexes: [ { spec: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), min: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { ns: BinData(0, 00), tag: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 }, { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: true, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: config.tags @ RecordId(25) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27382 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27383 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/7--6194257481163143499" }, ns: "local.replset.election", ident: "local/collection/6--6194257481163143499" } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.election", options: { uuid: UUID("0512231e-bb78-4048-95aa-63ea7eb6b5a5") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.election @ RecordId(5) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27383 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27384 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/9--6194257481163143499" }, ns: "local.system.rollback.id", ident: 
"local/collection/8--6194257481163143499" } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.rollback.id", options: { uuid: UUID("1f10291c-f664-4c4a-a48a-a3c7297b837c") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.rollback.id" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.rollback.id @ RecordId(6) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27384 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27385 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/1--6194257481163143499" }, ns: "local.startup_log", ident: "local/collection/0--6194257481163143499" } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.startup_log", options: { uuid: UUID("4860912c-c555-4fe1-b1bb-e6281b586983"), capped: true, size: 10485760 }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.startup_log @ RecordId(1) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27385 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27386 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/11--6194257481163143499" }, ns: "local.system.replset", ident: "local/collection/10--6194257481163143499" } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.system.replset", options: { uuid: UUID("6518740c-6e6d-47d6-acc8-8f7aaf4d591e") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.system.replset @ RecordId(7) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction 
for snapshot id 27386 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27387 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27387 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27388 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/5--6194257481163143499" }, ns: "local.replset.minvalid", ident: "local/collection/4--6194257481163143499" } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.minvalid", options: { uuid: UUID("e1f04497-1bed-46e1-b7a9-714cf5b1cd7b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.minvalid @ RecordId(4) 2019-09-04T06:37:10.015+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27388 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27389 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] fetched CCE metadata: { md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 }, idxIdent: { _id_: "local/index/3--6194257481163143499" }, ns: "local.replset.oplogTruncateAfterPoint", ident: "local/collection/2--6194257481163143499" } 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] returning metadata: md: { ns: "local.replset.oplogTruncateAfterPoint", options: { uuid: UUID("f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b") }, indexes: [ { spec: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.oplogTruncateAfterPoint" }, ready: true, multikey: false, multikeyPaths: { _id: BinData(0, 00) }, head: 0, prefix: -1, 
backgroundSecondary: false, runTwoPhaseBuild: false, versionOfBuild: 1 } ], prefix: -1 } 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] looking up metadata for: local.replset.oplogTruncateAfterPoint @ RecordId(3) 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27389 2019-09-04T06:37:10.016+0000 I COMMAND [conn90] command admin.$cmd command: listDatabases { listDatabases: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:459 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 21 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1ms 2019-09-04T06:37:10.016+0000 D2 COMMAND [conn90] run command admin.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27391 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27391 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27392 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27392 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27393 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27393 2019-09-04T06:37:10.016+0000 I COMMAND [conn90] command admin command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 4 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:37:10.016+0000 D2 COMMAND [conn90] run command config.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27395 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27395 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27396 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27396 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27397 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27397 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27398 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27398 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27399 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27399 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27400 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27400 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27401 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27401 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT 
begin_transaction for snapshot id 27402 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27402 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27403 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27403 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27404 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27404 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27405 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27405 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27406 2019-09-04T06:37:10.016+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27406 2019-09-04T06:37:10.017+0000 I COMMAND [conn90] command config command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "config" } numYields:0 reslen:492 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 13 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:37:10.017+0000 D2 COMMAND [conn90] run command local.$cmd { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } 2019-09-04T06:37:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27408 2019-09-04T06:37:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27408 2019-09-04T06:37:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27409 2019-09-04T06:37:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27409 2019-09-04T06:37:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27410 2019-09-04T06:37:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27410 2019-09-04T06:37:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27411 2019-09-04T06:37:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27411 2019-09-04T06:37:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27412 2019-09-04T06:37:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27412 2019-09-04T06:37:10.017+0000 D3 STORAGE [conn90] WT begin_transaction for snapshot id 27413 2019-09-04T06:37:10.017+0000 D3 STORAGE [conn90] WT rollback_transaction for snapshot id 27413 2019-09-04T06:37:10.017+0000 I COMMAND [conn90] command local command: dbStats { dbStats: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } numYields:0 reslen:491 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 0ms 2019-09-04T06:37:10.029+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:10.038+0000 D2 COMMAND [conn69] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:10.038+0000 I COMMAND [conn69] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:10.056+0000 D2 COMMAND [conn70] run 
command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:10.056+0000 I COMMAND [conn70] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:10.130+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:10.201+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:10.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:10.213+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:10.213+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:10.230+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:10.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:37:10.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. Before: 1000000000 Now: 1000000000 2019-09-04T06:37:10.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:10.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:37:10.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:10.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:10.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739) } 2019-09-04T06:37:10.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:10.254+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:10.254+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:10.330+0000 D4 STORAGE [WTJournalFlusher] flushed 
journal 2019-09-04T06:37:10.337+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:10.337+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:10.430+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:10.464+0000 D2 COMMAND [conn71] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579023, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579026, 1), signature: { hash: BinData(0, 8C0BACB0894F287A06DF883D8BE160E36AAE79F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567579023, 1), t: 1 } }, $db: "config" } 2019-09-04T06:37:10.464+0000 D1 COMMAND [conn71] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579023, 1), t: 1 } } } 2019-09-04T06:37:10.464+0000 D3 STORAGE [conn71] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:37:10.464+0000 D1 COMMAND [conn71] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579023, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579026, 1), signature: { hash: BinData(0, 8C0BACB0894F287A06DF883D8BE160E36AAE79F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567579023, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567579028, 2) 2019-09-04T06:37:10.464+0000 D2 QUERY [conn71] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:37:10.464+0000 I COMMAND [conn71] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579023, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579026, 1), signature: { hash: BinData(0, 8C0BACB0894F287A06DF883D8BE160E36AAE79F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567579023, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:37:10.468+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:10.468+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:10.530+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:10.630+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:10.701+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:10.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:10.713+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:10.713+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:10.721+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579023, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579026, 1), signature: { hash: BinData(0, 8C0BACB0894F287A06DF883D8BE160E36AAE79F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567579023, 1), t: 1 } }, $db: "config" } 2019-09-04T06:37:10.721+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579023, 1), t: 1 } } } 2019-09-04T06:37:10.721+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:37:10.721+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579023, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579026, 1), signature: { hash: BinData(0, 8C0BACB0894F287A06DF883D8BE160E36AAE79F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567579023, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567579028, 2) 2019-09-04T06:37:10.721+0000 D2 QUERY [conn72] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "balancer" } sort: {} projection: {} limit: 1 2019-09-04T06:37:10.721+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579023, 1), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579026, 1), signature: { hash: BinData(0, 8C0BACB0894F287A06DF883D8BE160E36AAE79F8), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567579023, 1), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:37:10.722+0000 D2 COMMAND [conn72] run command config.$cmd { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579028, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567579028, 2), t: 1 } }, $db: "config" } 2019-09-04T06:37:10.722+0000 D1 COMMAND [conn72] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579028, 2), t: 1 } } } 2019-09-04T06:37:10.722+0000 D3 STORAGE [conn72] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:37:10.722+0000 D1 COMMAND [conn72] Using 'committed' snapshot: { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579028, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567579028, 2), t: 1 } }, $db: "config" } with readTs: Timestamp(1567579028, 2) 2019-09-04T06:37:10.722+0000 D2 QUERY [conn72] Collection config.settings does not exist. 
Using EOF plan: query: { _id: "chunksize" } sort: {} projection: {} limit: 1 2019-09-04T06:37:10.722+0000 I COMMAND [conn72] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579028, 2), t: 1 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567579028, 2), t: 1 } }, $db: "config" } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } protocol:op_msg 0ms 2019-09-04T06:37:10.730+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:10.743+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:10.743+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:37:10.743+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579028, 2) 2019-09-04T06:37:10.743+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27429 2019-09-04T06:37:10.743+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:37:10.743+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:37:10.743+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27429 2019-09-04T06:37:10.744+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27432 2019-09-04T06:37:10.744+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27432 2019-09-04T06:37:10.744+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579028, 2), t: 1 }({ ts: Timestamp(1567579028, 2), t: 1 }) 2019-09-04T06:37:10.754+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:10.754+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:10.830+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:10.837+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:10.837+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:10.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:10.839+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1847) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:10.839+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1847 -- 
target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:37:20.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:10.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.839+0000 2019-09-04T06:37:10.839+0000 D2 ASIO [Replication] Request 1847 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } 2019-09-04T06:37:10.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:37:10.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:10.839+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1847) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } 2019-09-04T06:37:10.839+0000 D4 ELECTION 
[replexec-3] Postponing election timeout due to heartbeat from primary 2019-09-04T06:37:10.839+0000 D4 REPL [replexec-3] Canceling election timeout callback at 2019-09-04T06:37:20.272+0000 2019-09-04T06:37:10.839+0000 D4 ELECTION [replexec-3] Scheduling election timeout callback at 2019-09-04T06:37:21.641+0000 2019-09-04T06:37:10.839+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:37:10.839+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:37:12.839Z 2019-09-04T06:37:10.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:10.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:40.839+0000 2019-09-04T06:37:10.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:40.839+0000 2019-09-04T06:37:10.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:10.841+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1848) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:10.841+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1848 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:20.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:10.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:40.839+0000 2019-09-04T06:37:10.841+0000 D2 ASIO [Replication] Request 1848 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } 2019-09-04T06:37:10.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: 
Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:10.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:10.841+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1848) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } 2019-09-04T06:37:10.841+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:37:10.841+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:37:12.841Z 2019-09-04T06:37:10.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:40.839+0000 2019-09-04T06:37:10.930+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:10.944+0000 D2 COMMAND [conn73] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:10.944+0000 I COMMAND [conn73] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:10.968+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:10.968+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:10.968+0000 D2 COMMAND [conn74] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:10.968+0000 I COMMAND [conn74] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:11.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:37:11.030+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:11.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:11.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:37:11.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, 
C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:11.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:11.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739) } 2019-09-04T06:37:11.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:11.131+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:11.201+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:11.201+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:11.213+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:11.213+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:11.221+0000 D2 COMMAND [conn77] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:11.221+0000 I COMMAND [conn77] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:11.231+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:11.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:37:11.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:37:11.254+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:11.254+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:11.273+0000 D2 COMMAND [conn78] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:11.273+0000 I COMMAND [conn78] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:11.331+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:11.337+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:11.337+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:11.431+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:11.468+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:11.468+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:11.531+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:11.631+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:11.700+0000 D2 COMMAND [conn75] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:11.701+0000 I COMMAND [conn75] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:11.713+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:11.713+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:11.731+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:11.743+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:11.743+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:37:11.743+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579028, 2) 2019-09-04T06:37:11.743+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27452 2019-09-04T06:37:11.743+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:37:11.743+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:37:11.743+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27452 2019-09-04T06:37:11.744+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27455 2019-09-04T06:37:11.744+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27455 2019-09-04T06:37:11.744+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579028, 2), t: 1 }({ ts: Timestamp(1567579028, 2), t: 1 }) 2019-09-04T06:37:11.754+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, 
$db: "admin" } 2019-09-04T06:37:11.754+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:11.797+0000 D2 COMMAND [conn81] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579015, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567579015, 1), t: 1 } }, $db: "config" } 2019-09-04T06:37:11.797+0000 D1 COMMAND [conn81] Waiting for 'committed' snapshot to be available for reading: { readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579015, 1), t: 1 } } } 2019-09-04T06:37:11.797+0000 D3 STORAGE [conn81] setting timestamp read source: 2, provided timestamp: none 2019-09-04T06:37:11.797+0000 D1 COMMAND [conn81] Using 'committed' snapshot: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579015, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567579015, 1), t: 1 } }, $db: "config" } with readTs: Timestamp(1567579028, 2) 2019-09-04T06:37:11.797+0000 D5 QUERY [conn81] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=config.shards Tree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:37:11.797+0000 D5 QUERY [conn81] Index 0 is kp: { host: 1 } unique name: '(host_1, )' io: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } 2019-09-04T06:37:11.797+0000 D5 QUERY [conn81] Index 1 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } 2019-09-04T06:37:11.797+0000 D5 QUERY [conn81] Rated tree: $and 2019-09-04T06:37:11.797+0000 D5 QUERY [conn81] Planner: outputted 0 indexed solutions. 2019-09-04T06:37:11.797+0000 D5 QUERY [conn81] Planner: outputting a collscan: COLLSCAN ---ns = config.shards ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:37:11.797+0000 D2 QUERY [conn81] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:37:11.797+0000 D3 STORAGE [conn81] WT begin_transaction for snapshot id 27458 2019-09-04T06:37:11.798+0000 D3 STORAGE [conn81] WT rollback_transaction for snapshot id 27458 2019-09-04T06:37:11.798+0000 I COMMAND [conn81] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1567579015, 1), t: 1 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579028, 2), signature: { hash: BinData(0, C61EA7299222D9AD3104A60FFEBFB36389ABD80E), keyId: 6727891476899954718 } }, $configServerState: { opTime: { ts: Timestamp(1567579015, 1), t: 1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:4 cursorExhausted:1 numYields:0 nreturned:4 reslen:989 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 0ms 2019-09-04T06:37:11.831+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:11.837+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:11.837+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:11.931+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:11.968+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:11.968+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:12.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:37:12.031+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:12.132+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:12.213+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:12.213+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:12.232+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:12.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:37:12.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:37:12.235+0000 D2 COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579029, 1), signature: { hash: BinData(0, 3C801ACF3685CAF949B459F1D261548602EB1AE6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:12.235+0000 D2 COMMAND [conn28] command: replSetHeartbeat 2019-09-04T06:37:12.235+0000 D2 REPL_HB [conn28] Received heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579029, 1), signature: { hash: BinData(0, 3C801ACF3685CAF949B459F1D261548602EB1AE6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:12.235+0000 D2 REPL_HB [conn28] Processing heartbeat request from cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579029, 1), signature: { hash: BinData(0, 3C801ACF3685CAF949B459F1D261548602EB1AE6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:12.235+0000 D2 REPL_HB [conn28] Generated heartbeat response to cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739) } 2019-09-04T06:37:12.235+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb804.togewa.com:27019", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579029, 1), signature: { hash: BinData(0, 3C801ACF3685CAF949B459F1D261548602EB1AE6), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:12.254+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:12.254+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:12.321+0000 I NETWORK [listener] connection accepted from 10.108.2.53:50950 #576 (85 connections now open) 2019-09-04T06:37:12.321+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:37:12.321+0000 D2 COMMAND [conn576] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:37:12.321+0000 I NETWORK [conn576] received client metadata from 10.108.2.53:50950 conn576: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:37:12.321+0000 I COMMAND [conn576] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: 
"NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:37:12.324+0000 D2 COMMAND [conn576] run command config.$cmd { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579022, 1), signature: { hash: BinData(0, DDC5C0906B380CD8B4DE14242A2E4E1DA23C3C92), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:37:12.324+0000 D1 REPL [conn576] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567579028, 2), t: 1 } 2019-09-04T06:37:12.324+0000 D3 REPL [conn576] waitUntilOpTime: waiting for a new snapshot until 2019-09-04T06:37:42.334+0000 2019-09-04T06:37:12.332+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:12.337+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:12.337+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:12.432+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:12.468+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:12.468+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:12.532+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:12.632+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:12.713+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:12.713+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:12.732+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:12.743+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:12.743+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:37:12.743+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579028, 2) 2019-09-04T06:37:12.743+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27472 2019-09-04T06:37:12.743+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:37:12.743+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:37:12.743+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27472 
2019-09-04T06:37:12.744+0000 D3 STORAGE [rsSync-0] WT begin_transaction for snapshot id 27475 2019-09-04T06:37:12.744+0000 D3 STORAGE [rsSync-0] WT rollback_transaction for snapshot id 27475 2019-09-04T06:37:12.744+0000 D3 REPL [rsSync-0] returning minvalid: { ts: Timestamp(1567579028, 2), t: 1 }({ ts: Timestamp(1567579028, 2), t: 1 }) 2019-09-04T06:37:12.754+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:12.754+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:12.832+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:12.837+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:12.837+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:12.839+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:12.839+0000 D2 REPL_HB [replexec-3] Sending heartbeat (requestId: 1849) to cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:12.839+0000 D3 EXECUTOR [replexec-3] Scheduling remote command request: RemoteCommand 1849 -- target:[cmodb802.togewa.com:27019] db:admin expDate:2019-09-04T06:37:22.839+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:12.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:40.839+0000 2019-09-04T06:37:12.839+0000 D2 ASIO [Replication] Request 1849 finished with response: { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579029, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } 2019-09-04T06:37:12.839+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579028, 2), 
$clusterTime: { clusterTime: Timestamp(1567579029, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } target: cmodb802.togewa.com:27019 2019-09-04T06:37:12.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:12.839+0000 D2 REPL_HB [replexec-4] Received response to heartbeat (requestId: 1849) from cmodb802.togewa.com:27019, { ok: 1.0, electionTime: new Date(6727891468310020097), state: 1, v: 2, set: "configrs", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579029, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } 2019-09-04T06:37:12.839+0000 D4 ELECTION [replexec-4] Postponing election timeout due to heartbeat from primary 2019-09-04T06:37:12.839+0000 D4 REPL [replexec-4] Canceling election timeout callback at 2019-09-04T06:37:21.641+0000 2019-09-04T06:37:12.839+0000 D4 ELECTION [replexec-4] Scheduling election timeout callback at 2019-09-04T06:37:23.915+0000 2019-09-04T06:37:12.839+0000 D3 REPL [replexec-4] setUpValues: heartbeat response good for member _id:MemberId(0) 2019-09-04T06:37:12.839+0000 D2 REPL_HB [replexec-4] Scheduling heartbeat to cmodb802.togewa.com:27019 at 2019-09-04T06:37:14.839Z 2019-09-04T06:37:12.839+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:12.839+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:42.839+0000 2019-09-04T06:37:12.839+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:42.839+0000 2019-09-04T06:37:12.841+0000 D3 EXECUTOR [replexec-4] Executing a task on behalf of pool replexec 2019-09-04T06:37:12.841+0000 D2 REPL_HB [replexec-4] Sending heartbeat (requestId: 1850) to cmodb804.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:12.841+0000 D3 EXECUTOR [replexec-4] Scheduling remote command request: RemoteCommand 1850 -- target:[cmodb804.togewa.com:27019] db:admin expDate:2019-09-04T06:37:22.841+0000 cmd:{ replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb803.togewa.com:27019", fromId: 1, term: 1 } 2019-09-04T06:37:12.841+0000 D3 EXECUTOR [replexec-4] Not reaping because the earliest retirement date is 2019-09-04T06:37:42.839+0000 2019-09-04T06:37:12.841+0000 D2 ASIO [Replication] Request 1850 finished with response: { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, 
lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579029, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } 2019-09-04T06:37:12.841+0000 D3 EXECUTOR [Replication] Received remote response: RemoteOnAnyResponse -- cmd:{ ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579029, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } target: cmodb804.togewa.com:27019 2019-09-04T06:37:12.841+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:12.841+0000 D2 REPL_HB [replexec-3] Received response to heartbeat (requestId: 1850) from cmodb804.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb802.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1567579028, 2), t: 1 }, lastCommittedWall: new Date(1567579028739), lastOpVisible: { ts: Timestamp(1567579028, 2), t: 1 }, configVersion: 2, replicaSetId: ObjectId('5d5e459bac9313827bdd88e9'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1567579028, 2), $clusterTime: { clusterTime: Timestamp(1567579029, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1567579028, 2) } 2019-09-04T06:37:12.841+0000 D3 REPL [replexec-3] setUpValues: heartbeat response good for member _id:MemberId(2) 2019-09-04T06:37:12.841+0000 D2 REPL_HB [replexec-3] Scheduling heartbeat to cmodb804.togewa.com:27019 at 2019-09-04T06:37:14.841Z 2019-09-04T06:37:12.841+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:42.839+0000 2019-09-04T06:37:12.917+0000 D2 COMMAND [conn535] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:12.917+0000 I COMMAND [conn535] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:12.932+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:12.968+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 
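
The heartbeat traffic above is healthy: this node (cmodb803, fromId 1) hears state 1 (PRIMARY) from cmodb802 and state 2 (SECONDARY, syncingTo cmodb802) from cmodb804, reschedules both heartbeats two seconds out, and pushes its election timeout forward each time the primary answers. The same membership picture can be read with the documented replSetGetStatus command instead of D2 REPL_HB lines; a minimal pymongo sketch follows, assuming pymongo >= 3.11 for the directConnection option, with host and port copied from this log.

```python
# A minimal sketch: read the same membership picture the heartbeats above
# paint, via the documented replSetGetStatus command (pymongo >= 3.11
# assumed for directConnection; host/port copied from this log).
from pymongo import MongoClient

client = MongoClient("cmodb803.togewa.com", 27019, directConnection=True)
status = client.admin.command("replSetGetStatus")

print(status["set"], "term", status.get("term"))
for m in status["members"]:
    # stateStr is e.g. PRIMARY / SECONDARY; syncingTo mirrors the field of
    # the same name in the heartbeat responses logged above.
    print(m["name"], m["stateStr"], m.get("syncingTo", "-"))
```
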
2019-09-04T06:37:12.968+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:13.000+0000 D3 STORAGE [ftdc] setting timestamp read source: 1, provided timestamp: none 2019-09-04T06:37:13.032+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:13.063+0000 D2 COMMAND [conn34] run command admin.$cmd { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579029, 1), signature: { hash: BinData(0, 3C801ACF3685CAF949B459F1D261548602EB1AE6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:13.063+0000 D2 COMMAND [conn34] command: replSetHeartbeat 2019-09-04T06:37:13.063+0000 D2 REPL_HB [conn34] Received heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579029, 1), signature: { hash: BinData(0, 3C801ACF3685CAF949B459F1D261548602EB1AE6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:13.063+0000 D2 REPL_HB [conn34] Processing heartbeat request from cmodb802.togewa.com:27019, { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579029, 1), signature: { hash: BinData(0, 3C801ACF3685CAF949B459F1D261548602EB1AE6), keyId: 6727891476899954718 } }, $db: "admin" } 2019-09-04T06:37:13.063+0000 D2 REPL_HB [conn34] Generated heartbeat response to cmodb802.togewa.com:27019, { ok: 1.0, state: 2, v: 2, set: "configrs", syncingTo: "cmodb804.togewa.com:27019", term: 1, primaryId: 0, durableOpTime: { ts: Timestamp(1567579028, 2), t: 1 }, durableWallTime: new Date(1567579028739), opTime: { ts: Timestamp(1567579028, 2), t: 1 }, wallTime: new Date(1567579028739) } 2019-09-04T06:37:13.063+0000 I COMMAND [conn34] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "configrs", configVersion: 2, hbv: 1, from: "cmodb802.togewa.com:27019", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579029, 1), signature: { hash: BinData(0, 3C801ACF3685CAF949B459F1D261548602EB1AE6), keyId: 6727891476899954718 } }, $db: "admin" } numYields:0 reslen:717 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:13.133+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:13.213+0000 D2 COMMAND [conn6] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:13.213+0000 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:13.233+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:13.235+0000 D4 STORAGE [FlowControlRefresher] Trimmed samples. Num: 0 2019-09-04T06:37:13.235+0000 D4 - [FlowControlRefresher] Refreshing tickets. 
Before: 1000000000 Now: 1000000000 2019-09-04T06:37:13.254+0000 D2 COMMAND [conn5] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:13.254+0000 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:13.269+0000 I NETWORK [listener] connection accepted from 10.108.2.55:36930 #577 (86 connections now open) 2019-09-04T06:37:13.269+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:37:13.269+0000 D2 COMMAND [conn577] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:37:13.269+0000 I NETWORK [conn577] received client metadata from 10.108.2.55:36930 conn577: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:37:13.269+0000 I COMMAND [conn577] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:37:13.275+0000 I COMMAND [conn533] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579002, 1), signature: { hash: BinData(0, 38AD2B934DDEB2D907B8E3A2E411898B9145843C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:37:13.275+0000 D1 - [conn533] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.276+0000 W - [conn533] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:37:13.284+0000 I COMMAND [conn560] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579001, 1), signature: { hash: BinData(0, 7537361D776A054A83C9137EA9B4D3F225E292EA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:37:13.284+0000 D1 - [conn560] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.284+0000 W - [conn560] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:37:13.291+0000 I COMMAND [conn534] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 1), signature: { hash: BinData(0, 452433E92E6007DECC7F3453A9B73B44CC5A0C2C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:37:13.291+0000 D1 - [conn534] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.291+0000 W - [conn534] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:37:13.292+0000 I - [conn533] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:13.292+0000 D1 COMMAND [conn533] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579002, 1), signature: { hash: BinData(0, 38AD2B934DDEB2D907B8E3A2E411898B9145843C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:37:13.292+0000 D1 - [conn533] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:37:13.292+0000 W - [conn533] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:37:13.309+0000 I - [conn534] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 
0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : 
"/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, 
"buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:13.309+0000 D1 COMMAND [conn534] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 1), signature: { hash: BinData(0, 452433E92E6007DECC7F3453A9B73B44CC5A0C2C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:37:13.309+0000 D1 - [conn534] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:37:13.309+0000 W - [conn534] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:37:13.329+0000 I - [conn533] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 
0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 
3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : 
"94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:13.329+0000 W COMMAND [conn533] Unable to gather storage statistics for a slow operation due to lock aquire timeout 
2019-09-04T06:37:13.329+0000 I COMMAND [conn533] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579002, 1), signature: { hash: BinData(0, 38AD2B934DDEB2D907B8E3A2E411898B9145843C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms 2019-09-04T06:37:13.329+0000 D2 NETWORK [conn533] Session from 10.108.2.73:52378 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:37:13.329+0000 I NETWORK [conn533] end connection 10.108.2.73:52378 (85 connections now open) 2019-09-04T06:37:13.333+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:13.334+0000 I NETWORK [listener] connection accepted from 10.108.2.63:36538 #578 (86 connections now open) 2019-09-04T06:37:13.334+0000 D3 EXECUTOR [listener] Starting new executor thread in passthrough mode 2019-09-04T06:37:13.334+0000 D2 COMMAND [conn578] run command admin.$cmd { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } 2019-09-04T06:37:13.334+0000 I NETWORK [conn578] received client metadata from 10.108.2.63:36538 conn578: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } } 2019-09-04T06:37:13.334+0000 I COMMAND [conn578] command admin.$cmd command: isMaster { isMaster: 1, client: { driver: { name: "NetworkInterfaceTL", version: "4.2.0" }, os: { type: "Linux", name: "CentOS Linux release 7.6.1810 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }, compression: [ "snappy", "zstd", "zlib" ], internalClient: { minWireVersion: 7, maxWireVersion: 8 }, hangUpOnStepDown: false, saslSupportedMechs: "local.__system", $db: "admin" } numYields:0 reslen:947 locks:{} protocol:op_query 0ms 2019-09-04T06:37:13.337+0000 D2 COMMAND [conn26] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:13.337+0000 I COMMAND [conn26] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:13.343+0000 I COMMAND [conn536] Command on database admin timed out waiting for read concern to be satisfied. 
Command: { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579002, 1), signature: { hash: BinData(0, 38AD2B934DDEB2D907B8E3A2E411898B9145843C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:37:13.343+0000 D1 - [conn536] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.343+0000 W - [conn536] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:37:13.346+0000 I - [conn560] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ 
"mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:13.346+0000 D1 COMMAND [conn560] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579001, 1), signature: { hash: BinData(0, 7537361D776A054A83C9137EA9B4D3F225E292EA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:37:13.346+0000 D1 - [conn560] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:37:13.346+0000 W - [conn560] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:37:13.365+0000 I - [conn534] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- [frames, processInfo, and somap identical to the preceding global-lock (CurOp::completeAndLogOperation) backtrace] ----- END BACKTRACE -----
2019-09-04T06:37:13.366+0000 W COMMAND [conn534] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:37:13.366+0000 I COMMAND [conn534] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 1), signature: { hash: BinData(0, 452433E92E6007DECC7F3453A9B73B44CC5A0C2C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30030ms
2019-09-04T06:37:13.366+0000 D2 NETWORK [conn534] Session from 10.108.2.57:34458 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:37:13.366+0000 I NETWORK [conn534] end connection 10.108.2.57:34458 (85 connections now open)
2019-09-04T06:37:13.382+0000 I - [conn536] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- [frames, processInfo, and somap identical to the conn560 waitForReadConcern backtrace at 06:37:13.346 above] ----- END BACKTRACE -----
2019-09-04T06:37:13.383+0000 D1 COMMAND [conn536] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579002, 1), signature: { hash: BinData(0, 38AD2B934DDEB2D907B8E3A2E411898B9145843C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:37:13.383+0000 D1 - [conn536] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:37:13.383+0000 W - [conn536] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit
2019-09-04T06:37:13.387+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2019-09-04T06:37:13.387+0000 D1 - [signalProcessingThread] User Assertion: NotMaster: not primary so can't step down src/mongo/db/repl/replication_coordinator_impl.cpp 1977
2019-09-04T06:37:13.387+0000 W - [signalProcessingThread] DBException thrown :: caused by :: NotMaster: not primary so can't step down
2019-09-04T06:37:13.402+0000 I - [conn560] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- [frames, processInfo, and somap identical to the preceding global-lock (CurOp::completeAndLogOperation) backtraces] ----- END BACKTRACE -----
2019-09-04T06:37:13.403+0000 W COMMAND [conn560] Unable to gather storage statistics for a slow operation due to lock acquire timeout
2019-09-04T06:37:13.403+0000 I COMMAND [conn560] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579001, 1), signature: { hash: BinData(0, 7537361D776A054A83C9137EA9B4D3F225E292EA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30075ms
2019-09-04T06:37:13.403+0000 D2 NETWORK [conn560] Session from 10.108.2.55:36906 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer
2019-09-04T06:37:13.403+0000 I NETWORK [conn560] end connection 10.108.2.55:36906 (84 connections now open)
2019-09-04T06:37:13.412+0000 I - [signalProcessingThread] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c38a0a 0x561749c42521 0x5617499d0aa6 0x561749cc2861 0x56174b703575 0x561749c43605 0x56174a3a5593 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0A0A","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"A48AA6"},{"b":"561748F88000","o":"D3A861"},{"b":"561748F88000","o":"277B575"},{"b":"561748F88000","o":"CBB605","s":"_ZN5mongo8shutdownENS_8ExitCodeERKNS_16ShutdownTaskArgsE"},{"b":"561748F88000","o":"141D593"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ [identical to the preceding backtraces] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6D08) [0x561749c38a0a] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xA48AA6) [0x5617499d0aa6] mongod(+0xD3A861) [0x561749cc2861] mongod(+0x277B575) [0x56174b703575] mongod(_ZN5mongo8shutdownENS_8ExitCodeERKNS_16ShutdownTaskArgsE+0x504) [0x561749c43605] mongod(+0x141D593) [0x56174a3a5593] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:37:13.412+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 2 known ops:
2019-09-04T06:37:13.412+0000 D2 - [signalProcessingThread] Stopping periodic job LogicalSessionCacheRefresh
2019-09-04T06:37:13.412+0000 D2 - [signalProcessingThread] Stopping periodic job LogicalSessionCacheReap
2019-09-04T06:37:13.412+0000 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2019-09-04T06:37:13.412+0000 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27019.sock
2019-09-04T06:37:13.412+0000 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
2019-09-04T06:37:13.412+0000 D2 - [signalProcessingThread] Stopping periodic job startPeriodicThreadToAbortExpiredTransactions
2019-09-04T06:37:13.412+0000 D2 - [signalProcessingThread] Stopping periodic job startPeriodicThreadToDecreaseSnapshotHistoryCachePressure
2019-09-04T06:37:13.412+0000 I REPL [signalProcessingThread] shutting down replication subsystems
2019-09-04T06:37:13.412+0000 I REPL [signalProcessingThread] Stopping replication reporter thread
2019-09-04T06:37:13.413+0000 D3 REPL [conn547] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000
2019-09-04T06:37:13.413+0000 D1 - [conn547] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:37:13.413+0000 W - [conn547] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress
2019-09-04T06:37:13.413+0000 D3 REPL [conn564] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000
2019-09-04T06:37:13.413+0000 D1 - [conn564] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:37:13.413+0000 W - [conn564] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress
2019-09-04T06:37:13.413+0000 D3 REPL [conn574] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000
2019-09-04T06:37:13.413+0000 D1 - [conn574] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89
2019-09-04T06:37:13.413+0000 W - [conn574] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress
2019-09-04T06:37:13.413+0000 I COMMAND [conn540] Command on database config timed out waiting for read concern to be satisfied.
Command: { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578998, 1), signature: { hash: BinData(0, 452433E92E6007DECC7F3453A9B73B44CC5A0C2C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } 2019-09-04T06:37:13.413+0000 D1 - [conn540] User Assertion: MaxTimeMSExpired: operation exceeded time limit src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.413+0000 W - [conn540] DBException thrown :: caused by :: MaxTimeMSExpired: operation exceeded time limit 2019-09-04T06:37:13.413+0000 D3 REPL [conn539] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:13.413+0000 D1 - [conn539] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.413+0000 W - [conn539] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.413+0000 D3 REPL [conn557] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:13.413+0000 D1 - [conn557] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.413+0000 W - [conn557] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.413+0000 D3 REPL [conn569] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:13.413+0000 D1 - [conn569] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.413+0000 W - [conn569] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.413+0000 D3 REPL [conn551] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:13.413+0000 D1 - [conn551] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.413+0000 W - [conn551] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.413+0000 D3 REPL [conn546] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:13.413+0000 D1 - [conn546] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.413+0000 W - [conn546] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.413+0000 D3 EXECUTOR [replication-1] Executing a task on behalf of pool replication 2019-09-04T06:37:13.413+0000 D3 EXECUTOR [replication-1] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.744+0000 2019-09-04T06:37:13.413+0000 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to cmodb804.togewa.com:27019: CallbackCanceled: Reporter no longer valid 2019-09-04T06:37:13.413+0000 D1 REPL [SyncSourceFeedback] The replication progress command (replSetUpdatePosition) failed and will be retried: CallbackCanceled: Reporter no longer valid 2019-09-04T06:37:13.413+0000 I REPL [signalProcessingThread] Stopping replication fetcher thread 2019-09-04T06:37:13.413+0000 D2 ASIO [signalProcessingThread] Canceling 
operation; original request was: RemoteCommand 1844 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:37:18.747+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567579028, 2), t: 1 } } 2019-09-04T06:37:13.413+0000 I REPL [signalProcessingThread] Stopping replication applier thread 2019-09-04T06:37:13.413+0000 D2 ASIO [RS] Failed to get connection from pool for request 1844: CallbackCanceled: Command canceled; original request was: RemoteCommand 1844 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:37:18.747+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567579028, 2), t: 1 } } 2019-09-04T06:37:13.413+0000 D1 - [RS] User Assertion: CallbackCanceled: Callback was canceled src/mongo/executor/network_interface_tl.cpp 400 2019-09-04T06:37:13.413+0000 W - [RS] DBException thrown :: caused by :: CallbackCanceled: Callback was canceled 2019-09-04T06:37:13.413+0000 D3 STORAGE [ReplBatcher] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:13.413+0000 D3 STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(10) 2019-09-04T06:37:13.413+0000 D3 STORAGE [ReplBatcher] begin_transaction on local snapshot Timestamp(1567579028, 2) 2019-09-04T06:37:13.413+0000 D3 STORAGE [ReplBatcher] WT begin_transaction for snapshot id 27491 2019-09-04T06:37:13.413+0000 D3 STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "local/collection/16--6194257481163143499", md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } } 2019-09-04T06:37:13.413+0000 D3 STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { uuid: UUID("b891bec6-9e37-4763-8f6c-c2ecc2c361d5"), capped: true, size: 1073741824.0, autoIndexId: false }, indexes: [], prefix: -1 } 2019-09-04T06:37:13.413+0000 D3 STORAGE [ReplBatcher] WT rollback_transaction for snapshot id 27491 2019-09-04T06:37:13.413+0000 I REPL [rsSync-0] Finished oplog application 2019-09-04T06:37:13.413+0000 D3 EXECUTOR [rsSync-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1 2019-09-04T06:37:13.413+0000 D3 REPL [conn576] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:13.413+0000 D1 - [conn576] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.413+0000 W - [conn576] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.413+0000 D3 REPL [conn571] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:13.413+0000 D1 - [conn571] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.413+0000 W - [conn571] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.413+0000 D3 REPL [conn563] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:13.413+0000 D1 - [conn563] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.413+0000 W 
- [conn563] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.413+0000 D3 REPL [conn542] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:13.413+0000 D1 - [conn542] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.413+0000 W - [conn542] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.414+0000 D3 REPL [conn561] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:13.414+0000 D1 - [conn561] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.414+0000 W - [conn561] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.414+0000 D3 REPL [conn553] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:13.414+0000 D1 - [conn553] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.414+0000 W - [conn553] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.414+0000 D3 REPL [conn562] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:13.414+0000 D1 - [conn562] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.414+0000 W - [conn562] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.414+0000 D3 REPL [conn566] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:13.414+0000 D1 - [conn566] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.414+0000 W - [conn566] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.415+0000 D3 REPL [conn565] Got notified of new snapshot: { ts: Timestamp(1567579028, 2), t: 1 }, 2019-09-04T06:37:08.739+0000 2019-09-04T06:37:13.415+0000 D1 - [conn565] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.415+0000 W - [conn565] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.432+0000 I - [conn536] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc085 0x561749c339ba 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734085","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAB9BA","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : 
"/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE50EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_9EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc085] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x1CB8) [0x561749c339ba] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:13.432+0000 W COMMAND [conn536] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:37:13.432+0000 I COMMAND [conn536] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579002, 1), signature: { hash: BinData(0, 38AD2B934DDEB2D907B8E3A2E411898B9145843C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation exceeded time limit" errName:MaxTimeMSExpired errCode:50 reslen:568 locks:{} protocol:op_msg 30053ms 2019-09-04T06:37:13.432+0000 D2 NETWORK [conn536] Session from 10.108.2.63:36504 encountered a network error during SourceMessage: HostUnreachable: Connection closed by peer 2019-09-04T06:37:13.432+0000 I NETWORK [conn536] end connection 10.108.2.63:36504 (83 connections now open) 2019-09-04T06:37:13.433+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:13.448+0000 I - [conn551] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc335 0x561749c34707 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734335","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAC707","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],
"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : 
"B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc335] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x2A05) [0x561749c34707] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:13.448+0000 D1 COMMAND [conn551] 
assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579001, 1), signature: { hash: BinData(0, 7537361D776A054A83C9137EA9B4D3F225E292EA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.448+0000 I COMMAND [conn551] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579001, 1), signature: { hash: BinData(0, 7537361D776A054A83C9137EA9B4D3F225E292EA), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"Shutdown in progress" errName:ShutdownInProgress errCode:91 reslen:561 locks:{} protocol:op_msg 28762ms 2019-09-04T06:37:13.462+0000 D2 COMMAND [conn42] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:13.462+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:13.465+0000 I - [conn563] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc335 0x561749c34707 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734335","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAC707","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc335] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x2A05) [0x561749c34707] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:13.465+0000 D1 COMMAND [conn563] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579005, 1), signature: { hash: BinData(0, 375553C631FDD70A0763FB36DD9A69D312E32B86), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" }': ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.465+0000 I COMMAND [conn563] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579005, 1), signature: { hash: BinData(0, 375553C631FDD70A0763FB36DD9A69D312E32B86), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 
numYields:0 ok:0 errMsg:"Shutdown in progress" errName:ShutdownInProgress errCode:91 reslen:561 locks:{} protocol:op_msg 25048ms 2019-09-04T06:37:13.468+0000 D2 COMMAND [conn13] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:13.468+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:13.469+0000 D2 COMMAND [conn25] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:13.469+0000 I COMMAND [conn25] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:13.472+0000 D2 COMMAND [conn567] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579031, 1), signature: { hash: BinData(0, 9A26564F48C33530630A5B45AA26B2261F0AF1D3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:37:13.472+0000 D1 REPL [conn567] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567579028, 2), t: 1 } 2019-09-04T06:37:13.472+0000 D1 - [conn567] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.472+0000 W - [conn567] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.489+0000 I - [conn576] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc335 0x561749c34707 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734335","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAC707","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc335] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x2A05) [0x561749c34707] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:13.489+0000 D1 COMMAND [conn576] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579022, 1), signature: { hash: BinData(0, DDC5C0906B380CD8B4DE14242A2E4E1DA23C3C92), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.489+0000 I COMMAND [conn576] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579022, 1), signature: { hash: BinData(0, DDC5C0906B380CD8B4DE14242A2E4E1DA23C3C92), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"Shutdown in progress" errName:ShutdownInProgress errCode:91 reslen:561 locks:{} protocol:op_msg 1164ms 2019-09-04T06:37:13.505+0000 I - [conn564] 0x56174b707c81 0x56174b707b74 
0x56174b6fd1f7 0x56174b6bc335 0x561749c34707 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734335","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAC707","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : 
"/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, 
"buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc335] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x2A05) [0x561749c34707] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:13.506+0000 D1 COMMAND [conn564] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578999, 1), signature: { hash: BinData(0, C8771EE9C7A10B42498AC50E6183BD33E56835EB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.506+0000 I COMMAND [conn564] command admin.$cmd command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567578999, 1), signature: { hash: BinData(0, C8771EE9C7A10B42498AC50E6183BD33E56835EB), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } numYields:0 ok:0 errMsg:"Shutdown in progress" errName:ShutdownInProgress errCode:91 reslen:561 locks:{} protocol:op_msg 25085ms 2019-09-04T06:37:13.508+0000 D2 COMMAND [conn535] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579033, 1), signature: { hash: BinData(0, 81A5873113B5B4B6C64F0E2137DFB28A22848131), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:37:13.508+0000 D1 REPL [conn535] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567579028, 2), t: 1 } 2019-09-04T06:37:13.508+0000 D1 - [conn535] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.508+0000 W - [conn535] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.529+0000 D2 COMMAND [conn559] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579031, 1), signature: { hash: BinData(0, 9A26564F48C33530630A5B45AA26B2261F0AF1D3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" } 2019-09-04T06:37:13.529+0000 D1 REPL [conn559] waitUntilOpTime: waiting for optime:{ ts: Timestamp(1566459168, 1), t: 92 } to be in a snapshot -- current snapshot: { ts: Timestamp(1567579028, 2), t: 1 } 2019-09-04T06:37:13.529+0000 D1 - [conn559] User Assertion: ShutdownInProgress: Shutdown in progress src/mongo/db/service_entry_point_mongod.cpp 89 2019-09-04T06:37:13.529+0000 W - [conn559] DBException thrown :: caused by :: ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.529+0000 D2 COMMAND [conn45] run command admin.$cmd { isMaster: 1, $db: "admin" } 2019-09-04T06:37:13.530+0000 I COMMAND [conn45] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:907 locks:{} protocol:op_msg 0ms 2019-09-04T06:37:13.530+0000 I - [RS] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6b5b62 0x561749c34665 0x561749c42521 0x561749b4da31 0x56174ae4d9d2 0x56174a01c507 0x56174ae92b4d 0x56174a01c507 0x56174ae92193 0x56174a01c507 0x56174ae9269a 0x56174a01c507 0x56174ae91dfa 0x56174a01c507 0x56174ae908d8 0x56174a01c507 0x56174ae6d402 0x56174a01c507 0x56174ae7dad5 0x56174ae82643 0x56174b0f5c04 0x56174b0f5e95 0x56174b0fdb1e 0x56174ae6b84d 0x56174ae48814 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"272DB62","s":"_ZN5mongo11DBExceptionC2ERKNS_6StatusE"},{"b":"561748F88000","o":"CAC665","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"BC5A31"},{"b":"561748F88000","o":"1EC59D2"},{"b":"561748F88000","o":"1094507","s":"_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv"},{"b":"561748F88000","o":"1F0AB4D"},{"b":"561748F88000","o":"1094507","s":"_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv"},{"b":"561748F88000","o":"1F0A193"},{"b":"561748F88000","o":"1094507","s":"_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv"},{"b":"561748F88000","o":"1F0A69A"},{"b":"561748F88000","o":"1094507","s":"_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv"},{"b":"561748F88000","o":"1F09DFA"},{"b":"561748F88000","o":"1094507","s":"_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv"},{"b":"561748F88000","o":"1F088D8","s":"_ZZN5mongo15unique_functionIFvPNS_14future_details15SharedStateBaseEEE8makeImplIZNS1_10FutureImplINS1_8FakeVoidEE16makeContinuationINS_7MessageEZZNOS9_4thenIZNS_9transport18TransportLayerASIO11ASIOSession17sourceMessageImplERKSt10shared_ptrINS_5BatonEEEUlvE_EEDaOT_ENKUlvE1_clEvEUlPNS1_15SharedStateImplIS8_EEPNSP_ISB_EEE_EENS7_ISM_EEOT0_EUlS3_E_EEDaSN_EN12SpecificImpl4callEOS3_"},{"b":"561748F88000","o":"1094507","s":"_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv"},{"b":"561748F88000","o":"1EE5402","s":"_ZZN5mongo15unique_functionIFvPNS_14future_details15SharedStateBaseEEE8makeImplIZNS1_10FutureImplImE16makeContinuationIvZZNOS8_4thenIZNOS8_11ignoreValueEvEUlOT_E_EEDaSC_ENKUlvE1_clEvEUlPNS1_15SharedStateImplImEEPNSF_INS1_8FakeVoidEEEE_EENS7_ISB_EEOT0_EUlS3_E_EEDaSC_EN12SpecificImpl4callEOS3_"},{"b":"561748F88000","o":"1094507","s":"_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv"},{"b":"561748F88000","o":"1EF5AD5","s":"_ZN5mongo9transport18use_future_details18AsyncHandlerHelperIJSt10error_codemEE8completeIJmEEEvPNS_7PromiseImEES3_DpOT_"},{"b":"561748F88000","o":"1EFA643","s":"_ZN4asio6detail23reactive_socket_recv_opINS_17mutable_buffers_1ENS0_7read_opINS_19basic_stream_socketINS_7generic15stream_protocolEEES2_PKNS_14mutable_bufferENS0_14transfer_all_tEN5mongo9transport18use_future_details12AsyncHandlerIJSt10error_codemEEEEEE11do_completeEPvPNS0_19scheduler_operationERKSG_m"},{"b":"561748F88000","o":"216DC04","s":"_ZN4asio6detail9scheduler10do_run_oneERNS0_27conditionally_enabled_mutex11scoped_lockERNS0_21scheduler_thread_infoERKSt10error_code"},{"b":"561748F88000","o":"216DE95","s":"_ZN4asio6detail9scheduler3runERSt10error_code"},{"b":"561748F88000","o":"2175B1E","s":"_ZN4asio10io_context3runEv"},{"b":"561748F88000","o":"1EE384D","s":"_ZN5mongo9transport18TransportLayerASIO11ASIOReactor3runEv"},{"b":"561748F88000","o":"1EC0814","s":"_ZN5mongo8executor18NetworkInterfaceTL4_runEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : 
"3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, 
{ "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo11DBExceptionC2ERKNS_6StatusE+0x32) [0x56174b6b5b62] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x2963) [0x561749c34665] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xBC5A31) [0x561749b4da31] mongod(+0x1EC59D2) [0x56174ae4d9d2] mongod(_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv+0x197) [0x56174a01c507] mongod(+0x1F0AB4D) [0x56174ae92b4d] mongod(_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv+0x197) [0x56174a01c507] mongod(+0x1F0A193) [0x56174ae92193] mongod(_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv+0x197) [0x56174a01c507] mongod(+0x1F0A69A) [0x56174ae9269a] mongod(_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv+0x197) [0x56174a01c507] mongod(+0x1F09DFA) [0x56174ae91dfa] mongod(_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv+0x197) [0x56174a01c507] mongod(_ZZN5mongo15unique_functionIFvPNS_14future_details15SharedStateBaseEEE8makeImplIZNS1_10FutureImplINS1_8FakeVoidEE16makeContinuationINS_7MessageEZZNOS9_4thenIZNS_9transport18TransportLayerASIO11ASIOSession17sourceMessageImplERKSt10shared_ptrINS_5BatonEEEUlvE_EEDaOT_ENKUlvE1_clEvEUlPNS1_15SharedStateImplIS8_EEPNSP_ISB_EEE_EENS7_ISM_EEOT0_EUlS3_E_EEDaSN_EN12SpecificImpl4callEOS3_+0x48) [0x56174ae908d8] mongod(_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv+0x197) [0x56174a01c507] mongod(_ZZN5mongo15unique_functionIFvPNS_14future_details15SharedStateBaseEEE8makeImplIZNS1_10FutureImplImE16makeContinuationIvZZNOS8_4thenIZNOS8_11ignoreValueEvEUlOT_E_EEDaSC_ENKUlvE1_clEvEUlPNS1_15SharedStateImplImEEPNSF_INS1_8FakeVoidEEEE_EENS7_ISB_EEOT0_EUlS3_E_EEDaSC_EN12SpecificImpl4callEOS3_+0x42) [0x56174ae6d402] mongod(_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv+0x197) [0x56174a01c507] mongod(_ZN5mongo9transport18use_future_details18AsyncHandlerHelperIJSt10error_codemEE8completeIJmEEEvPNS_7PromiseImEES3_DpOT_+0x75) [0x56174ae7dad5] 
mongod(_ZN4asio6detail23reactive_socket_recv_opINS_17mutable_buffers_1ENS0_7read_opINS_19basic_stream_socketINS_7generic15stream_protocolEEES2_PKNS_14mutable_bufferENS0_14transfer_all_tEN5mongo9transport18use_future_details12AsyncHandlerIJSt10error_codemEEEEEE11do_completeEPvPNS0_19scheduler_operationERKSG_m+0x113) [0x56174ae82643] mongod(_ZN4asio6detail9scheduler10do_run_oneERNS0_27conditionally_enabled_mutex11scoped_lockERNS0_21scheduler_thread_infoERKSt10error_code+0x3B4) [0x56174b0f5c04] mongod(_ZN4asio6detail9scheduler3runERSt10error_code+0x115) [0x56174b0f5e95] mongod(_ZN4asio10io_context3runEv+0x3E) [0x56174b0fdb1e] mongod(_ZN5mongo9transport18TransportLayerASIO11ASIOReactor3runEv+0x3D) [0x56174ae6b84d] mongod(_ZN5mongo8executor18NetworkInterfaceTL4_runEv+0x44) [0x56174ae48814] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:13.530+0000 D3 EXECUTOR [RS] Received remote response: CallbackCanceled: Command canceled; original request was: RemoteCommand 1844 -- target:[cmodb804.togewa.com:27019] db:local expDate:2019-09-04T06:37:18.747+0000 cmd:{ getMore: 2779728788818727477, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1567579028, 2), t: 1 } } 2019-09-04T06:37:13.530+0000 D3 EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-09-04T06:37:13.530+0000 D1 REPL [replication-0] oplog fetcher oplog query cancelled to cmodb804.togewa.com:27019: CallbackCanceled: error in fetcher batch callback: oplog fetcher is shutting down 2019-09-04T06:37:13.530+0000 D3 EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-09-04T06:37:38.744+0000 2019-09-04T06:37:13.530+0000 D1 REPL [rsBackgroundSync] fetcher stopped reading remote oplog on cmodb804.togewa.com:27019 2019-09-04T06:37:13.530+0000 I REPL [rsBackgroundSync] Replication producer stopped after oplog fetcher finished returning a batch from our sync source. Abandoning this batch of oplog entries and re-evaluating our sync source. 2019-09-04T06:37:13.530+0000 I REPL [rsBackgroundSync] Stopping replication producer 2019-09-04T06:37:13.530+0000 D1 REPL [signalProcessingThread] Stopping noop writer 2019-09-04T06:37:13.530+0000 I REPL [signalProcessingThread] Stopping replication storage threads 2019-09-04T06:37:13.530+0000 D1 EXECUTOR [rsSync-0] shutting down thread in pool rsSync 2019-09-04T06:37:13.530+0000 D2 ASIO [signalProcessingThread] Shutting down network interface. 2019-09-04T06:37:13.530+0000 I ASIO [RS] Killing all outstanding egress activity. 2019-09-04T06:37:13.530+0000 D2 ASIO [RS] NetworkInterfaceTL shutdown successfully 2019-09-04T06:37:13.530+0000 D1 EXECUTOR [replication-1] shutting down thread in pool replication 2019-09-04T06:37:13.530+0000 D1 EXECUTOR [replication-0] shutting down thread in pool replication 2019-09-04T06:37:13.530+0000 D2 ASIO [signalProcessingThread] Shutting down network interface. 2019-09-04T06:37:13.530+0000 I ASIO [RS] Killing all outstanding egress activity. 2019-09-04T06:37:13.530+0000 D2 CONNPOOL [RS] Delisting connection pool for cmodb804.togewa.com:27019 2019-09-04T06:37:13.530+0000 I CONNPOOL [RS] Dropping all pooled connections to cmodb804.togewa.com:27019 due to ShutdownInProgress: Shutting down the connection pool 2019-09-04T06:37:13.530+0000 D2 NETWORK [RS] Draining remaining work in reactor. 
2019-09-04T06:37:13.530+0000 D2 ASIO [RS] NetworkInterfaceTL shutdown successfully 2019-09-04T06:37:13.531+0000 D3 STORAGE [signalProcessingThread] WT begin_transaction for snapshot id 27490 2019-09-04T06:37:13.531+0000 D3 STORAGE [signalProcessingThread] WT rollback_transaction for snapshot id 27490 2019-09-04T06:37:13.531+0000 D3 REPL [signalProcessingThread] No initial sync flag set, returning initial sync flag value of false. 2019-09-04T06:37:13.531+0000 D3 STORAGE [signalProcessingThread] setting timestamp read source: 4, provided timestamp: none 2019-09-04T06:37:13.531+0000 D3 STORAGE [signalProcessingThread] begin_transaction on local snapshot Timestamp(1567579028, 2) 2019-09-04T06:37:13.531+0000 D3 STORAGE [signalProcessingThread] WT begin_transaction for snapshot id 27501 2019-09-04T06:37:13.531+0000 D3 STORAGE [signalProcessingThread] WT rollback_transaction for snapshot id 27501 2019-09-04T06:37:13.531+0000 D3 STORAGE [signalProcessingThread] begin_transaction on local snapshot Timestamp(1567579028, 2) 2019-09-04T06:37:13.531+0000 D3 STORAGE [signalProcessingThread] WT begin_transaction for snapshot id 27502 2019-09-04T06:37:13.531+0000 D3 STORAGE [signalProcessingThread] WT rollback_transaction for snapshot id 27502 2019-09-04T06:37:13.531+0000 D3 REPL [signalProcessingThread] returning oplog truncate after point: Timestamp(0, 0) 2019-09-04T06:37:13.531+0000 D3 STORAGE [signalProcessingThread] begin_transaction on local snapshot Timestamp(1567579028, 2) 2019-09-04T06:37:13.531+0000 D3 STORAGE [signalProcessingThread] WT begin_transaction for snapshot id 27503 2019-09-04T06:37:13.531+0000 D3 STORAGE [signalProcessingThread] WT rollback_transaction for snapshot id 27503 2019-09-04T06:37:13.531+0000 D3 REPL [signalProcessingThread] returning appliedThrough: { ts: Timestamp(1567579028, 2), t: 1 }({ ts: Timestamp(1567579028, 2), t: 1 }) 2019-09-04T06:37:13.531+0000 D3 REPL [signalProcessingThread] clearing appliedThrough at: Timestamp(1567579028, 2) 2019-09-04T06:37:13.531+0000 D3 STORAGE [signalProcessingThread] WT set timestamp of future write operations to Timestamp(1567579028, 2) 2019-09-04T06:37:13.531+0000 D3 STORAGE [signalProcessingThread] begin_transaction on local snapshot Timestamp(1567579028, 2) 2019-09-04T06:37:13.531+0000 D3 STORAGE [signalProcessingThread] WT begin_transaction for snapshot id 27504 2019-09-04T06:37:13.531+0000 D5 QUERY [signalProcessingThread] Beginning planning... ============================= Options = INDEX_INTERSECTION SPLIT_LIMITED_SORT Canonical query: ns=local.replset.minvalidTree: $and Sort: {} Proj: {} ============================= 2019-09-04T06:37:13.531+0000 D5 QUERY [signalProcessingThread] Index 0 is kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" } 2019-09-04T06:37:13.531+0000 D5 QUERY [signalProcessingThread] Rated tree: $and 2019-09-04T06:37:13.531+0000 D5 QUERY [signalProcessingThread] Planner: outputted 0 indexed solutions. 2019-09-04T06:37:13.531+0000 D5 QUERY [signalProcessingThread] Planner: outputting a collscan: COLLSCAN ---ns = local.replset.minvalid ---filter = $and ---fetched = 1 ---sortedByDiskLoc = 0 ---getSort = [] 2019-09-04T06:37:13.531+0000 D2 QUERY [signalProcessingThread] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2019-09-04T06:37:13.531+0000 D2 STORAGE [signalProcessingThread] WiredTigerSizeStorer::store Marking table:local/collection/4--6194257481163143499 dirty, numRecords: 1, dataSize: 45, use_count: 3 2019-09-04T06:37:13.531+0000 D3 STORAGE [signalProcessingThread] WT commit_transaction for snapshot id 27504 2019-09-04T06:37:13.531+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:13.531+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:13.531+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:13.531+0000 D3 EXECUTOR [replexec-3] Executing a task on behalf of pool replexec 2019-09-04T06:37:13.531+0000 D3 EXECUTOR [replexec-3] Not reaping because the earliest retirement date is 2019-09-04T06:37:43.531+0000 2019-09-04T06:37:13.531+0000 D1 EXECUTOR [replexec-3] shutting down thread in pool replexec 2019-09-04T06:37:13.531+0000 D1 EXECUTOR [replexec-6] starting thread in pool replexec 2019-09-04T06:37:13.531+0000 D1 EXECUTOR [replexec-6] shutting down thread in pool replexec 2019-09-04T06:37:13.531+0000 D1 EXECUTOR [replexec-5] starting thread in pool replexec 2019-09-04T06:37:13.531+0000 D1 EXECUTOR [replexec-5] shutting down thread in pool replexec 2019-09-04T06:37:13.531+0000 D1 EXECUTOR [replexec-4] shutting down thread in pool replexec 2019-09-04T06:37:13.531+0000 D2 ASIO [signalProcessingThread] Shutting down network interface. 2019-09-04T06:37:13.531+0000 I ASIO [Replication] Killing all outstanding egress activity. 2019-09-04T06:37:13.531+0000 D2 CONNPOOL [Replication] Delisting connection pool for cmodb804.togewa.com:27019 2019-09-04T06:37:13.531+0000 D2 CONNPOOL [Replication] Dropping all pooled connections to cmodb804.togewa.com:27019 due to ShutdownInProgress: Shutting down the connection pool 2019-09-04T06:37:13.531+0000 D2 CONNPOOL [Replication] Delisting connection pool for cmodb802.togewa.com:27019 2019-09-04T06:37:13.531+0000 I CONNPOOL [Replication] Dropping all pooled connections to cmodb802.togewa.com:27019 due to ShutdownInProgress: Shutting down the connection pool 2019-09-04T06:37:13.531+0000 D2 NETWORK [Replication] Draining remaining work in reactor. 
2019-09-04T06:37:13.531+0000 D2 ASIO [Replication] NetworkInterfaceTL shutdown successfully 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 19903 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 20755 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 20748 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 34 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 20158 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 20169 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 20030 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 19817 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 20032 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 20756 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 20023 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 20157 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 20474 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 20195 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 20754 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 87 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 20161 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 20028 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 19937 known ops: 2019-09-04T06:37:13.531+0000 D1 QUERY [signalProcessingThread] received interrupt request for unknown op: 20035 known ops: 2019-09-04T06:37:13.531+0000 D2 WRITE [signalProcessingThread] Beginning scanSessions. Scanning 0 sessions. 2019-09-04T06:37:13.532+0000 D1 NETWORK [signalProcessingThread] Shutting down task executor used for monitoring replica sets 2019-09-04T06:37:13.532+0000 D2 ASIO [signalProcessingThread] Shutting down network interface. 2019-09-04T06:37:13.532+0000 I ASIO [ReplicaSetMonitor-TaskExecutor] Killing all outstanding egress activity. 
2019-09-04T06:37:13.532+0000 D2 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Delisting connection pool for cmodb808.togewa.com:27018 2019-09-04T06:37:13.532+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to cmodb808.togewa.com:27018 due to ShutdownInProgress: Shutting down the connection pool 2019-09-04T06:37:13.532+0000 D2 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Delisting connection pool for cmodb812.togewa.com:27018 2019-09-04T06:37:13.532+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to cmodb812.togewa.com:27018 due to ShutdownInProgress: Shutting down the connection pool 2019-09-04T06:37:13.532+0000 D2 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Delisting connection pool for cmodb813.togewa.com:27018 2019-09-04T06:37:13.532+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to cmodb813.togewa.com:27018 due to ShutdownInProgress: Shutting down the connection pool 2019-09-04T06:37:13.532+0000 D2 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Delisting connection pool for cmodb807.togewa.com:27018 2019-09-04T06:37:13.532+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to cmodb807.togewa.com:27018 due to ShutdownInProgress: Shutting down the connection pool 2019-09-04T06:37:13.532+0000 D2 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Delisting connection pool for cmodb809.togewa.com:27018 2019-09-04T06:37:13.532+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to cmodb809.togewa.com:27018 due to ShutdownInProgress: Shutting down the connection pool 2019-09-04T06:37:13.532+0000 D2 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Delisting connection pool for cmodb811.togewa.com:27018 2019-09-04T06:37:13.532+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to cmodb811.togewa.com:27018 due to ShutdownInProgress: Shutting down the connection pool 2019-09-04T06:37:13.532+0000 D2 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Delisting connection pool for cmodb806.togewa.com:27018 2019-09-04T06:37:13.532+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to cmodb806.togewa.com:27018 due to ShutdownInProgress: Shutting down the connection pool 2019-09-04T06:37:13.532+0000 D2 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Delisting connection pool for cmodb810.togewa.com:27018 2019-09-04T06:37:13.532+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to cmodb810.togewa.com:27018 due to ShutdownInProgress: Shutting down the connection pool 2019-09-04T06:37:13.532+0000 D2 NETWORK [ReplicaSetMonitor-TaskExecutor] Draining remaining work in reactor. 2019-09-04T06:37:13.532+0000 D2 ASIO [ReplicaSetMonitor-TaskExecutor] NetworkInterfaceTL shutdown successfully 2019-09-04T06:37:13.532+0000 D1 SHARDING [signalProcessingThread] Shutting down task executor for reloading shard registry 2019-09-04T06:37:13.532+0000 D1 SHARDING [shard-registry-reload] Reloading shardRegistry 2019-09-04T06:37:13.532+0000 W SHARDING [shard-registry-reload] cant reload ShardRegistry :: caused by :: CallbackCanceled: Callback canceled 2019-09-04T06:37:13.532+0000 D2 ASIO [signalProcessingThread] Shutting down network interface. 2019-09-04T06:37:13.532+0000 I ASIO [shard-registry-reload] Killing all outstanding egress activity. 2019-09-04T06:37:13.532+0000 D2 NETWORK [shard-registry-reload] Draining remaining work in reactor. 
2019-09-04T06:37:13.532+0000 D2 ASIO [shard-registry-reload] NetworkInterfaceTL shutdown successfully 2019-09-04T06:37:13.532+0000 I CONTROL [signalProcessingThread] Shutting down free monitoring 2019-09-04T06:37:13.532+0000 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture 2019-09-04T06:37:13.537+0000 I STORAGE [signalProcessingThread] Deregistering all the collections 2019-09-04T06:37:13.537+0000 D1 STORAGE [signalProcessingThread] Deregistering collection config.lockpings with UUID 0e9c403c-5a7d-421c-a744-6abbab57bdce 2019-09-04T06:37:13.537+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: config.lockpings 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection config.version with UUID 20d8341b-073f-4dea-b0c5-c2626b006feb 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: config.version 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection config.chunks with UUID 925b3d05-7eb4-4b6d-b339-82784de07cbe 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: config.chunks 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection config.mongos with UUID 1734bd4e-af6d-441a-8751-93e269784617 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: config.mongos 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection config.tags with UUID f71519e3-c8e3-42c8-9579-254e000a6c18 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: config.tags 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection admin.system.users with UUID 1c65b785-f989-45d0-a6f4-6a4233f87231 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: admin.system.users 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection admin.system.keys with UUID 6fa72c52-1098-49d5-8075-97e44ea0d586 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: admin.system.keys 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection config.changelog with UUID 0196ba23-ca72-4f67-b3ac-b305f18a38e3 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: config.changelog 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection config.transactions with UUID 1614ccf0-7860-48e4-ab95-3aaa4633e218 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: config.transactions 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection config.collections with UUID 5c6c3426-ae2d-4c69-bf22-b1d2601211ff 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: config.collections 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection config.shards with UUID cc5f25a3-25cf-4a45-b674-6595d24d7e9a 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: config.shards 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection admin.system.version with UUID 4a8c99f7-f181-48c0-aaa0-4f5639d8f4ab 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: 
admin.system.version 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection local.replset.minvalid with UUID e1f04497-1bed-46e1-b7a9-714cf5b1cd7b 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.replset.minvalid 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection config.system.sessions with UUID a6938268-0b91-476c-a0f3-aaac5e5117ed 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: config.system.sessions 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection local.replset.election with UUID 0512231e-bb78-4048-95aa-63ea7eb6b5a5 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.replset.election 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection local.startup_log with UUID 4860912c-c555-4fe1-b1bb-e6281b586983 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.startup_log 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection local.system.rollback.id with UUID 1f10291c-f664-4c4a-a48a-a3c7297b837c 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.system.rollback.id 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection config.locks with UUID 1e44ae20-a3ee-4ba0-b0a8-e33bc7a2c287 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: config.locks 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection local.oplog.rs with UUID b891bec6-9e37-4763-8f6c-c2ecc2c361d5 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.oplog.rs 2019-09-04T06:37:13.538+0000 D4 STORAGE [WTJournalFlusher] flushed journal 2019-09-04T06:37:13.538+0000 D1 - [WT-OplogTruncaterThread-local.oplog.rs] User Assertion: InterruptedAtShutdown: interrupted at shutdown src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:37:13.538+0000 W - [WT-OplogTruncaterThread-local.oplog.rs] DBException thrown :: caused by :: InterruptedAtShutdown: interrupted at shutdown 2019-09-04T06:37:13.538+0000 I STORAGE [WTOplogJournalThread] Oplog journal thread loop shutting down 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection config.migrations with UUID b8de9e4c-de38-4698-9ceb-7e686f580e61 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: config.migrations 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection local.replset.oplogTruncateAfterPoint with UUID f3e039c8-2e80-4cb0-8155-2d1d3b3ae39b 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.replset.oplogTruncateAfterPoint 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] Deregistering collection local.system.replset with UUID 6518740c-6e6d-47d6-acc8-8f7aaf4d591e 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.system.replset 2019-09-04T06:37:13.538+0000 D1 STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: _mdb_catalog 2019-09-04T06:37:13.538+0000 I STORAGE [signalProcessingThread] Timestamp monitor shutting down 2019-09-04T06:37:13.538+0000 D2 - [signalProcessingThread] Stopping periodic job 
TimestampMonitor 2019-09-04T06:37:13.538+0000 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down 2019-09-04T06:37:13.538+0000 D2 STORAGE [signalProcessingThread] WiredTigerSizeStorer::flush table:local/collection/4--6194257481163143499 -> { numRecords: 1, dataSize: 45 } 2019-09-04T06:37:13.538+0000 D2 STORAGE [signalProcessingThread] WiredTigerSizeStorer::flush table:config/collection/28--6194257481163143499 -> { numRecords: 24, dataSize: 2000 } 2019-09-04T06:37:13.538+0000 D2 STORAGE [signalProcessingThread] WiredTigerSizeStorer::flush table:local/collection/16--6194257481163143499 -> { numRecords: 1632, dataSize: 368193 } 2019-09-04T06:37:13.539+0000 D2 STORAGE [signalProcessingThread] WiredTigerSizeStorer flush took 747 µs 2019-09-04T06:37:13.539+0000 I STORAGE [signalProcessingThread] Shutting down session sweeper thread 2019-09-04T06:37:13.539+0000 D1 STORAGE [WTIdleSessionSweeper] stopping WTIdleSessionSweeper thread 2019-09-04T06:37:13.539+0000 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread 2019-09-04T06:37:13.539+0000 I STORAGE [signalProcessingThread] Shutting down journal flusher thread 2019-09-04T06:37:13.563+0000 I - [conn569] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc335 0x561749c34707 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734335","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAC707","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMes
sageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : 
"/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc335] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x2A05) [0x561749c34707] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] 
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:13.563+0000 D1 COMMAND [conn569] assertion while executing command 'find' on database 'config' with arguments '{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, limit: 1, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579002, 1), signature: { hash: BinData(0, 38AD2B934DDEB2D907B8E3A2E411898B9145843C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "config" }': ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.563+0000 D1 - [conn569] User Assertion: InterruptedAtShutdown: interrupted at shutdown src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:37:13.563+0000 W - [conn569] DBException thrown :: caused by :: InterruptedAtShutdown: interrupted at shutdown 2019-09-04T06:37:13.580+0000 I - [conn542] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc335 0x561749c34707 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734335","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAC707","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000"
,"o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : 
"BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc335] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x2A05) [0x561749c34707] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] 
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
2019-09-04T06:37:13.580+0000 D1 COMMAND [conn542] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579002, 1), signature: { hash: BinData(0, 38AD2B934DDEB2D907B8E3A2E411898B9145843C), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': ShutdownInProgress: Shutdown in progress
2019-09-04T06:37:13.580+0000 D1 - [conn542] User Assertion: InterruptedAtShutdown: interrupted at shutdown src/mongo/db/concurrency/lock_state.cpp 884
2019-09-04T06:37:13.580+0000 W - [conn542] DBException thrown :: caused by :: InterruptedAtShutdown: interrupted at shutdown
2019-09-04T06:37:13.598+0000 I - [WT-OplogTruncaterThread-local.oplog.rs] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc735 0x561749c38b24 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5ddaf0 0x561749d27205 0x561749ce93ff 0x561749ce9968 0x56174b5e60dc 0x56174b82dbbf 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734735","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE11600EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0B24","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"2655AF0","s":"_ZN5mongo10LockerImpl16restoreLockStateEPNS_16OperationContextERKNS_6Locker12LockSnapshotE"},{"b":"561748F88000","o":"D9F205","s":"_ZN5mongo21WiredTigerRecordStore33yieldAndAwaitOplogDeletionRequestEPNS_16OperationContextE"},{"b":"561748F88000","o":"D613FF"},{"b":"561748F88000","o":"D61968"},{"b":"561748F88000","o":"265E0DC","s":"_ZN5mongo13BackgroundJob7jobBodyEv"},{"b":"561748F88000","o":"28A5BBF"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ ... identical to the conn542 backtrace above ... }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE11600EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc735] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6E22) [0x561749c38b24] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo10LockerImpl16restoreLockStateEPNS_16OperationContextERKNS_6Locker12LockSnapshotE+0x140) [0x56174b5ddaf0] mongod(_ZN5mongo21WiredTigerRecordStore33yieldAndAwaitOplogDeletionRequestEPNS_16OperationContextE+0xB5) [0x561749d27205] mongod(+0xD613FF) [0x561749ce93ff] mongod(+0xD61968) [0x561749ce9968] mongod(_ZN5mongo13BackgroundJob7jobBodyEv+0x9C) [0x56174b5e60dc] mongod(+0x28A5BBF) [0x56174b82dbbf] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE -----
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734335","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAC707","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : 
"7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : 
"/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc335] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x2A05) [0x561749c34707] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:13.614+0000 D1 COMMAND [conn567] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579031, 1), signature: { hash: BinData(0, 9A26564F48C33530630A5B45AA26B2261F0AF1D3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.614+0000 D1 - [conn567] User Assertion: InterruptedAtShutdown: interrupted at shutdown src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:37:13.614+0000 W - [conn567] DBException thrown :: caused by :: InterruptedAtShutdown: interrupted at shutdown 2019-09-04T06:37:13.631+0000 I - [conn566] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc335 0x561749c34707 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 
0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734335","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAC707","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", 
"elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : 
"9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc335] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x2A05) [0x561749c34707] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:13.631+0000 D1 COMMAND [conn566] assertion while executing command 'find' on database 'admin' with arguments '{ find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579008, 1), signature: { hash: BinData(0, 7BDD2443E6EB09988B11F09F05FE4E15377E0BBD), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "admin" }': ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.631+0000 D1 - [conn566] User Assertion: InterruptedAtShutdown: interrupted at shutdown src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:37:13.631+0000 W - [conn566] DBException thrown :: caused by :: InterruptedAtShutdown: interrupted at shutdown 2019-09-04T06:37:13.638+0000 D1 STORAGE [WTJournalFlusher] stopping WTJournalFlusher thread 2019-09-04T06:37:13.638+0000 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread 2019-09-04T06:37:13.638+0000 I STORAGE 
[signalProcessingThread] Shutting down checkpoint thread 2019-09-04T06:37:13.638+0000 D1 STORAGE [WTCheckpointThread] stopping WTCheckpointThread thread 2019-09-04T06:37:13.638+0000 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread 2019-09-04T06:37:13.638+0000 D2 RECOVERY [signalProcessingThread] Shutdown timestamps. StableTimestamp: 6732700659155468290 Initial data timestamp: 6732698082175090690 2019-09-04T06:37:13.641+0000 I - [conn574] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc335 0x561749c34707 0x561749c42521 0x56174a075070 0x56174a083f24 0x56174a084e0e 0x56174a0856a0 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734335","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CAC707","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"10ED070","s":"_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE"},{"b":"561748F88000","o":"10FBF24"},{"b":"561748F88000","o":"10FCE0E"},{"b":"561748F88000","o":"10FD6A0","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 
3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : 
"B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, "buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE91EJNS_15ExceptionForCatILNS_13ErrorCategoryE6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc335] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x2A05) [0x561749c34707] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(_ZNK5mongo23ServiceEntryPointMongod5Hooks18waitForReadConcernEPNS_16OperationContextEPKNS_17CommandInvocationERKNS_12OpMsgRequestE+0x720) [0x56174a075070] mongod(+0x10FBF24) [0x56174a083f24] mongod(+0x10FCE0E) [0x56174a084e0e] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x540) [0x56174a0856a0] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:13.641+0000 D1 COMMAND [conn574] assertion while executing command 'find' on database 'config' with arguments '{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579012, 1), signature: { hash: BinData(0, 03F942D7850679A43066F511B52016CE3558C974), keyId: 
6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" }': ShutdownInProgress: Shutdown in progress 2019-09-04T06:37:13.641+0000 D1 - [conn574] User Assertion: InterruptedAtShutdown: interrupted at shutdown src/mongo/db/concurrency/lock_state.cpp 884 2019-09-04T06:37:13.641+0000 W - [conn574] DBException thrown :: caused by :: InterruptedAtShutdown: interrupted at shutdown 2019-09-04T06:37:13.650+0000 D2 COMMAND [conn551] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579031, 1), signature: { hash: BinData(0, 9A26564F48C33530630A5B45AA26B2261F0AF1D3), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:37:13.650+0000 D1 - [conn551] User Assertion: InterruptedAtShutdown: interrupted at shutdown src/mongo/db/service_entry_point_common.cpp 830 2019-09-04T06:37:13.650+0000 W - [conn551] DBException thrown :: caused by :: InterruptedAtShutdown: interrupted at shutdown 2019-09-04T06:37:13.661+0000 I - [conn574] 0x56174b707c81 0x56174b707b74 0x56174b6fd1f7 0x56174b6bc735 0x561749c38b24 0x561749c42521 0x561749c18d34 0x56174b5e0452 0x56174b5d4d11 0x56174b5d4def 0x56174b5d4e22 0x56174b0bae2d 0x56174a0858fa 0x56174a0736dc 0x56174a07f2cc 0x56174a07ac1f 0x56174a07de9c 0x56174ae5f742 0x56174a07863d 0x56174a07b8d3 0x56174a079d07 0x56174a07ab7b 0x56174a07de9c 0x56174ae5fbab 0x56174b499c94 0x7f0ed8607dd5 0x7f0ed833102d ----- BEGIN BACKTRACE ----- 
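Operationally, the failed finds above are reads from other cluster members (mongos and shard nodes fetching config.settings, config.shards, and admin.system.keys) that raced the shutdown; the server fails them with ShutdownInProgress (code 91, visible in the errCode of the slow-query line below) or InterruptedAtShutdown (code 11600) rather than blocking, and the caller is expected to retry against a surviving replica-set member. A hedged sketch of that client-side pattern with PyMongo; the connection string, target collection, and retry budget are illustrative assumptions, not values taken from this log:

    import time
    from pymongo import MongoClient
    from pymongo.errors import AutoReconnect, OperationFailure

    # Error codes this log shows while mongod is going down.
    SHUTDOWN_CODES = {91, 11600}  # ShutdownInProgress, InterruptedAtShutdown

    # Assumed seed entry for the configrs replica set; replace as appropriate.
    client = MongoClient("mongodb://cmodb803.togewa.com:27019/?replicaSet=configrs")

    def find_one_with_retry(coll, query, attempts=3, backoff_s=0.5):
        """Retry a read that may have raced a member shutdown."""
        for attempt in range(1, attempts + 1):
            try:
                return coll.find_one(query)
            except AutoReconnect:
                pass  # connection dropped mid-shutdown; retry after backoff
            except OperationFailure as exc:
                if exc.code not in SHUTDOWN_CODES:
                    raise  # not a shutdown race; surface it
            time.sleep(backoff_s * attempt)
        raise RuntimeError("read still failing after %d attempts" % attempts)

    # The same document the failed conn569 'find' above was after.
    balancer = find_one_with_retry(client.config.settings, {"_id": "balancer"})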
{"backtrace":[{"b":"561748F88000","o":"277FC81","s":"_ZN5mongo15printStackTraceERSo"},{"b":"561748F88000","o":"277FB74","s":"_ZN5mongo15printStackTraceEv"},{"b":"561748F88000","o":"27751F7","s":"_ZN5mongo11DBException13traceIfNeededERKS0_"},{"b":"561748F88000","o":"2734735","s":"_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE11600EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE"},{"b":"561748F88000","o":"CB0B24","s":"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE"},{"b":"561748F88000","o":"CBA521","s":"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj"},{"b":"561748F88000","o":"C90D34"},{"b":"561748F88000","o":"2658452","s":"_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CD11","s":"_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE"},{"b":"561748F88000","o":"264CDEF","s":"_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE"},{"b":"561748F88000","o":"264CE22","s":"_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE"},{"b":"561748F88000","o":"2132E2D","s":"_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb"},{"b":"561748F88000","o":"10FD8FA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"561748F88000","o":"10EB6DC","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"561748F88000","o":"10F72CC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2C1F","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7742","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"561748F88000","o":"10F063D","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"561748F88000","o":"10F38D3","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"561748F88000","o":"10F1D07","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F2B7B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"561748F88000","o":"10F5E9C"},{"b":"561748F88000","o":"1ED7BAB"},{"b":"561748F88000","o":"2511C94"},{"b":"7F0ED8600000","o":"7DD5"},{"b":"7F0ED8233000","o":"FE02D","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.0", "gitVersion" : "a4b751dcf51dd249c5865812b390cfd1c0129c30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-957.27.2.el7.x86_64", "version" : "#1 SMP Mon Jul 29 17:46:05 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "561748F88000", "elfType" : 3, "buildId" : "E8D75D13E92279CB6AF8104353A95729FD262FAB" }, { "b" : "7FFEC1E60000", "elfType" : 3, "buildId" : "88212B24946DE81A7B05A4F4422F5A8B37CBD908" }, { "b" : "7F0ED9A2D000", "path" : "/lib64/libcurl.so.4", "elfType" : 3, "buildId" : "9570D81C6E0E7EE6E021640223115E827B7BCBF2" }, { "b" : "7F0ED9814000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "C444AE61E7CBB716FD9C18A0B46A7FE8F4FCF3E5" }, { "b" : "7F0ED93B2000", "path" 
: "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "3593FA778645A59EA272DBBB59D318C60940E792" }, { "b" : "7F0ED9140000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AEF5E6F2240B55F90E9DF76CFBB8B9D9F5286583" }, { "b" : "7F0ED8F3C000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "357693C8F1F49D93010C4E31529C07CDD2BD3D08" }, { "b" : "7F0ED8D34000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "EFDE2029C9A4A20BE5B8D8AE7E6551FF9B5755D2" }, { "b" : "7F0ED8A32000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "5B14BE4D749631673523A61074C10959D50F5455" }, { "b" : "7F0ED881C000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "179F202998E429AA1215907F6D4C5C1BB9C90136" }, { "b" : "7F0ED8600000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "96900CB0FF25B26F2BBDF247DE1408242E4773D8" }, { "b" : "7F0ED8233000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "426A04647352308628F2091A30D347EDEEDED787" }, { "b" : "7F0ED9C96000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "A527FE72908703C5972AE384E78D1850D1881EE7" }, { "b" : "7F0ED8000000", "path" : "/lib64/libidn.so.11", "elfType" : 3, "buildId" : "2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5" }, { "b" : "7F0ED7DD6000", "path" : "/lib64/libssh2.so.1", "elfType" : 3, "buildId" : "4F4D120B3A652DC2651DA82CB6DC6F05516F7571" }, { "b" : "7F0ED7B84000", "path" : "/lib64/libssl3.so", "elfType" : 3, "buildId" : "2E28F6A705F2ECEA8460D4716D5D1C24B5DDA5E4" }, { "b" : "7F0ED795D000", "path" : "/lib64/libsmime3.so", "elfType" : 3, "buildId" : "8D0B4010959C321022DF9CE239277A9D7B34A76A" }, { "b" : "7F0ED7630000", "path" : "/lib64/libnss3.so", "elfType" : 3, "buildId" : "F5A64BB37FA3972E545EF459A51310F0AB56FA56" }, { "b" : "7F0ED7400000", "path" : "/lib64/libnssutil3.so", "elfType" : 3, "buildId" : "E0705772325A52C3372FFFB8BDE5F786E2E200D6" }, { "b" : "7F0ED71FC000", "path" : "/lib64/libplds4.so", "elfType" : 3, "buildId" : "084D2194302908913F68B9DCD27DE46FA5B50522" }, { "b" : "7F0ED6FF7000", "path" : "/lib64/libplc4.so", "elfType" : 3, "buildId" : "799B28AD9A5460D78376E2C11260F2E858B95DE3" }, { "b" : "7F0ED6DB9000", "path" : "/lib64/libnspr4.so", "elfType" : 3, "buildId" : "DE762A28174110911B273E175D54F222B313CFE0" }, { "b" : "7F0ED6B6C000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "BCC30853830CD911E58700591830DF51ABCBD7BA" }, { "b" : "7F0ED6883000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "45BAB0BB455BDFA960FDA22E4124CF17B67CC930" }, { "b" : "7F0ED6650000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A9B3906192687CC45D483AE3C58C8AF745A6726A" }, { "b" : "7F0ED644C000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "B4BE1023D9606A88169DF411BF94AF417D7BA1A0" }, { "b" : "7F0ED623D000", "path" : "/lib64/liblber-2.4.so.2", "elfType" : 3, "buildId" : "3192C56CD451E18EB9F29CB045432BA9C738DD29" }, { "b" : "7F0ED5FE8000", "path" : "/lib64/libldap-2.4.so.2", "elfType" : 3, "buildId" : "F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883" }, { "b" : "7F0ED5DD2000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745" }, { "b" : "7F0ED5BC2000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "94B3BCB669126166B77CDCE6092679A6AA2004C8" }, { "b" : "7F0ED59BE000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F0ED57A1000", "path" : "/lib64/libsasl2.so.3", "elfType" : 3, 
"buildId" : "E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B" }, { "b" : "7F0ED557A000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179" }, { "b" : "7F0ED5343000", "path" : "/lib64/libcrypt.so.1", "elfType" : 3, "buildId" : "740CAD898E29E1F3B73A323CCEC4A7C88911647F" }, { "b" : "7F0ED50E1000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" }, { "b" : "7F0ED4EDE000", "path" : "/lib64/libfreebl3.so", "elfType" : 3, "buildId" : "B758881F4B6AF6C28C07A1A57713CBD2144628D4" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x56174b707c81] mongod(_ZN5mongo15printStackTraceEv+0x74) [0x56174b707b74] mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x157) [0x56174b6fd1f7] mongod(_ZN5mongo13error_details16ExceptionForImplILNS_10ErrorCodes5ErrorE11600EJNS_15ExceptionForCatILNS_13ErrorCategoryE1EEENS4_ILS5_6EEENS4_ILS5_7EEEEEC1ERKNS_6StatusE+0x45) [0x56174b6bc735] mongod(_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE+0x6E22) [0x561749c38b24] mongod(_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj+0x17C) [0x561749c42521] mongod(+0xC90D34) [0x561749c18d34] mongod(_ZN5mongo10LockerImpl4lockEPNS_16OperationContextENS_10ResourceIdENS_8LockModeENS_6Date_tE+0x32) [0x56174b5e0452] mongod(_ZN5mongo4Lock10GlobalLock8_enqueueENS_8LockModeENS_6Date_tE+0x51) [0x56174b5d4d11] mongod(_ZN5mongo4Lock10GlobalLockC2EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorENS1_11EnqueueOnlyE+0x5F) [0x56174b5d4def] mongod(_ZN5mongo4Lock10GlobalLockC1EPNS_16OperationContextENS_8LockModeENS_6Date_tENS0_17InterruptBehaviorE+0x12) [0x56174b5d4e22] mongod(_ZN5mongo5CurOp23completeAndLogOperationEPNS_16OperationContextENS_6logger12LogComponentEN5boost8optionalImEENS6_IxEEb+0x42D) [0x56174b0bae2d] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x79A) [0x56174a0858fa] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x56174a0736dc] mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x56174a07f2cc] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x56174a07ac1f] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x56174ae5f742] mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x56174a07863d] mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x56174a07b8d3] mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x56174a079d07] mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x56174a07ab7b] mongod(+0x10F5E9C) [0x56174a07de9c] mongod(+0x1ED7BAB) [0x56174ae5fbab] mongod(+0x2511C94) [0x56174b499c94] libpthread.so.0(+0x7DD5) [0x7f0ed8607dd5] libc.so.6(clone+0x6D) [0x7f0ed833102d] ----- END BACKTRACE ----- 2019-09-04T06:37:13.661+0000 W COMMAND [conn574] Unable to gather storage statistics for a slow operation due to lock aquire timeout 2019-09-04T06:37:13.661+0000 I COMMAND [conn574] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459168, 1), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" 
}, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579012, 1), signature: { hash: BinData(0, 03F942D7850679A43066F511B52016CE3558C974), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459168, 1), t: 92 } }, $db: "config" } numYields:0 ok:0 errMsg:"Shutdown in progress" errName:ShutdownInProgress errCode:91 reslen:561 locks:{} protocol:op_msg 12332ms 2019-09-04T06:37:13.666+0000 D2 COMMAND [conn563] run command admin.$cmd { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(1579858365, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1566459161, 3), t: 92 } }, maxTimeMS: 30000, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1567579029, 1), signature: { hash: BinData(0, 078DA3B73DEAD9D400EC302B9ADE7D6661BB6AE6), keyId: 6690867815131381761 } }, $configServerState: { opTime: { ts: Timestamp(1566459161, 3), t: 92 } }, $db: "admin" } 2019-09-04T06:37:13.666+0000 D1 - [conn563] User Assertion: InterruptedAtShutdown: interrupted at shutdown src/mongo/db/service_entry_point_common.cpp 830 2019-09-04T06:37:13.666+0000 W - [conn563] DBException thrown :: caused by :: InterruptedAtShutdown: interrupted at shutdown 2019-09-04T06:37:13.676+0000 I STORAGE [signalProcessingThread] shutdown: removing fs lock... 2019-09-04T06:37:13.676+0000 I CONTROL [signalProcessingThread] now exiting 2019-09-04T06:37:13.676+0000 I CONTROL [signalProcessingThread] shutting down with code:0
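
[Editor's note on reading the backtrace above: each JSON frame records the module's load base "b", the frame's offset "o" within that module, and, where the symbol is exported, its Itanium-ABI mangled name "s"; the bracketed address on each symbolized mongod(...) line is simply b + o. A minimal decoding sketch, not part of the log, assuming Python 3 and binutils' c++filt on PATH; the frame values are copied from the first entry of the "backtrace" array.]

    # Decode one backtrace frame: check the address arithmetic, then demangle the symbol.
    import subprocess

    frame = {"b": "561748F88000", "o": "277FC81", "s": "_ZN5mongo15printStackTraceERSo"}

    # Bracketed address in the symbolized frame = module load base + offset.
    addr = int(frame["b"], 16) + int(frame["o"], 16)
    print(hex(addr))  # 0x56174b707c81, matching "[0x56174b707c81]" above

    # Itanium-ABI mangled names demangle with binutils' c++filt.
    out = subprocess.run(["c++filt", frame["s"]], capture_output=True, text=True)
    print(out.stdout.strip())  # mongo::printStackTrace(std::ostream&)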
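
[Editor's note on the command entries above: each ends in structured trailer fields — ok:0, errName/errCode (here ShutdownInProgress, code 91), and a trailing duration (12332ms, for a find dispatched with maxTimeMS: 30000). A hedged triage sketch for pulling those fields out of 4.2-style plain-text command lines, assuming this trailer layout; the sample line abbreviates the command body with "...".]

    # Extract error name/code and duration from a 4.2-style command log line.
    import re

    line = ('2019-09-04T06:37:13.661+0000 I COMMAND [conn574] command config.$cmd '
            'command: find { find: "shards", ... } numYields:0 ok:0 '
            'errMsg:"Shutdown in progress" errName:ShutdownInProgress '
            'errCode:91 reslen:561 locks:{} protocol:op_msg 12332ms')

    m = re.search(r'errName:(\w+) errCode:(\d+).* (\d+)ms$', line)
    if m:
        err_name, err_code, duration_ms = m.group(1), int(m.group(2)), int(m.group(3))
        # errCode 91 (ShutdownInProgress): the server began shutting down mid-operation.
        print(err_name, err_code, duration_ms)  # ShutdownInProgress 91 12332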